A 2018 film project that premiered on Ars Technica smacks of an absurdist David Lynch film that has been put through a blender.

The film, entitled “Zone Out” and featuring unsettling images of facial displacement and dialogue that might have been written by aliens, is the follow-up project of Benjamin, an artificially intelligent screenwriter.

Benjamin (the name the AI chose for itself) was originally designed by director Oscar Sharp and AI researcher Ross Goodwin with the goal of producing an entirely machine-created screenplay. The first film Benjamin wrote, “Sunspring”, was shot in 48 hours and starred Thomas Middleditch of HBO’s Silicon Valley. The result has been lauded as both a hilarious failure and an incredible success: while the screenplay is certainly not something a human would write, the film has some surprising resonance.

To write the script, Benjamin was fed a diet of exclusively sci-fi screenplays, such as “Aliens”, “Predator”, and “Planet of the Apes”, and was then left to create dialogue and stage directions based on what it had learned. For the second project, also “starring” Middleditch, the filmmakers wanted to see what would happen if Benjamin wrote, directed, and starred in the film. They gave the AI stock footage and new script material to work with, along with recorded emotional expressions from Middleditch.

The finished product may feel far from the uncanny valley, but it does seem distinctly like the opening of a door into a new artistic universe that cannot be shut. In this brave new world, humans and AI will work in combination to create the next generation of every kind of art, from plastic arts like painting and sculpture to theater, dance, music, and even culinary expression.

How AI Art Works

Generally, AI art is created using generative adversarial networks (GANs). A GAN pits two machine intelligences against each other in a kind of simplistic mock-up of the human left brain versus right brain. The two machines in the process, known as the generator and the discriminator, are given a data set to work with and, through repeated analysis, arrive at their artwork.

In painting, for example, a machine could be given the known history of Renaissance portraiture as a data set. The generator then produces candidate shapes and colors, while the discriminator compares these generated images against the data set. Between these two intelligences, decisions about color, texture, and image take shape. If this is altogether different from how a human creates art, “robot artist” Jeremy Krabil would like to know how.
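
To make that tug-of-war concrete, here is a toy sketch of a GAN training loop. It uses single numbers in place of paintings, and the data distribution, model sizes, and learning rate are all invented for illustration; real image GANs use deep networks, but the generator-versus-discriminator structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: 1-D samples standing in for the Renaissance portraits.
def real_batch(n):
    return rng.normal(4.0, 1.25, n)

# Generator: x = a*z + b, fed noise z ~ N(0, 1).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), estimating P(x is real).
w, c = 0.0, 0.0

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, batch = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(batch)
    z = rng.normal(size=batch)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    grad_w = np.mean(-(1 - dr) * xr + df * xf)
    grad_c = np.mean(-(1 - dr) + df)
    w -= lr * grad_w
    c -= lr * grad_c
    # Generator update: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(size=batch)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    gx = -(1 - df) * w          # d(-log D(xf)) / d(xf)
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# After training, the generator's samples should drift toward the
# real data's neighborhood, even though it never saw a real sample.
fake_mean = float(np.mean(a * rng.normal(size=5000) + b))
```

The generator never touches the real data directly; it only learns from the discriminator's reaction, which is the core of the adversarial idea.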

AI Art on the Market 

Recently, Krabil programmed a computer to create what is regarded as the first painting made by AI. The resulting work, which depicts a man in a dark frock coat, was titled “Portrait of Edmond de Belamy” and put up for sale. It astounded critics when, though expected to fetch around $10,000, it was finally auctioned at Christie’s for over $430,000. While detractors say that art must be created by humans alone, Krabil argues that he has worked in tandem with new artistic tools, wielded by a machine intelligence, to produce an elevated work. Either way, the sale went through, and it is only the beginning of a market for machine-created art.

In New Jersey, at the Rutgers Art and Artificial Intelligence Lab (AAIL), Professor Ahmed Elgammal has taken the person-versus-machine debate a step further, testing critics in a double-blind study to see whether they could tell if a painting had been created by a human or by a machine intelligence. Those subjected to this unique kind of Turing test could not reliably discern the origins of the artwork; in fact, the pieces made by machines were often rated as more aesthetically pleasing and resonant than those crafted by human hands. Going beyond GAN technology, Elgammal has created what he calls Creative Adversarial Networks (CANs). Using CANs, Elgammal argues, a machine can learn to paint without any human input, and he holds the resulting abstract works up against the artwork in any gallery in terms of aesthetic resonance for an audience.

AI in Literature

Ross Goodwin, Benjamin’s co-creator, has also been hard at work on AI models for narrative literature. He recently completed what is being called the first novel written by a machine intelligence. 1 the Road was created entirely by a machine running an algorithm similar to the one that tries to predict your next word on your cell phone. Fed with images, sounds, and location data gathered on a road trip, Goodwin’s digital author produced a piece that is still rather far from traditional works of literature. He admits that, as of now, the machine does not understand the intricacies of story, plot, or structure. But by using organizing stimuli, like the events of a road trip, Goodwin is confident that the road to that uncanny valley is shorter than ever.
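
The "predict the next word" idea behind phone keyboards can be sketched in miniature with a word-level bigram model; the tiny road-trip corpus here is invented for illustration, and Goodwin's actual system used a far richer neural model.

```python
import random
from collections import defaultdict

# Train a word-level bigram model on a tiny invented corpus.
corpus = ("the car rolled down the road and the road bent toward "
          "the sea and the car followed the road").split()

model = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word].append(next_word)

def generate(seed_word, length, rng):
    """Repeatedly pick a word seen after the previous one."""
    words = [seed_word]
    for _ in range(length - 1):
        choices = model.get(words[-1])
        if not choices:            # dead end: no observed successor
            break
        words.append(rng.choice(choices))
    return " ".join(words)

rng = random.Random(7)
sentence = generate("the", 12, rng)
```

Every word the model emits was observed somewhere in its training text, which is why such systems echo their inputs: Benjamin sounds like sci-fi scripts because that is all it was ever fed.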

Cooking with Machines

While it may seem counterintuitive that a machine could learn to cook, Chef Watson, created by IBM, draws on thousands of flavor profiles to help people discover unique food combinations. And because it has studied so many recipes, Chef Watson can also suggest procedures and methods for preparing food in ways that might never have been tried before.
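
One common way to turn flavor profiles into pairing suggestions is to score ingredient pairs by how many flavor compounds they share. This sketch is a simplified stand-in, not IBM's actual method, and the compound lists are invented for illustration.

```python
# Toy flavor-pairing model: ingredients map to sets of flavor
# compounds (invented here); pairs are scored by Jaccard similarity.
flavors = {
    "strawberry": {"furaneol", "linalool", "ethyl butanoate"},
    "basil":      {"linalool", "eugenol", "estragole"},
    "chocolate":  {"furaneol", "vanillin", "pyrazine"},
    "beef":       {"pyrazine", "furan", "sulfide"},
}

def pairing_score(a, b):
    """Fraction of the two ingredients' compounds that overlap."""
    shared = flavors[a] & flavors[b]
    total = flavors[a] | flavors[b]
    return len(shared) / len(total)

# Rank every unordered pair from most to least compatible.
pairs = sorted(
    ((pairing_score(a, b), a, b)
     for a in flavors for b in flavors if a < b),
    reverse=True,
)
```

With real compound databases behind it, the same ranking idea can surface non-obvious combinations, like strawberry and basil, that a cook might never try otherwise.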

Machine Music

Music, so rooted in numbers and patterns, might be the most natural place for an artificial intelligence to make its artistic mark. Songwriter Taryn Southern, for example, uses AI to augment her songs with instrumentation she could never play on her own. Using an AI music platform called Amper, Southern created her debut album I AM AI. Southern says she gives the machine instructions, and that working with its output is like collaborating with a team of musicians. “The platform will spit a song out at me,” says Southern, “and then I can iterate from there, making adjustments to the instruments and the key.” This means that more burgeoning songwriters will be able to create full musical arrangements, relying on AI for the underscoring.
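
The "spit a song out, then iterate" workflow can be illustrated with the simplest possible generative composer: a Markov chain over chords. The transition table below is invented for illustration, and a platform like Amper is vastly more sophisticated, but the loop of generate, listen, adjust, regenerate is the same.

```python
import random

# Toy composer: a first-order Markov chain over chords in C major.
# Allowed transitions are hand-picked for this sketch.
transitions = {
    "C":  ["F", "G", "Am"],
    "F":  ["G", "C", "Dm"],
    "G":  ["C", "Am"],
    "Am": ["F", "Dm", "G"],
    "Dm": ["G"],
}

def compose(start, bars, rng):
    """Walk the transition table to produce a chord progression."""
    progression = [start]
    for _ in range(bars - 1):
        progression.append(rng.choice(transitions[progression[-1]]))
    return progression

rng = random.Random(42)
song = compose("C", 8, rng)
```

A songwriter who dislikes the result simply reseeds or edits the table and generates again, which is the human half of the collaboration Southern describes.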

What’s next?

Meanwhile, theater, film, dance, food, and the plastic arts are all being reshaped by revolutions in machine learning. Artists in every genre will have to wrestle with the risk of being left in the dust if they do not supplement their own work with techniques borrowed from artificially intelligent artists.

If you want to learn more about AI in the arts, see the infographic below from Invaluable.com, which outlines how artists are combining forces with their computer counterparts.

Will Art Be Dominated by AI?
