What Is DALL·E? The AI Machine Creating Instant Surrealist Art

DALL·E is catapulting AI art to mainstream embrace. Is that a good thing?

You’ve probably been seeing surrealist memes on social media over the last few weeks, as nine-panel grids of nightmarish images like Patrick Star on a crucifix and a Pikachu pug have proliferated across feeds. These surrealist visions come from DALL·E, whose name is an homage to Salvador Dalí and the cute robot WALL-E. In its most basic definition, it’s artificial intelligence that lets users type in a string of words and spits out strange, nightmarish pieces of art.

The memes on Twitter and Instagram are created using DALL·E mini, a user-friendly AI tool that generates never-before-seen images from a string of phrases in a matter of seconds. DALL·E mini is a good on-ramp for people getting interested in AI art, but it is not associated with the more powerful, precise original DALL·E tool, which was created by the Microsoft-backed San Francisco company OpenAI last year.

Still, the popularity of DALL·E mini has catapulted AI art into the mainstream overnight. Cosmopolitan just released the first AI-designed magazine cover using DALL·E 2. The Economist used a different AI tool to design its cover last week.

With the mainstream embrace of AI art also comes a set of concerns and learning curves. NYLON spoke with Fabiola Larios, a new media artist who works with AI, about what the widespread embrace of DALL·E mini and AI art means for the future of the medium.

“Is this the death of art? Because you can create everything on DALL·E, is imagination going to die in some way, or evolve in another way?” Larios asks. She adds that the embrace of AI art could also expand the medium in a big way: “I think it opens a lot of opportunities for people to create something they didn’t know they could create.”

Earlier this year, NYLON spoke with Sam King, who creates and curates AI art on the Twitter account @images_ai, about the future of AI art.

“I think the primary application of this in the future is as tools we employ in tandem with more traditional forms of art,” King said. “I promise it is much, much easier to do than it looks. I think that more people should have an understanding of it. I think it is going to play a major part in the future of all art creation.”

Whether we like it or not, AI art has been catapulted from niche accounts into the mainstream. Here’s everything you need to know about the tool that’s largely responsible for that shift.

Who made DALL·E?

DALL·E was introduced in January 2021 by OpenAI, a Microsoft-backed San Francisco startup whose mission is to create a safe and useful artificial general intelligence. “We’ve found that it has a diverse set of capabilities, including creating anthropomorphized versions of animals and objects, combining unrelated concepts in plausible ways, rendering text, and applying transformations to existing images,” a blog post about the tool reads.

You feed the tool a line of text, like “Barbie taking a selfie in a renaissance painting,” for example, and it conjures images that never existed before, matching that text.

How can I use it?

DALL·E is only available to artists and creators who apply to use the technology. The best way to dip your toe into the AI art world is the DALL·E mini tool, where you can type phrases as you would into a Google search bar. “Patrick Star Salvador Dalí” gets you a grid of cursed, surrealist takes on our favorite starfish, while “Lisa Simpson Edvard Munch” yields hauntingly beautiful painted versions of Lisa as the famed painting “The Scream.”

Did DALL·E mini change its name?

Yes. OpenAI asked Boris Dayma and Pedro Cuenca, the creators of DALL·E mini, to change the name of their app to avoid confusion. It’s now called Craiyon.

Are there any controversies around DALL·E mini?

While DALL·E mini democratizes the technology, it also has fewer guardrails protecting against abuse. “DALL·E mini is like, ‘Oh, AI needs to be for everyone and stuff,’ but people don’t really understand the implications of working with AI,” Larios says.

She’s concerned that DALL·E mini is letting people play with unfiltered data from the Internet, and brought up the example of a DALL·E mini creation that went viral: “gender reveal 9-11,” which spat out the twin towers with pink and blue smoke, an image that disturbed her.

“The freedom of the unfiltered data and people playing with that is the thing that makes me worried,” she says.

She also pointed to the “bias and limitations” disclosure on DALL·E mini, which reads: “...given the fact that the model was trained on unfiltered data from the Internet, it may generate images that contain stereotypes against minority groups.”

“The other DALL·E is not open to the public. They see your social media, your LinkedIn. They need to review basically what kind of person you are, so you don’t misuse the model,” Larios says. “So it’s a good thing that there’s limitations. You don’t know how far people can go to have fun or create engagement on social media.”

Is there anything you can’t make?

Yes. OpenAI asks that you not create images of public figures and politicians, and has added certain keywords, such as “shooting,” to a block list. The intent is to play, not to deceive. OpenAI also doesn’t want people using DALL·E for commercial purposes, and asks that you disclose the role of AI in creating the images.

What does this mean for the future of AI art?

The creators of DALL·E are working on DALL·E 2, which is currently in private beta testing. At its most basic level, DALL·E 2 has the same function, creating original art from a string of words, but with more advanced capabilities.

According to an OpenAI blog post, DALL·E 2 will also be able to make edits to existing images, adding and removing elements while taking shadows and textures into account, like a very smart Photoshop. You can tell it to replace, for example, a hot dog with a tennis ball. The post claims it will also be able to generate more realistic and accurate images with 4x greater resolution.