This analysis was first published in SvD Näringsliv, in Swedish, on March 27th, 2023. This piece was translated from Swedish by Claude. Some phrasing may differ from a human translation.
The Pope’s new puffer jacket became a sudden viral hit on the internet. The explosion of new AI tools is making it harder than ever to tell what’s real and what isn’t.
Dressed in a very long, white, and elegant puffer jacket, the Pope is caught on camera. The jacket could just as easily have been worn by Kim Kardashian, and he looks like he’s walking a fashion runway.
The image went viral over the weekend, and Twitter was flooded with tributes to the apparently oddly dressed religious leader.
There was just one problem — the image was fake, created by the AI service Midjourney.
The person who started spreading it, Nikita Singareddy, apologized for inadvertently turning what was intended as a joke into a viral hit.
The commotion points to a larger and more interesting question than the Pope’s possible winter wardrobe. How do we know what’s real and what’s fake, when AI-generated content is starting to get this good?
It’s easy to see how AI development — exciting as it is — could create considerable disorder. Previously, conspiracy theorists and others with a particular agenda had to rely on interpreting existing images in ways that served their perspective. A simpler route now would be to just generate the material you need. Images of world leaders in compromising contexts? That’s now just a few clicks away.
Of course, you can also use the tools without bad intentions. Sometimes laziness seems to be the motivation. Recently, Donald Trump posted an AI-generated image of himself kneeling in prayer. The image could reasonably have been produced with a regular camera — had the event occurred in reality.
The problem isn’t entirely new — it’s been possible to create and manipulate media for some time. The difference now is one of scale and quality.
Tools like ChatGPT, Stable Diffusion, and the aforementioned Midjourney do more than just edit existing material — they create text and images that are entirely new. The AI tools have been trained on existing information from the internet, but what they create is meant to be unique. What counts as unique is, however, a question headed for the courts. The image library Getty Images has sued Stability AI, arguing that its Stable Diffusion service used Getty's images — without compensation — to train its model.
Access to AI tools has never been easier or cheaper than it is now. What is called generative AI — tools using artificial intelligence to create content of various kinds — has existed for several years. But it hasn’t been available to the public in the same way until now. And creativity breeds creativity. Already — just four months after ChatGPT launched and took the world by storm — there are daily examples of how the new AI tools can be used.
Back to the question of what's real and what's false. There's a certain irony in the fact that what is difficult for the human eye to detect as fake may be relatively easy for AI to spot. To catch students cheating on essay assignments, OpenAI launched a tool meant to determine whether a text had been written by a machine. The methods aren't perfect, but they point in a direction for how these questions might be handled. Technology makes some problems larger, but it can also help provide some of the solutions.
That framing is useful for AI development as a whole. Rather than focusing on which jobs AI replaces, you could think about what jobs that use AI might look like. Computers may have replaced a few typist positions, but they created far more jobs of a different kind. Choosing to use new technology to enhance your skills could produce a workforce with superpowers rather than unemployment.
Or to put it more simply: at first glance, it may be hard to know whether the Pope has bought a stylish new winter jacket or not. But a reasonably skeptical person working with an AI tool can probably find the answer very quickly.
And unfortunately, it wasn’t true — this time.