Behind AI-generated deepfakes lies an even bigger problem

Published in Svenska Dagbladet, 2024-10-22. Translated from Swedish.

New digital tools have opened the door to mass manipulation that could influence the American presidential election. But behind AI-generated deepfakes lies an even bigger problem.

A girl in an orange life jacket sits in a boat. She looks as though she has been crying for a long time — tired, red-eyed. The water around her is brown and unwelcoming, and it is raining hard. She is holding a puppy. Hurricane Helene has just swept through American states including Florida, Georgia and South Carolina. The rescue operation has been difficult and the destruction enormous. The image of the little girl spreads quickly online as a symbol of the suffering Helene has caused. The girl does not exist. The image is AI-generated. Even an untrained eye can sense that something is not quite right — it looks partly animated, with a shimmer that gives it an unnatural gloss. Yet it quickly becomes a weapon in a political debate about whether society has prioritised the relief effort properly.

Fears that AI-generated images and deepfakes would influence American elections have existed for years. With the dramatically accelerating pace of AI development, those concerns have intensified ahead of this year’s presidential election. There are already plenty of examples. A user called “Think for yourself!” posted the fake image of the girl in the life jacket on X with the comment: “I don’t care if it’s AI, it’s still true!!!” The phenomenon raises an interesting question. Is it AI images influencing public opinion that we should worry about — or is there possibly an even bigger problem: voters deliberately allowing themselves to be influenced by an image they know is fake?

The term “deepfake” was coined on the internet forum Reddit in 2017 — a combination of “deep” from “deep learning,” a type of AI technique, and “fake.” On Reddit, the new technology was used to create videos in which pornographic content was reworked to include celebrity faces. That genre of content has a tendency to be quick off the mark in major technology shifts. There is also no shortage of “cheapfakes” — poorly executed deepfakes using crude methods like pasting heads onto other bodies. The intent is the same, but the execution is low quality.

In 2023 AI technology made a major breakthrough. Services like Midjourney and Stable Diffusion suddenly gave tech enthusiasts powerful tools to create new kinds of images. In March of that year an image appeared of the Pope wearing an incredibly elegant and fashionable white puffer jacket. It went viral immediately, with the Pope praised for his bold fashion choices. The image was, of course, fake. The creator had to issue an apology after what had seemed like a harmless joke spiralled out of control. In another example, Trump posted a series of images appearing to show Taylor Swift fans — so-called Swifties — rallying behind him politically. Also fake. In a Fox Business interview he distanced himself from the images, but in a telling way: “I know nothing about them other than somebody else generated them. I didn’t generate them.”

Trump didn’t create the images. But he spread them. And by doing so, he sowed uncertainty about what is true, what is doubtful, and what is entirely false. In a world of deepfakes, the opposite problem also arises: genuine photographs are assumed to be fake — or can at least be dismissed by a political opponent as exactly that. When presidential candidate Kamala Harris landed at Detroit Metropolitan Airport in early August, a large group of supporters with banners was visible beside the plane, enthusiastically cheering her arrival. Trump was not equally enthusiastic. On his own social media platform Truth Social he accused Harris of having manipulated the images: “Has anyone noticed that Kamala cheated at the airport? There was nobody at the plane, but she ‘AI’d’ it and then it showed a massive crowd, but they didn’t exist!” Given the number of people present, plenty of other images from the same moment existed. The crowd was real. But once the seed of doubt is planted, it becomes an argument one can deploy against almost anything. Does a picture make you look bad? Then it’s fake. Does a picture make your opponent look good? Also fake.

Political actors have always used a range of methods to smear opponents and try to win elections. In 1972 the American newspaper Manchester Union Leader received a letter claiming that senator and presidential hopeful Edmund Muskie had used a derogatory term about a large voter group. The letter later turned out to have been written by an employee of the sitting president, Richard Nixon. It triggered a downward spiral for Muskie, who ultimately failed to win the presidential nomination. A fake letter — a simple but apparently effective method. Another popular technique is robocalling — automated phone calls. In 2008, thousands of residents in North Carolina received a call in which a voice told them they would receive a voter registration form by post, which they should fill out and send back to ensure they could vote in the upcoming primary. The problem was that by the time the calls were made it was already too late to register, and the calls were going to people who were already registered. Confusion ensued, which may have prevented some from voting at all. The campaign was traced to a group called “Women’s Voices Women Vote,” which had connections to Hillary Clinton’s primary campaign.

What the introduction of deepfakes has done is dramatically lower the threshold — and the cost — for creating fake material. What previously required a professional video production team can now be done in a couple of minutes by anyone. The quality is often quite poor, and a new term has emerged to describe the enormous volume of low-quality AI imagery that has appeared: AI slop. Given the pace of AI development, we are months rather than years away from substantially more realistic images and videos of this kind. The companies behind these tools claim to have policies against such use, but enforcement is difficult in practice. And the damage may already be done by the time the source is identified.

The volume of political deepfakes is now so large that they have been documented in a database administered by researchers affiliated with Purdue and Northwestern universities. At the time of writing it contains over 540 examples.

There are two different perspectives on how the deepfake problem will develop. A pessimist would say it will likely get worse quickly. The quality of these services is improving, and in just the past few months AI tools for both audio and video have nearly exploded in capability. With better tools accessible to far more people, it is hard to believe the problem will resolve itself. Relying on human goodwill and good intentions in this context may be naive.

An optimist can note that despite this proliferation of new tools, the problem is still relatively contained. More fact-checkers than before — both news services and social media platforms — are now examining this kind of material. A fake image spreads fast, but it can also be debunked fast. AI development may even assist with that too. When Trump was shot at a political rally in Pennsylvania, an image spread appearing to show smiling Secret Service agents — as if pleased with the outcome. The image turned out to be false and was quickly verified as such by multiple independent sources. The problem is created fast, but the solution follows shortly after.

Taken together, this leaves a media landscape facing a problem larger than any individual fake image or video clip: our shared sense of what is true and what is false risks eroding. The quality of the material does not necessarily determine whether someone believes it — they may simply have decided to trust the source, regardless of what it says. It takes only a drop of doubt before what we have collectively accepted as truth begins to crumble. Should that trend continue, it will be a major challenge for society. But it is not strictly a problem that arose with AI and deepfakes. If — like the person who posted the girl in the life jacket on X — you have already decided what is true and false in the world, there are few things that can make you change your mind. Even when you know it is fake.


The Author

Björn Jeffery is a Swedish technology columnist, advisor, and independent analyst based in Malmö, Sweden. He is the technology columnist for Svenska Dagbladet and co-hosts a podcast for the newspaper. He was previously CEO and co-founder of Toca Boca, the kids’ media company that grew to over one billion downloads. Through his advisory practice, Outer Sunset AB, he works with companies on digital strategy, consumer culture, governance, growth, and international expansion.