There is an urgent problem with AI — and it isn’t what the open letter says

SvD Näringsliv

This analysis was first published in SvD Näringsliv, in Swedish, on April 4th, 2023. This piece was translated from Swedish by Claude. Some phrasing may differ from a human translation.

When 50,000 researchers and business leaders warn against AI development, the uproar is enormous. But the truly serious problem isn’t the development itself — it’s that a handful of companies hold all the power.

“Humanity swallowed by its machines — body, mind and soul — and civilization poisoned to its encroaching death.”

The illustration showed a person being sucked into their own sausage-making machine.

That was the New York Times in 1921.

The website Pessimists Archive has compiled examples from every decade since, dystopian visions of how machines, robots and computers will replace us and take all our jobs.

Now it’s 2023. This time it’s AI that will obliterate the workforce and destroy the world. Plus ça change.

Few things seem to unsettle a society quite like a technological shift. The sudden availability of new, well-developed AI language models has unleashed an explosion of creativity, entrepreneurship — and anxiety.

When an open letter signed by 50,000 researchers and business figures calls for a thoughtful six-month pause in AI development, it adds more fuel to that anxiety.

The fact that signatories include serial entrepreneur Elon Musk and Apple co-founder Steve Wozniak amplifies it further. They want development of the next generation of language models — what would be called GPT-5 — to pause for six months while society considers the risks.

But a pause in development is unlikely to materialize, regardless of how many signatures are gathered.

The letter references a set of principles drawn up by AI enthusiasts at a conference in Northern California in 2017. Those principles were created as a form of self-regulation within a field that, at the time in particular, almost entirely lacked laws and structure.

Following principles is, as we know, voluntary. And without all AI developers simultaneously making the same voluntary decision — including Russia and China — we won’t have added any thoughtfulness to the process. We’ll most likely have done nothing more than hand a head start to those with arguably worse intentions.

The question of how this should be handled, and potentially regulated, is far from simple. Sweden's EU Council Presidency coincides with nascent legislation emerging from Brussels that aims to bring order to the field; the AI Act and the AI Liability Directive are two relevant initiatives. Sweden appears to have delegated the question there.

Looking at the EU's history of regulating technology companies, however, there is reason to be skeptical. It took the union over 20 years to create laws preventing monopolistic behavior among the largest tech companies.

When they set out to protect our data, we got GDPR — which for the average person has mostly resulted in an endless stream of cookie consent boxes on every website, and the disappearance of class lists from schools.

At a time when Europe essentially lacks any major tech companies on a par with Apple, Google and Meta, it’s hard to imagine that regulation will do more than worsen the continent’s ability to compete.

The most significant AI development is already happening outside the EU, so laws passed there risk being a swing at thin air. It would be like Sweden imposing strict rules on viticulture: well-intentioned, but largely meaningless in the bigger picture.

If legislation isn’t sufficient and voluntary principles aren’t followed — what’s left? In these technophobic times, one seemingly radical idea would be: optimism.

Instead of only worrying about the end of the world, let us also consider how we can distribute these superpowers fairly and equitably across the world.

If AI development creates the productivity boom that is widely predicted, this is an excellent opportunity not to recreate the kind of de facto tech monopolies that the Western world and China live with today.

Emily M. Bender, one of the researchers whose paper the open letter references, says her conclusions have been misread. Rather than warning about a hypothetically dangerous AI future, she argues, the real and far greater risk is that too few people will be able to access its benefits. It is about “the concentration of power in the wrong hands,” she writes, among other things.

It is therefore of great value that we discuss the future of AI. But an optimistic and pragmatic version of that discussion would focus on how we ensure that as many people as possible can benefit from the productivity-enhancing capabilities being developed.

Do we want the world's leading AI development to be conducted, and controlled, by a handful of privately owned companies? As things stand, Microsoft (through its investment in OpenAI), Google and China's Baidu are among those at the frontier.

They don’t just control how the models are built — they also control what data those models are trained on.

That is the really hard nut to crack.

The most important question, therefore, is not binary — whether we should pursue AI development at all. The question is rather how we ensure it is done in a way that benefits as many people as possible.

The Author

Björn Jeffery is a Swedish technology columnist, advisor, and independent analyst based in Malmö, Sweden. He is the technology columnist for Svenska Dagbladet and co-hosts a podcast for the newspaper. He was previously CEO and co-founder of Toca Boca, the kids’ media company that grew to over one billion downloads. Through his advisory practice, Outer Sunset AB, he works with companies on digital strategy, consumer culture, governance, growth, and international expansion.