This analysis was first published in SvD Näringsliv, in Swedish, on November 18th, 2023. This piece was translated from Swedish by Claude. Some phrasing may differ from a human translation.
AI development’s leading spokesperson, Sam Altman, has been fired as CEO of OpenAI. Just hours earlier, he was on stage in front of both the US and Chinese presidents. The aftershocks have only just begun.
If you get into a car in the northern part of Silicon Valley and drive south, you will be out of it roughly two hours later. That is all it is. This limited stretch of land is home to many of the world’s largest companies, and is the epicentre of the most important software development on the planet.
No technology has more eyes on it right now than AI. Since OpenAI released ChatGPT just over a year ago, a flood of new companies and initiatives in the field has emerged. Roughly half of all the investment capital flowing into AI worldwide is currently going to Silicon Valley alone. Around 118 billion kronor streamed into the area in just the first quarter of this year.
It is against this backdrop that one should understand the bombshell that dropped late on Friday evening, Swedish time.
Sam Altman, CEO of OpenAI, was fired with immediate effect. The single most important person, at the most important company, in the most important area of technology development was forced out by his own board. In a statement, OpenAI’s board wrote that Altman had not been “consistently candid in his communications” with them. Mira Murati, the company’s chief technology officer, steps in as interim CEO. Altman himself has not gone into detail about what happened, beyond saying that he will miss his job.
The event came as a complete surprise to everyone involved. Microsoft, which has invested more than 100 billion kronor in OpenAI, only found out about the news a few minutes before the press release was sent out. Altman himself had just been at the major Asia-Pacific Economic Cooperation summit alongside Xi Jinping and Joe Biden.
He went directly from there to a video call with the board, where he received the news. The board’s chair and one of OpenAI’s co-founders, Greg Brockman, was also asked to step down but was allowed to retain his position at the company. He chose to resign of his own accord a few hours later.
Corporate governance played a central role in what unfolded. To say that it is often inadequate at tech companies in Silicon Valley is an understatement. Companies are frequently controlled through dual-class shares by their founders, which renders boards toothless. Tesla has Elon Musk’s own brother on its board. The collapsed crypto exchange FTX had no board at all, despite having 120 different investors.
OpenAI’s board consisted of research chief Ilya Sutskever, the now-departed Greg Brockman, and three independent members: Adam D’Angelo, Tasha McCauley, and Helen Toner. According to news site The Information, there had been internal conflicts between Sutskever and Altman in the period leading up to the announcement.
The disputes had centred on whether OpenAI was developing its AI technology safely enough, given the risks it entails. At an internal meeting following the announcement, Sutskever was asked whether Altman’s dismissal could be seen as a coup. He disagreed with that characterisation, but added that the way it had happened had not been ideal.
How OpenAI functions and is run is a question that affects far more than just the company’s employees and customers. Many of the billions invested in the field involve products and services that rely on OpenAI’s technology in various ways. It is Microsoft’s single most important investment, and a cornerstone of its strategy.
OpenAI is also the first company in over a decade to genuinely shake Google — the search giant whose AI ambitions only really accelerated once it began to face external competition.
Much has been said about the future risks of AI development and the kinds of problems it might cause down the line. What has been talked about considerably less are the immediate problems that the concentration of power in the field is creating right here and now. A handful of well-known giant companies control all of the development that society is currently witnessing.
When individual board members, with unclear motives, make decisions of this kind, enormous shockwaves ripple through an entire industry. The transparency, scrutiny, and accountability applied to these power-holders are virtually nonexistent.
The question everyone is now asking — the same one OpenAI’s staff and very likely Sam Altman himself are sitting with — is this: who actually controls AI development?