This analysis was first published in SvD Näringsliv, in Swedish, on May 26th, 2023. This piece was translated from Swedish by Claude. Some phrasing may differ from a human translation.
OpenAI CEO Sam Altman is on a charm offensive among politicians and journalists. He says he wants AI development regulated to avoid a future crisis. But his motives may not be entirely noble.
Neat tie, dark blue suit, Sam Altman settled in front of the microphone.
“It’s an honor to be here. Perhaps even more so than I expected,” he said, with a slightly lopsided smile.
“OpenAI is an unusual company. We created it that way because AI is an unusual kind of technology.”
Several things were unusual that day.
The CEO of OpenAI had been called before the American Congress. It’s a familiar scene by now. “Tech executive questioned by politicians” has played out many times.
This time, however, it sounded a little different.
What usually becomes a rhetorical pie-fight — complex questions reduced to simple yes/no answers — turned, surprisingly, into a more thoughtful hearing.
Josh Hawley, a Republican senator from Missouri, wondered whether AI development was more like the printing press or the creation of an atomic bomb. The formulation didn’t seem designed to score political points. He genuinely seemed to wonder how this was all going to turn out.
The room looked to the 38-year-old Sam Altman, whose official nameplate on the podium listed him as “Samuel.”
But why are America’s leading legislators sitting and listening to him?
Even if Altman’s name is unfamiliar to most people, he is effectively royalty in Silicon Valley.
He took the classic entrepreneur’s path via Stanford University, dropped out after a year, and started a mobile social network he called Loopt. That was in 2005 — two years before the iPhone launched — and Altman managed to raise around $30 million in venture capital. Loopt never really took off. In 2012 it was sold to a credit card company for $43.4 million. That may sound like a lot of money, but by Silicon Valley standards, it was a defeat.
Altman moved on to the well-known company incubator Y Combinator, where he quickly became a partner. Y Combinator had grown from a small operation in Mountain View, where it shared an office with a robotics company, into one of the central hubs of the region's new startup era. Companies like Stripe, Dropbox and Airbnb passed through the incubator, and in a 2015 blog post Altman wrote that the combined valuation of all the companies in the program had reached $65 billion. Y Combinator co-founder Paul Graham was seen as a king in Silicon Valley, which made Sam Altman something of a crown prince.
He was not yet an AI expert, however.
Y Combinator invested in companies across every conceivable domain, from storage services to electric aircraft to social networks. Sam Altman became co-chairman of the research project OpenAI. Another Y Combinator co-founder, Jessica Livingston, was also among OpenAI's founders, so the two organizations were already intertwined.
The project was not without controversy. Another of OpenAI’s founders, serial entrepreneur Elon Musk, resigned from the company’s board in 2018 — citing, in his own words, potential conflicts of interest around Tesla’s own AI plans. Altman, however, claimed that Musk had tried to take over OpenAI and that the board had rejected this. The following year, Altman himself stepped up to become OpenAI’s full-time CEO.
In parallel with this came a major restructuring of OpenAI's corporate form. From being an American nonprofit, OpenAI became a commercial company. The stated reasons were several, but chief among them was the need to attract both investors and employees who could share in the company's success. Recruiting world-class talent without being able to offer equity was too difficult, it was said. Outsiders, including Elon Musk, expressed skepticism about those reasons. Nonprofits don't necessarily struggle to attract talent.
The result was a kind of hybrid. OpenAI became a commercial company that would maintain the original nonprofit’s goal of developing general artificial intelligence that benefits humanity. That may sound like a technicality, but this hybrid status would prove to be an important part of the position OpenAI would come to occupy.
Back, then, to the question of why Altman is in Washington educating politicians.
Senator Richard Blumenthal put his finger on what’s at stake in a broader sense, in his opening remarks about the intersection of politics and technology:
“Congress has a choice. We had the same choice when we faced social media. We failed to capture that moment.”
AI has become an important issue, and a source of anxiety. Open letters call for pausing AI development. Eminent researchers like Dr. Geoffrey Hinton, often called the godfather of AI, have begun to express concern about their own life's work.
AI development stands at a crossroads. Politicians are somewhat confused, but they don’t want to repeat the same mistake that gave a handful of social media companies essentially free rein for over a decade.
Altman understands this. And you can understand his methods by reading his own blog post, modestly titled “How to be successful.”
“Believing in yourself is not enough — you also need the ability to convince others of what you believe,” Altman writes.
“My second major sales tip is to show up in person when it matters.”
That’s why Sam Altman shows up in person at the US Congress.
Because it matters that the politicians understand these issues — in the right way.
He also has OpenAI’s quasi-nonprofit status to lend him credibility. The message seems to be: he’s not here for the money. That Microsoft has invested $11 billion in the company is not something you bring up loudly in these settings.
But the information campaign aimed at elected officials didn’t begin with the congressional hearing. In the podcast Hard Fork, New York Times journalist Cecilia Kang reports that Altman has visited Washington DC multiple times and that, the same week as the congressional appearance, he attended a dinner with over sixty members of the House of Representatives. He has given technical demonstrations to individual politicians to explain how it all works. In short, he has made himself available to lawmakers in a way that is unusual. Silicon Valley generally keeps to its own coast and only comes east when absolutely necessary.
Most concerns about AI development are still hypothetical. Compare this to Mark Zuckerberg, who was summoned to explain the Cambridge Analytica scandal and has been called back multiple times since. The situation here is nearly the opposite.
Altman is getting ahead of the problems, rather than being dragged before Congress because of them.
Even if the timing is different, there are many similarities to the kind of language we’ve heard from tech company leaders before. Altman is asking politicians to regulate AI. But he is neither the first nor the only one to do so.
Back in 2019, Facebook’s then-COO Sheryl Sandberg said “new rules need to be written for the internet and we want to help make that happen.” The following year, Alphabet CEO Sundar Pichai said “companies like ours cannot simply build promising technology and let market forces dictate how it gets used.”
“Technology needs to be regulated,” Apple CEO Tim Cook told Time in 2019. “There are too many examples where lack of regulation has resulted in real harm to society.”
Almost every chief executive of a major tech company has, on multiple occasions, said they welcome regulation. Yet billions have been spent on lobbying to make sure it happened in the right way — and preferably very slowly. American regulation has largely failed to materialize, and the European process took a very long time. From that perspective, the lobbying appears to have worked quite well.
There are, however, other motives for requesting regulation beyond slowing or softening legislation.
Altman says his motivation is concern that AI could cause harm in the world. But a less noble explanation than the one he gave Congress can be found in a leaked memo written by a Google engineer. The memo describes how the real threat in AI, from both Google's and OpenAI's perspective, does not primarily come from other large tech companies. It comes from the many projects built on open source. These projects aren't as powerful as the largest and most expensive systems, but their results are surprisingly good. And, more importantly, they are free. Growing use of these open-source projects will also improve them further over time. What happens to competition when there are hundreds or thousands of AI developers, rather than just a handful?
When Sam Altman proposes a licensing regime for AI development, you should keep that argument in mind.
Like Facebook and Google, OpenAI built its market lead in an era of minimal regulation; in many cases, practically none at all. Facebook would find it substantially harder to push through its acquisition of Instagram if it were attempted today. Competition regulators have woken up to these questions in a way they simply hadn't before.
If tech companies are regulated now, the incumbents get the best of both worlds: having already grown freely in an open market, they can close off competition by loading new entrants with a mass of regulatory requirements. Being large and well-resourced, as Google and OpenAI are, means you can afford to absorb regulation. For a smaller startup, it's an entirely different kind of challenge.
Regulation becomes a way of defining what all market participants are permitted to do. But it also works to keep new entrants out.
From Altman’s blog post on how to become successful, again:
“Building up influence makes you hard to compete with. You can do this, for example, by having good relationships, a strong personal brand, or by becoming skilled in areas that overlap.”
Altman appears to have taken his own advice to heart.
He has built political relationships that have made him popular among lawmakers. He is knowledgeable, articulate, and clear-headed about AI. He understands the context he’s operating in.
By Sam Altman's own account, that is also the recipe for becoming hard to compete with. And it explains why this Silicon Valley executive is playing his cards differently from his predecessors.
The opportunity now exists to set the tone for an entirely new market — one in which his own OpenAI sits in the driver’s seat. It is, to say the least, elegant. But it is not only noble motives and a desire to improve the world that lie behind the charm offensive currently underway.