This analysis was first published in SvD Näringsliv, in Swedish, on May 16th, 2023. This piece was translated from Swedish by Claude. Some phrasing may differ from a human translation.
Google is positioning itself as the responsible player in artificial intelligence. It sounds good. But the stance could become complicated as competitors pick up the pace.
There was no rock band performing. It was more like a tech festival — with AI as the theme.
The stage at Shoreline Amphitheatre in Mountain View, California, was unusually colorful when Google CEO Sundar Pichai stepped up.
Google’s annual developer conference, I/O, tends to be a fairly sleepy affair. A raft of internal projects is presented at an event aimed primarily at the developers who build on Google’s products.
For the average user, the presentations can easily feel a bit too technical and inward-looking.
This time, however, there was a theme the outside world was genuinely interested in. AI — artificial intelligence — is on everyone’s lips, and now we would finally learn how Google planned to respond to the threat from products like ChatGPT.
Only a few minutes into Pichai’s presentation, the key word was uttered: “responsible.”
“With a bold and responsible approach, we are reimagining all of our core products — including Search,” Pichai said.
It has now been seven years since he declared that Google would become an “AI-first company.” But if that was the case, how had they ended up falling behind in this latest boom? The internal answer seems to be precisely that Google has been more “responsible” than the rest. The word recurred many times throughout the presentations.
To understand where this framing of responsibility comes from, we need to rewind a little.
On the same stage in Mountain View in 2018, the same presenter — Sundar Pichai — demonstrated how an AI tool could phone a hair salon and book an appointment, apparently without the hairdresser realizing she was speaking with a robot.
The reception was chilly. Professor and columnist Zeynep Tufekci described it as an example of Silicon Valley having lost its ethical bearings.
Two years later, Google found itself in controversy again, after firing researcher Timnit Gebru. She had led a team examining the ethics of AI and the potential consequences of its development.
Gebru was also co-author of a research paper that a Google manager objected to. When the dispute couldn’t be resolved, she was let go.
The summer of 2022 brought the next headache. Google employee Blake Lemoine claimed that one of their AI language models, LaMDA, was expressing itself in such a human-like way that it had become sentient. Had the technology gone too far? Lemoine was subsequently fired.
Seen in this light, the word “responsible” becomes easier to understand. Google has been at the forefront of these questions, but has also struggled to navigate the complicated territory between AI technology and ethics.
Google now wants to communicate that it hasn’t been slow — but rather has been taking responsibility for a more careful approach to progress in the field.
The timing, however, argues against this framing.
At the end of January, Google’s leadership declared a “code red” after ChatGPT’s immediate global success. Suddenly there was urgency — even Google’s co-founders Larry Page and Sergey Brin were brought in to work on the AI strategy.
For a company that had claimed for seven years to prioritize AI above all else, everything now had to happen at once. It looks rather more like competition accelerating the product roadmap than any sudden resolution of ethical questions.
Around this time, AI heavyweight Dr. Geoffrey Hinton chose not only to leave the company but to publicly warn about the pace of AI development.
Positioning around a specific concept is a familiar strategy. Apple’s emphasis on privacy has hardly escaped anyone’s notice.
But Apple’s privacy commitment has done real damage to its competitors. It’s hard to see how Google’s “responsibility” framing could have the same effect.
Responsibility is also a relative concept. How much of it is enough? And does it extend so far that you would sacrifice good business for what others consider to be the right thing? These are the questions Google will face as AI development accelerates.
Being responsible is — and sounds — good. But it’s easier to say than to be. Especially when things are moving fast and the competition is breathing down your neck.