This analysis was first published in SvD Näringsliv, in Swedish, on December 11th, 2021.
Behind the scenes at TikTok and Instagram, there are decisions and priorities where young people’s health is weighed against money. And the people in charge hide and duck the truly hard questions.
“The algorithm finds what I’m interested in. That makes it very personal. It’s almost as if it understands you better than you understand yourself,” says Fiona, 25, in SvD’s investigation of the app TikTok.
What she’s describing is the entire point. The algorithm is supposed to predict what you’ll watch, and the better it is at that, the longer you stay in the app and the more ads you see, which is how TikTok makes money.
The model is simple, but treacherous. A generous interpretation is exactly that: you are shown the content you want. What you watch, like, or send on to a friend all helps sharpen the algorithm’s precision. It sounds relatively innocent.
But a more accurate analysis is that you are presented with the content that the algorithm thinks you will actually watch. And as SvD’s investigation shows, the overlap between what you want and what you end up watching is not necessarily very large. Even people who have explicitly marked that they don’t want a certain type of content can still get it in their feed over and over.
TikTok’s Nordic spokesperson, Parisa Khosravi, tells SvD that “we have several settings available for our users to control their own experience on the platform, including by selecting ‘not interested’ on a video. I’m sorry if someone has nonetheless been met with content they have actively opted out of.”
That’s regrettable, of course, but the description paints a picture of an algorithm that runs itself. It does not. It’s programmed with an intent, and in TikTok’s case it fulfils that intent extraordinarily well compared with other social networks, which is why the app has become so popular.
TikTok isn’t alone in these deflections. This week, Adam Mosseri, the head of Instagram, testified before the US Senate. The topic was, fittingly, online safety for children. Mosseri described how technically complex it is to distinguish a 12-year-old, a child under US law, from a 13-year-old who can be treated as an adult. One of his proposed solutions was for someone else to fix the problem: the phone makers. “It’s a challenge for the industry,” Mosseri said, pushing the question further away from his own business.
What both Khosravi and Mosseri are right about is that these are genuinely complex problems to solve. This is hard, even for the most talented programmers. But what they overlook is that the problems are also entirely of their own making. Eating disorders weren’t invented by TikTok, but being fed encouragement around them is a direct consequence of how the app is built. Arguing, as TikTok’s spokesperson and Instagram’s head have done, that this is not their intent does not free them from responsibility.
Nor can they claim ignorance of the phenomenon. The exposure of harmful behaviour has long been a well-known fact within the social networks. The Wall Street Journal’s series “The Facebook Files” described internal research showing, for example, that young women felt worse after using the company’s own service, Instagram. That was also one of the reasons the Senate invited Instagram’s head to this week’s hearing.
The research community has also pointed to the same phenomenon for a long time. A study from 2007 showed that there were dedicated forums whose sole purpose was to encourage eating disorders. Another study, from 2010, described how such content differed between social networks.
Nearly 15 years later, those in charge are still hiding behind the hard questions they themselves helped create. Responsibility is most easily shown by what gets prioritised. You can still, with a few simple search terms, end up straight in a world that helps you become unwell. The priorities can hardly be clearer than that.