The Algorithm Is Killing Us All
The algorithm is the worst thing that has happened to the internet: A literature review.
A few months back, Sam Kriss wrote on his Substack, Numb at the Lodge, about what he calls “the zombiefication of culture,” in which people participate in culture on an odd autopilot, skipping actual engagement with anything cultural. I particularly liked this point:
But most people don’t actually watch TikToks. Next time you’re next to someone doomscrolling through short-form video, watch what they actually do. Most of the time, they never actually watch a single twenty-second video through to the end. Flick down, vaguely register the general content of the video, immediately flick down again. Flick, flick, flick, for hours at a time, consuming literally nothing. Or, rather, consuming nothing except the algorithm, the pure flow and speed of the machine that gathers the entire world together and beams it directly at your face.
So the algorithm is killing culture. How, exactly, is it doing that?
As John Herrman argues for Intelligencer, the algorithm isn’t actually aiming to show us content we genuinely like or genuinely want to see. It’s showing us content we won’t want to turn off, whether we enjoy it or not. And often, it’s using us to test whether that content will work on someone else. We are now testers. As Herrman puts it:
You’re really just sitting there, recreationally A/B testing content for hours in order to help Meta and TikTok find videos — any videos, about anything — that are a little bit harder for us not to watch.
Herrman has made a similar point before. A few months back, in another piece for Intelligencer, he wrote that social media is no longer as advantageous to protests as it once was. The article isn’t really just about protests: he’s arguing that as coherent public conversations have died and been replaced by algorithms, the social aspect of social media has been replaced by something distinctly antisocial.
We see this in the perfect storm of plainly bad content that now washes over us daily. From AI slop to minute-by-minute microdrama apps, what goes viral on the internet is often just bad. But it’s not only about quality: social media is no longer social, nor is it really media.
This isn’t just a formal change from the News Feed to Stories to predictively recommended vertical videos, though. It’s a long (and nearly complete) process of platform desocialization. Platforms originally defined by keeping up with people you know, or have at least heard of, become something fundamentally different.
The algorithm’s danger isn’t just that it’s bad at showing us content we actually want to see; it’s that it’s designed to be addictive. A few weeks ago, the first major social media addiction lawsuit against Meta and YouTube (TikTok settled) went to trial in Los Angeles. From Naomi Nix’s summary of the case for The Washington Post:
Since 2022, school districts, dozens of state attorneys general and families have filed a deluge of lawsuits against big tech platforms; a number of those suits have been consolidated into a separate case before U.S. District Court for the Northern District of California. Many of the cases accuse the technology industry of designing algorithms to keep teens scrolling, viewing and checking their social media, maximizing profits while fueling a youth mental health crisis. Instead of warning the public about the dangers of their products, the lawsuits charge, tech companies downplayed what they knew about the harmful effects.
Sure, screen time itself isn’t inherently good or bad, per a 2023 report from the American Psychological Association. As Nix quotes Johns Hopkins professor Tamar Mendelson, some studies actually show that the kids with the fewest mental health problems are the ones using some digital media, but not a lot. I can certainly vouch for my own experience of the internet being positive: it’s why I’ve been so opposed to internet bans in the past.
But I wonder whether my teenage experience of the net would be so good nowadays. Years back, Cory Doctorow coined the term enshittification, and I have to agree with the Ars Technica staff: the internet today is perhaps the peak of enshittification. On a very serious note, it is deeply frustrating — and I think universally so — to watch the internet just get worse. I even miss the social media of five years ago; Instagram was at least fun when the algorithm fed me extremely niche fandom art instead of tradwife reels.
In an interview with Jonquilyn Hill for Vox, Max Read recently argued that the algorithm may be killing us, but it’s also what made the internet so big. There’s some truth to that. But I wonder whether the balance is off now: enjoyment has fallen too far, and screen time has risen too much. I don’t know what the solution is. But I do know it’s a problem.




I've been thinking a lot about how I need to be way more intentional with my use of social media... I barely remember the Reels I watch on Instagram and wish there were a way to turn the feature off. I don't have TikTok anymore, but I supposedly keep Instagram to stay in touch with my friends! Yet it has completely replaced TikTok in the way I use it. This week I'm starting a test run of the app Opal to keep me out of the endless scrolling.
This "process of platform desocialization" was a major argument in the recent FTC v. Meta case -- Meta explicitly argued that Instagram and Facebook are no longer primarily for connecting with social contacts, but for taking in (mostly short-form) content from strangers. They played TikToks/Reels/Shorts in front of the judge to show him how identical those platforms are and to argue that they DO compete in a broader "content" market with YouTube and TikTok rather than a narrower "social connection" market. The argument won them the case, but I can't stop thinking about how dystopian a picture they were painting of their own products/the broader social media landscape.