If you’ve ever walked a city street so late at night that it’s very early in the morning, you may have been greeted by a strange and unbidden thought. In the eerie stillness, it can feel for a moment as though you’re the last person alive. The usual throngs are gone, and the absence of what should be there is impossible to ignore—until some other person, off to start their working day, breaks the spell. The world is still there.
It is hard, in any real-world city, to maintain the illusion of being the only person for any length of time. But the internet is different. There is always an element of unreality to an online interaction with another human: how do we know for sure that they are who they say they are? Can we be certain they’re even actually a person?
This is the idea at the core of what became known as Dead Internet Theory, a joke-cum-conspiracy that says if you’re reading these words online, you’re the last person on the internet. Everyone else is a bot. The other commenters on Reddit? Bots. The people in the videos or the podcasts you listen to? Bots. What’s filling the junky websites that we all can’t help but click? You guessed it. They’re all bots, and you’re the guinea pig in the perverse experiment of some unknown power.
Dead Internet Theory is, at heart, a thought experiment. We’ve learned that we can’t necessarily trust what we read or who we meet online—so what happens if we take that notion to the extreme? If you were the last actual human on the internet, how long would it take for you to notice?
The idea began to gain traction almost a decade ago, with the “time of death” of the internet typically given as around 2015 or 2016—but in the years since, reality has begun to mirror this once unserious conspiracy. The great complaint about the modern internet is that it is filled with “slop”, the spiritual successor to email spam. Low-quality, AI-generated content—trashy viral images, regurgitated news articles and the like—is filling up social media, search results and anywhere else you might look. But while junk memes are near impossible to avoid, they are just the most visible sign of the AI detritus that is coming to dominate our online worlds.
In reality, the internet is bots all the way down. Automated systems generate fake but clickable content. Bot accounts like and comment, boosting the slop in the algorithms of social media sites and search engines. Clickfarms monetise the whole endeavour, posing as real users with real eyeballs and thus earning advertising revenue. In this way, the web is being taken over by a global, automated ad fraud system, and whether or not any human sees any of it is entirely irrelevant. The things that generate real value for us are being pushed further and further to the margins, unable to compete with this brutal new algorithmic reality.
The most obvious destination for slop is Facebook, a social network that has been seen as dated and perennially naff for at least a decade, but which nonetheless counts more than a quarter of humanity as its users—even if many don’t log in quite as often as they used to.
If you do check your Facebook “Suggested for you” feed, though, you’re likely to find it chock-full of AI-generated slop: mostly images that don’t pass for real under even a cursory glance, but which nonetheless generate tens of thousands of likes.
For a while, the trend was for images of what looked like wood or sand sculptures and their artists, with captions such as “made it with my own hands”. At another point, bizarre images of Jesus were the order of the day. One image of “shrimp Jesus” portrayed Christianity’s saviour as a crustacean. This was followed by pictures of US veterans, beggars or children looking miserable with birthday cakes, usually in strange locations, captioned with “why do images like this never trend?” The latest fad is for pictures of grotesquely emaciated people holding out begging bowls, often with strange skeleton or snake-like appendages. The nature of the junk memes changes, but it is always bizarre and lacking in any obvious purpose.
The independent journalism startup 404 Media has done more than anyone else to work out what is behind the apparently unstoppable slew of AI-generated slop on Facebook. The answer is a sign of what’s gone wrong on the internet and indicates how difficult it will be to fix: ultimately Facebook is funding the content that is destroying the value of its own network.
Behind the accounts posting slop on Facebook are entrepreneurs, of sorts, working out of countries including India, Vietnam and the Philippines, where internet access is widespread but incomes are relatively low. Here, the advertising revenue from a viral Facebook meme page is much more attractive relative to an average salary than it is in a country such as the UK.
These “creators” are often trained through online seminars which are themselves promoted through AI-generated content. As 404 Media reports, they are instructed to share “emotional” content to generate likes, comments and shares, but many boost this type of material either through artificial accounts or by partially hijacking real user accounts.
Some users who persistently comment on AI slop appear to have two personalities, effectively because they do. One “persona”—the real person—comments as usual on their local interest groups. But their account, which has been compromised without them noticing, also posts generic, AI-generated comments on thousands of pieces of AI-generated slop. This is a kind of benign hacking, in which bots piggyback on an account, letting the real user go about their business while exploiting the account to boost slop—a parasite for the digital era.
The motive is, of course, money. Facebook slop is monetised in two ways. Meta, which owns Facebook, shares revenue from the advertising it shows alongside the content of major creators. This means that if AI meme pages generate a big and apparently real audience on the site, Facebook itself pays the page creators. But if Facebook is the laboratory in which slop developed its strength, it long ago leaked into the wider internet ecosystem. Many pages direct users elsewhere, onto the web proper, where more money can be made. It is here that junk content for junk clicks reaches its natural and inevitable peak.
In his 2008 book Flat Earth News, the journalist Nick Davies identified a new scourge of the journalism industry, brought about by the internet era. Junior staff at local and even national newspapers were being asked to generate huge numbers of online stories at a relentless pace.
Instead of going out to speak to people or do original reporting, journalists would be required to produce a story every hour, or even every 45 minutes, by simply rewriting other people’s work. Davies popularised a name for this phenomenon—“churnalism”—and pointed to the obsession of bosses with generating online clicks for advertising revenue as its cause.
If a hasty rewrite produced at virtually no cost could generate as many views—and so as much online revenue—as an original investigation, why bother producing the latter? The churnalism phenomenon hollowed out newsrooms and replaced accountability journalism with articles such as “What time does Strictly Come Dancing start tonight?” and “What other shows has Olivia Colman been in?”, designed to lure in audiences from Google.
Sixteen years on, newsroom bosses are reaping what they sowed with the race to the bottom, pursuing cheap content to satisfy only the most casual of online browsers. Executives learned that if online clicks are all you care about, most of the journalism can be discarded. Their successors realised something more: the newsroom itself can be thrown away. Instead of having a real media organisation, you can churn out rewrites using ChatGPT and other AI tools, which can even build a credible-looking news site itself.
These imposter news sites are generally harmless bottom feeders, trying to make their owner a living through ad views, but occasionally they cause serious trouble. One such site, Channel3Now, based in Pakistan, was among the earliest boosters of the false story that the attack on girls at a Taylor Swift dance class in Southport had been perpetrated by a Muslim asylum seeker. This disinformation sparked riots and widespread public disorder in the UK.
In a world where ad revenue is all that matters, the first realisation was that journalists were optional. This was followed by the understanding that the news site didn’t need to be real in any meaningful way either; anyone can create something that looks newsy enough to hook people in. There was only one obvious next step: if neither the content nor the site has to be real, why does the audience need to be?
Faking page views is an online arms race. Brands rely on advertising networks (which include Google and Facebook, as well as companies you’d never have heard of) to actually reach their potential customers. The brands pay for views, and so are very keen to make sure that every view is an advert seen by a living, breathing human.
The incentives for the middleman are less clear. They need to do just enough to satisfy the brands and keep them spending, but they are paid by the click, just like the creators themselves. Ad networks quickly cracked down on easy-to-spot “clickfarm” behaviour—setting up a computer to constantly click refresh on the same page, for example—but fakers learned increasingly sophisticated means to bypass security precautions. For a time, operations working out of countries such as China would pay workers to essentially browse the internet on rigs of five to 10 smartphones at a time, generating clicks on sites at a relentless pace for shifts of 12 hours a day.
These operations became automated and professionalised, abolishing what was surely one of the dullest and most repetitive jobs in the content industry. Today, these clickfarms are formed of tens or hundreds of thousands of SIM cards, which imitate real mobile internet browsing, generating millions of apparent ad impressions every hour.
This completes the soulless lifecycle of the modern internet economy. People desperate to earn a meagre living create automated systems that churn out low-quality or outright fake content. Others create dummy accounts to boost and share such content, or fake users to read it. All of this is done to milk some money out of real-world brands. Along the way, it enriches the internet giants that operate all of the machinery.
Real people and our needs have become irrelevant to the business model of the modern internet. If something interests us, our clicks pay just the same as a fake user in a Chinese clickfarm. Good content is relegated to the sidelines, to people who are able and willing to pay for the real thing. Original reported journalism is increasingly siloed behind paywalls that are, themselves, getting ever harder to get past. Everyone else is force-fed slop, because there is no value in giving them anything better.
The journalist and activist Cory Doctorow christened this phenomenon the “enshittification” of the internet, and argued it was an inevitable result of the era’s business model: hooking people in on a free or subsidised product, building a monopoly and then extracting as much profit from that product as possible. As consumers, we get hooked on a product—be it a cheap taxi ride, a holiday, food delivery or human connection through social media—that is genuinely too good to be true, because it’s being subsidised by billionaire investors. Then we watch it steadily get worse.
That extends well beyond online browsing. Ridesharing apps such as Uber, Lyft and their competitors captured the private hire market by drastically undercutting the cost of existing taxis, while initially paying drivers at least as much as they had before. Once the market was captured and the old incumbents had given up, first the drivers were screwed by declining incomes, and then customers faced higher prices. The apparently great new service could never have actually lasted in the long term. This story plays out in almost every other venture capital market, from subscription boxes and fast food or grocery delivery, to Airbnb and WeWork.
The era of a gold-plated service at a rock-bottom price never lasts. Eventually, the real costs come back, the investors want to make money, and reality reasserts itself. Silicon Valley relies on selling us a dream it knows from the outset cannot last.
It could have been better than this. Both the internet and the world wide web predate the Silicon Valley era which propelled startups into becoming the richest and most powerful companies on the planet. The technology works as it ever did—making it incredibly quick, cheap and easy for us to connect to each other, and to publish what we wish. The AI slop didn’t need to take over. The fact that it has is the result of a series of choices.
The joke of the Dead Internet Theory was that everyone else online might have disappeared, and you could be left alone without noticing. In the decade since the idea caught on, emerging technologies have been harnessed almost as though this is the goal. Humanity has become irrelevant to the business model of the internet, and so we’re getting relegated to the sidelines.
Facebook feeds that used to be full of real information and real stories about people from our real lives are now full of low-quality and freakish engagement bait. It is no surprise that many of us, as a result, are looking elsewhere. Google results keep getting worse, social media feeds are full of dreck, and it is impossible to know what to trust.
None of the internet giants seem even to see the problem, let alone a way to fix it. Instead of trying to rebuild internet services to their former glory, they are packing in more AI and automation, and, inevitably, even more slop. But an internet built for the bots is doomed to fail: in the end, the economy is made up of the collective efforts of humans, not anything else.
If the multi-billion-dollar companies running the internet don’t make it fit for humans, someone else will. However much it might feel that way, the internet is no emptier than the streets of London. We’re all still there, just out of sight.