OpenAI CEO Sam Altman has weighed in on the Dead Internet Theory, a once-fringe idea that claims a large portion of today’s online activity is no longer driven by humans but by automated systems, primarily artificial intelligence (AI).
Altman, whose company created ChatGPT, posted his thoughts on X (formerly Twitter) on Thursday, expressing surprise at the number of AI-driven accounts he has noticed on the platform.
“I never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now,” Altman wrote.
By mentioning LLMs — large language models like those used in ChatGPT and other advanced AI systems — Altman directly tied the issue to the technology he helped popularise.
The post went viral almost immediately, drawing intense criticism and a wave of memes across social media.
What is the Dead Internet Theory?
The Dead Internet Theory first appeared over a decade ago on online forums such as 4chan, where a user speculated that the majority of online content and interactions might not come from real people.
Instead, users on these forums suggested that bots and automated systems were creating posts, comments, and even entire conversations.
According to this early version of the theory, the internet was becoming an empty shell, where human participation was being replaced by algorithms designed to simulate activity.
The result would be a network filled with lifeless interactions, giving users the illusion of a bustling digital world while in reality, they were surrounded by machines.
Initially, the idea was dismissed as an ironic conspiracy theory — an exaggerated reflection of people’s growing distrust of social media platforms and their opaque algorithms. For years, it remained largely a niche topic discussed in online communities sceptical of Big Tech.
However, recent technological advances have given new weight to the theory. The rapid rise of generative AI systems, such as OpenAI’s ChatGPT and Google’s Gemini, has made it easier than ever to automatically produce convincing text, images, audio, and video.
How does synthetic content on social media work?
In today’s social media landscape, bots are no longer limited to simple, repetitive actions like following accounts or posting spam links.
Modern AI-powered agents are capable of crafting realistic narratives, generating viral memes, and even engaging in complex conversations that are difficult to distinguish from human interactions.
These AI-driven accounts can rapidly create and share content to maximise engagement on platforms like Facebook, Instagram, TikTok, and X. Posts are often tailored to capture attention through emotional appeals, humour, or sensationalism.
Once a piece of AI-generated content begins to gain traction, other automated accounts step in to like, comment, and share it — creating a self-sustaining feedback loop that amplifies the reach of synthetic material without any direct human involvement.
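This dynamic can be sketched with a toy simulation. The example below is purely illustrative; the number of bots, engagement probabilities, and reach multipliers are assumptions rather than figures from any real platform, but it shows how automated engagement can compound a post’s apparent popularity with no humans involved:

    import random

    # Toy model of the feedback loop described above. All numbers are
    # illustrative assumptions, not data from any social media platform.
    NUM_BOTS = 500                 # hypothetical pool of automated accounts
    BASE_ENGAGE_PROBABILITY = 0.02
    ROUNDS = 5

    random.seed(42)
    reach = 10  # the post starts with a small seeded audience

    for round_number in range(1, ROUNDS + 1):
        # Bots are assumed to engage more often as the post becomes more
        # visible, which is what makes the loop self-sustaining.
        visibility_boost = min(reach / 1000, 1.0) * 0.1
        engagements = sum(
            1 for _ in range(NUM_BOTS)
            if random.random() < BASE_ENGAGE_PROBABILITY + visibility_boost
        )
        reach += engagements * 20  # each like or share exposes the post to more accounts
        print(f"Round {round_number}: {engagements} bot engagements, estimated reach {reach}")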
The motivations behind this activity are often financial. Social media platforms offer ad revenue-sharing programmes, rewarding popular accounts for generating views and engagement.
As a result, there is a strong incentive for bad actors to create armies of AI-run accounts that appear legitimate. High follower counts make these accounts seem trustworthy to real users, who are then more likely to engage with and spread their content.
Over time, these accounts can be sold or repurposed for other uses, including spreading misinformation or promoting products. This dynamic has given rise to a shadow economy built entirely on fabricated online influence.
Is there evidence of widespread bot activity?
Several studies over the past decade have provided evidence supporting the presence of large-scale automated activity online.
An analysis of 14 million tweets posted between 2016 and 2017 revealed that bots played a significant role in spreading articles from unreliable sources. The study found that high-follower accounts — many of which were automated — helped legitimise false or misleading information, leading real users to believe and reshare it.
In 2019, research indicated that bot-generated content heavily influenced public discussions following mass shootings in the United States. These bots amplified certain narratives, distorted facts and fuelled divisive conversations.
Reports from 2022 estimated that nearly half of all internet traffic originated from bots rather than humans.
With the advent of more sophisticated generative AI systems, the quality of synthetic content has only improved, making it harder to identify and filter out.
How does generative AI complicate the issue?
Generative AI has revolutionised content creation by making it possible for anyone to produce realistic images, text, audio, and video with minimal effort.
While this technology has many legitimate applications — such as helping writers, artists, and developers — it has also made it far easier to flood the internet with fake content.
On platforms like YouTube, AI is being used to produce inaccurate history videos, spreading misinformation to unsuspecting viewers.
Instagram and Facebook are increasingly home to AI-generated images, some of which feature unrealistic or disturbing designs, while TikTok has seen a surge in synthetic videos that are difficult to verify.
Even the entertainment industry has been affected. AI is now capable of editing classic films, creating digital replicas of deceased musicians, and generating deepfake performances.
This aligns with the Dead Internet Theory’s central argument: the web may still contain genuine human interaction, but the overall environment is increasingly artificial.
Is the internet as we knew it still alive?
While AI-generated content is becoming increasingly dominant, experts point out that the internet is far from completely “dead.”
Human communities remain active and influential online, particularly on platforms like X and TikTok, which continue to produce viral moments that shape popular culture and even impact politics.
For example, collective user activity on X has influenced corporate decisions and driven national conversations. There have been instances where backlash over something as seemingly trivial as a brand’s logo change prompted companies to reverse their decisions.
These digital movements have even reached political offices, with viral memes making their way into global affairs.
However, these genuine moments of human interaction now coexist with a constant background noise of bot-generated content.
Altman’s concern over bot-driven platforms ties into his work outside of OpenAI. In 2019, he co-founded a company originally called Worldcoin, now rebranded as World Network, which seeks to address the growing challenge of distinguishing humans from bots online.
The initiative uses biometric verification, such as iris scanning, to create a reliable digital identity system. Its goal is to allow users to prove they are real people without revealing private personal data.
This could play a vital role in reducing the influence of AI-driven fake accounts in the coming years.
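As a rough illustration of that idea, a proof-of-personhood registry can keep only a one-way hash of a biometric template, letting it confirm that each verified account belongs to a distinct human without retaining the underlying biometric or any other personal data. The sketch below is a simplified assumption of how such a check might work, not World Network’s actual protocol, which relies on dedicated hardware and far stronger cryptography:

    import hashlib

    # Hypothetical sketch of a proof-of-personhood check. The raw biometric is
    # hashed and discarded; only the digest is stored for uniqueness checks.
    registered_hashes = set()  # stand-in for a registry of already-verified humans

    def register_person(iris_template: bytes) -> bool:
        """Register a person if their biometric has not been seen before."""
        digest = hashlib.sha256(iris_template).hexdigest()
        if digest in registered_hashes:
            return False  # the same person cannot create a second verified identity
        registered_hashes.add(digest)
        return True

    print(register_person(b"example-iris-code-1"))  # True: new, unique person
    print(register_person(b"example-iris-code-1"))  # False: duplicate rejected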
With inputs from agencies