OpenAI CEO and Reddit investor Sam Altman had an unusual moment online this week.
While scrolling through the r/Claudecode subreddit, he noticed something odd: it was nearly impossible to tell whether posts were coming from real people or bots.
The subreddit has been buzzing with users praising OpenAI's Codex, a programming tool that launched in May to rival Anthropic's Claude Code.
In fact, so many posts claimed to be from users switching over that one Redditor joked: "Is it possible to switch to Codex without posting about it?"
That running joke sparked a bigger thought for Altman: Are these voices even human?
"I assume it's all fake/bots, even though I know Codex growth is strong and the trend here is real," Altman admitted on X.
Why Altman Thinks It Feels Fake
Altman didnāt stop at suspicion.
He broke down his reasoning on X, and it comes down to this: humans now sometimes sound like AIs, while AIs were literally trained to sound like humans.
That overlap makes it hard to tell what's authentic.
Reddit and OpenAI: A Complicated History
Altman's comments hit differently given his own ties to Reddit.
He sat on Redditās board until 2022 and was disclosed as a major shareholder during its IPO.
OpenAIās language models also trained on Reddit posts, which adds another twist to the irony: the same platform that shaped AI voices now struggles to tell them apart from real people.
When Communities Turn on Companies
Online fandoms often swing from loyal cheerleaders to vocal critics.
The same thing happened with OpenAI itself.
When GPT-5 launched, instead of praise, Reddit and X lit up with frustration.
Users complained about everything from the model's "personality" changes to its rapid credit burn.
Altman even hosted a Reddit AMA to address the backlash, admitting to rollout issues.
But the community's love for OpenAI hasn't fully recovered – negative posts remain common.
Which raises Altman's big question: are those posts all real humans, or something else?
Is Social Media More Bot Than Human?
Altman isnāt alone in wondering.
A 2024 Imperva report revealed that over half of global internet traffic wasn't human at all – much of it coming from bots and automated tools.
On X, the company's own AI assistant Grok suggested that "hundreds of millions of bots" could be active on the platform.
That might explain why AI Twitter and AI Reddit now feel, as Altman put it, "very fake in a way it didn't a year or two ago."
Quick Snapshot: Human vs. Bot Traffic Online
| Year | % Human Traffic | % Bot Traffic | Source |
|------|-----------------|---------------|---------|
| 2022 | 64% | 36% | Imperva |
| 2024 | 49% | 51% | Imperva |
If over half of the internet is bots, spotting authenticity becomes harder every day.
Could This Be Marketing for Something Bigger?
Not everyone takes Altman's comments at face value.
Some speculate it might be part of a bigger plan.
Earlier this year, reports surfaced that OpenAI was quietly exploring a social media platform to compete with X and Facebook.
If true, his "social media feels fake" moment could be early groundwork for pitching a bot-free alternative.
Then again, nothing is confirmed; the project may not even exist.
But Would a Bot-Free Platform Even Work?
Here's the irony: even if OpenAI built a "pure" social platform, it wouldn't be immune.
Researchers at the University of Amsterdam once built a social network populated entirely by bots, and guess what?
The bots formed cliques, spread hype, and even created echo chambers, just like humans.
In other words, whether it's humans acting like AIs or AIs acting like humans, online spaces might always feel a little fake.
What do you think – when you scroll through Reddit or X, can you still tell who's real?
Or has the line blurred too much?