Sam Altman Raises Alarm on AI-Driven Fake Posts

Updated: September 9, 2025

Reading Time: 3 minutes

OpenAI CEO and Reddit investor Sam Altman had an unusual moment online this week.

While scrolling through the r/Claudecode subreddit, he noticed something odd: it was nearly impossible to tell whether posts were coming from real people or bots.

The subreddit has been buzzing with users praising OpenAI’s Codex, a programming tool that launched in May to rival Anthropic’s Claude Code.

In fact, so many posts claimed to be from users switching over that one Redditor joked: ā€œIs it possible to switch to Codex without posting about it?ā€

That running joke sparked a bigger thought for Altman: Are these voices even human?

ā€œI assume it’s all fake/bots, even though I know Codex growth is strong and the trend here is real,ā€ Altman admitted on X.

Why Altman Thinks It Feels Fake

Altman didn’t stop at suspicion.

He broke down his reasoning live on X. Here’s what he pointed out:

[Screenshot: Sam Altman's post on X]

In other words, humans now sometimes sound like AIs, while AIs were literally trained to sound like humans.

That overlap makes it hard to know what’s authentic.

Reddit and OpenAI: A Complicated History

Altman’s comments hit differently given his own ties to Reddit.

He sat on Reddit’s board until 2022 and was disclosed as a major shareholder during its IPO.

OpenAI’s language models also trained on Reddit posts, which adds another twist to the irony: the same platform that shaped AI voices now struggles to tell them apart from real people.

When Communities Turn on Companies

Online fandoms often swing from loyal cheerleaders to vocal critics.

The same thing happened with OpenAI itself.

When GPT-5 launched, instead of praise, Reddit and X lit up with frustration.

Users complained about everything from the model’s ā€œpersonalityā€ changes to its rapid credit burn.

Altman even hosted a Reddit AMA to address the backlash, admitting to rollout issues.

But the community’s love for OpenAI hasn’t fully recovered – negative posts remain common.

Which raises Altman’s big question: are those posts all real humans, or something else?

Is Social Media More Bot Than Human?

Altman isn’t alone in wondering.

A 2024 Imperva report revealed that over half of global internet traffic wasn’t human at all – much of it coming from bots and automated tools.

On X, the platform’s own AI assistant, Grok, suggested that ā€œhundreds of millions of botsā€ could be active there.

That might explain why AI Twitter and AI Reddit now feel, as Altman put it, ā€œvery fake in a way it didn’t a year or two ago.ā€

Quick Snapshot: Human vs. Bot Traffic Online

Year    % Human Traffic    % Bot Traffic    Source
2022    64%                36%              Imperva
2024    49%                51%              Imperva

If over half of the internet is bots, spotting authenticity becomes harder every day.

Could This Be Marketing for Something Bigger?

Not everyone takes Altman’s comments at face value.

Some speculate it might be part of a bigger plan.

Earlier this year, reports surfaced that OpenAI was quietly exploring a social media platform to compete with X and Facebook.

If true, his ā€œsocial media feels fakeā€ moment could be early groundwork for pitching a bot-free alternative.

Then again, nothing is confirmed; the project may not even exist.

But Would a Bot-Free Platform Even Work?

Here’s the irony: even if OpenAI built a ā€œpureā€ social platform, it wouldn’t be immune.

Researchers at the University of Amsterdam once built a social network made entirely of bots. And guess what?

The bots formed cliques, spread hype, and even created echo chambers – just like humans do.

In other words, whether it’s humans acting like AIs or AIs acting like humans, online spaces might always feel a little fake.

What do you think – when you scroll through Reddit or X, can you still tell who’s real?

Or has the line blurred too much?

Onome

Contributor & AI Expert