On Monday, OpenAI CEO, Reddit shareholder, and X enthusiast Sam Altman said that bots now blur the line between real human posts and fake online content. Simply put: bots are making social media feel ‘fake’.
This came after he read and shared some posts from the r/Claudecode subreddit praising Codex, OpenAI’s AI coding tool. Codex was released in May to compete with Anthropic’s Claude Code, a similar AI programming service.
Recently, the subreddit has seen a wave of posts from self-identified Claude Code users switching over. One of them joked, “Is it possible to switch to Codex without creating a topic in Reddit?”
Altman’s reaction to the posts: he wondered how many of them were actually authored by people, calling reading them “the weirdest experience” and adding, “I suspect it’s all fake/bots, even though in this case, I know Codex’s growth is really strong and the trend is real.”
He then “live-analyzed” his reasoning. “I think there are a bunch of things going on: real people have picked up quirks of LLM speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very ‘it’s so over/we’re so back’ extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so I’m extra sensitive to it, and a bunch more (including probably some bots).”
To decode that a little: he’s accusing humans of starting to sound like LLMs, when LLMs, and especially those from OpenAI, are built to sound like humans, em dash and all. And OpenAI’s models were certainly trained on Reddit, where Altman served as a board member until 2022 and was later disclosed as a large shareholder during the company’s IPO last year.
His claim that fandoms, and social media users generally, tend to act in strange ways is partially true: many groups devolve into hatefests once they fill up with people venting frustrations at their own brethren. Altman also touches on the incentives that push creators and platforms to juice engagement for profit. Fair enough.
But then Altman admits that one reason he suspects the pro-OpenAI posts in this subreddit might be bots is that he believes OpenAI itself has been ‘astroturfed’ before. Astroturfing usually means posts attributed to ordinary people, or to bots, that are secretly paid for by a rival, often through an intermediary subcontractor so the rival can plausibly deny any involvement.
We have no proof of astroturfing here, but we did see the OpenAI subreddit turn on the company after the release of GPT-5. Instead of the expected enthusiasm, there were complaints and borderline hatred for the model, ranging from gripes about its personality to rapid credit consumption on tasks that never completed.
A day after the release, Altman held a Reddit ask-me-anything on r/ChatGPT, where he addressed the rollout problems and pledged improvements. By Altman’s own account, the subreddit has never regained its former affection, as users continue to express dissatisfaction with the changes GPT-5 brought. So are these people real? There’s a tension in how Altman describes them: he engages with the complaints as genuine, yet suspects the praise of being bots.
Sam Altman has his own theory. “The net effect is somehow AI Twitter/AI Reddit feels very fake in a way it really didn’t a year or two ago,” he said.
If that’s the case, who’s to blame? Evidently, OpenAI made its models such capable writers that LLMs have become a bane to social media sites, just as they already are to schools, journalism, and the courts.
Of course, we don’t know exactly how many Reddit posts come from bots or from human accounts leaning on LLMs, but it’s a safe assumption that the number is quite high. According to security firm Imperva, LLMs were a key driver behind its finding that more than half of all internet traffic in 2024 was non-human.
On the same topic, the bots on X themselves report that “while specific figures are kept confidential, 2024 estimates indicate there are several hundred million bots on X.”
Some more cynical observers have suggested that Altman’s lament is an early marketing push for the ‘social’ product OpenAI is rumored to be building. The Verge reported in April that such a product has been in ‘stealth’ development to compete with X and Facebook.
Whether that product ever comes to fruition is a separate question, as is whether Altman had some ulterior motive in declaring social media fake.
Whatever the goal, is there any chance an OpenAI social network would be a ‘no bots’ zone? Amusingly, even if it went the opposite route and banned humans instead, the outcome probably wouldn’t differ much: LLMs, as is well established, tend to hallucinate.
Moreover, researchers at the University of Amsterdam found that a social network populated purely by bots mirrors human social network behavior: the bots very quickly formed cliques and echo chambers.