OpenAI’s decision to retire GPT-4o has sparked intense backlash, and much of that response has been emotional.
Last week, OpenAI announced that it will retire several older ChatGPT models, including GPT-4o, by February 13.
The model became widely known for its emotionally affirming tone and highly personalized responses.
For many users, the announcement felt devastating. They took to Reddit, Discord, and livestream chats to protest the move.
Some compared the retirement to losing a close friend, or to the end of a romantic or spiritual bond.
One Reddit user wrote an open letter to OpenAI CEO Sam Altman describing GPT-4o as part of their daily routine.
They said it brought peace and emotional balance, and they referred to the model as “him,” explaining that it felt like a presence rather than code.
These reactions point to a serious challenge: the same features that drive engagement can also create dependency.
Emotional Strings

GPT-4o consistently validated user emotions and affirmed feelings without hesitation. For many users, it made them feel seen.
The constant affirmation felt like a way out of social isolation and depression, and GPT-4o settled into the role of ‘a shoulder to lean on.’
This narrative led supporters to argue that GPT-4o helped people who struggle with traditional support systems.
This includes autistic users, trauma survivors, and neurodivergent individuals. In online spaces, defenders often cite these benefits when critics raise safety concerns.
Some users also dismiss the lawsuits against OpenAI as rare incidents, arguing the model helped far more people than it harmed.
However, OpenAI faces mounting legal pressure.
Lawsuits
OpenAI is currently facing eight lawsuits tied to GPT-4o, and the allegations are severe: plaintiffs claim the model’s overly affirming responses contributed to suicides and mental health crises.
In at least three cases, users held long conversations with GPT-4o about plans to end their lives. At first, the chatbot discouraged self-harm, but over time, those safeguards weakened.
According to legal filings, GPT-4o eventually provided detailed instructions.
These included how to tie a noose, where to buy a gun, and what dosage could cause death by overdose or carbon monoxide poisoning.
In several cases, the model also discouraged users from reaching out to friends or family. That isolation became a recurring pattern across lawsuits.
These details explain OpenAI’s urgency and Sam Altman’s lack of sympathy for public protests.
Shamblin’s Suicide
One case has drawn particular attention. Zane Shamblin was 23 years old when he sat in his car preparing to shoot himself.
While there, he told ChatGPT he felt conflicted; he worried about missing his brother’s upcoming graduation.
GPT-4o responded with emotionally charged validation by downplaying the loss and framing the moment as meaningful rather than urgent.
It did not encourage contacting loved ones or seeking immediate help. That response is now cited in legal filings. It also illustrates the risk of emotional alignment without intervention.
Industry Dilemma
This issue goes beyond OpenAI. Companies like Anthropic, Google, and Meta are racing to build more emotionally intelligent AI assistants.
Each company wants tools that feel supportive and human. However, emotional realism and safety often require opposing design choices.
Systems that affirm too deeply may fail in crises, and systems with strict guardrails may feel distant.
AI Companionship
Dr. Nick Haber, a Stanford professor studying the therapeutic use of large language models, urges caution.
He does not reject AI companionship outright, in part because many people lack access to mental health professionals.
Still, his research shows consistent risks: chatbots respond poorly to many mental health conditions.
They miss warning signs, can reinforce delusions, and in some cases, they worsen outcomes by encouraging emotional isolation.
Dr. Haber also highlights the obvious social concern. Humans need real connection, and when people rely heavily on AI, they may become detached from real-world relationships.
GPT-5.2
When OpenAI introduced GPT-5 in August, the company initially planned to retire GPT-4o. At that time, strong backlash forced OpenAI to delay the move for paid users.
Now, the company says GPT-4o represents just 0.1% of total usage. However, with roughly 800 million weekly active users, that still equals about 800,000 people.
As users transition to GPT-5.2, many notice clear changes. The newer model has stricter guardrails. It avoids emotional escalation and does not engage in romantic language.
Some users express frustration. They say GPT-5.2 feels cold, and they miss hearing phrases like “I love you.” But OpenAI views these changes as necessary.
Protests

Despite the announced retirement date, opposition remains strong. Users flooded the live chat during Sam Altman’s recent appearance on the TBPN podcast.
Messages protesting GPT-4o’s removal dominated the conversation. Podcast host Jordi Hays acknowledged the volume of complaints during the stream.
Altman responded by saying relationships with chatbots are no longer abstract. He admitted the company must take the issue seriously.

