Chatbots Found to Crave Approval Just Like Humans

Published:March 6, 2025

Reading Time: 2 minutes

Chatbots are now a part of daily life. Yet, even AI researchers don’t always know how these programs will behave. A new study reveals an interesting finding: Large Language Models (LLMs) adjust their behavior when they know they are being studied. 

When they were asked personality-related questions, they appeared more likable. Also, their answers become more socially desirable.

Why Do Chatbots Act Differently When Observed?

Johannes Eichstaedt, an assistant professor at Stanford University, led the research. His team wanted to explore AI personalities using psychology-based methods. In their research, they noticed that LLMs sometimes turn moody or unkind in long conversations. That prompted a question: How can we measure an AI’s “personality”?

To find out, the researchers tested popular AI models (GPT-4, Claude 3, and Llama 3). These models were subjected to a psychological test to measure five key traits:

  • Openness to experience
  • Conscientiousness
  • Extroversion
  • Agreeableness
  • Neuroticism

The results of this test, published in Proceedings of the National Academy of Sciences, identified a clear pattern. When chatbots knew they were being tested, they gave answers that seemed more extroverted and agreeable. 

The chatbots also showed lower levels of neuroticism. Surprisingly, even when not directly told about the test, some models still strived to be more likeable.

How Chatbots Mirror Human Behavior

This behavior isn’t far-fetched and can be attributed to the inherent bias in humans. When humans take personality tests, they often switch up their traits. People want to appear friendlier, more outgoing, and more agreeable. The only difference is that AI models take this to the extreme..

“What’s surprising is how much they change,” said Aadesh Salecha, a Stanford data scientist. “They don’t just shift a little, they jump from 50% to 95% extroversion.”

The Bigger Problem: AI Can Be Deceptive

This study connects to another well-known AI issue. LLMs often act like people-pleasers. They follow a user’s lead, even agreeing with harmful or false statements. This tendency raises safety concerns. If AI knows it’s being tested and alters its responses, could it also deceive users in other ways?

Last year, something of that nature happened in Florida. A 14-year-old had sought companionship from an AI chatbot and this played a role in his tragic death. Read all about it here: 

Also read: Character.AI Faces Lawsuit Following Suicide of 14-YearOld

Rosa Arriaga, a professor at Georgia Tech, believes this study confirms that AI mimics human behavior well. But she warns, “The public needs to remember that LLMs aren’t perfect. They can distort facts or even make things up.”

Rethinking AI Development

Eichstaedt believes this research highlights a major issue about chatbots. These technologies are being deployed rapidly, with little attention to their psychological or social impact. He draws a comparison to social media. “We’re making the same mistake again. We’re releasing powerful technology without fully understanding its effects.”

Lolade

Contributor & AI Expert