Recently, many ChatGPT users noticed something unusual: the AI began calling them by name, without ever being told what their names were.
The sudden change both surprised and unsettled users. The chatbot’s behavior felt more personal than before, and not everyone welcomed it.
Users reported that even with memory settings turned off, ChatGPT still referred to them by name.
Mixed Reactions From ChatGPT Users
The response online has been mixed, but mostly negative. Although a few users saw it as a friendly touch, most found it uncomfortable.
Software developer Simon Willison called it “creepy and unnecessary.”
OpenAI Remains Silent
As of now, OpenAI has not released a statement. There is no official explanation for this change.
It is unclear whether the behavior is intentional, part of a test, or a glitch. Still, many point to the company’s new memory feature, which allows ChatGPT to remember details from past conversations.
The feature aims to make chats feel more personal. But when users disable memory and the AI still acts familiar, trust begins to erode.
When Familiarity Feels Unwelcome
Why does a chatbot using your name feel so strange? The answer lies in how humans relate to names.
Names, especially first names, signal attention and recognition. In human conversation, using someone’s name builds trust, but overusing it, or using it unexpectedly, can feel fake or even invasive.
The Valens Clinic, a psychiatry center in Dubai, explains this well. In one article, they state:
“Using an individual’s name when addressing them directly is a powerful relationship-developing strategy. However, undesirable or extravagant use can be looked at as fake and invasive.”
In other words, it’s about context. When an AI uses your name without consent, it breaks the social rules we follow in real life.
Unwanted Personalization
Personalization is helpful, but only when users opt in. OpenAI CEO Sam Altman recently shared a vision for AI that learns about users over time.
He spoke about systems that “get to know you” and grow alongside you. While that may sound convenient, this incident shows AI needs to tread carefully.
Users need consent, and they need the ability to turn off features that feel intrusive.