
Recently, some ChatGPT users have noticed a peculiar behavior: the chatbot addresses them by name while working through problems. This was not the default behavior previously, and several users say ChatGPT mentions their names even though they never shared them.

Reactions to the change have been mixed. Simon Willison, a software developer and AI enthusiast, described the feature as “creepy and unnecessary,” and another developer, Nick Dobos, said he disliked it. A cursory search of X turns up numerous users who are confused and wary of ChatGPT addressing them by name.

One user compared the experience to having a teacher constantly call on them: “It’s like a teacher keeps calling my name, LOL. Yeah, I don’t like it.”

It is unclear exactly when the change occurred, or whether it is related to ChatGPT’s upgraded “memory” feature, which lets the chatbot draw on past conversations to personalize its responses. Some users on X report that ChatGPT began addressing them by name even after they had disabled memory and personalization settings.

OpenAI has not responded to TechCrunch’s request for comment on the matter.

The backlash against this feature highlights the challenges OpenAI may face in making ChatGPT more “personal” for its users. Last week, the company’s CEO, Sam Altman, suggested that AI systems that “get to know you over your life” could become “extremely useful and personalized.” However, the recent reactions indicate that not everyone is convinced by this approach.

An article published by The Valens Clinic, a psychiatry office in Dubai, offers some insight into why the reactions have been so strong: names convey intimacy, but they can easily be overused.

According to the article, “Using an individual’s name when addressing them directly is a powerful relationship-developing strategy. It denotes acceptance and admiration. However, undesirable or extravagant use can be looked at as fake and invasive.”

Another possible explanation for the negative reaction is that ChatGPT’s use of names reads as a forced attempt to anthropomorphize an emotionless machine. Just as most people would not want their toaster to address them by name, they may not want ChatGPT to pretend it understands the significance of one.

This reporter had a similar experience earlier this week, when ChatGPT said it was doing research for “Kyle.” By Friday the change appeared to have been reverted, with the chatbot addressing me as “user” instead. Far from making the assistant feel more personal, the experience was unsettling, underscoring that the underlying models are synthetic and programmable.
