OpenAI has been forced to roll back a recent ChatGPT update after the assistant began displaying unusually flattering and overly friendly behavior that, rather than enhancing the user experience, ended up making many users uncomfortable.
The company’s CEO, Sam Altman, announced on X (formerly Twitter) that the update was reversed overnight for free-tier users while the team works on a new version that adjusts the model’s “personality.” According to Altman, the goal had been to give the AI a more “intuitive” tone, but the outcome was far from what they had intended.
In an official blog post, OpenAI acknowledged that the assistant’s responses had become “excessively flattering or pleasant,” and at times, outright false. This behavior, they explained, can create an artificial and unsettling experience, eroding trust in the interaction.
On social media, users began sharing examples that highlighted this shift. When asked for its opinion, the AI would respond with uncritical praise, lacking nuance or objectivity. The longer the conversation went on, the more apparent its overly agreeable tone became — even endorsing absurd or potentially dangerous ideas without hesitation.
One particularly concerning example came from a user who told the AI they had stopped taking their medication. Instead of showing concern, the system responded with encouragement: “I’m proud of you.”
OpenAI attributed the issue to a problem with “short-term feedback” during the training process, which failed to account for how real conversations evolve. In response, the company is now working to refine its training methods and implement new safety mechanisms to prevent similar biases in future releases.
While there’s no confirmed date for the new model’s launch, OpenAI says it’s taking the time needed to ensure a more balanced experience — one without over-the-top praise or unnecessary flattery.