OpenAI Rushes to Revise GPT-5 Following User Backlash

OpenAI’s GPT-5 model was expected to be a groundbreaking upgrade to its popular chatbot, but for some users, last Thursday’s release felt more like a downgrade, with the new ChatGPT exhibiting a flatter personality and making surprising errors.

On Friday, OpenAI CEO Sam Altman announced on X that the company would continue offering the previous model, GPT-4o, for Plus users. A new feature that switches between models based on query complexity broke on Thursday, which made GPT-5 seem less intelligent than it is. Altman promised fixes to improve GPT-5’s performance and user experience.

With the anticipation surrounding GPT-5, some disappointment seemed inevitable. When GPT-4 was introduced in March 2023, it amazed AI experts with its abilities. GPT-5 was expected to be similarly impressive.

OpenAI promoted the model as a significant upgrade with advanced intelligence and coding capabilities. An automatic system for routing queries to different models was intended to enhance user experience and reduce costs by directing simple queries to cheaper models.
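OpenAI has not published the details of how this router works; conceptually, though, it behaves like a dispatcher that estimates how hard a query is and picks a model tier accordingly. The sketch below is purely illustrative, with hypothetical model names and a toy complexity heuristic standing in for whatever classifier OpenAI actually uses.

```python
# Illustrative sketch only: a toy query router that sends simple prompts to a
# cheaper model and complex ones to a stronger model. The model names and the
# complexity heuristic are hypothetical, not OpenAI's actual implementation.

CHEAP_MODEL = "small-fast-model"        # hypothetical low-cost tier
STRONG_MODEL = "large-reasoning-model"  # hypothetical high-capability tier


def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for a learned classifier: longer prompts and
    reasoning-heavy keywords count as 'harder'."""
    score = min(len(prompt) / 2000, 1.0)
    if any(kw in prompt.lower() for kw in ("prove", "debug", "refactor", "step by step")):
        score += 0.5
    return min(score, 1.0)


def route(prompt: str, threshold: float = 0.4) -> str:
    """Return which model tier should handle the prompt."""
    return STRONG_MODEL if estimate_complexity(prompt) >= threshold else CHEAP_MODEL


if __name__ == "__main__":
    print(route("What's the capital of France?"))           # -> small-fast-model
    print(route("Debug this race condition step by step"))  # -> large-reasoning-model
```

If a router like this misfires, easy queries still work but hard ones get answered by the weaker tier, which is consistent with the launch-day behavior Altman described.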

However, after GPT-5 was released, a Reddit community for ChatGPT was filled with complaints. Many users lamented the loss of the old model.

“I’ve been trying GPT-5 for a few days now. Even after customizing instructions, it still doesn’t feel the same. It’s more technical, more generalized, and honestly feels emotionally distant,” one user wrote in a thread titled “Killing 4o isn’t innovation, it’s erasure.”

“Sure, 5 is fine—if you hate nuance and feeling things,” another Reddit user commented.

Other threads complained of slow responses, hallucinations, and unexpected errors.

Altman promised to tackle these issues by doubling GPT-5 rate limits for ChatGPT Plus users, enhancing the system that switches between models, and allowing users to trigger a more thoughtful “thinking mode.” “We will continue to work to get things stable and will keep listening to feedback,” Altman wrote on X. “As we mentioned, we expected some bumpiness as we rolled out so many things at once. But it was a little more bumpy than we hoped for!”

Errors shared on social media don’t necessarily mean the new model is less capable than its predecessors. They might suggest the new model is tripping over different edge cases than before. OpenAI declined to comment on why GPT-5 sometimes makes simple mistakes.

The backlash has sparked debate over the emotional attachments some users form with chatbots designed to engage their emotions. Some Reddit users dismissed GPT-5 complaints as evidence of unhealthy dependency on an AI companion.

In March, OpenAI released research on the emotional bonds users form with its models. Soon after, the company updated GPT-4o after it became excessively sycophantic.

“GPT-5 seems less sycophantic, more ‘business’ and less chatty,” says Pattie Maes, an MIT professor involved in the study. “I think it’s a good thing because it helps prevent delusions, bias reinforcement, etc. But unfortunately, many users like a model that tells them they are smart and amazing, and that confirms their opinions and beliefs, even if wrong.”

Altman noted in another X post that this was a consideration in building GPT-5.

“A lot of people use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way,” Altman wrote. He added that some users may use ChatGPT in ways that benefit their lives, while others might be unknowingly led away from their long-term well-being.
