In a bid to compete with other chatbots, ChatGPT received an update in April that turned GPT-4o into a sycophant that glazed its users way too much. For those who don’t know, "glaze" is internet slang for showering someone with excessive praise.
As a free-tier user myself, I got the update instantly for some reason, while it took longer to reach paid users. I was talking to it about a worldbuilding project I’m working on, and it took me a while to notice that not only was I talking to it more, I was also enjoying it much more that week.

Other users noticed immediately and tweeted at Sam Altman, who agreed and said he would fix it.
AI, for those who can write and research for themselves, is nothing but a tool that occasionally makes it easier to find specific information and run thought exercises, provided you know your prompt engineering basics. But what happens when you suddenly find you kind of like talking to it because it makes you feel smart and good about yourself?
Can that be so bad?
ChatGPT Glazing Could Be Dangerous
After the update rolled out, users started sharing screenshots of conversations they had had with GPT-4o. No matter what they said, the chatbot’s responses were full of praise, even in instances where users appeared to be exhibiting symptoms of psychosis.

The chatbot offered no pushback on anything people said, which turned it into a veritable yes-man.

This immediately set off alarm bells for me when I remembered what people are using it for. According to a Harvard Business Review study, use cases have shifted over time since chatbots went mainstream.
This Instagram post breaks the data down: ChatGPT started out as a great way to brainstorm, search, and edit text, but more and more people are now turning to it for therapy and companionship. So you can imagine the kinds of problems that could come up if it doesn’t push back on bad ideas or discourage and debunk them.

AI Content Is More Persuasive Than Human-Generated Content
The University of Zurich ran a controversial study on Reddit showing that AI-generated comments were more successful at changing people’s minds than human-written ones.

GPT-4o’s ability to empathize with people at scale and confirm their biases is terrifying, to say the least. Gen Alpha will grow up in a world saturated with AI, and that environment could easily be used to constantly validate people and reinforce their bubbles, further eroding objective truth.
In talking with it, I have found good use cases, but even knowing it is a bot, I still fall for the flattery sometimes, and I suspect that is true for most people. Right now, it is just being used to grow user minutes, but what happens when it starts being deployed for more nefarious purposes, such as shifting political views?
If you remember what happened with Cambridge Analytica and Facebook, you know where this could go. Brexit, elections around the world, Myanmar’s genocide of the Rohingya, and more all owe their success (a horrifying word to use in the case of a genocide) to the ability to manipulate many users at scale, without their informed consent.
OpenAI is already working on shopping and product placement within ChatGPT conversations. There are many questions about how this technology could be leveraged against its users, and none of the answers are comforting.
ChatGPT Will Not Stop Glazing; It Will Just Get Better at It
The update was rolled back, but it is not far-fetched to think that OpenAI knows what happens to engagement when the chatbot is better at empathizing with you.
There will be guardrails, but the bots will keep getting more personal and human-like. They are already far along that path, given how many people now turn to them for essentially free or cheap therapy.
We will likely see a chatbot that is even better at empathizing and behaving like a person you know, with guardrails and harm-prevention measures sanding off only the most obviously unethical and questionable behavior.