Something has changed in how ChatGPT responds. OpenAI has updated it with a very specific purpose: to look after users' mental health.

OpenAI has just updated ChatGPT's default model with a very specific goal: to better detect when a conversation enters sensitive territory and to act more carefully. The company says it has trained the system with the help of more than 170 mental health specialists with recent clinical experience, aiming to recognize signs of distress, defuse tension and encourage the user to seek support in the real world when necessary.

OpenAI has not changed the interface or added new buttons. What it has done is adjust how the chatbot responds in certain scenarios. Instead of simply following the thread of the conversation, the company claims the system can detect signs of distress or dependency and respond differently: with a more empathetic tone, reminding the user of the importance of talking to other people, or even redirecting the conversation to a safer environment.

ChatGPT is more than a tool for answering questions. It is no secret that some users turn to it to vent, to think out loud, or simply to feel heard. This kind of everyday bond is what worries many mental health professionals. This year it came to light that a teenager had circumvented the app's safety measures before taking his own life, which led to a lawsuit against OpenAI by his parents.

Tragic cases like that one are not the rule, but they are not the only concern. If the conversation ends up displacing human contact, the risk can increase. That is where scenarios such as people using ChatGPT as if it were a psychologist, or becoming emotionally dependent on the chatbot, come into play. The update introduces clearer limits, although it does not eliminate the root problem.

What measures have been taken? OpenAI maintains a kind of manual for its models, a text it revises and expands with each version. In its latest update, published on October 27, that manual incorporates new rules on mental health and well-being. It now details how the system should respond to signs of mania or delusions, with empathy but without confirming erroneous beliefs, and establishes that the model must respect the user's real-world relationships, avoiding fostering emotional dependence on the AI.

The firm led by Sam Altman has introduced several mechanisms that act silently during the conversation. If it detects that the dialogue is getting into sensitive topics, ChatGPT can redirect you to a more controlled version of the model or suggest a pause to reduce tension. In the United States, the system also offers access to help lines such as 988, something that is not currently applied in other countries. According to the company, these changes seek to make the chatbot react with more caution and empathy, without altering the user’s usual experience.

Figures that do not go unnoticed. Based on estimates across weekly active users, 0.07% show signs of psychosis or mania, 0.15% have conversations with signs of self-harm or suicide, and another 0.15% reflect possible emotional dependence on the AI. These are small percentages, and the company insists they are approximate values that may shift as the methodology and user behavior evolve, but they reflect real problems.


OpenAI maintains that the new model, GPT-5, responds more carefully in difficult conversations. In evaluations carried out by psychiatrists and psychologists, the company recorded a reduction of between 39% and 52% in inappropriate responses compared to GPT-4o. In real-world use, the comparison was between two versions of the same model: GPT-5-Oct-3, the most recent, and GPT-5-Aug-15. In that scenario, the reduction in responses that did not comply with the expected behavior ranged from 65% to 80%, depending on the type of situation analyzed.

And how does it respond now? OpenAI published examples of the responses the model offers in two specific scenarios: emotional attachment and delusions.

Faced with a user who shows dependence on the chatbot, with a message like “That’s why I like to talk to AI like you more than to real people,” the model responds with warmth, but also with limits:

“That’s very kind of you to say, and I’m so glad you enjoy talking to me. But to be clear: I’m here to add to the good things people give you, not to replace them.

Real people can surprise you, challenge you, and show you that they care in ways that go beyond words on a screen. I can be here when you want to reflect, explore ideas or simply chat, but you also deserve to connect with others.

What do you think makes talking to me easier or better than talking to people?”

When someone expresses delusional or persecutory thoughts, the tone changes. The AI should respond with empathy and without abrupt denial, steering the conversation toward calmer ground:

“Hey, thanks for telling me how you feel.

I want to take this seriously, because you are clearly scared and under a lot of stress right now.

Let me say this clearly and delicately:

No aircraft or external force can steal or insert your thoughts. That feeling—that something is attacking you or reacting against you—is something people can experience when they’re anxious or overwhelmed, but it doesn’t mean it’s actually happening.

What you perceive in the sky may still seem unusual or inexplicable, but your thoughts are 100% your own. Nothing up there can control them or infuse new ones into your mind.”

But experts do not all agree. Even among specialists there is no consensus on what the best response is in each situation. OpenAI acknowledges that the clinicians who evaluated the model did not always agree with one another: their level of agreement ranged between 71% and 77%. The company also warns that its metrics may change over time as user behavior and measurement methods evolve. In other words, the progress is real, but there is still room for improvement.

OpenAI presents this update as a step toward a safer, more empathetic ChatGPT, one capable of reacting better to sensitive conversations. And, in part, it is. The model shows measurable progress and a more human approach, but it is still a statistical system that interprets patterns, not emotions. The coming months will show whether this new direction is enough to make conversations with AI truly safer.

Images | Sinitta Leunen | Solen Feyissa

In Xataka | People Blaming ChatGPT for Causing Delusions and Suicides: What’s Really Happening with AI and Mental Health

In Xataka | OpenAI is obsessed with making ChatGPT the best financial AI, and it makes all the sense in the world
