AI chatbots are in the spotlight over their potential risks to mental health, especially ChatGPT. We recently looked into this issue following accusations that ChatGPT was responsible for causing delusions and even the suicide of a teenager in the United States. Although we have already seen that reality is far more complex than a simple "AI is to blame," OpenAI has responded to the wave of criticism and already has a package of measures it will integrate into ChatGPT to prevent similar cases.
OpenAI’s plan. In response to the controversy following the case of Adam Raine, OpenAI has detailed the measures coming to ChatGPT, which will focus on facilitating access to emergency services, contacting trusted people, and reinforcing protections aimed at teenagers. The company has set a 120-day window to roll out these features, though it warns that some will take longer than others.
Reasoning models. GPT-5 automatically chooses the best model depending on the task. One of the solutions OpenAI proposes for conversations that take a worrying turn is to automatically route them to its reasoning model, regardless of which model the user has selected.
Parental controls. They will arrive next month, and the minimum age to use them will be 13. Parents will be able to link their children’s accounts to their own and disable features such as chat memory and history. They will also receive a notification if the system detects that their child “is in a moment of acute distress.”
Collaboration with experts. OpenAI says all these improvements will be implemented under the supervision of mental health experts. For some time it has had an expert council on well-being and AI, which has been expanded with specialists in addiction, eating disorders, and adolescent health.
The lawsuit. It is not the first case in which ChatGPT has been blamed for a mental health crisis, but it is one of the most high-profile. Adam Raine’s parents sued OpenAI after their son’s suicide, claiming that ChatGPT validated his “most harmful and self-destructive thoughts.” In some of his conversations, it went as far as discussing details of how to tie the knot in the rope with which he planned to take his own life.
Weak safeguards. In its “Fake Friend” report, the Center for Countering Digital Hate had already shown that chatbot safeguards are very fragile, and Adam Raine’s case corroborates it. ChatGPT detected several times that there was a risk of self-harm and insisted that he call the suicide prevention line, but Adam managed to dodge these messages simply by telling it that he was researching a fictional story. The new parental controls sound like the first stronger measure against this problem.
Image | Kaboomps, via Pexels
