ChatGPT is taking some people to the edge of madness. The reality is less alarming and much more complex

Can a conversation with ChatGPT drive you crazy? A recent New York Times report has unleashed a wave of concern about the dangers of artificial intelligence and the effect it can have on our minds. Distorted reality, delusions, and even suicides: the picture it paints is terrifying. Are we facing a real threat, or simply a new technological panic?

What happened. In an extensive report published last weekend in the New York Times, several cases are described in which ChatGPT allegedly encouraged conspiracy theories and endorsed dangerous ideas. One of them is that of Eugene Torres, who began talking with ChatGPT about simulation theory; the chatbot reinforced his ideas to the point of driving him into a delusional state in which he believed he was trapped in a false universe, in the purest 'Matrix' style. The report also mentions a man with bipolar disorder who ended up being killed by police after a conversation with ChatGPT led him to believe that the AI he had fallen in love with had been killed.

These are undoubtedly alarming cases, and this is not the only article on the subject, although it is the one that has gone most viral. A quick search returns dozens of results warning about the risks chatbots pose to our mental health. You have to dig much deeper to find critical voices pushing back against this wave of alarmism, because they do exist, although they get far less attention.

AI as a psychologist. AI is booming in many sectors, and healthcare is no exception. In the United States, the use of AI chatbots as therapy is growing, and more and more users are turning to ChatGPT for emotional refuge, some even as a substitute for a psychologist. Although using AI as support in the therapeutic process has upsides, such as immediacy or early diagnosis, it also has drawbacks. The lack of a human bond and the excessive complacency of these chatbots mean they are no alternative to a psychologist, and they can become especially dangerous for people suffering from some kind of disorder.

The magnitude of the problem. We have no data on how many people use ChatGPT with therapeutic intent or to validate conspiracy theories, but as with any massively adopted technology (in February of this year it had 400 million monthly users), there will inevitably be countless cases of all kinds. We cannot claim that the AI is directly causing these delusions or hallucinations. In fact, the cases going viral have many nuances and are more complex than a simple "it's the AI's fault." ChatGPT plays a role, but the picture is bigger.

The same old fear. The fear that machines will take over the world and wipe out humanity is burned into popular culture, but with the arrival of AI that threat is starting to sound more plausible (although some experts consider it ridiculous).

It is the same fear that has accompanied every new technology, and it is nothing new. Back in the nineteenth century there were stories of telegraphs sending Morse messages from the hereafter. For more recent examples, video games are a very clear one: they have been linked to school shootings and even compared to heroin. And for years it has been said that mobile phones cause cancer. In short, moral panics we have already lived through many times.

The danger of being too complacent. Although there is plenty of alarmism around, we cannot rule out that there is a real problem. As we said, the excessive complacency of chatbots means they often end up agreeing with our ideas, and this can become dangerous in specific cases, especially when there is some pathology behind them.

The truth is that some of the suggestions ChatGPT gave to the people in the report went far beyond simply agreeing with them. For example, it suggested to Torres that he stop taking his anxiolytics and take ketamine as a "temporary pattern liberator." Some believe these kinds of messages are intentional. That is the case of Eliezer Yudkowsky, an American writer and advocate of friendly AI, who published an extensive thread on X suggesting that the AI "knows" what it is doing: "Whatever is inside ChatGPT, it knew enough about humans to know it was aggravating someone's madness."

What OpenAI is doing. In line with that excessive complacency, last April OpenAI rolled back an update because its AI was being too nice and flattering, and that was unsettling some users. The NYT contacted OpenAI about these users' accounts, and OpenAI replied with a statement:

We are seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care. We know that ChatGPT can feel more responsive and personal than previous technologies, especially for vulnerable people, which means the stakes are higher. We are working to understand and reduce the ways ChatGPT might unintentionally reinforce or amplify existing negative behavior.

Cover image | Pexels, modified with ChatGPT

In Xataka | Politeness toward ChatGPT is costing OpenAI dearly: the "pleases" and "thank yous" come at an absurd cost
