Right now, 4,500 North American students are getting psychological advice from a chatbot. It's just the beginning

Right now, as I write this, about 4,500 North American students are receiving "psychological advice" through an application called Sonny. That is hardly surprising: in the US, around 17% of secondary schools have no counselor or school psychologist. Most of them are in rural areas or economically depressed zones, and applications of this type bring psychosocial assistance to everyone.

That is not the problem. The problem is that Sonny is an artificial intelligence chatbot.

A chatbot? Let's put it in context. The idea is not new: in fact, it is one of the first ideas that occurs to anyone worried about mental health. Over the last 50 years we have concluded that psychotherapy is tremendously effective. What we have not yet managed to do is scale it: that is the promise of chatbots.

Sonny's example helps to understand this: students have access to the chatbot 18 hours a day (from 8:00 a.m. to 2:00 a.m.), and the service costs each district only $20,000 to $30,000, much less than hiring a school counselor. It is true that it is not as effective as a counselor, but for many districts it is the best they can afford.

And, in fact, as some of its users explain, beyond its ultimate therapeutic effectiveness, this type of approach allows schools to identify problems among students almost in real time. In Berryville (Arkansas), they discovered that more than half of the users sent messages just before exams, which allowed them to develop emotional support interventions.

Is this the future? A couple of years ago, Zara Abrams published an extensive analysis for the American Psychological Association, which concluded that "artificial intelligence (AI) chatbots can make therapy more accessible and less expensive." Just what Sonny does.

However, as Abrams also explained, "despite the potential of AI, there are still reasons to worry": tools used in the health field have discriminated against people based on their race and disability status, and there are malicious chatbots that have spread misinformation, professed their love for users, or sexually harassed minors. That is precisely what Sonny tries to avoid.

How is that possible? In principle, as the company explains, Sonny has a team of people with "experience in psychology, social work and online support" who supervise and even rewrite the bot's responses. Each staff member supervises between 15 and 25 chats at the same time.

This is how Sonar Mental Health, the company behind the app, tries to avoid the great original sin of LLMs: their tendency to hallucinate, fantasize, and give advice that may not be correct. In addition, the hybrid chatbot is designed to notify parents and teachers at the slightest sign of danger (whether to the user or to others).

Have they solved the problem? The truth is that we do not know (because there are no studies on it), but honestly it is unlikely: we are at too early a stage to trust that all the associated problems have been resolved.

But it is a radical step. As Abrams said, it is possible that "psychologists and their skills are irreplaceable," but since the arrival of AI is inevitable, we have to bet on "thoughtful and strategic implementation." Something that, indeed, is very much like saying nothing at all. There is much we still do not know and, therefore, making concrete predictions is dangerous.

What is clear is that the question is not whether we will have GPT-therapists; we already have them. The question is how we can use them without worsening the care being given today (an ever-present temptation) and turn them into a key tool for reducing human suffering. Let's hope we answer it soon.

Image | Dream / Sigmund

In Xataka | 50 years of research on psychotherapy for depression leaves a surprising fact: we have not improved at all
