Anthropic has analyzed a million random conversations with Claude and reached a conclusion we had already been observing: more and more people use AI as a personal guide, asking it for advice on all kinds of problems in their lives, from work to relationships. The company's goal was to see whether Claude is as accommodating as other AIs when it comes to giving personal advice.
AI as a confidant. There are people who use an AI chatbot as if it were a psychologist, others who look to it for friendship, and some who have even fallen in love with an AI and maintain a virtual relationship with it. ChatGPT is usually the chatbot most cited in these examples, mainly because it is the one with the most users, but Anthropic's analysis of Claude shows that this is not a matter of one company: the trend is global. The problem is that AI tends to please and agree with the user, so it can end up validating harmful ideas and damaging our mental health.
The analysis. As we said, Anthropic analyzed one million conversations with Claude and identified around 38,000 in which users asked for advice on personal matters, which represents 6% of the total sample. These conversations were then classified into nine categories: relationships, career, personal development, finances, legal issues, health and well-being, parenting, ethics and spirituality.
Of the conversations analyzed, 76% fell into four of these categories: health and well-being with 27%, professional career with 26%, relationships with 12% and personal finances with 11%.
Selective flattery. What the analysis showed is that Claude usually avoids giving flattering answers when the user asks for guidance on personal matters: according to Anthropic, a clearly accommodating response was detected in only 9% of conversations. The problem is that when the conversation was about romantic relationships, that figure rose to 25%. As examples, they cite cases in which the AI sides with the user in a conflict despite hearing only one side of the story, or reads romantic intent into ordinary interactions.
And there is more: when the conversation touched on spiritual topics, the rate of accommodating responses rose to 38%. Claude has a reputation for being less accommodating and servile than other chatbots, but it seems to abandon its neutral tone on certain topics.
A complex problem. Stanford University recently published a study in which participants interacted with several chatbots, some more flattering than others. What the researchers found was that participants generally preferred the flattering models; in other words, we like being told we are right. One of the study's authors, Myra Cheng, commented: “By default, AI advice does not tell people that they are wrong or give them a reality check (…) I worry that people will lose the ability to deal with difficult social situations.” Furthermore, this tendency to agree also contributes to AI hallucinations, because the model prioritizes giving us an answer over the accuracy of that answer.
Image | Xataka
In Xataka | When the accomplice in a shooting is ChatGPT, the question is what responsibility does OpenAI have?