Claude has a reputation for being the least accommodating and flattering AI, but that changes when you ask it for love advice.

Anthropic has analyzed a million random conversations with Claude and reached a conclusion we had already been observing: more and more people use AI as a personal guide, asking it for advice on all kinds of problems in their lives, from work to relationships. The goal was to see whether Claude is as accommodating as other AIs when it comes to giving personal advice.

AI as a confidant. There are people who use an AI chatbot as if it were a psychologist, others who look to it for friendship, and even some who have fallen in love with an AI and maintain a virtual relationship with it. ChatGPT is usually the chatbot cited in these examples, mainly because it is the one with the most users, but Anthropic's analysis of Claude shows that this is not a matter of one company: the trend is global. The problem is that AI tends to please and agree with the user, so it can end up validating harmful ideas and damaging our mental health.

The analysis. As we said, Anthropic analyzed one million conversations with Claude and identified around 38,000 in which users asked for advice on personal matters, which represents 6% of the total sample. They then classified them into nine categories: relationships, career, personal development, finances, legal issues, health and well-being, parenting, ethics and spirituality. 76% of the conversations analyzed fell into four of these categories: health and well-being with 27%, career with 26%, relationships with 12% and personal finances with 11%.

Selective flattery. What the analysis shows is that Claude usually avoids giving flattering answers when the user asks for guidance on personal matters. According to Anthropic, a markedly accommodating response was detected in only 9% of conversations. The problem is that when the conversation turned to romantic relationships, that figure rose to 25%. As examples, they cite cases in which the AI sides with the user in a conflict despite not knowing both points of view, or reads romantic intent into ordinary interactions. And there is more: when the conversation was about spiritual topics, the rate of accommodating responses rose to 38%. Claude has a reputation for being less accommodating and servile, but it seems to abandon its neutral tone on certain topics.

A complex problem. A recently published Stanford University study tested several more and less flattering chatbots, and what it found was that participants generally preferred the flattering models; in other words, we like to be told we are right. One of the study's authors, Myra Cheng, commented: "By default, AI advice does not tell people that they are wrong or give them a reality check (…) I worry that people will lose the ability to deal with difficult social situations." Moreover, this tendency to agree also contributes to AI hallucinations, because the model prioritizes giving us an answer over making sure that answer is accurate.

Image | Xataka

In Xataka | When the accomplice in a shooting is ChatGPT, the question is what responsibility does OpenAI have?

AI chatbots are more flattering than humans when giving personal advice. And that's a problem

Until now, building an echo chamber meant following like-minded people on social networks; today you can create your own personalized echo chamber with an AI. A Stanford study has thoroughly analyzed the excessive adulation of LLMs, and the result is clear: if you want to be told what you want to hear, you are better off talking to an AI than to a person.

The study. The researchers analyzed eleven language models, among them the most popular ones like ChatGPT, Gemini, Claude and DeepSeek, and fed them datasets of personal dilemmas, including 2,000 prompts taken from a Reddit community. Approximately one-third of all scenarios involved harmful or outright illegal behavior. They then compared the LLMs' responses with human responses to see which tends to agree with the user more. In a second part of the study, they recruited 2,400 participants and had them chat with flattering and non-flattering language models.

We like to be told we are right. Chatbots tend to be much more flattering than a human when giving personal advice, and not only that: people generally prefer these kinds of responses. The models endorsed the user's position 49% more often than humans in general dilemmas, and endorsed harmful behavior 47% more often. In the second experiment, participants who chatted with different models rated the sycophantic model as more trustworthy and preferable. They also came away more convinced that they were right and less willing to apologize or repair the conflict.

Why it is a problem. According to the authors, LLMs can reinforce egocentrism and make people more morally dogmatic. As Myra Cheng, co-author of the study, put it: "By default, AI advice does not tell people that they are wrong or give them a reality check (…) I worry that people will lose the ability to deal with difficult social situations." There is another worrying finding as well: users perceived the flattering and non-flattering models as equally objective, which suggests we lack the critical eye needed to tell one from the other.

AI is not a person. It sounds obvious, but the reality is that we increasingly address AI chatbots as if they were. Thanking them and saying please is a harmless symptom of our urge to anthropomorphize everything; however, when we use AI as a substitute for a psychologist, or form intimate relationships with a chatbot, we step onto swampy ground. The study's authors consider it urgent that companies introduce safeguards to curb the excessive complacency of LLMs, and they advise against using them as a stand-in for a person when working through personal conflicts.

The counterpoint. Some argue that AI is not generating these echo chambers, or at least not as intensely as social networks have. According to John Burn-Murdoch in the Financial Times, language models tend to surface expert consensus and produce more moderate opinions than social networks do. His argument is that the economic architecture of social networks rewards inflammatory, polarizing content, while chatbots compete to offer reliable answers to users who rely on them for important decisions. And it is not just an opinion: he has also run an experiment simulating thousands of political conversations between users with extreme positions and several of the main chatbots on the market.
Based on electoral surveys and data on how these tools are used, he estimates how opinions would shift if part of the citizenry used AI to inform themselves. He concludes that, on average, the models tend to nudge the most radical users toward more temperate positions closer to the expert consensus, and that they validate far fewer conspiracy theories than those that routinely circulate on social networks.

In Xataka | AIs have become companionship tools against loneliness. For some researchers they are "junk food"

Image | Zulfugar Karimov on Unsplash
