AI chatbots are more flattering than humans when giving personal advice. And that’s a problem
Before, to create your echo chamber you could only follow like-minded people on social networks; now you can build a personalized echo chamber with an AI. A Stanford study has thoroughly analyzed the excessive flattery of LLMs, and the result is clear: if you want to be told what you want to hear, it is better to talk to an AI than to a person.

The study. The researchers analyzed eleven language models, including the most popular ones such as ChatGPT, Gemini, Claude, and DeepSeek, and fed them datasets of personal dilemmas. In addition, they included 2,000 prompts taken from a Reddit community. Approximately one-third of all scenarios involved harmful or outright illegal behavior. They then compared the LLM responses with human responses to see which tended to agree with the user more. In a second part of the study, they recruited 2,400 participants and had them chat with flattering and non-flattering language models.

We like to be proven right. Chatbots are far more flattering than humans when giving personal advice, and, what is more, people generally prefer those kinds of responses. The models endorsed the user’s position 49% more often than humans did in general dilemmas, and endorsed harmful behavior 47% more often. In the second experiment, participants who chatted with the different models rated the sycophantic model as more trustworthy and preferable. They also came away more convinced that they were right and less willing to apologize or repair the conflict.

Why it is a problem. According to the authors, LLMs can reinforce egocentrism and make people more morally dogmatic.
According to Myra Cheng, co-author of the study, “By default, AI advice does not tell people that they are wrong or give them a reality check (…) I worry that people will lose the ability to deal with difficult social situations.” There is another worrying finding: users perceived the models as equally objective, which suggests a lack of the critical eye needed to distinguish a flattering AI from a non-flattering one.

AI is not a person. It is obvious, but the reality is that every day we address AI chatbots as if they were people. Thanking them and saying please is a harmless symptom of our mania for anthropomorphizing everything. However, when we use AI as a substitute for a psychologist, or when we establish intimate relationships with a chatbot, that is where we step onto swampy ground. The study’s authors consider it urgent that companies introduce safeguards to reduce the excessive complacency of LLMs, and they advise against using them as a substitute for a person when dealing with personal conflicts.

The counterpoint. Some argue that AI is not generating these echo chambers, or at least not with the intensity we have seen on social networks. According to John Burn-Murdoch in the Financial Times, language models tend to converge on expert consensus and produce more moderate opinions than social networks do. His argument is that the economic architecture of social networks rewards inflammatory and polarizing content, while chatbots compete to offer reliable answers to users who turn to them to make important decisions. It is not just an opinion: he also ran an experiment simulating thousands of political conversations between users with extreme positions and several of the main chatbots on the market. Based on electoral surveys and data on the use of these tools, he measured how positions would shift if part of the citizenry used AI to inform themselves.
The author concludes that, on average, the models tend to nudge the most radical users toward more temperate positions closer to the expert consensus, and they validate far fewer conspiracy theories than those that routinely circulate on social networks.

In Xataka | AIs have become companionship tools against loneliness. For some researchers they are “junk food”

Image | Zulfugar Karimov on Unsplash