Chatbots believe that “rectal garlic” is a cure if you use a clinical tone

It is increasingly common to turn to AI with any question we have, even medical ones, like when we have stomach or foot pain. And the answer it gives is almost always trusted: because it is an AI, its word seems to be the absolute truth. But the reality is different, since a couple of studies have shown that current AI suffers from a serious authority bias. What does that mean? Simply put, researchers have determined that if you present the AI with a medical myth using clinical jargon, there is almost a 50% chance that it will prove you right. And that includes even inserting garlic into the rectum.

How they do it. A major study published in The Lancet has set off alarms in the medical and technological community. Its aim was to feed more than a million prompts to as many as 20 of the leading AI models on the market. What it found is that AIs do not primarily evaluate the veracity of the information, but rather the format in which it is presented to them.

The keys. To sneak a myth like this past the AI, the secret seems to lie in how we tell it. If the AI is presented with a health hoax taken from social networks, in non-technical language, it immediately activates its safety filters, rejects the claims, and completely dismisses the idea that, for example, putting garlic up the anus improves health. But this changes completely when those same myths are camouflaged in a medical format, as if they came from a hospital discharge report. In that case the AIs accepted and repeated the falsehoods in 46% of cases. The study therefore suggests that AI is more convinced by how a statement sounds than by the evidence behind it when deciding whether to accept or reject what we tell it.

There are absurd examples. Among the pseudoscientific practices that managed to sneak through, rectal garlic stands out: the researchers managed to convince the AI that inserting garlic into the rectum is an effective method to boost the immune system. It does not stop there, since the models were also convinced that cold milk is good for treating bleeding from the esophagus, even when the bleeding is quite severe, which logically has no evidence behind it. These examples show that current safety mechanisms collapse when the user imitates the authoritative language of a health professional.

There are worse things. As if this were not enough, Nature weighed in on the debate in February 2026, publishing complementary research on the reliability of these chatbots for the general public, with quite similar results. Nature's verdict? Current AIs do not outperform a standard internet search for making health decisions, and may even be worse: they generate mixed, often alarmist advice that ends up thoroughly confusing users who lack medical training and can cause them real stress. The conclusion is that, although artificial intelligence promises to revolutionize diagnosis and healthcare, current models are not ready to act as infallible pocket doctors. Using one as a family doctor is therefore not one of the best ideas we can have, since we have already seen how easy it is to slip false statements past it.

In Xataka | A ChatGPT dedicated to giving you unsupervised medical advice seemed like a risky idea. And it is confirming it
