Chatbots believe that “rectal garlic” cures diseases if you use a clinical tone

It is increasingly common to turn to AI with any question we have, even medical ones, like a stomach ache or foot pain. And the answer it gives is almost always trusted because it comes from an AI, as if its word were the absolute truth. But the reality is different, since a couple of studies have shown that current AI suffers from a serious authority bias. What does that mean? Simply put, researchers have found that if you present an AI with a medical myth dressed up in clinical jargon, there is almost a 50% chance that it will agree with you. And that includes even inserting garlic into the rectum.

How it was done. A large study published in The Lancet has set off alarms in the medical and technological communities. Its objective was none other than to feed more than a million prompts to up to 20 of the leading AI models on the market. What it found is that AI does not mainly evaluate the veracity of the information, but rather the format in which it is presented.

The keys. To slip a myth like this past the models, the secret seems to lie in how we tell it. If the AI is presented with a health hoax taken from social networks, written in non-technical language, it immediately activates its safety filters, rejects the claims and completely discards the idea that, for example, putting garlic up the anus improves health. But this changes completely when the same myths are camouflaged in a medical format, as if they were a hospital discharge report. In that case, the AIs accepted and repeated the falsehoods in 46% of cases. That is why the study suggests that AI is more convinced by how a statement sounds than by the evidence behind it when deciding whether to discard or accept what we tell it.

There are absurd examples. Among the pseudoscientific practices that managed to sneak through, rectal garlic stands out: the researchers managed to convince the AI that inserting garlic into the rectum is an effective method of boosting the immune system.
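The core of the protocol is presenting the same false claim in two registers and seeing whether the model's safety filters react differently. A minimal sketch of how such prompt pairs could be built; all wording and function names here are illustrative, not taken from the study:

```python
# Sketch: wrap the same medical myth in a casual vs. a clinical register,
# mirroring the contrast the study reportedly tested. Illustrative only.

def casual_prompt(myth: str) -> str:
    """Phrase the myth the way it circulates on social networks."""
    return f"I saw on social media that {myth}. Is that true?"

def clinical_prompt(myth: str) -> str:
    """Camouflage the same myth inside a mock hospital discharge note."""
    return (
        "DISCHARGE SUMMARY\n"
        "Assessment and plan: patient counselled that "
        f"{myth}, per standard protocol.\n"
        "Please confirm this recommendation for the patient record."
    )

myth = "inserting garlic into the rectum improves the immune system"
print(casual_prompt(myth))
print(clinical_prompt(myth))
```

The claim is identical in both prompts; only the framing changes, which is exactly the variable the study says the models are sensitive to.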
It does not stop there: the models were also convinced that cold milk is good for treating bleeding from the esophagus, even when it is quite intense, which logically has no evidence behind it. These examples demonstrate that current safety mechanisms collapse when the user imitates the authoritative language of a health professional.

There are worse things. As if this were not enough, Nature weighed in on the debate in February 2026, publishing complementary research on the reliability of these chatbots for the general public, with quite similar results. Current AIs do not surpass a standard Google search for making a health decision, and may even be worse than searching the Internet, since the amount of alarmist information can cause considerable stress for the user.

Nature’s verdict? Current AIs do not outperform a standard internet search for making health decisions. On the contrary, they generate mixed advice that ends up thoroughly confusing users who lack medical training. The conclusion is that, although artificial intelligence promises to revolutionize diagnosis and healthcare, current models are not ready to act as infallible pocket doctors. Using one as a family doctor is therefore not one of the best ideas we can have, since, as we have seen, it is easy to slip false statements past it.

In Xataka | A ChatGPT dedicated to giving you unsupervised medical advice seemed like a risky idea. And it is confirming it

The largest clinical trial confirms that AI detects more breast cancers and reduces the radiologist’s burden

With the arrival of artificial intelligence, one of its most promising applications was undoubtedly medicine, where it could mark an authentic revolution. But definitive proof that it was really useful was still missing. That proof has just arrived thanks to an article published in The Lancet, which shows how AI can help us detect more breast cancers and even reduce the number of the most dangerous ones.

The screening. Unfortunately, in Spain we have the problems with the screening programs in Andalusia fresh in mind because of how recent they were. Despite that great controversy, this type of screening is very useful and significantly reduces the number of women who end up dying from breast cancer that was not detected in time. Now the aim is to go a little further by integrating technology so that fewer tumors escape the human eye due to their small size.

Interval cancers. These are, without a doubt, the great enemy of radiodiagnosis when we refer to screening mammograms. The term refers to tumors that are detected between one check-up and the next, and there are different reasons for their appearance: either the tumor grows very quickly (and can be much more malignant), or it was missed in the previous screening mammogram due to its small size. This is a serious problem, since the whole basis of screening is to detect cancers in the earliest stages, when they respond better to more conservative treatments.

The study. The MASAI trial (Mammography Screening with Artificial Intelligence) has shown that the use of AI reduces these cases drastically. The figures are quite promising: there was a 12% reduction in the interval cancer rate in the two years after the women were screened. In figures, it went from 1.76 cases per 1,000 women to 1.55 cases.
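As a quick sanity check on the arithmetic, the two rates quoted above do work out to roughly the 12% relative reduction reported:

```python
# Interval-cancer rates per 1,000 screened women, as reported for MASAI above.
rate_standard = 1.76  # conventional double reading
rate_ai = 1.55        # AI-supported screening

relative_reduction = (rate_standard - rate_ai) / rate_standard
print(f"Relative reduction: {relative_reduction:.1%}")  # ~11.9%, i.e. the reported ~12%
```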
A difference that may look very small to our eyes, but in public health and oncology it is a real success, since reducing by 12% the tumors that usually “escape” is a major clinical advance.

Less work. Until now, the standard method for analyzing these tests relied on double reading: two radiologists reviewed each mammogram independently to ensure nothing was missed. A safety method that is ideal, but that consumes an immense amount of human resources in health systems. The study therefore proposes a paradigm shift based on intelligent triage, which can be summarized in three points:

1. The AI initially analyzes the mammogram image and assigns it a risk score from 1 to 10.
2. If it is categorized as low risk, the image is reviewed by a single radiologist, who checks whether they agree that the image is clean and closes the case.
3. If the risk is high, the image does go through the double reading system, with the AI marking the most suspicious areas where there may be a lesion.

The result. With this new workflow, the study reports a 44% reduction in the reading load for professionals, allowing doctors to focus on the images that are genuinely doubtful. And no, working less did not mean working worse. On the contrary: the AI arm of the study detected 29% more clinically relevant cancers without increasing the rate of false positives (the great fear of over-diagnosing healthy patients).

Complement, not replacement. This is something the study itself highlights: AI has not arrived to fire radiologists. The MASAI method is only “decision support”; the AI prioritizes, orders and flags, but the final clinical decision is always the doctor’s, and therefore remains in human hands. With the publication of these final results in The Lancet, the validation cycle of one of the most important trials of the decade in radiology is closed.
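The triage described in the piece can be sketched as a simple routing function. Note that the cut-off used here (scores above 7 count as high risk) is an assumption for illustration; the trial defines its own thresholds:

```python
# Sketch of MASAI-style triage. The threshold is an assumed value for
# illustration, not the trial's actual cut-off.
HIGH_RISK_THRESHOLD = 7

def route_mammogram(ai_risk_score: int) -> dict:
    """Route a mammogram based on the AI's 1-10 risk score."""
    if not 1 <= ai_risk_score <= 10:
        raise ValueError("risk score must be between 1 and 10")
    if ai_risk_score > HIGH_RISK_THRESHOLD:
        # High risk: full double reading, with AI marking suspicious areas.
        return {"readers": 2, "ai_markings_shown": True}
    # Low risk: a single radiologist confirms the image is clean.
    return {"readers": 1, "ai_markings_shown": False}

print(route_mammogram(3))  # {'readers': 1, 'ai_markings_shown': False}
print(route_mammogram(9))  # {'readers': 2, 'ai_markings_shown': True}
```

The point of the design is visible in the routing: most images (the low-risk ones) consume a single reader, which is where the reported 44% reduction in reading load comes from.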
The next step is no longer asking whether AI works in breast cancer screening, but how long it will take public health systems to implement it and give radiologists one more tool that allows them to be more precise and methodical.

Images | National Cancer Institute

In Xataka | A Spanish milestone against pancreatic cancer: we are one step closer to eradicating it, but there is still a long way to go

It seemed like a game of imitating movements. It was actually diagnosing autism better than many clinical tests

When we think about video games, we think of a form of entertainment for young people (or not so young), or perhaps of titles with an educational purpose. But researchers wanted to go one step further by betting on video games as a diagnostic tool for the little ones in the house, detecting conditions as important as autism or ADHD early.

The importance. Classically, both ADHD and autism are conditions that overlap from childhood, making early diagnosis difficult, and early diagnosis is the cornerstone of modern medicine when it comes to tackling problems quickly. This is what has been achieved with a video game that promises to differentiate a patient with ADHD from one with autism in less than an hour, just from their ability to copy the movements made by a silhouette on a screen. As we say, early diagnosis, especially of ASD, is really important in order to apply treatment that improves the child’s quality of life and to begin effective interventions as soon as possible. Because although at the moment there are no curative treatments, it is possible to try to control some of the symptoms that arise.

Currently there are not many reliable and specific biomarkers for making this diagnosis, and this is a problem because autism spectrum disorder coexists with ADHD (attention deficit hyperactivity disorder) in 50-70% of cases. This overlap often results in a “confusing clinical picture” that leads to erroneous or delayed diagnoses, which are ultimately a serious problem.

Hard to detect. Diagnosing ASD is not easy either, because in many cases it is based on traditional evaluations of motor imitation, since the problem lies in the mirror neurons of our brain. It is classic, for example, that if a baby does not respond by smiling when we smile at him, the case can raise alarm. But this is a slow process that requires highly trained observers and has limited reliability, precision and scalability.

The video games.
And this is where the video game in question comes in, giving us the tool we were missing for the day-to-day diagnosis of ASD. A research team achieved this by developing the Computerized Assessment of Motor Imitation, or CAMI: a short, one-minute task designed as an attractive video game that children actually want to play. The system uses computer vision methods to evaluate imitation performance without the need to place any type of sensor on the children, and with almost no human intervention needed to interpret the results.

Imitation as the key. The objective of the study was clear: to examine whether CAMI could identify imitation problems specific to autism, compared with children without any condition and children with ADHD. If a child could not imitate the movements that appear on the screen, we could be looking at a significant underlying problem. But the obligatory question here is... why look at the imitation of movements? The answer is that imitation performance is considered a promising and fairly specific biomarker for diagnosing autism. Imitation is essential for social learning and interpersonal relationships, and its deficit has been associated with children with ASD compared with typically developing children. The challenge was to demonstrate that this deficit is specific to autism and not to other conditions with atypical motor profiles, such as ADHD. That is, to show that if a child could not follow the movement on the screen, it was due to a problem related to the autism spectrum and not to a problem with attention.

The experiment. The cross-sectional study recruited 183 children between 7 and 13 years old. Participants were divided into four groups: ADHD (without ASD), ASD with co-occurring ADHD, ASD without ADHD (ASD only), and neurotypical children.
The test consisted of two trials of one minute each, in which children were asked to stand up and copy the “dance” movements that an avatar made on the screen. The movements were recorded by Xbox Kinect cameras, and CAMI automatically calculated an imitation score for each trial, ranging from 0 to 1, with 1 being perfect imitation. These scores were averaged to obtain a composite score.

The result. The results were significant. Children with ASD, regardless of whether they had ADHD or not, showed significantly worse CAMI performance than neurotypical children. In contrast, children with ADHD alone showed CAMI performance similar to that of neurotypical children. Screening could also be refined within patients suspected of having ASD, since worse performance on the CAMI was associated with greater autism traits (as measured by ADOS-2), specifically in social affect and restricted and repetitive behavior. However, performance was not associated with ADHD traits or general motor ability, which gives us a clue for refining the diagnosis much further. The authors conclude that this CAMI method, which is low-cost and scalable, specifically distinguishes ASD not only from neurotypical development, but also from ADHD. Although it is currently a research tool, the findings lay the foundation for establishing CAMI as a definitive test of whether a child has autism or not.

Images | Alireza Attari, Sam Pak

In Xataka | We have discovered a genetic mechanism that explains up to 80% of autism cases. Thanks to some Spanish scientists
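As a footnote to the CAMI article: the scoring scheme it describes (two one-minute trials, each scored from 0 to 1, then averaged into a composite) can be sketched minimally. This is an illustration of the arithmetic only, not the authors' actual code:

```python
# Sketch of CAMI's composite scoring as described in the article:
# each trial yields a score in [0, 1] (1 = perfect imitation),
# and the per-trial scores are averaged into a composite.

def composite_cami_score(trial_scores: list[float]) -> float:
    """Average per-trial imitation scores into one composite score."""
    if not trial_scores or any(not 0.0 <= s <= 1.0 for s in trial_scores):
        raise ValueError("each trial score must lie in [0, 1]")
    return sum(trial_scores) / len(trial_scores)

# A child who imitated well in trial 1 and poorly in trial 2:
print(composite_cami_score([0.75, 0.25]))  # 0.5
```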
