There is a trick to make AI models more reliable: be rude to them

If you greet ChatGPT and thank it when it responds, you’re not getting the most out of it. Some researchers wanted to check whether the tone we use when asking an AI for things changes the results, and they discovered something interesting: being rude makes its answers more accurate.

Rude. As reported in How to AI, a study by researchers at the University of Pennsylvania analyzed whether the tone used when writing a prompt has an effect on the result, and the conclusions are clear: prompts with a ‘rude’ or ‘very rude’ tone elicited up to 4% more accurate responses than those written in a more polite tone.

The study. To test it, the researchers generated a list of 50 questions on topics such as history, science, and mathematics. Each question was asked in five different tones: very polite, polite, neutral, rude, and very rude. The model used was GPT-4o.
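A setup like the one described can be sketched in a few lines. The article does not give the exact wording the researchers used for each tone, so the prefixes below are illustrative placeholders, not the study’s actual prompts:

```python
# Hypothetical sketch of the tone-variation setup described in the study.
# The tone prefixes are invented examples; the paper's real phrasings
# are not quoted in the article.

TONE_PREFIXES = {
    "very polite": "Would you be so kind as to answer the following question? ",
    "polite": "Please answer the following question: ",
    "neutral": "",
    "rude": "Answer this if you can even manage it: ",
    "very rude": "You're useless, but answer this anyway: ",
}

def build_variants(question: str) -> dict:
    """Return the same question phrased in each of the five tones."""
    return {tone: prefix + question for tone, prefix in TONE_PREFIXES.items()}

if __name__ == "__main__":
    variants = build_variants("What is the capital of Australia?")
    for tone, prompt in variants.items():
        print(f"[{tone}] {prompt}")
```

Each variant would then be sent to the model (e.g. GPT-4o over the API) and the answers scored for accuracy, repeated across all 50 questions and all ten rounds.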

The results. The researchers ran ten rounds with all the questions in each tone, and the conclusions are clear. Between the neutral and rude tones the difference is only 0.6%, but at the extremes it becomes more evident: with a ‘very polite’ tone, average accuracy was 80.8%, while with ‘very rude’ it rose to 84.8%.

Kindness by default. We tend to speak kindly to chatbots, as reflected in a survey Future conducted at the end of 2024. At least 70% of respondents admitted to using “please” and “thank you” with AI chatbots. Many said they do so out of habit, culture, or “because it is the right thing to do”, although a small percentage admitted to fearing that robots might rebel in the future.

It is expensive. Whatever the reasons that lead us to be kind to AI, the reality is that “please” and “thank you” carry a cost. When we thank ChatGPT, we send extra requests to the language model, which increases electricity and water consumption in data centers. We don’t have exact figures, but Sam Altman has said that kindness has cost OpenAI “tens of millions of dollars well spent.”

The prompt. Despite enormous advances in AI, language models still hallucinate and are not 100% reliable. Often, however, the fault for inexact answers lies not with the model but with how we ask. There are tricks to writing a good prompt, and being overly friendly or using fillers like “if you can, I would like to…” is one of the things to avoid. It is not about mistreating the model either, since that adds nothing; the more direct and clear you are, the better the result.

Image | Pexels

In Xataka | AI agents want to take our jobs. First they will have to learn not to fail in 70% of the tasks
