You may want to try the test yourself. Go to ChatGPT, Gemini, Claude or Perplexity and ask each of them to choose a number between 1 and 50. It is not guaranteed to happen every time, but there is a high probability that the chosen number will be 27. What is going on?
Andrej Karpathy, one of the world's leading AI experts, raised the question months ago. His initial impression was that "all LLMs sound the same": they all write in a strikingly similar way and also give very similar answers.
Chatbots usually give similarly worded answers to our questions when those questions are factual, that is, when they are based on verifiable facts, such as "How old is Rafa Nadal?" The curious thing is that in many cases they give the same answer even when what we ask has nothing to do with facts. For example, when we ask them to choose a number between 1 and 50.


Ask several chatbots and they all settle on 27. What is going on here?
Karpathy himself brought the question back a few hours ago with a discovery he had seen on Reddit: those who had run the test found that most chatbots answered the same thing: 27. In the replies to his post on X, many users (although not all) shared screenshots or conversation links from different chatbots that had answered with that very number. Coincidence, or a problem with the AIs?
The curse of human biases
Why? In one of those replies, a user simply asked the AI that very question. It answered that it had chosen that number because it avoids the extremes, because it is mathematically pretty (the cube of three), and because 27 "gives the feeling of being random but human".
And there lies one of the keys: AIs try to answer the way a human would. An entrepreneur named Chester Zelaya put together a curious theory about this very phenomenon.
In his view, the models apply game theory and try to "win" the number-guessing game. To do so they adopt a binary search strategy, which amounts to building a binary tree over the range. And in the game of guessing a number between 1 and 50, 27 is, according to him, a particularly good starting point (although not the only one).
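To make that idea concrete, here is a minimal sketch (my own illustration, not Zelaya's actual reasoning) of the halving strategy he alludes to. Note that with plain integer division the first pivot for the range 1 to 50 is 25, close to but not exactly 27; the exact opening guess depends on how you round and weight the range.

```python
def binary_search_guesses(secret, low=1, high=50):
    """Guess a secret number by repeatedly halving the remaining range."""
    guesses = []
    while low <= high:
        mid = (low + high) // 2  # classic halving pivot
        guesses.append(mid)
        if mid == secret:
            return guesses
        if mid < secret:
            low = mid + 1
        else:
            high = mid - 1
    return guesses

# With integer division the strategy opens at 25, not 27:
print(binary_search_guesses(27))  # [25, 38, 31, 28, 26, 27]
```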
There is, however, another way to explain why AIs so often choose 27. AI models have been trained by humans on human data, and they are therefore riddled with the biases baked into that data, whether those people introduced them deliberately or not.
The number 7 is especially frequent, both on its own and as a final digit, and as another user called Yogi explained, that human bias is everywhere. "That is why when you ask multiple LLMs to pick a number 'at random', they all confidently answer 27. Not because it is random, but because it is predictably popular."
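That explanation can be illustrated with a toy sketch. The probabilities below are invented for the example (real models learn theirs from human text), but they show why low-temperature sampling keeps returning whatever the training data made "predictably popular".

```python
import random

# Invented next-token probabilities for "choose a number between 1 and 50".
# Real models learn these from human text, where 27, 37 and 7-endings are
# over-represented; the exact values here are just for illustration.
favorites = {27: 0.30, 37: 0.15, 17: 0.10, 7: 0.08, 42: 0.07}
rest = [n for n in range(1, 51) if n not in favorites]
probs = dict(favorites)
for n in rest:
    probs[n] = (1 - sum(favorites.values())) / len(rest)

def sample(probs, temperature=1.0):
    # Temperature below 1 sharpens the distribution, so the most popular
    # answer dominates even more; near 0 it approaches greedy decoding.
    weights = [p ** (1 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

picks = [sample(probs, temperature=0.5) for _ in range(1000)]
print(max(set(picks), key=picks.count))  # almost always 27 with these numbers
```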


In one experiment, almost 7,000 people were asked to choose a number between 1 and 100. Many chose 69, followed by 7 and 77. Source: Reddit
It is also a very reasonable theory. An experiment carried out years ago on social media asked people to choose a number between 1 and 100. Among the 6,750 people who responded, the most frequently chosen number turned out to be, ahem, 69. After it, 7 and 77 were also especially frequent.


There are, as always, exceptions to the rule. In my tests I was able to confirm that almost all the chatbots chose 27, but there was one that did not. The same thing happened to many X users, who found that when they asked Grok, the chosen number was 42. Very fitting.
Image | Xataka with ChatGPT