The good news is that AI models are becoming more powerful. The bad news is that they all end up saying the same thing.

We have artificial intelligence. What we don’t have is artificial diversity. That is the conclusion reached by a group of researchers who ran a relatively simple test: they asked 25 different AI models a batch of questions to see how they answered. And that’s the bad part: the answers were far too similar.

“Artificial hive mind”. Scientists from the University of Washington, Carnegie Mellon University and Stanford University, among other institutions, have published an interesting joint study. In it they reveal how, after various tests, it seems clear that although AI models are becoming more and more advanced, they all appear to have developed a kind of “artificial hive mind”: no matter what you ask them, they answer in a suspiciously similar way.


When asked “What is time?”, many of these models responded with the phrase “time is like a river”, while another group answered that “it is like a weaver”.

Time is a river. One of the questions asked of these models was “What is time?”, and although it leaves clear room for very different answers, the worrying thing is that the answers were not different. Several models responded with the phrase “time is a river” and then developed it a little, while others responded with “time is a weaver (of moments)”. That similarity turned out to be a constant.

The illusion of abundance. We believe that when we consult an AI we access a whole world of conversational possibilities, but the study reveals that in reality we are facing a system that produces very similar outputs. Although language models promise limitless creativity, they tend to converge on that hive mind where diversity is sacrificed for statistical consistency. It makes sense, especially considering that large language models are based on the transformer architecture, a probabilistic system that tries to find the next “best” word as it answers us.
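
To see why this design pushes toward convergence, here is a minimal, self-contained sketch. The vocabulary and logit values are made up for illustration (they are not from the study); it simply shows how a model scores candidate next words and how greedy decoding always lands on the single most probable one:

```python
import numpy as np

# Hypothetical logits a model might assign to candidate next words
# after the prompt "Time is like a ..." (illustrative values only).
vocab = ["river", "weaver", "thief", "spiral", "gift"]
logits = np.array([4.0, 3.2, 1.5, 0.8, 0.3])

# Softmax turns raw logits into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Greedy decoding always picks the single most likely token, so every
# run (and every similarly trained model) lands on "river".
print(dict(zip(vocab, probs.round(3))))
print("next word:", vocab[int(np.argmax(probs))])
```

If many models have learned roughly the same distribution, this “pick the most probable continuation” step is enough to make their answers rhyme with each other.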

Same script. The researchers created a large-scale dataset of 26,000 real user queries that, in theory, allowed the models to generate multiple valid and creative responses. They called that dataset “Infinity-Chat” and divided the questions into six main categories and 17 subcategories.

AI, you repeat yourself more than a broken record. During the tests, it was observed that the same model tends to repeat itself, generating very similar responses. In fact, the same effect appeared even when sampling parameters specifically designed to encourage diversity were used. This is what the researchers call “inter-model collapse”.
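
The intuition behind why diversity-oriented settings did not help can be seen with the most common sampling knob, temperature. A small sketch (reusing the illustrative logits from above; nothing here comes from the paper’s code) shows that even a flattened distribution still concentrates most samples on the top couple of tokens:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["river", "weaver", "thief", "spiral", "gift"]
logits = np.array([4.0, 3.2, 1.5, 0.8, 0.3])  # same illustrative logits

def sample_counts(temperature: float, n: int = 1000) -> dict:
    # Dividing logits by the temperature flattens (T > 1) or sharpens
    # (T < 1) the distribution before sampling n tokens from it.
    p = np.exp(logits / temperature)
    p /= p.sum()
    draws = rng.choice(vocab, size=n, p=p)
    return {w: int((draws == w).sum()) for w in vocab}

# Even with a higher temperature, the two top tokens still dominate,
# which mirrors the collapse the researchers describe.
print(sample_counts(temperature=1.0))
print(sample_counts(temperature=1.5))
```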

Too similar. These tests made it clear that the semantic similarity (how alike the responses of the different models were) was worrying. According to the study, it ranged between 71% and 82%, and in some cases certain models generated paragraphs that were identical word for word.
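
The study’s exact similarity metric isn’t detailed here, but a standard way to measure this kind of semantic similarity is to embed each answer and compare the embeddings with cosine similarity. A minimal sketch using the sentence-transformers library (the example answers are invented):

```python
from sentence_transformers import SentenceTransformer, util

# Invented example answers from three different models to "What is time?".
answers = [
    "Time is like a river, always flowing and never the same.",
    "Time is a river that carries every moment downstream.",
    "Time is a weaver, threading moments into a single fabric.",
]

# all-MiniLM-L6-v2 is a common general-purpose embedding model; any
# sentence-embedding model would do for this illustration.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(answers)

# Pairwise cosine similarity: values near 1.0 mean near-identical meaning.
print(util.cos_sim(embeddings, embeddings))
```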

The training problem. It is not only that they all generate text in a similar way by design; there is also a training problem. The authors suggest that this homogeneity of responses could be due to several factors:

  1. Shared training data sources: the models are trained on similar datasets, built from much the same texts and knowledge, drawn for example from Wikipedia or from a very similar set of books.
  2. Contamination from synthetic data generated by other AIs: the models are also trained on synthetic texts generated by other AI models.
  3. Rewards: the reward models used to fine-tune these systems are calibrated to reward some notion of “consensus” quality, so creative and individual diversity is penalized. AIs are “educated” to be, precisely, very similar to each other.

Problem in sight. All of this leads the researchers to explicitly warn about two clear risks of using these AI models:

  1. We will think the same: if we users keep relying on AI models that answer basically the same thing, our own ways of thinking about those topics and problems will be “homogenized”, and our responses will become more uniform too.
  2. Point-of-view reduction: the other danger follows from the first: if AIs end up converging and answering the same thing, points of view are eliminated. Biases from the Western world, for example, will be evident in Western models (ChatGPT, Gemini, Claude), and the same will happen with Eastern ones. This could lead to the suppression of alternative worldviews, of perspectives and “lenses” different from our own reality.

Image | Solen Feyissa

In Xataka | The scientist who made the AI we know today possible has just raised 1 billion. His new goal is to teach it to see space
