AI has caused an earthquake in the education sector. Students use it (often indiscriminately) and teachers try to adapt to the change by reinventing homework and exams. As the years go by, its use is becoming normalized and the effects are already beginning to show. One of them is that all students are starting to sound the same.
When AI gives its opinion for you. CNN reports that AI chatbots have become another everyday tool in university life, but they are not only used as support for writing papers: more and more students turn to AI for everything, even to know what to say in class.
They describe the case of a Yale student who admits that during a class debate "the conversation stopped, I looked to my left and saw someone frantically typing on their laptop." He was asking a chatbot the same question his teacher had just asked. I am doing a university master's degree myself, and the situation is not unfamiliar to me: many students turn to a chatbot to answer questions that are precisely designed to elicit a critical, personal response.
Homogeneous thinking. It is one of the consequences already being observed from the use of AI chatbots. According to a study published in March of this year, LLMs narrow the diversity of human expression in three dimensions: language, perspective, and reasoning strategies. The reason is that training data contains cultural biases and overrepresented positions. The authors of the study argue that AI models tend to reproduce Western, educated, industrialized, rich and democratic points of view. In a context like the university, the result is that students' language is generally more polished, but their responses and reasoning are similar, which ends up eroding the diversity of opinions.
Hallucinations. These biases in the training data also partly explain the phenomena of hallucination and sycophancy. When an LLM invents an answer or agrees with us even when we are wrong, it has to do with the fact that positive, accommodating interactions prevail in its training data. In other words, its training teaches it that giving an answer matters more than that answer's truthfulness.
Cognitive surrender. A concept taken from an experiment we discussed recently, it refers to the phenomenon whereby we stop thinking and checking things for ourselves when using AI, accepting its answers with little or no critical review and adopting its confidence as if it were our own. Delegating part of the cognitive process to AI is not a bad thing if done with a critical eye; the problem arises when it is done indiscriminately, without any scrutiny of the answers.
AI is making us dumb. An MIT study from 2025 pointed in this direction, but as we have already seen, that is a very simplistic reading of what is happening. Whether AI makes us lazier and impairs our critical thinking depends on how we use it. It is comparable to using a calculator for a very complex operation versus using it to multiply five by six. Well used, AI can save us a lot of time and be a very powerful tool for shaping complex ideas, as long as we never lose that critical thinking.
Critical thinking is learned. This is the real problem with the indiscriminate use of AI in education. We are talking about people who have not yet developed this skill, and delegating reasoning to an external tool may mean they never learn it. As opposed to the prohibitionist stance, various authors have pointed out the urgency of starting conversations with students from the earliest stages to teach them to use AI critically and responsibly.
Image | Xataka with Freepik
In Xataka | A university used an AI to hunt down students who used AI. The result was a predictable disaster