A century ago, being illiterate meant not knowing how to read or write. In developed countries, that problem is now residual. But another type of illiteracy is emerging: more subtle, harder to detect, with more gray areas, and perhaps just as decisive: not knowing how to interact with AI.
This new literacy is not about knowing how to program or understanding how models work. It is something more basic: knowing how to ask good questions, knowing how to read the answers, and above all, knowing how to distrust. Not in a paranoid way, but with criteria. Distinguishing when we are using AI… and when AI is using us.
It is the difference between being a passive AI user, someone who swallows without chewing, and using it as a lever for thought, as an extension of our analytical capacity. Because, well used, it can be exactly that: a cognitive multiplier.
That is where a huge difference plays out:
- There are those who use these systems as if they were a souped-up Google or a calculator on steroids. They throw in a question, copy the answer, and that's it.
- Other people, still a minority, are learning to talk with them. To stretch their limits. To generate ideas that neither the machine alone nor they themselves could have produced.
The key is not the tool; it is how you use it. And for that, AI literacy is necessary.
This goes beyond who does what with ChatGPT. Systems like Deep Research are beginning to automate tasks that, until recently, were the entry point to many professions. Reports, summaries, preliminary analyses… precisely the type of work that served to train, to understand the trade from within. If you hand that over to a model, how do you learn to think like an expert?
That is the black hole approaching many companies. If you automate the formative tasks, how are you going to train newcomers? If we do not redesign, well and fast, how experience is transmitted, we could end up with entire generations lacking a real foundation. People with degrees but without criteria.
And not only that: this new illiteracy can be hereditary. Just as parents who did not read rarely raised reading children, those who do not know how to use these tools will hardly teach others to use them. Learning will be left in the hands of the school… or the algorithm.
The paradox is that all of this disguises itself very well. Someone can generate a brilliant report, a perfect presentation, a seemingly solid analysis… without deep understanding, far from it. It is enough to know how to ask for it well.
The risk is not just that mediocrity prevails. It is that nobody notices. As Antonio Ortiz has been warning, the real problem is not that AI thinks for us. It is that, little by little, we stop thinking for ourselves and our atrophy begins.
That is why the true digital literacy of the future will not be technical. It will be ethical, critical, cognitive. Knowing when to ask AI to think for you.
And, above all, when to say no.
In Xataka | Bill Gates has a favorite book about AI for a reason: it predicts better than anyone what will happen to jobs
Featured image | Xataka