We thought we had an AI bubble. There are powerful arguments that we were wrong
You either love AI or you hate it. Either you are a (deluded?) optimist, or you have joined the bandwagon of skeptics betting on the imminent puncture of that AI bubble everyone talks about. The well-known analyst Ben Thompson was in the second camp for some time, arguing that we were in fact in a "good" bubble, one that would be beneficial even if it burst. NVIDIA's annual conference a few days ago made him change his position: for him, there is no bubble. And he doesn't have just one argument, but three. Or rather, three leaps.

The first leap: ChatGPT.

The launch of ChatGPT in November 2022 was an eye-opener that demonstrated what generative AI could do. That first model, though, had two serious problems. The first: it was frequently wrong. The second: when it didn't know something, it made the answer up, hallucinating with astonishing confidence. That made ChatGPT impressive but unreliable, like a cool toy that needs constant user supervision to be truly useful.

The second leap: reasoning.

Almost two years later, another revolution occurred in generative AI. In September 2024 OpenAI launched its o1 model, and with it came a spectacular novelty. For the first time, the model did not simply blurt out the first thing that came to mind: it reasoned about its answer before giving it, evaluated whether it was correct, and considered alternatives. The result was an AI that was significantly more reliable and, therefore, more useful. The price? More computing. AI models that can "reason" consume many more tokens than those that respond directly, and that triggered demand for infrastructure. In other words: data centers.

The third leap: agents.

These two revolutions have now been joined by a third: AI agents. Claude Code and Codex showed at the end of 2025 that AI agents were no longer a promise but something that really worked.
From then on, you could give them instructions and let them execute nested tasks that keep them working for hours. These agents verify their own results and correct errors without a human having to intervene. The difference from what we had before is remarkable, and it also dismantles the bubble theory.

Bubble?

In a bubble, Thompson explained, investment exceeds real demand. In his opinion, however, the opposite is true here: every hyperscaler (Microsoft, Google, Amazon, Meta) has made it clear that computing demand is outstripping them, and to address it they are all announcing astronomical investments in AI data centers. These investments exceed market expectations, but not those of the companies themselves, which, like Thompson, are convinced that demand will end up being so enormous that the current infrastructure will fall far short.

Millions of users are not needed.

Even more striking in this analysis is another nuance Thompson points to. Chatbots were supposed to need mass adoption to generate economic impact, but agents don't have that requirement. A single person can control thousands (millions?) of agents simultaneously, carrying out complex tasks. That means not everyone has to use AI for computing demand to skyrocket: it's enough for some people to use it the way they are likely to: to create those "one-person businesses" where a single human being has thousands of AI employees.

Companies will pay.

The reality is that the vast majority of consumers are not going to want to pay for AI. Companies will, because they pay for productivity, and AI seems to be starting to fulfill that promise. But the argument goes beyond cost savings: agents don't just make human work more efficient; they allow a small group of people with a clear strategic vision to execute it on a scale that previously required hundreds of employees who also had to be coordinated.
For decades, large companies have been adding the layers of management needed to scale, but all that hierarchy disappears with agents.

But.

Thompson is also clear that the wave of layoffs will become increasingly evident, and that AI will have a clear impact on it. However, he explains that many of the current layoffs correspond more to the overhiring of the COVID-19 pandemic era. What will happen now is that companies will stop asking whether they hired too much for the "pre-AI" world and start asking whether they hired too much for the "post-AI" world. In fact, those that don't ask will probably end up competing with smaller rivals built from the ground up with AI and with radically lower cost structures.

For him, two things are clear. The first: the demand for computing will not stop growing. The second: the bubble, if it exists (and according to him, it doesn't), is not going to burst.