Oracle builds yesterday’s data centers with tomorrow’s debt

Yes, something about Stargate smelled off, and that's because it was. This has just been demonstrated by Oracle and OpenAI, who have decided to halt expansion plans for the data center that was meant to be the flagship of the project. This is not just a setback for the project: it is a turning point in the narrative we have seen over and over, the one that seemed to argue that investment in AI could be unlimited. It can't.

OpenAI no longer trusts Oracle. According to sources close to the project, OpenAI's plans to expand its alliance with Oracle at the Abilene (Texas) data center have been canceled. What initially looked like a solid partnership to dominate the AI compute segment has collided head-on with a hard reality: the sector seems to be growing faster than its foundations.

Too slow. Bloomberg reports that the decision responds to an inability to scale at the pace Sam Altman demands. OpenAI requires a compute density and deployment speed that Oracle cannot guarantee in the short term. That has forced OpenAI to look to other partners, including Microsoft, so as not to compromise its roadmap.

Technological gap. This brake is a symptom of a potentially critical problem for Oracle: the world requires data centers with the latest technology, the most modern chips and modern liquid cooling systems, but Oracle seems locked into a very slow upgrade cycle. It is building yesterday's data centers with tomorrow's debt: the facilities it is building were valid under previous standards, but they are obsolete for the next generation of large language models (LLMs).

The numbers don't add up. And as we said, Oracle's other problem is that all of these projects are financed with very high leverage and economic risk. Larry Ellison's company is mortgaging future cash flows to build data centers that will already be "old" by the time they come online.
If AI revenues don't materialize, Oracle will find itself in a dangerous position.

Bubble. All of this feeds the AI bubble debate once again. Few seem to deny that the bubble exists, and this slowdown raises ever more doubts about overinvestment in the sector. That OpenAI is the one now making this decision is a bad sign, and it reinforces the theory that investment in AI has been wildly overblown. This year alone, several AI giants have signaled combined capex of some $650 billion for data centers.

The challenge of not being a Big Tech. OpenAI has a fundamental problem: it is trying to play with the grown-ups. Google, Amazon and Microsoft already had gigantic cloud infrastructures, but also financial positions that let them approach their strategy differently. OpenAI, meanwhile, has not stopped signing agreements involving staggering figures. OpenAI keeps burning money, and not only its own: other people's too.

The danger of the domino effect. OpenAI hitting the brakes with Oracle could prove dangerously contagious. If one of the sector's leading companies steps back from its alliance with a key supplier, other customers may start to think twice before signing similar deals.

In Xataka | OpenAI says its deal with the Pentagon is secure. Really, truly, you have to believe it, trust it, it promises.

ChatGPT's new mode doesn't give you the answer: it builds it with you

A frequent criticism among many teachers, and also among students, is that ChatGPT tends to offer direct answers to any query. Even assuming those answers are correct (we know AI models can still hallucinate), the user gets the solution without having walked the path that leads to it. There are ways to steer the model through prompts to avoid that immediate response and turn it into a learning ally, but that extra effort does not always pay off. In practice, if there is no more direct alternative, many simply choose not to use it. That alternative has just arrived. It is called "Study Mode," and its proposal is clear: to help you learn, not just to answer.

What is ChatGPT's Study Mode. ChatGPT was already a very popular tool among students, but its usefulness for learning had always been in question: is it used to learn, or only to get assignments done? Study Mode was born precisely to answer that question. Starting today, users on the Free, Plus, Pro and Team plans can access this new way of using ChatGPT: an experience that does not deliver the solution directly, but accompanies the user toward it. A change of approach that wants to leave behind "do my homework" and move closer to "explain how it's done."

According to OpenAI, Study Mode is designed to guide the user instead of giving the answer outright. It does so, they explain, through orienting questions, hints and a clearer structure that facilitates step-by-step learning. They also say the system adapts its behavior to the user's level and goals. Answers are broken into understandable blocks, with less information overload and more context when needed. All of this without losing flexibility: it can be turned on or off at any time.

Study Mode has not come out of nowhere. The company led by Sam Altman says it worked with teachers, pedagogues and learning-science experts to define how the system should behave in educational contexts.
The intention, they say, is for the model not just to resolve doubts, but to do so in a way that reinforces understanding. To achieve this, they relied on principles such as active participation, self-reflection, cognitive load management, encouraging curiosity and improvement-oriented feedback.

How to use it, and what we found. To activate Study Mode, just open ChatGPT and select the "Study and learn" option in the tools menu. From there, you can ask any question related to homework, exam preparation or topics you want to understand better. The system adjusts automatically and changes the way it responds.

In our tests, for example, we asked for help solving a statistics problem. Instead of handing us the solution, the model started by asking simple questions, offering small hints and asking us to do the calculations ourselves. The answer did not arrive all at once, but step by step, as if a tutor were guiding the process. This behavior can be seen in the following screenshots: first the query, then the system's reaction, and finally the invitation for us to solve part of the problem on our own.

This is just the beginning. For now, Study Mode works through custom system instructions. It is a provisional solution that lets OpenAI observe how users interact and adjust the model's behavior based on that feedback. The company plans to integrate these functions directly into its main models once it has learned enough from this first stage. It is also exploring new capabilities, such as clearer visualizations for complex concepts, long-term learning goals and finer customization based on the student's level.

At a time when the educational debate around these models is still open, the American startup's bet comes with a clear message: having answers is not enough. You have to teach people to understand them.
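Since Study Mode currently works through system instructions, developers can approximate a similar tutor-like behavior over the standard chat API by prepending their own tutor-style system prompt. The prompt text and helper below are illustrative assumptions only; OpenAI's actual Study Mode instructions are not public.

```python
# A hedged sketch: approximating a "study mode" outside the ChatGPT app
# by prepending a tutor-style system prompt to a standard chat message list.
# The wording of this prompt is an assumption, not OpenAI's real instructions.

TUTOR_SYSTEM_PROMPT = (
    "You are a patient tutor. Never give the final answer directly. "
    "Instead: (1) ask one guiding question at a time, (2) offer a small hint "
    "only if the student is stuck, (3) have the student perform each "
    "calculation themselves, and (4) confirm or gently correct their steps."
)

def build_study_messages(question, history=None):
    """Build a chat-completions style message list that steers the model
    toward guided tutoring instead of direct answers."""
    messages = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    messages.extend(history or [])          # prior turns, if any
    messages.append({"role": "user", "content": question})
    return messages

# Usage: pass the result as the `messages` argument to any chat-style API call.
msgs = build_study_messages("How do I compute the variance of [2, 4, 6]?")
```

The same list can be reused across turns by appending each model reply and student answer to `history`, so the tutor keeps context of which steps the student has already completed.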
China, for example, is already building that future with tools like DeepSeek, integrated directly into university programs. With this step, the ball is now in the other tech giants' court. How will Microsoft, Google or the rising Asian contenders respond? The competition is no longer measured only in capabilities, but also in real usefulness in the classroom.

Images | Xataka with Gemini 2.5 Flash | Screenshots

In Xataka | Anthropic has seen that users of its €200-a-month AI plan won't stop using it. It has had to rein them in
