The model challenges benchmarks in a key area

When we think of Xiaomi, it is normal that its mobile phones come to mind or, at most, its foray into electric cars with models like the SU7. What we have seen now, however, points to a much more ambitious move: the company also wants to compete in the artificial intelligence race. It has done so with the launch of MiMo-V2-Pro, a model that, according to the data shared by the company itself, seeks to position itself close to the most advanced systems on the market, but with a very different approach to costs. And that changes the conversation quite a bit.

What Xiaomi proposes. The company presents its model as the "brain" of systems capable of executing complete tasks, not just responding to specific requests, which in the sector is known as an agent-oriented model. According to official information, it is an architecture that exceeds one trillion total parameters, although it only activates 42 billion on each execution, and it can work with contexts of up to one million tokens. On paper, this allows it to sustain long, complex processes without fragmenting them, something designed for large tasks and more demanding workflows.

Performance against the greats. Judging by the data, Xiaomi does not present its model as the best on the market, but as one that can compete in certain scenarios. In the GDPval-AA benchmark, oriented to real agent-type tasks, it reaches an Elo of 1426, surpassing Chinese models such as GLM-5 (1412) and Kimi K2.5 (1309), although it falls short of rivals such as Claude Sonnet 4.6, which scores 1633. The external reading comes from Artificial Analysis, which assigns it a score of 49 on its intelligence index, placing it in the group of the most competitive models on the market. The key is that closeness in some benchmarks, not general leadership.

The key is the price. This is where Xiaomi's proposal changes the board.
According to data collected by Artificial Analysis, running its intelligence index evaluation with this model costs approximately $348, compared to $2,304 for GPT-5.2 or $2,486 for Claude Opus 4.6. That is not exactly the same comparison as price per API use, but on both measures Xiaomi sits clearly below several Western rivals. In its own API, the company sets prices of $1 per million input tokens and $3 per million output tokens in the tier up to 256K, a lower rate than models such as Claude Sonnet 4.6 and Claude Opus 4.6 at the same level of use.

Beyond chat. What Xiaomi proposes with this model is not only better-quality responses, but a change in the type of work it can do. The company insists on moving from conversation to action, with a system capable of using tools, interacting with environments and completing chained tasks. In this context, it presents it as a model optimized for agentic scenarios and links it to frameworks such as OpenClaw, in addition to mentioning collaborations with OpenCode, KiloCode, Blackbox and Cline. On paper, this reinforces the idea of an AI designed to execute workflows, not just answer questions.

Behind the scenes. Xiaomi enters the race with a model that, according to the available data, comes close to the benchmark leaders in some scenarios, though without surpassing them overall. Where there does seem to be a clear bet is on price, and that is where it tries to differentiate itself. The question is whether this balance between cost and performance holds outside of benchmarks, in real environments. We will have to wait to see whether what the data shows also plays out in the real world.

Images | Xiaomi

In Xataka | China has immediately understood the future of the technology industry: "one-person companies" powered by AI
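As a quick back-of-the-envelope check of the API rates the article quotes ($1 per million input tokens, $3 per million output tokens in the up-to-256K tier), here is a minimal sketch; the token counts in the example are made up purely for illustration:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_rate: float = 1.0, out_rate: float = 3.0) -> float:
    """Cost in dollars of one API call at $in_rate / $out_rate per
    million input / output tokens (the rates Xiaomi quotes for the
    tier up to 256K of context)."""
    return (input_tokens / 1_000_000) * in_rate + \
           (output_tokens / 1_000_000) * out_rate

# Hypothetical agent run: 200K tokens of context in, 8K tokens out.
cost = request_cost(200_000, 8_000)
print(f"${cost:.3f}")  # $0.224
```

At these rates, even a run that nearly fills the 256K window stays well under a dollar, which is the cost story the company is leaning on.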

Its bet in the AI race is to bring together several functions in a single model

The artificial intelligence race is often told as a competition to see who builds the most powerful model or the one that dominates the most benchmarks. In the middle of that board, the French startup Mistral AI has just presented Mistral Small 4, a proposal that tries to occupy a different place in that conversation. It is not presented as a model limited to a single function, but as one that, according to the company, seeks to bring together several advanced capabilities within the same tool.

What exactly is Small 4. The company presents it as the new major iteration of its Mistral Small family and, above all, as the first model of the house that brings together capabilities that were previously distributed among several lines. Specifically, it integrates functions associated with Magistral, Pixtral and Devstral along with those of the Small series itself.

Fewer models, more features. One of the central ideas of the announcement is to concentrate tasks that are normally solved with different tools into a single system. According to Mistral, the goal is for the same model to be used to converse, analyze complex information, work with images or assist in programming without having to switch between several specialized systems.

The numbers behind Small 4. The model is based on a Mixture of Experts architecture, a design that distributes processing among different specialized submodels and that appears in several artificial intelligence systems today. In the case of Small 4, Mistral indicates that the system has 128 experts and that only four participate in each generated token. According to the company, the model reaches 119B total parameters, with 6B active per token, and offers a context window of up to 256K.

Who is this model intended for? Beyond its architecture, Mistral also describes quite clearly the scenarios in which it imagines Small 4 being used. Let's see.
Developers: automating programming tasks, exploring code bases and coding agent workflows.
Businesses: conversational assistants, document understanding and multimodal analysis.
Research: mathematics, complex analysis and reasoning tasks.

The underlying idea is that the model can move between quite different needs without forcing you to change systems depending on the type of work.

The graphics. In the material accompanying the announcement, Mistral includes several graphs comparing Small 4 with other models across different benchmarks. These comparisons are not limited to the score obtained in each test: they also show the average length of the responses each system generates, a figure the company uses to illustrate how much text each model needs to produce to achieve certain results. One of the graphs corresponds to the AA LCR benchmark, where Mistral compares the scores of various models and the average length of the responses they generate to solve the same tasks. The figures published by the company are the following:

• Mistral Small 4: 0.72 score with 1,600 characters
• GPT-OSS 120B: 0.51 with 2,500 characters
• Claude Haiku: 0.80 with 2,700 characters
• Qwen3-next 80B: 0.75 with 5,800 characters
• Qwen3.5 122B: 0.84 with 5,700 characters

The comparison. Small 4 is not the highest-scoring model; both Claude Haiku and the Qwen models rank higher on that indicator. However, Mistral highlights another aspect of the comparison: the length of the responses. According to the company, its model achieves its combination of score and output length by generating significantly less text than several of its competitors, something it ties to lower latency and lower inference cost.

The short-answer trick. A shorter answer is not better simply because it takes up less space. It is only better if it solves the task with a level of quality comparable to that of a longer answer.
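The published figures lend themselves to a quick side calculation. The following is our own crude proxy, not Mistral's methodology: dividing average output length by score gives a rough "characters per score point" measure of how much text each model spends to earn its result, where lower is cheaper.

```python
# AA LCR figures as published by Mistral: (score, avg. response
# length in characters).
results = {
    "Mistral Small 4": (0.72, 1_600),
    "GPT-OSS 120B":    (0.51, 2_500),
    "Claude Haiku":    (0.80, 2_700),
    "Qwen3-next 80B":  (0.75, 5_800),
    "Qwen3.5 122B":    (0.84, 5_700),
}

# Characters generated per point of score, sorted cheapest first.
for name, (score, chars) in sorted(results.items(),
                                   key=lambda kv: kv[1][1] / kv[1][0]):
    print(f"{name:16s} {chars / score:7.0f} chars per score point")
```

On this crude metric, Small 4 comes out ahead of every model in the list despite not having the top score, which is precisely the trade-off the announcement emphasizes.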
This is where Mistral tries to put the focus: if a model achieves a competitive result by generating less text, it can respond faster, consume fewer resources and reduce the cost of inference. In other words, the advantage is not in being more concise for its own sake, but in needing less output to reach a useful result.

How to access the new model. Small 4 can be used not only via API and AI Studio. Being published under the Apache 2.0 license, it is also offered as an open model that can be downloaded, fine-tuned and deployed in your own environments. The company adds that it can be tried for free at build.nvidia.com, in addition to being offered for production as NVIDIA NIM.

Images | Mistral

In Xataka | OpenAI has been wanting to be the bride at the wedding and the dead man at the funeral for years: now it has finally defined its priority
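For readers curious about the Mixture of Experts design the article describes (128 experts, only four participating in each generated token), here is a toy sketch of top-k expert routing. This is an illustrative implementation under our own assumptions, not Mistral's actual code; the experts are stand-in linear maps and the dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 128, 4, 64  # 128 experts, 4 active per token

# Toy experts: each is just a small linear map.
experts = rng.standard_normal((NUM_EXPERTS, DIM, DIM)) * 0.02
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.02

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token through its top-k experts and mix the outputs."""
    logits = token @ router                # score every expert
    top = np.argsort(logits)[-TOP_K:]      # keep the 4 best-scoring ones
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen 4
    # Only 4 of the 128 experts actually run: this is why active
    # parameters per token (6B) are far fewer than total (119B).
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

out = moe_forward(rng.standard_normal(DIM))
print(out.shape)  # (64,)
```

The design choice the sketch illustrates is the one the announcement leans on: total parameter count sets capacity, but per-token compute scales only with the handful of experts the router selects.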

The AI race is no longer about who has the most powerful model, but about who launches the easiest and safest OpenClaw

2026 began with an earthquake in the world of AI, and it did not come from any of the big technology companies, but from an unknown programmer and his open source project OpenClaw (formerly Clawdbot and Moltbot). Not even two months have passed, and we can already say that the boom of this AI agent is reconfiguring the AI race, with more and more companies jumping on the bandwagon. The latest is Perplexity.

Personal Computer. A month ago, Perplexity announced Computer, a cloud-based tool capable of orchestrating agents using various models. The next step is Personal Computer, its own OpenClaw. It can be left running on a Mac Mini and controlled from another device, such as a mobile phone, exactly like OpenClaw, but with a simpler interface that does not require technical knowledge.

More user-friendly. Another key aspect is the focus on security, one of OpenClaw's weak points. Perplexity claims that with Personal Computer, "Every sensitive action requires your approval. Every action is logged. There's an off switch." Personal Computer is not available yet, but if you want to try it before anyone else you can sign up for the waiting list.

NVIDIA NemoClaw. The most valuable company in the world has taken good note of the success of OpenClaw, and a couple of days ago it announced that it will launch its own open source platform for enterprise AI agents, to be called NemoClaw. This announcement is also important because it places NVIDIA in direct competition with companies like Anthropic, OpenAI and Perplexity, shifting its position from hardware supplier to software competitor.

And OpenAI… The project was not even three months old when OpenAI not only bought it but also hired its creator, Peter Steinberger. It was not the only one bidding for the viral success of the moment: Meta also tried, but OpenAI won the bid.
Steinberger said the project would remain "open and independent." This case is a good example of two things: how far one person can go with a good AI idea, and how difficult, if not impossible, it is to compete in an ecosystem where the competition includes some of the largest and most valuable companies in the world. David against Goliath.

The agentic AI race. We spent a good part of 2025 watching AI agents take their first steps, often with quite mediocre results. It was clear that agentic AI was getting much better, but I don't think anyone expected the first viral hit to come from an independent, open source project. OpenClaw not only succeeded, it has launched a new race in AI, one that seeks the ultimate custom AI agent. OpenClaw has two barriers to entry: on the one hand it requires certain technical knowledge, and on the other there is security. It is a very powerful agent, but sometimes unpredictable. Hence Perplexity is appealing precisely to improving those two aspects. We'll see who is next.

In Xataka | Social networks were born for humans: Meta has just bought one designed for AI agents

Image | Pexels

Europe has reached the end of winter with depleted gas reserves. A country has a model to save it: Spain

This winter, now coming to an end, has been colder than expected, something that, as we have seen, has caused havoc. Without going any further, there have been planes unable to fly for lack of antifreeze. As for heating gas, storage has also dropped into the red: the Netherlands has reserves of approximately 12%, while Germany and France are around 21%, according to AGSI data. In this minimum-stock scenario, two countries deviate from the norm: Spain and Portugal, with reserves of 56.87% and 76.7%, respectively. Of course, the difference in capacity is abysmal: 3.57 TWh for the first and 35.9 TWh for the second. It is no coincidence: the Spanish state has a particular infrastructure that has brought it to this point.

The context. The conflict between Ukraine and Russia that began in 2022 accelerated the old continent's independence from Russian gas. Among the measures from Brussels was an emergency rule by which all EU member states had to start the winter with their gas reserves at 90% to ensure supply. In 2025, the EU decided to maintain that 90% target but relaxed the rule to optimize costs. That greater flexibility, together with a harsher winter than expected, has brought winter to a close with reserves at their lowest in the last five years.

The harsh European winter. In mid-January, deposits fell below 50%. If the winter ends with storage at 30%, Europe will have to inject 60 billion cubic meters of gas, roughly the annual gas consumption of all of Germany. In short, Europe has to refill its tanks over the summer, and it will need a lot of imported gas to do so, which means going out into the market, facing other competitors and handling the logistics of bringing it here in an increasingly complicated geopolitical scenario.

The Spanish strategy. The Spanish gas storage system rests on two pillars: underground storage and LNG regasification.
The second leg is providential, insofar as it is where Spain makes the difference; it is, in fact, a powerhouse. Spain owns 35% of all LNG storage capacity in the EU, according to Sedigas. Its enormous regasification capacity enables diversification of origin, with the US as first supplier with 44.4% of total gas, followed by 15 other countries, according to Enagás data. Spain has an infrastructure of seven plants capable of receiving LNG ships from different sources, ensuring supply if any link fails (technical problems, conflicts, political decisions).

Spain started the winter making decisions. Although the above gives it an advantage over other member states, Spain adopted a conservative strategy for this 25/26 winter, adjusting withdrawals to concentrate reserves in January and February, the coldest months and those with the most demand: a management decision to avoid wasting that cushion prematurely. The decision paid off: in January, gas consumption rose 10.2% compared to the previous year, with a 30% increase in gas destined for electricity generation because renewables contributed less than expected.

Spain plays in another league. Thanks to its infrastructure, Spain no longer only consumes gas: it re-exports it. It has become a hub for redistributing gas to Europe, a kind of energy logistics platform, providing geopolitical and economic value to a state that, due to its geographical location, is otherwise isolated (something that, in the electrical field for example, works against it).

Is there real risk? While widespread shortages are not expected, there are localized risks in Europe. As El Economista summarizes, Spain has precedents of similar levels, such as 2016, 2017, 2019 and 2022, when supply was not compromised. Of course, we will have to see what happens with global LNG demand in the summer, because it could make European replenishment significantly more expensive.
In any case, Spain will reach that moment better placed than most. The scenario is not exactly rosy right now, with the Strait of Hormuz closed and the diplomatic crisis between Spain and the US, its main supplier.

In Xataka | Europe believed it had won the gas war against Russia. Now it faces a much more uncomfortable reality: its dependence on the United States

In Xataka | The gas market becomes unpredictable: we have tanks full and ships on the way, but the price remains an enigma

Cover | Pronor

What's new and everything that changes in ChatGPT with the new version of its artificial intelligence model

Let's go over what's new in GPT-5.3 Instant, the new version of ChatGPT's artificial intelligence model. We are going to give you a list of the main changes in this version, so that you know the improvements and what changes from now on.

Since it is understandable to be a little confused by the numbering: yes, there was already a version of GPT-5.3, released in February. That was GPT-5.3 Codex, created to write programming code. And on March 3 the conversational variant, GPT-5.3 Instant, was launched, which is the one used when ChatGPT responds to you with text in a conversation.

What's new in GPT-5.3 Instant

Next, here is a list of the main changes this new version of the OpenAI artificial intelligence model brings, with a brief explanation of each so that it is easier to understand.

Improvements in tone and conversational style: OpenAI admits that GPT-5.2 could sound a bit overbearing or make unwarranted assumptions about user intent or emotions. GPT-5.3 now offers a more focused and natural tone, with fewer proclamations and filler phrases, while maintaining the bot's personality. The tone can still be customized from the settings.

Fewer hallucinations in the answers: GPT-5.3 has reduced hallucinations by 22.5% to 26.8% when searching online, and by 9.6% to 19.7% when relying on its knowledge base.

Less censoring of responses: with GPT-5.2, ChatGPT was overly cautious and rejected questions that could have been safely answered. Unnecessary rejections are now reduced.

Fewer moralizing warnings: GPT-5.3 will moderate the overly defensive and moralizing preambles that came before the actual answer. In short, it won't try so hard to lecture you, and will focus more on your question without explaining its safety limits.
Improved quality of responses using online information: this new version more effectively balances the information it has to search for on the Internet with its knowledge base and reasoning. Instead of simply summarizing what it finds on the web, it first uses its own understanding to contextualize recent news. This means that, by leaning less on the web, it does not generate such long lists of links.

Better creative writing: it can produce more expressive, imaginative and immersive texts, and switch between practical tasks and expressive writing without losing clarity and coherence.

There is still work to do: OpenAI admits that there are still improvements to be made, and that in future versions it will improve responses in languages other than English, as well as the tone of the responses.

In Xataka Basics | ChatGPT apps: what they are and how to use them to give ChatGPT more features

The Tesla Model 3 once again offers a rear-wheel drive version that starts at 35,000 euros

Tesla needs to sell cars. It seems silly to say this about a brand that obviously sells cars. But Tesla is different, because its CEO warned a few days ago that the company was in the process of pivoting. With sales suffering in all markets thanks to a perfect cocktail of Elon Musk's political moves, the lack of a Tesla Model 2 to expand the range, and a product line that is not being renewed, the market is punishing the company harshly.

Tesla also appears to have lost its financial advantage. Not so long ago it could play with the price of the car, eat into profit margins that were huge and remain competitive on price alone. Now, with rivals launching models at a good pace, that game becomes more complicated. What's left for Tesla? Stripping down its already minimalist cars to create a pure and simple mobility object. But, above all, leaning on where it gets its best results. The new Tesla Model 3, the rear-wheel drive model, is a good example of this.

Tesla Model 3 RWD technical sheet

Measurements and weight: 4,720 mm long, 1,850 mm wide and 1,440 mm high. 2,890 mm wheelbase. 1,772 kg.
Trunk: 682 liters to the roof (Tesla does not specify whether this includes the double floor of the rear trunk).
Bodywork: five-seater sedan.
Maximum speed: 201 km/h.
Acceleration (0 to 100 km/h): 6.2 seconds.
WLTP range: 534 km.
WLTP consumption: 13.0 kWh/100 km.
DGT environmental badge: Zero emissions.
Driving aids: Autopilot system with adaptive cruise control with Stop&Go function, lane keeping, blind spot sensor, emergency braking, front and rear cross traffic control. 360º camera and autonomous parking.
Operating system: Tesla software, not compatible with Android Auto or Apple CarPlay. Grok AI as a voice assistant.
Multimedia screen: 15.8-inch central screen.
Others: wireless charging for mobile phone, two USB-C sockets.
Specifically developed Spotify, Disney+, Steam, YouTube and Netflix applications.
Price: from 35,000 euros with a brand discount and without government aid.

The right changes, the usual advantage

What changes in the new Tesla Model 3 RWD? Rather little. On the outside the changes are practically imperceptible: some black logos (which were already appearing on the latest deliveries) and some very closed-off wheels to increase the car's range. The rims give a greater feeling of quality than the Tesla Model Y's hubcaps, although they are very similar.

Inside, wireless mobile charging is eliminated and the imitation leather seats have been replaced by cloth ones. I am one of those who prefers the change: the current seats are good (without being spectacular) and I have the feeling that the fabric will age better than the previous upholstery. Of course, this is still mere intuition. The electric seat controls have also disappeared. You have to go to the screen, but since you can save your personal profile with your mobile phone, which still acts as a key, it shouldn't be a problem. And, if you look for them, all the controls are on the screen except for the turn signals, where Tesla backtracked and returned the stalk behind the steering wheel.

Software continues to be Tesla's best ally here. The menus are clear and simple, and it's easy to remember where everything is. The problem, as always, is having to touch the screen for functions that are better operated by hand. To improve the experience, Tesla has added Grok AI as a voice assistant. In its latest update, this artificial intelligence chatbot is available in beta, which for now leaves us in a somewhat strange situation because it continues to coexist with Tesla's own voice assistant. To put it simply, Grok cannot handle the "material" functions of the car. That is to say, you can't ask Grok to roll down the window or change the air conditioning.
The assistant notifies us that it cannot perform this function and tells us how to ask Tesla's voice assistant for it. It's a shame, because the latter requires more precise commands and less natural language. Grok feels like an almost obligatory addition in a car that does away with physical controls: the promise of a voice assistant with artificial intelligence that learns from what you tell it is almost essential to manage the car's functions in much more natural, less robotic language. What we can ask Grok for is a route, to change it on the fly, or to find places to eat, sleep or buy some clothes while on the go. This does seem useful to me. For example, in our case we asked it to take us on a route to Berlin but to find us a five-star hotel in Paris to stop and rest. In a few seconds we had the route made and the hotel selected.

From a practical as well as fun point of view, the assistant can be used with two female voices and two male voices, each with its own name and a different personality: from the pure assistant to the conspiracy theorist, passing through a "language teacher", the doctor, the narrator, the therapist and the "meditation" mode. The information each of them provides is different and adjusts to their personality, so if we want more reliable information we should go to the plain assistant. However, you have to be careful, because Grok pointed out to me, for example, that you can only enter the city of Madrid if you have a Zero Emissions sticker, which is not true. Of course, the fact that it lacks restrictions can be funny, like when it told us that an electric car "will never equal a gasoline car", that its favorite car was a Porsche 911 Carrera RS 2.7 or that it totally disagreed with Elon Musk's…

Choose which model to use among Claude, GPT, Gemini, Kimi, Grok or Sonar

Let's see how you can choose which artificial intelligence model you are going to use with Perplexity in a prompt. This is a chatbot known for giving access to many cutting-edge models from third-party companies, something it does automatically depending on the request you make. However, if you are going to use Perplexity, it is worth knowing one of its basic functions: being able to choose by hand which model you want to use.

And yes, every time Google, Anthropic or OpenAI launches a new artificial intelligence model, Perplexity will add it to its catalog. The results will not be exactly the same as if you used the paid versions of ChatGPT, Grok, Claude or Gemini, because Perplexity may modify them a little, but you will be able to take advantage of the reasoning power of these models.

Choose the AI model to use in Perplexity

To choose the AI you want to use in Perplexity, look at the box where you write the prompt. There, click on the AI model option, which appears with an icon that looks like a chip, at the far left of the row of icons at the bottom right of the prompt writing field. When you click on that button, a list of all the artificial intelligence models you can use will appear: both the best and the latest available from Gemini, GPT, Claude, Grok, Kimi or Perplexity's own Sonar. You can do this in the web version or in the mobile and desktop applications.

Here, you should know that you can choose the model for each prompt within a conversation with Perplexity. In other words, you can ask a question with one model and then ask the next question with another. Also, below the list you will see the number of queries you can make with the most modern models.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

Xiaomi already has its own AI model for robots. At the moment, it's great at taking apart LEGOs and folding towels

It has been a long, long time since Xiaomi stopped being just a mobile phone company. Today the company's tentacles reach all kinds of sectors, from phones and household appliances to cars, chip design and, from now on, robotics. The Chinese company has just presented its first vision, language and action model for robotics. Its name: Xiaomi-Robotics-0.

What is this about? Xiaomi-Robotics-0 is an open-source model whose code can be found on GitHub and Hugging Face. As the company explains, the model has been optimized to offer "high performance, speed and smoothness in real-time executions." We should not think of it as an AI capable of making a robot run and jump like a human, but rather one capable of making a "simple" robot understand its surroundings and make the optimal decision without, for example, destroying whatever it has in its hands.

About the robots. When we talk about AI applied to robotics, we are not just talking about a robot being able to move. The device must know and understand that it should not apply the same force when holding a brick as when holding a cat, for example. In that sense, there has to be visual understanding, comprehension of what is being seen, and appropriate execution of actions: this is a brick > it is a heavy object > I have to apply more force to hold it and move it from one side to the other.

Xiaomi-Robotics-0 results in the benchmarks | Image: Xiaomi

The benchmarks. As detailed on the project website, Xiaomi has achieved very good results in the LIBERO (knowledge transfer), SimplerEnv (performance in realistic simulations) and CALVIN (performance in language-conditioned tasks) benchmarks. According to the company, Xiaomi-Robotics-0 "achieves high success rates and robust results in two challenging two-handed tasks: disassembling LEGOs and folding towels."

The fun of training. Every AI model draws from a training dataset.
In the case of Xiaomi-Robotics-0, a 4.7-billion-parameter model, the dataset consists of 200 million time steps of robot trajectories and more than 80 million samples of general vision-language data, including 338 hours of LEGO disassembly videos and 400 hours of towel folding videos.

The results. The company claims in the paper that its model is capable of disassembling complex LEGO builds of up to 20 pieces, adapting its grip in real time to avoid errors, using only one hand to position a towel correctly and fold it or, if it picks up two towels from the basket, putting one back and folding only the other. This demonstrates an interesting capacity for adaptation and learning that, although it may seem trivial on paper, has real value if we think about industrial and even domestic robots.

Beyond. What this model demonstrates is the ability to adapt to complex and unpredictable geometries, such as a towel thrown in a basket, and to understand, let's say, "soft physics." With a towel it may seem like a small thing, but think about manipulating human tissue in a surgical intervention, for example. The same goes for LEGOs: it's not just disassembling them, it's understanding the position of the blocks, how they fit together, what force to apply and at what angle so as not to break them. Or think of a robot removing debris. An industrial robot has historically been programmed with fixed coordinates, that is, moving something from point A to point B. A robot with AI like the one Xiaomi proposes would be much more versatile. The first robot learns movements, the second learns tasks, and the difference is night and day. If we imagine a distant future with domestic robots, there is a world of difference between a robot that merely wipes dust from a shelf and one that can identify objects and decorations, understand that it must move them without knocking them over, and clean thoroughly.
Cover image | Xiaomi

In Xataka | A Chinese company pushes another limit in robotics: it claims its new humanoid robot runs like an elite athlete

Musk doesn't have the best model or the best product, but he has something more important in the AI race: SpaceX

Elon Musk has done it again: he has moved one of his companies from the right pocket to the left. In 2016, when his company SolarCity was in the doldrums, he took advantage of the fact that Tesla was going like a rocket to save it. Now it is xAI that needs a push in the age of artificial intelligence and, after a few brief rumors, confirmation came: SpaceX has purchased xAI. Or, put another way: an Elon Musk company has bought another Elon Musk company. It's an ideal move, but also a tremendous mess.

In short. The announcement came late in the night. As part of a vertical integration, the aerospace company will absorb the operations of xAI, Elon Musk's artificial intelligence company. It was an extremely rare agreement: when a business purchase occurs, we usually know the numbers, but here we only have some ideas about the goal. Musk has been deliberately opaque and has justified the move as a restructuring to guarantee "freedom of expression", with a story built on energy, the development of technology and something we have been talking about for some time: the need to exploit outer space as a source of energy and a giant heatsink for increasingly numerous data centers.

One million satellites. In fact, the operation came shortly after we learned that SpaceX had filed with the US FCC a project to launch one million Starlink satellites. Currently there are about 9,000, plus a few thousand more from companies like Amazon, along with Chinese and European satellites, and astronomers are already complaining about how difficult it is to observe beyond low orbit. With a million satellites from SpaceX alone, the amount of potential space debris will increase stratospherically, but Starlink is not just a satellite system for having Internet anywhere on the planet: they are potential data centers.
Musk himself, when companies like Amazon or Google began to be very vocal about the need to move data centers into space, pointed out that SpaceX already had them and that it would be easy to convert its satellites into computing centers. In space there is unlimited, uninterrupted energy; heat dissipation is much simpler because neither air nor water is needed, as on Earth; and information is transmitted to terrestrial centers using lasers, eliminating the need for expensive fiber optic interconnections.

SpaceX works. And, in Musk’s statement, it is claimed that the demand for energy and computing power to feed AI is almost impossible to cover with terrestrial solutions, so the most logical step is a space exodus of data centers. And, of course, one plus one equals two: SpaceX has the infrastructure and xAI needs it. But beyond the synergy there is another reality: SpaceX has become a solid and profitable company. It is the only one that, right now, can routinely transport astronauts to and from the International Space Station. It has become an essential piece for both NASA and the Department of Defense and, in addition, it has the aforementioned Starlink system, which has crept, perhaps too far, into the communications infrastructure of countries like Ukraine.

xAI burns money. xAI, on the other hand, shows the symptoms of a company focused on artificial intelligence: it is valued at more than $230 billion and has raised several tens of billions across several financing rounds, but it is burning money at a rate of approximately one billion dollars a month. This is typical, as we say, of companies in the growth phase, and its executives have stated that they have plans and resources to keep spending aggressively, but everything has a limit. xAI requires enormous amounts of energy, resources and computing, and it is developing its own chips. All of that costs money, and putting data centers in space with existing infrastructure like Starlink’s can help ease the burden.
In the economic and energy sense, it is a brilliant operation. When other technology companies want to start filling space with their data centers, SpaceX will already be there.

A colossal mess. Therefore, in the end, what Musk has done is unite a company in an aggressive investment phase with another that is solid and has established ties with the US government. SpaceX is the vehicle that can carry xAI highest, and it looks like a textbook win-win. Now, it is also a tremendous mess. Because xAI is not just xAI: it is also X (Twitter), and now SpaceX has all that power under one umbrella. xAI manages military intelligence and, as we have already mentioned, Ukraine threw itself into the arms of Starlink, relying on its infrastructure during the conflict with Russia. SpaceX is no longer just an aerospace company; it is that and much more: a brain, and a social network holding the private data of tens of millions of people. And in a Europe fighting for its technological sovereignty and information protection, SpaceX can go from being a partner for a specific mission to something to look at askance.

Image | The White House (edited)

In Xataka | From $100 billion romance to silent divorce: NVIDIA and OpenAI’s relationship is disintegrating

The hypermarket is mortally wounded in Spain. And there is an absolute winner: the Mercadona model

For years the plan was to take the car, go to the hypermarket of choice and spend a couple of hours walking its aisles, collecting everything we needed and a few other things we happened to find, because there was almost everything there. That model is dying by leaps and bounds.

From the boom of the hypermarket to its decline. The format became popular in the 90s, but there has been a change in purchasing and consumption habits that was catalyzed by the pandemic. The 2025 data from the consulting firm NIQ (formerly Nielsen), collected by El País, points to a share of 10.2% of total sales in Spain. Last year it grew by 1.2%, unlike the previous year, when it fell by 2%. The problem, in addition to its small piece of the pie, is that most food distribution channels in Spain grew more.

The heirs of the hyper. Mercasa studies from spring 2025, cited by The Economist, reflect that supermarkets already concentrated 91.8% of the commercial surface in the country. And it is not the only heir: the other alternative is proximity formats. It is true that NIQ data shows that the medium supermarket (between 300 and 799 square meters) fell more than the hypermarket, but its share is four points higher. The small supermarket (less than 300 square meters) and the large supermarket (between 800 and 2,500 square meters) are the big winners: the former rose two tenths with a sales increase of 9.1%, and the latter rose nine tenths to reach 57.1% of the market and 7.6% of turnover. Here there is an absolute winner: Mercadona, whose new openings exceed 1,500 square meters and which has also been transforming its 1,600 stores for years to replace the smaller ones. And its strategy is paying off: its share rose to 29.5% in 2025 despite having 10 fewer stores. The Mercadona effect, or how efficiency kills size.
This change in trends opens a new battle for proximity: growth is in the roughly 1,500 square meter supermarket (the Mercadona model) and in the convenience store. Growing no longer means opening more centers, but having better ones: it pays to close 10 stores if the remaining ones are more efficient. Meanwhile, last-mile logistics is gaining weight: it is easier and more affordable to serve an online order from a network of small stores scattered throughout the urban center than from a distant hypermarket. In addition, the franchise format allows chains to expand their brand without assuming operating costs.

The consumer has spoken. The NIQ consultancy clearly reflects this paradigm shift: purchase occasions per household grew by 11% in 2025 and units per basket decreased by 7.6%. In short: we buy more often but in smaller quantities, a trend that benefits local stores and penalizes hypermarkets. Kantar’s reading points to factors such as smaller households, a higher average age and an urban context that favors this type of purchase over American-style car culture.

The chains are moving. The fact that the hypermarket is in decline, reducing its weight in the market, directly affects the operators that exploit this format, such as Carrefour and Alcampo, followed by Eroski and El Corte Inglés. In NIQ figures, the first lowered its share two tenths to 7.2%, the second fell from 3.1% to 2.9%, the Basque chain fell one tenth to 4.3% and El Corte Inglés did the same by two tenths, to 1.6%. So they are adapting to this paradigm shift.

In Xataka | Mercadona has understood that Spain no longer wants to make its potato tortillas. And it is making gold with it

In Xataka | Years ago Mercadona decided to conquer the market with its private labels. And that is making gold for some companies

Cover | Carrefour
