The success of Starship V3 accelerates the race to the Moon

SpaceX, Elon Musk's space company, is almost ready to launch its next-generation Starship in May. But before the launch it must carry out some static tests, such as firing the engines. The first test of this type was performed just a month ago, with a small incident at the end, but the second went perfectly, so the launch plans are moving forward.

A complete ignition test. On April 14, SpaceX performed a static fire of its upper stage's engines. Although the first stage's ignition test had to end early due to a failure in ground equipment, this time all the engines ignited, demonstrating that this enhanced version of Starship is ready for its first flight.

Why is it necessary? Rocket engines are critical and very sensitive components, and they are among the parts that most often fail in launches, along with propellant loading systems. That is why testing before launch is important. In static fire tests, all the engines are started to check that there is no anomaly. In the case of Starship version 3, none have been detected. Everything is on track.

A beefed-up version of the previous one. Starship version 3 measures 124.4 meters, 1.2 meters longer than the previous version. It is much more powerful, thanks to its Raptor V3 engines. For this reason, SpaceX has already announced that it will be capable of carrying payloads of more than 100 tons to low Earth orbit. Version 2 could only carry 35 tons.

Ready for the Moon? After the success of Artemis II, NASA already has its sights set on Artemis III, which will be the final test before landing a new batch of humans on the Moon. To do this, the agency needs a rocket to match, quite literally. For now, two private companies are working on it: Blue Origin, with Blue Moon, and SpaceX, with Starship.
Although at first all bets were on SpaceX to take the next humans to the Moon, a series of delays has led some to think that Blue Moon could overtake it. The fact that Starship version 3 has advanced this way is therefore good news for Elon Musk's company. In May we will know if it really lives up to expectations.

Images | SpaceX

In Xataka | In 2018, Elon Musk put his own car into orbit. Eight years later it is still circling the Earth

China and the US have focused on the race for humanoid robots. Now China is clear about which ones make money: dogs

It is difficult to keep track of all the open fronts between China and the United States. The technological war covers everything and, if there is a race for artificial intelligence, there is one just as fierce in the field of robotics. The two powers are focusing on humanoid robots to put them in factories or in customer service, but the market is speaking, and it turns out it prefers dogs. Robot dogs, specifically.

In short. Right now, China sits at the summit of robotics. Not only because of how advanced its robots are, but because it is already putting them to work in factories, stores or museums. They are not theory, they are practice, thanks to government support and, above all, because the components to build a robot are manufactured... in China. That advantage is something no other country has, and it is essential (just ask TSMC about its strategy in Taiwan). There is a multitude of robotics startups and, although the humanoids are the most striking, the robodogs are the ones making money.

An article in SCMP explains how robotics companies are favoring quadruped robots because they are becoming business drivers. AgiBot is one of those companies, and it has just expanded its robot portfolio by creating a subsidiary, AgiQuad, focused exclusively on quadruped models. Its justification is that quadrupeds are what will boost the robotics business, and it does not want its robodog to live "in the shadow of a humanoid robot." That is, instead of launching a humanoid and a quadruped under the same brand and forcing customers to choose (and compare), it prefers to have each branch of the business operate a different type of robot.

Projection. AgiQuad plans to become a 500 million yuan (about $73 million) business this year, scaling to 10 billion yuan by 2030 with 300,000 annual robot shipments.
At the moment, the company says everything is sold and that it keeps producing units because its warehouse is completely out of stock. And it is not alone: other companies like Amap and the giant Alibaba want into this robot fight to stand up to Unitree, but in the field of four-legged robots. Speaking of the dancing queen, Unitree's quadruped robot division is estimated to have generated 490 million yuan in revenue in the first three months of 2025 alone. That is, in just three months it generated as much as AgiQuad expects to generate this whole year. Deep Robotics is also doing well in this field.

Deployment. According to IDC analyses, the quadruped robot market generated $180 million in 2024 and is expected to generate $700 million this year. The estimate is that the segment will reach 50 billion yuan, about $7.3 billion. And the question is: where are these robots going? Many go to exhibitions and fairs that show off the robotic muscle of Chinese startups, but others are already operating on the ground. China wants 'civilian' quadruped robots, such as assistants for blind people, but it is also deploying units among firefighters and, as we reported a few days ago, within the Chinese army as support, reconnaissance and attack units.

The race doesn't stop. This scenario makes sense if we take several details into account. The first is the most practical: quadruped robots have years of analysis behind them and have already proven very useful in various scenarios. The Chinese army is not the only one that has them; in the United States, for example, they are beginning to be deployed in data center surveillance tasks. The second reason is that those years of research and development have made them increasingly cheaper to produce, allowing manufacturing to scale and leaving more margin for manufacturers.
Prices are also falling, making it easier for different actors to integrate them into their workforce. Precisely for this reason, quadruped robots can be a viable commercial product for the same companies that continue to push the development and commercialization of humanoid robots. Unitree itself, which we mentioned before, has just started selling its R1 model through AliExpress, with a planned launch in the United States, Japan and the United Arab Emirates. The price? $8,200, but you have to start somewhere.

In Xataka | China will bring together more than 300 humanoid robots in a half marathon. The goal goes beyond running

A frantic race has begun between China and the US for Brazil’s rare earths. And Brazil only asks for one thing in return.

After a diplomatic incident with Japan, China abruptly reduced its exports of rare earths, causing an immediate shock in industries around the world that depended on these materials to manufacture everything from magnets to advanced electronics. For weeks, companies and governments discovered the extent to which a seemingly invisible resource could become a lever of global power.

A global race decided far from Washington and Beijing. This push for critical minerals has entered a new phase, with Brazil now turned into the board on which the interests of the United States and China intersect. The reason? Both seek to secure access to rare earths that are key for technology, defense and the energy transition, but this time they are not negotiating on equal terms. Brazil, with some of the largest reserves in the world, has made one thing abundantly clear: it does not want to repeat its historical role as a simple exporter of raw materials, and it is using that position to redefine the rules of the game.

The US accelerates, but Brazil slows things down. Washington has intensified its offensive with multi-million dollar investment proposals, bilateral agreements and formulas to guarantee direct supply to US companies. It has even started to secure rights over production through financing, trying to close off China's path in a supply chain it considers strategic. However, this approach has been perceived in Brazil as too aggressive, which has generated political resistance and stalled agreements that, on paper, would benefit both parties.

China is still in the game. Meanwhile, China has not disappeared from the board, quite the opposite: it is still the main global player in the processing of rare earths and maintains active commercial relations with Brazil. Exports to the Asian giant have grown, and its industrial experience remains difficult to match in the short term.
This puts Brazil in a unique position, able to negotiate simultaneously with multiple powers without being forced to choose, at least for now.

The Brazilian condition. This is where Brazil introduces its strategic turn: it has no problem opening the door to foreign capital, but it sets a clear and unusual condition for this type of agreement. Extracting resources is not enough; any partner must contribute to local technological development, processing within the country and job creation. In other words, Brazil demands to transform its mineral wealth into industrial capacity of its own, breaking with decades of dependence in which it exported raw materials and imported finished products.

From exporter to industrial power. This change of focus is translating into concrete proposals, such as the possible creation of a state company to manage critical minerals or a battery of laws aimed at strengthening national control over the sector. The idea is clear: go from selling resources to building the entire value chain within the country, from extraction to the manufacturing of key components. It will not be a quick or easy process, but it marks an ambition that goes far beyond a simple commercial agreement.

The real contest: who accepts Brazil's rules. In essence, the competition between the United States and China for Brazilian rare earths is no longer fought only in terms of investment or access, but over who is willing to accept the conditions a third party imposes, in this case Brazil. Because the country is not saying "no" to anyone, but something more uncomfortable for the great powers: "yes, but on our terms." And that introduces a new element into the geopolitics of resources, one where control no longer depends only on who needs the minerals and has the money, but on who has the capacity (and the will) to impose the rules of the game. For Brazil, a masterstroke.
Image | NZ Defense Force, YouTube

In Xataka | China has just discovered the largest deposit of rare earths in the world. And it did so just when it needed it most.

In Xataka | The world's rare earth reserves, laid out in this graph showing the brutal dominance of a single country

Within Meta there is a race to see which employee consumes the most AI tokens. It’s the ‘Tokenmaxxing’ of Silicon Valley

There is a battle inside Meta: to see who spends the most AI tokens. The token is the basic unit AI uses to understand the language with which we give it instructions. It is the "bridge" between our words and the numbers the machine can process and, therefore, when ChatGPT or Google present a model, they brag about the millions of tokens it can process. But tokens are also becoming a spending unit in Silicon Valley's AI companies, so much so that they may be generating a toxic work culture. And Meta is an example of a company where employees compete to see how many tokens they can consume in order to become a Token Legend.

Tokenmaxxing. It is not the first time we have talked about this. A few days ago Jensen Huang, CEO of NVIDIA and one of the main instigators of this phenomenon, commented that he would be worried if an engineer who earns $500,000 did not spend at least $250,000 a year on tokens. Because tokens cost money, and NVIDIA is already considering offering tokens as part of the signing bonuses for its artificial intelligence engineers.

Meta. As could not be otherwise, Meta does not want to miss this party. The company, which changed its name when the metaverse was going to be the next big thing and, after that swerve, defines itself as an "AI-native company", is one of those pushing its artificial intelligence engineers to keep a count of the tokens they spend during the day. There is no official data, but reports in outlets such as Business Insider and The Information point out that some of these teams have very specific token-usage objectives. For example, the company expects 65% of its engineers to write more than 75% of their code using AI tools by the middle of this year. The Scalable Machine Learning division has another objective, and so on in each of the code-related departments within Meta.

Token Legend.
In The Information they point out that there is an internal leaderboard, created by the employees themselves to gamify the work. It shows the 250 most intensive AI users with a simple premise: the more tokens you spend, the higher you climb in the ranking. The winner of this particular competition takes the title of 'Token Legend'. It is turning an expectation into a kind of internal sport.

Crazy spending. If we put the 542-character first paragraph of this article into OpenAI's 'tokenizer' tool, we see that it alone consumes 121 tokens. Well: according to The Information, over the last 30 days the total usage on that internal leaderboard was more than 60 billion tokens. And even if they want to dress it up as sport and competition, it is still mandatory. In late 2025, Meta launched the 'Level Up' program, in which employees who complete the most tasks using AI earn badges. And more important than that: it made the use of AI a central criterion in its employee performance evaluations, which obviously shapes salary and promotion outcomes.

Doubts. But beyond paying to work, there are other underlying issues. One criticism of this tokenmaxxing system is that AI companies like Meta or NVIDIA encourage spending more on tokens because, that way, their own employees become consumers of the very product they are creating. Software engineering analyst Gergely Orosz offered an easy analogy: it is as if Tim Cook, CEO of Apple, said he would be worried if one of his employees earning $500,000 a year did not spend $50,000 on App Store purchases. Orosz goes on to argue that productivity should not be measured in tokens spent, but in the results obtained.

An industry issue. In any case, Meta and NVIDIA are not the only ones that measure their employees by their consumption of AI at work.
It is something spreading through other AI majors, turning tokens into an extra work benefit incorporated into engineers' compensation alongside base salary, performance bonuses and shares. It is estimated that an OpenAI engineer can process 210 billion tokens in a week, and there are Claude Code engineers who rack up more than $150,000 in tokens in a single month. Basically, part of your salary flows back into the company that pays you.

And... has Meta said anything? Yes: that it is not about volume but about quality, pointing out that performance rewards are based on the impact of the work and not on raw AI usage.

Image | 'Wolf of Wall Street', Meta logo. Edited

In Xataka | Google Earth shows the world. The Spanish Xoople wants AI to understand it
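To make the token counting above more concrete, here is a toy sketch of greedy longest-match subword segmentation, in the spirit of the techniques real tokenizers (such as the OpenAI 'tokenizer' tool mentioned in the article) build on. The tiny vocabulary is entirely hypothetical and exists only to show why token counts differ from word counts.

```python
# Toy illustration of subword tokenization. Real tokenizers use byte-pair
# encoding with vocabularies of tens of thousands of entries; this sketch
# uses a tiny hypothetical vocabulary just to show the mechanics of
# greedy longest-match segmentation.
TOY_VOCAB = {"token", "tok", "en", "s", "max", "xing", "ing", "m"}

def toy_tokenize(word: str) -> list[str]:
    """Greedily split a word into the longest vocabulary pieces."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest match first
            if word[i:j] in TOY_VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character becomes its own token
            i += 1
    return pieces

print(toy_tokenize("tokenmaxxing"))  # three tokens, not one word
```

Note how a single word can cost several tokens; that gap between words and tokens is exactly why a short paragraph can "spend" over a hundred tokens.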

Samsung is tired of being second in the chip race. Now it is preparing to dethrone the titan of Taiwan

When we talk about artificial intelligence, several proper names star in the conversation. NVIDIA has become the foundation and cement of AI thanks both to its products and, above all, to its money. But it is impossible to leave Samsung out of the equation. Its HBM4 memory chips are the ones that will allow NVIDIA and AMD to manufacture their next-generation platforms, but the South Koreans do not want to stop there. They aim to be the world's largest advanced chipmaker and have launched a plan to hit TSMC where it hurts most: its expansion across the United States.

An x8 thanks to AI. 2025 was a transition year for Samsung. While its great rival in the memory segment, SK Hynix, dominated the HBM chip market, Samsung was preparing to make the leap with HBM4 chips, the new generation of high-bandwidth memory designed to power the new AI platforms from both NVIDIA and AMD. The effort paid off: it overtook SK Hynix to become the supplier of the two giants, and that is already materializing, at least in profit estimates. The company now forecasts profits of about $38 billion for the first quarter of the year, roughly eight times the figure for the same period last year.

Texas. The company keeps churning out the new HBM4 memory, but even so it cannot satisfy the enormous demand from its customers, and some already expect the prices of these chips to rise by more than 50%. To meet demand, Samsung is on the move, and the United States is key to its ambitious expansion. The South Korean company plans to invest $37 billion on US soil, $17 billion of which will go to the Taylor, Texas plant. According to the Korea Herald, the company is finalizing hiring for this semiconductor plant, where it hopes to produce cutting-edge 2-nanometer chips.
It is estimated that 1,500 people will be directly employed, and the idea is to produce transistors with a gate-all-around architecture.

TSMC in the spotlight. Recent reports indicate that Samsung has already begun producing test chips on that node, with the aim of beginning mass production by 2027. But this expansion is not only happening in the United States. At the Pyeongtaek Campus, Samsung's operations center, the company is building a new factory for which it has just ordered 20 EUV lithography machines valued at almost $8 billion. As could not be otherwise, they are from ASML, and it is estimated that the plant will house 70 units in total to support the production of HBM4 memory chips. These two moves share one goal: to dethrone the queen of semiconductors.

Currently, TSMC leads the pack with NVIDIA and Apple as its best clients, but Samsung is another industry giant that, even if it cannot take the global throne, is aiming for something more concrete: to lead the way in the United States. Both Samsung and TSMC are in full expansion across the United States, but if Samsung manages to start mass manufacturing of 2 nm chips by 2027, it would overtake TSMC (focused on 2/3 nm chips) in advanced chip development on US soil. It is a vital race, since Tesla, Apple, NVIDIA and AMD are all trying to get chips manufactured in the US and thus meet the demands of Donald Trump's government.

Trojan horse. In the end, it is a move from which Samsung can only win. On the one hand, it expands its HBM4 chip capacity to power AI platforms whose demand does not seem set to stop growing in the short term. On the other, it keeps settling on American soil, where it maintains its battle with the Taiwanese giant. On top of that, Samsung is one of the founding members of Applied Materials' EPIC program, together with SK Hynix.
Samsung is positioning itself to be the big player in semiconductors, both as a manufacturer and in designing the machines and processes that shorten development times for cutting-edge chips. And all of this is being done by foreign companies on US soil, when what the current government wanted was for American companies to take the lead. In fact, Samsung's plans are so ambitious that it is already aiming to master 1 nm chip production by 2030.

In Xataka | ASML has discovered a way to further improve its EUV machines. This is terrible news for China and the US.

There is a much deeper and more important AI race in which China is crushing its competitors: human talent.

The AI race is about many things: not only who makes the best AI models, who has more and better data centers, or who has cheaper energy to power this revolution. It is also about something that China currently dominates with an iron fist: AI experts.

China surpasses the US in talent. The Economist has analyzed the evolution of papers published at NeurIPS, one of the world's most important AI conferences. In the 2025 edition it discovered a singular fact: for the first time in the history of the conference, China surpassed the United States in papers presented, a definitive sign of how the Asian giant has achieved a victory in an area that is crucial for the future of this technology.

Alarming data. This is not an isolated data point but the result of a trend that began ten years ago. In 2019, 29% of researchers presenting their work at NeurIPS had started their careers in China. In 2025 that figure is 50%. Meanwhile, the proportion of researchers who began their careers in the US has fallen from 20% in 2019 to 12% in 2025. The analysis is based on a sample of 600 articles written by almost 4,000 researchers (many papers have several authors).

Chinese universities dominate. The analysis also looked at where these researchers studied. Nine of the ten institutions where the most NeurIPS 2025 researchers completed their studies are in China. Tsinghua University leads the way with 4% of all researchers. The prestigious MIT in the US? Only 1% come from there.

Quantity matters, but so does quality. This does not necessarily mean that China wins (or loses) on research quality, but it does on quantity. And this parameter is very relevant, because scale matters: when China manages to "produce" a huge number of AI graduates, the chances increase that those experts will be responsible for new advances in the discipline.
Not only that: it also makes these advances spread faster within the Chinese technological ecosystem.

The US depends on Chinese talent. One of the most uncomfortable details of the study is where those who signed papers from US institutions were trained. Of all of them, 35% graduated from Chinese universities, the same proportion as those who graduated from US universities. Many leading AI companies in Silicon Valley are drawing on AI experts trained in China, which is increasingly the world's largest pool of such engineers.

Come back home. What worries the US is that the Chinese talent its companies hire increasingly ends up returning to China. Chinese programs like the Thousand Talents Plan offer up to $100,000 annually, plus housing and research subsidies, to attract that talent back. The US government is inadvertently promoting exactly this, because funding cuts, visa uncertainty and suspicion toward researchers of Chinese origin are making the US a less attractive place for these experts to work. In other words: the US is shooting itself in the foot (again).

From the American dream to the Chinese dream. In 2019, about a third of NeurIPS researchers who had graduated in China stayed in the country to work. In 2022 that proportion rose to 58%, and in 2025 the figure reached 65%. And, as mentioned, those who had left are returning: in 2019, only 12% of Chinese researchers who had completed postgraduate studies outside China had returned, but in 2025 that figure rose to 28%. The case of DeepSeek is significant: none of its main contributors holds a university degree from outside China. The talent behind that milestone did not pass through Stanford or MIT.

The trend doesn't lie. If we take the authors of NeurIPS papers as a metric, about 37% of the world's best researchers now work in Chinese organizations, compared with 32% in North American institutions.
If this trend continues, by 2028 researchers working in China could outnumber those working in the US by two to one. Silicon Valley may continue to attract a lot of international talent, but the direction of the trend is clear, and it points to a worrying future for the United States.

Image | Tommao Wang

In Xataka | There is a city in China that goes head to head with Silicon Valley: welcome to Hangzhou, the home of the 'Six Little Dragons'

OpenAI takes a step back in the AI race to completely recalibrate

OpenAI has closed Sora. Its generative video AI, which it proudly showed off on numerous occasions and which earned it a juicy $1 billion deal with Disney, no longer exists. The news landed like a bomb a few hours ago, followed by the withdrawal of that billion-dollar Disney investment. Although some argue that OpenAI is in trouble, those problems are not so much economic as a lack of direction, and closing Sora looks like just one step back in OpenAI's long-distance AI race: go public this year and start harvesting everything it has planted.

In short. It is the news of the day. Less than a year and a half after launching it, OpenAI says goodbye to Sora. Back in the day (February 2024, how time flies) we were amazed at what this generative AI could do. It was just 60 seconds of video with some huge flaws, but it was one more step in the artificial intelligence race that positioned OpenAI at the forefront of the industry. Then other competing models arrived, culminating in a Seedance 2.0 that has devoured the entire Internet to imitate absolutely anything. Like all the others, to be fair.

Issues. But although striking, Sora was a tool that never seemed to add up. While other services have integrated their generative video models into an ecosystem or applications (the aforementioned Seedance 2.0 in AI suites or in the video editor CapCut, for example), Sora stood apart. The Disney contract was worth something, but Sora never seemed to be part of something larger, of a "creative suite" (if generative AI can be described that way). It simply existed, and the worst part is that others were overtaking it.

Eggs in many baskets. Sora was, in short, another product of an OpenAI with eggs in many baskets: raising dizzying sums in successive financing rounds, setting up data centers, buying heavily from NVIDIA (and depending heavily on NVIDIA, too) and launching products like crazy.
OpenAI wanted to press every key, with a long list of other products as well as a super app to integrate everything that was not being integrated elsewhere. The philosophy was simple: if we are in everything, something will work. But the result has been the opposite and, as my colleague Javier Pastor said a few days ago, wanting to be the bride at the wedding and the dead man at the funeral is having consequences.

The competition tightens. While OpenAI diversified and spread its resources across every suit, Anthropic (which is not just a rival, but a public enemy) dedicated itself to doing a couple of things well. It is not that Anthropic lacks a browser or a video generator: it does not even have an image generator. In exchange, what it does have are functional, precise models that do things very well, especially in the field of amateur development with vibe coding. Focusing on one thing and doing it very well is something the market is valuing, to the point that Anthropic has been raising a lot of money in recent financing rounds. In a short time it has gone from being valued at $183 billion to as much as $380 billion, and that despite all the fuss with the United States government and the loss of the contract with the Department of Defense.

Money, too. Money moves everything, and while ChatGPT sweeps the consumer segment with more than 2.5 billion daily queries, you have to wonder how many paying users there are. Where the money really is, in enterprise use, Anthropic controls the market with 32% compared to OpenAI's 25%. And in programming, the distance is astronomical: 42% versus 21%. In fact, OpenAI has seen its enterprise share fall from 50% in 2023 to just 25% today. As we say, this is where the greatest potential for growth and commercial performance lies, and OpenAI is realizing that being focused on so many fields has left it distracted.
In other words: they bit off more than they could chew.

Public company. The closure of Sora responds to many factors, but in the background there is something more important. NVIDIA has already said that the millionaire mega-rounds are over, and it said so just before the expected IPOs of both OpenAI and Anthropic. When both go public, they will face a different financing model: they will need products that generate profits to attract investors to buy shares, and right now the best positioned is Anthropic. OpenAI has a lot, but nothing complete; Anthropic has less, but it is very efficient. Getting rid of Sora looks like a move to shed ballast before becoming a "public" company (in the American sense). They have to focus their aim, focus their teams (something they themselves have acknowledged) and stop trying to be everything at once without a clear strategy. Because they are becoming another example of how being a pioneer does not always mean being the best and, if you do not get your act together, competitors with a clearer roadmap will eat your lunch. Only time will tell if the strategy works but, at the rate things are going, it will not take long to find out.

In Xataka | The worrying thing is not that AI is going to take your job in the future: it's that it is preventing you from finding one now

Mistral's bet in the AI race is to bring together several functions in a single model

The artificial intelligence race is often told as a competition to see who builds the most powerful model or the one that dominates the most benchmarks. In the middle of that board, the French startup Mistral AI has just presented Mistral Small 4, a proposal that tries to occupy a different place in that conversation. It is not presented as a model limited to a single function, but as one that, according to the company, seeks to bring together several advanced capabilities within the same tool.

What exactly is Small 4. The company presents it as the new major iteration of its Mistral Small family and, above all, as the first model of the house that brings together capabilities previously distributed across several lines. Specifically, it integrates functions associated with Magistral, Pixtral and Devstral along with those of the Small series itself.

Fewer models, more features. One of the central ideas of the announcement is to concentrate in a single system tasks that are normally solved with different tools. According to Mistral, the goal is for the same model to converse, analyze complex information, work with images or assist in programming without having to switch between several specialized systems.

The numbers behind Small 4. The model is based on a Mixture of Experts architecture, a design that distributes processing among different specialized submodels and that appears in several artificial intelligence systems today. In the case of Small 4, Mistral indicates that the system has 128 experts and that only four participate in each generated token. According to the company, the model reaches 119B total parameters, with 6B active per token, and offers a context window of up to 256k.

Who is this model intended for? Beyond its architecture, Mistral also describes quite clearly the scenarios in which it imagines Small 4 being used. Let's see.
• Developers: automating programming tasks, exploring code bases, and coding agent workflows
• Businesses: conversational assistants, document understanding and multimodal analysis
• Research: mathematics, complex analysis and reasoning tasks

The underlying idea is that the model can move between quite different needs without forcing you to change systems depending on the type of work.

The charts. In the material accompanying the announcement, Mistral includes several charts comparing Small 4 with other models on different benchmarks. These comparisons are not limited to the score obtained in each test: they also show the average length of the responses each system generates, a figure the company uses to illustrate how much text each model needs to produce to achieve certain results. One of the charts corresponds to the AA LCR benchmark, where Mistral compares the scores of various models and the average length of the responses they generate to solve the same tasks. The data published by the company are the following:

• Mistral Small 4: 0.72 score with 1,600 characters
• GPT-OSS 120B: 0.51 with 2,500 characters
• Claude Haiku: 0.80 with 2,700 characters
• Qwen3-next 80B: 0.75 with 5,800 characters
• Qwen3.5 122B: 0.84 with 5,700 characters

The comparison. Small 4 is not the highest-scoring model; both Claude Haiku and the Qwen models rank above it on that indicator. However, Mistral highlights another aspect of the comparison: the length of the responses. According to the company, its model achieves this combination of score and output length by generating significantly less text than several of its competitors, something it links to lower latency and lower inference cost.

The short-answer trick. A shorter answer is not better simply because it takes up less space. It is only better if it manages to solve the task with a level of quality comparable to that of a longer answer.
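As a quick sanity check on that argument, the published pairs can be reduced to a "score per 1,000 characters" ratio. This ratio is our own illustrative metric, not one Mistral publishes; the numbers are the ones reproduced above.

```python
# Published (score, avg. response length in characters) pairs from
# Mistral's AA LCR chart.
results = {
    "Mistral Small 4": (0.72, 1600),
    "GPT-OSS 120B":    (0.51, 2500),
    "Claude Haiku":    (0.80, 2700),
    "Qwen3-next 80B":  (0.75, 5800),
    "Qwen3.5 122B":    (0.84, 5700),
}

# Rank models by score per 1,000 generated characters (higher = more
# score per unit of output, our illustrative efficiency proxy).
for name, (score, chars) in sorted(
        results.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name:16s} score={score:.2f} chars={chars:5d} "
          f"score/1k chars={1000 * score / chars:.3f}")
```

On this crude proxy, Small 4 comes out on top (0.45 points per 1,000 characters versus roughly 0.30 for Claude Haiku and under 0.15 for the Qwen models), which is exactly the trade-off Mistral is trying to highlight: not the best raw score, but the most score per character generated.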
This is where Mistral tries to put the focus: if a model achieves a competitive result by generating less text, it can respond faster, consume fewer resources and reduce the cost of inference. In other words, the advantage is not in being concise for its own sake, but in needing less output to reach a useful result.

How to access the new model. Small 4 can be used not only via the API and AI Studio. Published under the Apache 2.0 license, it is also offered as an open model that can be downloaded, fine-tuned and deployed in your own environments. The company adds that it can be tried for free at build.nvidia.com, and that it is offered for production as an NVIDIA NIM.

Images | Mistral

In Xataka | OpenAI has been wanting to be the bride at the wedding and the dead man at the funeral for years: now it has finally defined its priority

Anthropic is winning the enterprise AI race, so OpenAI has a new plan: become Anthropic

OpenAI has been shooting at everything that moves in AI. It has been launching everything: a video generator, an AI web browser, a Studio Ghibli-style image generator, e-commerce tools, and so on. The logic was simple: whoever tries everything has more chances of getting something right. But the result has turned out to be the opposite: while OpenAI seemed to be everywhere, Anthropic stayed focused on a single front and has managed to take the ground where it mattered most.

Enough of trying everything. Fidji Simo, the executive Altman hired last summer, recently gathered employees to deliver a message that is rarely heard at a company growing like OpenAI: their main rival was teaching them a lesson. What Anthropic is doing, Simo explained, should be a wake-up call for OpenAI, which has lost its leadership among software developers and enterprise customers. "We cannot waste this moment because we are distracted by parallel projects," she stressed.

The hidden cost of doing a little of everything. The problem with shooting at everything that moves is not only the loss of focus, but the resources it demands. In companies that develop foundational models, the key resource is computing capacity, and at OpenAI that resource jumped from one team to another depending on the priorities of the day. The Sora team, for example, was folded into the research division despite being behind one of the company's most visible products. OpenAI was growing fast in too many directions, and that also created internal tensions over which project should be prioritized.

Anthropic focused on one thing. As OpenAI diversified, its main rival adopted the opposite strategy: few products, a lot of depth. Claude does not generate images or video, does not have its own browser and is not trying to create its own chips (for the moment). Anthropic is dedicated to creating foundational models and offering them both as a web service and, especially, through APIs for companies and developers.
Claude Code, its flagship programming product, became a viral phenomenon among software engineers last fall, and has ended up consolidating itself as the reference tool both among hobbyist developers (vibe coding is still going strong) and, of course, among technical teams at all kinds of companies.

OpenAI strikes back. The response was not long in coming: last month OpenAI launched a new version of Codex, its programming tool, and accompanied it with the new GPT-5.4, which is precisely much more oriented towards professional environments. According to Simo herself, Codex already exceeds two million weekly active users, almost four times more than at the beginning of the year. To drive usage, OpenAI is deploying engineers to consulting firms and business partners to accelerate adoption of these products.

IPO on the horizon. Both OpenAI and Anthropic are taking clear steps towards an IPO, which in fact could happen this year. That makes gaining share in the corporate market (the one that really pays, the one that signs contracts, and the one that justifies valuations) absolutely essential for these IPOs to succeed. The initial share price and real valuation of these companies will depend on how well positioned they are, and at OpenAI they want to recover the ground lost in the enterprise market. In the meeting with staff, Simo explained that "we are acting as if this were a code red."

The paradox of being the pioneer. OpenAI unleashed the AI fever with the launch of ChatGPT in November 2022 and made generative AI an almost everyday phenomenon. However, being first usually comes with a catch: it forces you to explore and diversify to maintain your position as the reference, and that is very expensive. Anthropic came along later, saw where the real money was, and focused specifically on that sector. The student has surpassed the teacher, it seems, and at OpenAI they want to correct course.

What will happen to so many products?
It remains to be seen how this OpenAI strategy will affect its entire product catalog. If the company starts focusing on developers and enterprise solutions, what will happen to its image generator, Sora or Atlas? The structural tension between being a "research laboratory" and being a "product company" can pose a challenge for a company that, until now, never stopped exploring new ideas to apply AI to.

Image | TechCrunch | Wikimedia Commons

In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's just what you're building

The AI race is no longer about who has the most powerful model, but about who launches the easiest and safest OpenClaw

2026 began with an earthquake in the world of AI, and it did not come from any of the big technology companies, but from an unknown programmer and his open source project OpenClaw (formerly Clawdbot and Moltbot). Not even two months have passed, and we can already say that the boom of this AI agent is reshaping the AI race, with more and more companies jumping on the bandwagon. The latest is Perplexity.

Personal Computer. A month ago, Perplexity announced Computer, a cloud-based tool capable of orchestrating agents using various models. The next step is Personal Computer, its own OpenClaw. It can be left running on a Mac mini and controlled from another device, such as a mobile phone, exactly like OpenClaw, but with a simpler interface that does not require technical knowledge.

More user-friendly. Another key aspect is the focus on security, one of OpenClaw's weak points. Perplexity claims that with Personal Computer, "Every sensitive action requires your approval. Every action is logged. There's an off switch." Personal Computer is not available yet, but if you want to try it before anyone else you can sign up for the waiting list.

NVIDIA NemoClaw. The most valuable company in the world has taken good note of the success of OpenClaw, and a couple of days ago it announced that it will launch its own open source platform for enterprise AI agents, to be called NemoClaw. The announcement also matters because it places NVIDIA in direct competition with companies like Anthropic, OpenAI or Perplexity, shifting its position from hardware supplier to software competitor.

And OpenAI… The project was not even three months old when OpenAI not only bought it, but also hired its creator, Peter Steinberger. OpenAI was not the only bidder for the viral hit of the moment; Meta also tried, but OpenAI won the bid.
Steinberger said the project would remain "open and independent." This case is a good example of two things: how far one person can go with a good AI idea, and how difficult, if not impossible, it is to compete in an ecosystem where the competition includes some of the largest and most valuable companies in the world. David against Goliath.

The agentic AI race. We spent a good part of 2025 watching AI agents take their first steps, often with fairly mediocre results. It was clear that agentic AI was getting much better, but I don't think anyone expected the first viral hit to come from an independent, open source project. OpenClaw not only succeeded; it launched a new race in AI, one in pursuit of the ultimate custom AI agent. OpenClaw has two barriers to entry: on the one hand it requires some technical knowledge, and on the other there is security, since it is a very powerful but sometimes unpredictable agent. Hence Perplexity's pitch is aimed precisely at improving these two aspects. We'll see who is next.

In Xataka | Social networks were born for humans: Meta has just bought one designed for AI agents

Image | Pexels
