For generations, we Spaniards embraced the three-course menu. Now that model is in crisis

Christianity has its Holy Trinity. Theater has its classic three-act structure, just like the traditional novel. Even life itself can be divided into three blocks: youth, adulthood and old age. For a while (centuries, actually), food also took part in this obsession with triads. When you sat down to eat, whether at your own home, a relative's or a bar, you expected to be served three courses: something light to start, like a soup or a salad, a heartier main, and dessert to finish the job. Now that model has gone into a tailspin.

Goodbye to three courses? That is the question El País floated a few days ago in its food section: after generations and generations of being settled in homes and hospitality, meals structured in three courses (starter, main and dessert) are in decline. It is not the first outlet to point this out. More than a decade ago, Adam Liaw, a chef, presenter and author of food books, issued a similar warning, writing in The Guardian in 2015 about the gradual "disappearance" of three-course menus. Even Dr. Nicolás Romero weighed in back in 2019, in an interview with El Diario Vasco: "We should start by recovering a custom that we are abandoning in Spain, that of three courses on the menu." He was so convinced of this that he even encouraged transferring the same formula to dinner, "as the Mediterranean diet dictates", opening the meal with vegetables and closing it with fruit.

Is it really in crisis? It is difficult to find studies that confirm it but, as Liaw notes, we only have to look around us to see that the meal in "three acts" seems to have fallen out of favor. And that applies both to our homes and to restaurants. In fact, some now suggest that menus with starters, mains and desserts risk becoming something extraordinary, a luxury reserved for weddings, New Year's Eve or other special occasions. Just like silverware or old wine.

And why this change? The explanation varies a little depending on whether we are talking about what we do at home or what happens in the hospitality industry, although in both cases there is a common denominator: a change in consumer habits. In an increasingly busy society we are less willing to spend hours at the stove, selecting fresh food… or even sitting in front of a plate, which explains the growing success of snacks.

Do we cook less? It seems so. In 2003, experts were already warning that, in the space of a few years, we Spaniards had cut the time we spend cooking by three hours a week. Other, more recent surveys show that 48% spend about 90 minutes cooking and 41% barely exceed 60. Those who prepare their own food are still the majority, but the Spaniards who barely set foot in the kitchen number in the millions. With less (or no) time between pots and pans, it is difficult to prepare meals divided into several courses.

Does everyone lose? "Households are spending less and less time cooking, reducing processes and complexity to optimize the time spent cooking. This implies that people are increasingly opting for single-course occasions, which account for 71.3% of dinners and 55.7% of lunches," Eduardo Vieira of Worldpanel by Numerator (Kantar) commented recently, pointing out that this represents an "opportunity" for the industry. Our tendency to spend fewer hours in the kitchen is boosting a business that has been growing for years: that of pre-cooked and ready-to-eat foods.
The Spanish Association of Prepared Meal Manufacturers (Asefapre) estimates that in 2025 the consumption of precooked meals in the country's homes grew by 3.8% and that sales exceeded 4.3 billion euros, a growth of 5%.

What happens in restaurants? There, another factor comes into play: the economy. Although the menu of the day has been established in Spain for decades, where it is quite an institution, the formula is in crisis. And not only because of cultural changes or "snackification", a trend that leads us to spend less and less time on eating. In recent years it has come under cost pressure. The rising cost of raw materials, energy, labor… has forced restaurateurs to revise their rates, raising them by 19.5% between 2016 and 2024. The problem, the sector says, is that this increase is below the CPI, which makes it difficult to make their menus profitable. "It is in danger, fortunately, because it is not a sustainable model," Paco Cruz, a food-sector manager, acknowledges to El País. Given this situation, the menu has to be "reinvented" by cutting costs. How? Exactly: taking out the scissors and leaving it at a single course.

Are there more factors at play? Yes. As if the above were not enough, restaurateurs have to deal with a new rival: the "mercaurantes", supermarkets that, like Mercadona, offer a wide range of ready-to-eat dishes and spaces in which to consume them. Customers can often choose dishes and devour them in just a few minutes, putting pressure on traditional menus where a waiter serves starters, mains and dessert.

Images | Michael Clarke Stuff (Flickr), Diogo Brandao (Unsplash), Farhad Ibrahimzade (Unsplash)

In Xataka | More and more Spanish bars refuse to let you pay at the table. Their objective is very simple: greater turnover

We already know what happens to the GPU hourly price when OpenAI or Anthropic launch a new model: it doubles

This week, analyst Tomasz Tunguz published on X two revealing graphs. They show the evolution of what it costs AI startups to access cloud computing, and there is bad news. The cost of renting NVIDIA's B200 GPUs with Blackwell architecture has gone from $2.31 per hour in early March to $4.95 per hour this week. That is an increase of 114% in just six weeks, and it has a clear cause: the arrival of new models from Anthropic and OpenAI.

What the graphs show. The charts track the price index of Ornna, a cloud-computing trading marketplace. The first covers the rental price of B200 chips from the end of 2025 until today, with vertical lines marking each release of the latest models from OpenAI and Anthropic. The correlation is almost perfect: GPT-5 Codex, Claude 4.5, GPT-5.3 Codex, Claude Opus 4.7 and GPT-5.5 each coincide with a jump in the price index. Every time these companies announce a new version of their frontier models, demand skyrockets, and so does the cost.

If you want the best, pay (much more). The second graph shows the price difference between renting the previous generation of chips, the H200 with Hopper architecture, and the new B200. The historical average of that "spread" is $1.06, but it now stands at $2.09, practically double. That means buyers—startups and AI companies—are paying a record premium for the extra memory and superior computing power of Blackwell-architecture chips. Accessing the latest of the latest was already expensive. Now it is even more so. This also turns the H200 into a second-class option for the most demanding models of 2026.
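As a quick sanity check, both headline figures follow directly from the prices quoted above; this tiny snippet (purely illustrative, using only the article's numbers) reproduces them:

```python
# Reproduce the two headline figures from the prices quoted above (illustrative).
b200_march, b200_now = 2.31, 4.95        # $/hour for a B200 on Ornna's index
rise = (b200_now - b200_march) / b200_march
print(f"B200 price rise: {rise:.0%}")    # -> 114%

# The "spread" is the hourly premium of a B200 over an H200.
spread_hist, spread_now = 1.06, 2.09     # historical average vs. this week
print(f"Premium vs. history: {spread_now / spread_hist:.2f}x")  # -> 1.97x, ~double
```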
Action and reaction. The logic here is overwhelming. When OpenAI or Anthropic release a new model, there is an explosion in inference. Developers and companies want to test the models as soon as possible and integrate them into their products (or compete with them). To do so they need computing fast, and the simultaneous demand unbalances the available inventory in the market for renting AI chips by the hour. The problem is that the supply of B200s does not grow at the same rate. Some companies have tried to get ahead of this, and Google is the perfect example. It has bought all the B200s it can, which, according to analyst Jack Minor, has pushed these GPUs to around $500,000 on the secondary market.

The irony of efficiency. The curious thing is that the more efficient these chips are—and the B200s are—the more companies want to rent them at the same time, to capture the very efficiency gains that should lead to cost savings. What actually happens is that the scarcity of these advanced chips cancels out any theoretical savings.

Long-term contracts. Startups and companies that think short-term are especially hurt here, because they face price jumps that are increasingly difficult to absorb. Companies that signed compute rental contracts at the prices of the time can now operate at less than half the cost of their competitors. Thinking in the medium or long term seems reasonable, although once again the winners are the hyperscalers and the companies that have managed to get hold of many B200s. And the biggest winner of all is, of course, NVIDIA, which cannot keep up with demand.

Few alternatives. In other markets, such as energy or metals, there is usually room for maneuver, Tunguz points out, but that is not the case right now in the AI segment. In the oil market, for example, if the price rises 114% in six weeks, companies can buy futures, options or fixed-price supply contracts to protect their margins. In cloud-computing rental, those options are much more limited. The result is a much more volatile segment.

This will keep growing. We are probably facing a peak in demand that will be followed by a correction: the new batch of B200 chips arriving in the second half of 2026 is expected to push current prices down. Even so, that $4.95 now looks like the new floor rather than a ceiling, because demand for AI computing will continue to grow faster than TSMC's production capacity. Unless the supply of AI chips grows significantly—and there are certainly moves trying to achieve that, such as Google with its TPUs, Amazon with its Trainium or Huawei with its Ascend—the problem will still be there.

In Xataka | Europe is taking its technological independence so seriously that it is aiming for the most ambitious goal: NVIDIA

In 1972, a Swedish model posed nude for 'Playboy'. Years later, we have the JPEG format thanks to it

Lena Sjööblom's is one of the most improbable careers in the history of technology. To begin with, because when she made her mark on the sector she was not an engineer, nor a mathematician, nor a physicist, nor anything remotely like it. Nor did she have any known "eureka" moment, nor did she contribute any discovery or invention. No. Sjööblom was a model. From model she became what was then known as a "Playboy girl". And from the pages of the nude magazine she jumped to the front-line research that today, half a century later, allows us to enjoy the JPEG image format. Let's take it step by step.

In the early 70s, Sjööblom, a 21-year-old Swedish immigrant recently arrived in the US, made a living as a model. To make her way, and probably without the slightest idea of the journey her image would end up taking, at the end of 1972 she agreed to pose nude for Playboy, a magazine that at the time sold millions of copies around the world. In one of the centerfold photos taken of her by Dwight Hooker, one of the most famous portrait photographers in the city, she appears from behind, in front of a mirror, wearing nothing but a hat, a red boa, stockings and heels.

Readers liked the work. A lot. At least that is what we can deduce from the fact that the November 1972 issue, in which Sjööblom was the featured playmate and Pamela Rawlings was on the cover, sold 7.16 million copies, making it the most successful in the magazine's entire history. The pose became so famous that in 1973 Woody Allen even snuck it into one of his movies. As often happens with fame, that sudden public interest arrived, swept through and then evaporated. Sjööblom continued her modeling career and, once retired, returned to Sweden.

As chance would have it, one of those 7.16 million copies of the 1972 issue ended up in the hands of a person linked to the Signal and Image Processing Institute (SIPI) of the University of Southern California, a laboratory that at the time worked on image processing and was laying the foundations of what would end up being the JPEG and MPEG standards. The coincidence would be of no great interest were it not for the fact that that reader took his Playboy to SIPI at just the right moment: exactly when the lab was looking for an image for its tests.

The right place at the right time. Today it may seem crazy for someone to show up at the office with a nude magazine under their arm. Not in the 70s. As Lorena Fernández of the University of Deusto recalls in The Conversation, not only was it common for staff to show up with their Playboy in teams that, like the Southern California one, were made up solely of men; it was even well regarded, much like showing up today with The Times or the listings guide for La 2 documentaries. In that context, the arrival of Sjööblom's photos was as welcome as it was timely. Around June or July 1973, electrical engineering professor Alexander Sawchuk, one of his graduate students and the SIPI manager were desperately looking for a photo they could scan and include in one of their presentations on image compression. They had their own stock, of course, but it was made up of files inherited from the boring, worn-out television standards of the early 60s. Sawchuk's team wanted a human face, and an image that was also bright, to guarantee a good output dynamic range. And what better option—they thought—than Sjööblom's face? Skipping all the rules on property rights and decorum, the researchers used the Playboy image.
They kept only the top third of the magazine's centerfold and placed it under their Muirhead scanner, equipped with analog-to-digital converters and a Hewlett-Packard 2100 minicomputer. As Jamie Hutchinson details, to end up with a 512×512-pixel section they scanned the top 5.12 inches of the photo (at 100 lines per inch), which in practice showed only Lena Sjööblom's face, her shoulders and part of her bare back. The result exhibited a software error that should have forced the team to retouch it, but Sawchuk's team was working against the clock and decided to keep the distorted, altered image. The fact is that people liked it. Just as they had liked Sjööblom's photo shoot in Playboy at the end of '72. "They asked us for copies and we gave them to them so they could compare their image algorithms with ours on the same test image," the professor himself recalled some time later.

The final stretch. At SIPI they turned Sjööblom's portrait into a test image for digital compression work and for transmissions over Arpanet, the precursor of the Internet. And that, with the passage of time, had an unforeseeable result: the image of that model, which everyone began to refer to as "Lena" or "Lenna" and whose origin began to blur, became the standard used by other researchers who wanted to put their own compression algorithms to the test. The face of that twenty-something Swedish woman, with a hat and a bare back, was replicated in books, conferences and articles, traveled through the "Atapuerca" of the Internet and helped lay the foundations of the JPEG image format. "Many researchers know the Lena image so well that they can easily evaluate any algorithm that runs on it. That's why most people in the industry seem to believe that Lena has served well as a standard," comments Hutchinson. Besides being a "familiar image", the photo combines shadows, highlights, blurred and sharp areas and fine detail, a mixture that makes it "a tough test" for any processing algorithm. Perhaps the most curious thing about the whole story is that both Playboy and Lena Sjööblom herself spent decades unaware of the outsized fame—and the important role—of that 70s portrait. The first to …

DeepSeek has just released a model that competes with Opus 4.6. It costs seven times less and runs on Chinese chips

484 days have passed since that "DeepSeek moment", but the wait seems to have been worth it, because the new DeepSeek V4 is here. We are looking at an absolutely gigantic open-weights model that once again promises to shake the foundations of the proprietary frontier models from Anthropic, OpenAI and Google. Things are moving, folks.

Gigantic and open. DeepSeek V4 is an open-source model and comes in two versions. The first is Pro, with 1.6 trillion parameters (1.6T), of which 49 billion are active. The second is Flash, with 248 billion parameters (248B, huge for a "Flash" model), of which 13 billion are active.

More efficient than ever. Both versions use a Mixture-of-Experts (MoE) architecture, which means that only a fraction of the parameters are activated on each inference. This reduces the computational cost significantly. Both versions support a context window of one million tokens—enough to feed in whole novels at once as input—up from 128,000 tokens in V3. Furthermore, this model is much more efficient than its predecessor in compute per token: it requires only 27% of the operations per token and 10% of the KV cache compared with DeepSeek V3.2.

Benchmarks promise. DeepSeek's internal testing shows that V4 Pro-Max (the best model, with the highest reasoning ability) outperforms or is on par with Claude Opus 4.6 Max, GPT-5.4 xHigh, Gemini 3.1 Pro High, Kimi K2.6 and GLM 5.1. The results, however, have not been independently verified, which means we should take them with caution. The numbers are still striking: in LiveCodeBench, a programming test, DeepSeek V4 Pro-Max scores 93.5%, compared with 88.8% for Opus 4.6 and 91.7% for Gemini 3.1 Pro. In other tests there is more variability, but at least on paper DeepSeek V4 Pro looks as good as Opus 4.7, which until now was the absolute benchmark.

Much cheaper. But as with its previous version, the price difference with the models from US companies is astonishing. As analyst Simon Willison points out, the official prices of DeepSeek V4 Pro are $1.74 per million input tokens and $3.48 per million output tokens: up to almost seven times less than Opus 4.7 and up to almost nine times less than the new GPT-5.5. With DeepSeek V4 Flash the cost is $0.14/$0.28 per million input/output tokens, while GPT-5.4 Mini costs up to 16 times more. The conclusion is obvious: if the model really does what it says, the price is an absolute bargain. That is precisely the challenge: that real-world experience confirms what the benchmarks say.
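Those per-token prices translate directly into workload costs. Here is a minimal sketch of the arithmetic using only the official prices quoted above; the example workload and its token counts are invented for illustration:

```python
# Cost of a sample workload at the official prices quoted above (illustrative).
PRICES = {  # $ per million tokens: (input, output)
    "DeepSeek V4 Pro":   (1.74, 3.48),
    "DeepSeek V4 Flash": (0.14, 0.28),
}

def workload_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of processing tokens_in and generating tokens_out."""
    price_in, price_out = PRICES[model]
    return (tokens_in * price_in + tokens_out * price_out) / 1_000_000

# A hypothetical coding-agent session: 2M tokens read, 300k tokens written.
for model in PRICES:
    print(f"{model}: ${workload_cost(model, 2_000_000, 300_000):.2f}")
# -> DeepSeek V4 Pro: $4.52
# -> DeepSeek V4 Flash: $0.36
```

At prices roughly seven times higher, as quoted above for Opus 4.7, that same session would cost over $30, which is the whole argument in one number.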
The hardware mystery. DeepSeek has not revealed what hardware was used to train this version of its foundational model. In the past the company did admit to using NVIDIA H800s. What is known is that the model has been developed to run on both NVIDIA and Huawei Ascend chips. The latter has been confirmed by Baidu, whose Ascend Supernode clusters based on the Ascend 950 will fully support the DeepSeek V4 versions.

Huawei support is "horrible" news for the US. The Information had already reported that one of the reasons for the "delay" in this model's appearance was adapting it to work smoothly with Huawei chips. According to Jensen Huang, that support is "horrible" news for the US, because it means that dependence on NVIDIA chips no longer exists, or is at least reduced to a minimum.

But. The launch comes at a difficult time for the company. Guo Daya, one of the people responsible for the V1 and V3 models, has signed with ByteDance to work on AI agents. Luo Fuli, who led the development of V2, joined Xiaomi last year. The launch also coincides with DeepSeek seeking external funding for the first time: it is expected to raise about $300 million at a valuation of about $20 billion, according to The Wall Street Journal.

From the surprise effect to the continuity effect. The launch of DeepSeek R1 in January 2025 was surprising because it demonstrated that China could train competitive models at a fraction of the cost of Western ones. With DeepSeek V4 that surprise effect gives way to a continuity effect. This model seems to preserve exactly what made its predecessor famous: extraordinary power at a very low cost.

Bad news for Anthropic. Such low prices are terrible news for Anthropic, which in recent weeks has been forced to carry out a kind of "shrinkflation" with its new models, which are not more expensive but consume many more tokens. We will have to see whether DeepSeek V4 Pro is as good as the company promises but, if it is, we will have another "DeepSeek moment" before us. Perhaps not as notable as last year's, but just as relevant.

In Xataka | DeepSeek promised them happiness as the great Chinese AI. It didn't count on a small detail: Kimi

Mythos may well be the most dangerous AI model, but companies are already taking note of its security advice

The top AI companies are in a race to create the best artificial-intelligence model. That race has been won by Anthropic with Mythos. At least, that's what they claim (of course), with statements such as: it is so powerful that they cannot release it. There are reasons to take Anthropic's words with a grain of salt, but what is evident is that Mythos is already working. Although the company has not released it, it has already given access to certain technology partners. The decision is based on the company's fear that the model will be used maliciously. They themselves have described it as a threat to cybersecurity, based on the number of zero-day vulnerabilities that Mythos has reportedly found both in the main operating systems on the market and in browsers. And, just as the model is stirring up opinions on all sides, Mozilla arrives to state that Firefox 150, the latest version, includes security fixes for 271 vulnerabilities discovered thanks to this preview version of Claude Mythos. OpenAI, for its part, doesn't believe a word of it.

"Just as capable as a human". Mozilla gives the details in one of the latest posts on its blog. The company had been collaborating with Anthropic for some time, using the Claude Opus 4.6 model to find bugs. In January, it found 22 vulnerabilities in a couple of weeks, 14 of them rated as very serious. From those 22 found by Opus 4.6, which is already a powerful model, we jump to the 271 discovered by Mythos. It is a huge leap, and Mozilla wanted to keep investigating to see just how far the new model surpasses Opus. Analyzing Firefox 147, Mythos generated 181 functional exploits. Opus 4.6? Just two: 90 times fewer. Those results have led Mozilla to write that Mythos Preview is "just as capable as the best human cybersecurity researchers", adding that they have not found any category of bug that humans can detect and Mythos cannot. There is another reading here since, as the company itself states, seeing the model find so many bugs in so little time makes them wonder whether it will be possible to keep up in cybersecurity work once Mythos-like alternatives are developed that do fall into hands not controlled by those responsible. There remains the fact that Mythos has not missed any bug that Mozilla's human "watchmen" had detected, and that a tool like this will help build a more secure system. All of this, in the end, feeds the narrative that Mythos is practically a technological miracle.

A nuclear bomb. The other side of the coin is that Sam Altman, head of OpenAI, doesn't believe any of it. Taking advantage of a recent podcast appearance, he dismissed the entire Anthropic move as a fear-based marketing ploy. He accuses Dario Amodei's company (Altman's public enemy) of wanting to restrict AI to a small number of people, in a strategy he compared to having an atomic bomb, threatening to drop it, and making a living selling bunkers to protect people from that same bomb. "It is evident that this is an extraordinarily powerful marketing strategy. We have created a bomb and we are going to drop it. You can buy a bunker from us for 100 million dollars." It is one more episode in the historical rivalry in which both companies (and both executives) have long been locked, but it comes just as Anthropic is taking on a bigger role and OpenAI is being forced to shed ballast in the form of services like Sora.
Altman is not the only one who thinks Anthropic repeatedly uses this "we have something so powerful that we cannot release it" discourse because it is a good strategy for obtaining financing. There are already voices pointing out that Mythos is not such a big deal and that, in fact, other models have proven able to do the same, finding the same bugs and problems detected by Anthropic. But, above all, we should remember that in 2019 someone already said that a model was too dangerous for public release. Who? OpenAI itself, with GPT-2. Obviously, it wasn't that dangerous.

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive

A seven-dimensional black hole model proves that Stephen Hawking was right, at the very least

For a long time it was thought that black holes could only grow, since nothing escapes them. Then Stephen Hawking dismantled that idea, pointing out that radiation can leave their interior and that, through this process, a black hole in fact evaporates little by little. This hypothesis generated a new paradox since, according to quantum mechanics, information cannot be created or destroyed in a quantum system. If information cannot be destroyed, where does all the information a black hole stored go when it disappears? The question remained a mystery until a team of scientists from the Slovak Academy of Sciences had the idea of running simulations in a seven-dimensional system.

A reminder about black holes. A black hole is an astronomical object so massive that its gravitational pull lets nothing escape. Not even light. At a certain distance from the black hole lies the event horizon, the point of no return beyond which everything is drawn towards its interior.

Hawking radiation. In the 1970s, Stephen Hawking put forward a hypothesis that destroyed the idea that nothing can escape a black hole. According to him, if we take quantum physics into account, there is something that can. Heisenberg's uncertainty principle implies that the vacuum is not truly empty: particle-antiparticle pairs continually appear and disappear. If this occurs in the vicinity of the event horizon, one of these particles may be drawn into the black hole while the other, sitting just beyond the point of no return, manages to escape. That escape extracts energy from the black hole. This is what came to be called Hawking radiation.

Disappearing black holes. We have all heard the famous formula from Einstein's theory of relativity: E = mc². Since c is a constant, energy and mass go hand in hand: if a black hole loses energy, it must also lose mass. That means that every time a black hole radiates energy away, it is also shrinking. Black holes are very massive objects and take an extraordinarily long time to switch off, but eventually they do.
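As a reference for the scales involved (these are the standard textbook expressions for Hawking's picture, not formulas taken from the new study), the temperature of a black hole of mass $M$ and the time it takes to evaporate go as:

$$
T_H = \frac{\hbar c^{3}}{8\pi G M k_B}, \qquad \frac{dM}{dt} = -\frac{\hbar c^{4}}{15360\,\pi G^{2} M^{2}} \;\Rightarrow\; t_{\mathrm{evap}} \simeq \frac{5120\,\pi G^{2} M_0^{3}}{\hbar c^{4}}
$$

The smaller the hole, the hotter it radiates and the faster it loses mass, so in the standard picture evaporation runs away towards M → 0. The torsion-driven repulsion described below is what halts that runaway and leaves a stable remnant instead.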
The paradox arrives. Initially, many colleagues dismissed Hawking's hypothesis as nonsense. Today, however, it is widely accepted. Even so, it undeniably poses problems, such as the black hole information paradox: where does the information go?

Twisting space-time. Solving the mystery has required setting aside the theory of general relativity and analyzing the problem with a somewhat more complex one: Einstein-Cartan theory. The former holds that mass and energy can curve space-time; the latter adds that they can also twist it. At scales that are not exceedingly small there is no difference between the two, but at tiny scales, and therefore very high densities, this torsion plays an important role.

A 7D model. Quantum physics models are usually built in four dimensions: the three we all know plus time. The authors of the recently published study, however, took three more into account so that the effects of Einstein-Cartan torsion could be analyzed. They saw that when the matter of a black hole collapses, its density increases enormously and the twisting of space-time makes itself felt. This gives rise to a repulsive effect that counteracts the gravitational attraction that would normally operate inside the black hole. As a result, the black hole's evaporation stops and it settles into a stable state, leaving a remnant with a mass of 9×10⁻⁴¹ kg.

A remnant with a lot of information. This tiny remnant is capable of storing all the information of the matter the black hole once contained. Specifically, the scientists' models suggest that the remnant of a black hole the size of the Sun could store up to 1.515 × 10⁷⁷ qubits of information. Hawking's hypotheses therefore remain valid, and there is no longer even a paradox to dismantle them. At least this way the information is not lost.

Image | ESO (Wikimedia Commons) | NASA/Paul Alers (Wikimedia Commons)

In Xataka | In 2009 Stephen Hawking hosted "the party of the century." No one came precisely because Stephen Hawking organized it

Claude Mythos is an AI model so powerful it’s scary. So Anthropic has decided that you won’t be able to use it

Claude Mythos Preview is already here, and it is so good it's scary. Literally. Anthropic has just presented it to the public, but it has done so with such caution that we won't even be able to test it: it will only be available to certain technology partners. That is frustrating and disturbing at the same time, but also reasonable.

So powerful it scares. On February 24, 2026, Anthropic engineers were able to test their new artificial-intelligence model, which they called Claude Mythos Preview, for the first time. As soon as they did, they realized one thing: it "demonstrated a dramatic leap in its cyber capabilities over previous models, including the ability to autonomously discover and exploit zero-day vulnerabilities in the main operating systems and web browsers on the market."

Threat to global cybersecurity. This finding made it clear to Anthropic's leadership that although this capability makes the model very valuable for defensive purposes, it also poses clear risks if offered globally: a cybercriminal could take advantage of it to find vulnerabilities in all kinds of systems and exploit them. A few hours ago the company expanded on this analysis of Mythos as a cybersecurity threat in a post on its blog, highlighting for example how Mythos found a vulnerability (now fixed) that had been present for 27 years in OpenBSD, an operating system recognized precisely for its very strong security. There were more examples, and all of them made the conclusion clear: Mythos is too powerful for ordinary mortals to use.

Superior in all benchmarks; in some cases, such as USAMO (mathematics), the jump is simply incredible. Source: Anthropic.

The best in history, according to benchmarks. Anthropic has published a very in-depth report on this model, with its "system card". Among the data it contains is, for example, its performance in benchmarks, where it has swept aside GPT-5.4, Gemini 3.1 Pro and also Claude Opus 4.6, which until now was the best model in the world in almost all performance tests. Although in some cases the jump is not spectacular, in others, such as USAMO—mathematical problem solving—Mythos practically achieves perfection.

It barely hallucinates… The system card also discusses in detail how Claude Mythos Preview has a drastically lower hallucination rate than Claude Opus 4.6 and earlier models. It is also capable of saying "I don't know" when it lacks enough information to answer, something that reduces hallucinations born of overconfidence.

…but when it does, watch out. The paper warns of a new phenomenon: when the model fails at certain complex tasks, the "hallucinations" are not obvious errors but extremely subtle, well-argued technical failures. This is dangerous because the answer looks totally correct even to experts, demanding very deep verification.

Glasswing Project. That power and capability mean the model will only be available through a "defensive" program called the Glasswing Project, exclusive to some of Anthropic's technology partners: specifically AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks. All of them will have the privilege (and responsibility) of access to Claude Mythos Preview to identify vulnerabilities and exploits and fix them before bad actors can do so. Mythos Preview "is just the beginning".
Although this model is the most capable seen so far, at least according to the benchmarks and data presented by Anthropic, the company assures that "we see no reason to think that Mythos Preview is the point at which the cybersecurity capabilities of language models reach their peak." They say they expect the models to keep improving in the coming months and years, although this new model is certainly on another level.

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive

Each new AI model is the best ever until the next one arrives. Anthropic and OpenAI have turned that into a business

It doesn’t matter what technological product we are talking about, because both the product and how it is sold to you matters. And here making promises and generating expectations is the classic strategy. The next processor is going to be more powerful, the next smartphone is going to take better photos… and of course, the next AI model is going to be (much) better. We are seeing that message constantly in the AI ​​segment, but now it is going further. Anthropic and a curious leak. A group of security researchers they detected a few days ago 3,000 unpublished documents in an accessible Anthropic database. They included a draft of the blog entry that corresponded to the theoretical launch of their next AI model. The striking thing is not so much the filtration itself (whether intentional or not), but what those documents reveal. Mythos goes beyond mere evolution. Or at least that’s what that leaked draft seems to reveal. It describes a model called Claude Mythos—also called Capybara—which would not be a simple improvement on Claude Opus, but would be a level above it. The document says that this model is “bigger and smarter than our Opus models, which until now were the most powerful.” Anthropic signs up for hype. According to this leak, the benchmark scores would be notably higher than those of Opus 4.6 in programming, reasoning and cybersecurity. At Anthropic have ended up confirming the existence of this development, and have described it as “a level change” and “the most capable model we have created to date.” It’s not too surprising a phrase, because it’s basically the same thing they’ve been saying about every new model they’ve released. And even they are scared. In fact, what is surprising in that draft is not the message that it is better, but the warnings that accompany that future presentation. Thus, Anthropic describes Mythos as “currently far ahead of any other AI model in cybersecurity capabilities.” In fact, they warn that this may be the beginning of “an imminent wave of models that can exploit vulnerabilities in ways that far exceed the efforts of the defenders.” Or what is the same: Mythos could be a extraordinary tool for cyber attackers. The actual launch plan is to first offer Mythos to cybersecurity organizations to prepare. We will see if that gives an advantage, if Mythos meets expectations. OpenAI also makes a move. Both Anthropic and OpenAI have been moving in parallel for some time, and now they have done so again. At OpenAI they are preparing their new AI model, codenamed “Spud” (“potato”). Hardly anything is known about him beyond the fact that his pre-training phase has been completed. More relevant is that this model appears just when At OpenAI they have decided to be less OpenAI and more Anthropic. They have abandoned Sora and they are redirecting resources to regain ground where they are losing it. That is, in companies. But the count is not infinite.. These days, users of Claude’s $100 and $200 per month plans began to notice how they used up their limits and token quotas in less than an hour during their work hours. What is happening is that Anthropic is training more powerful but much more expensive models to use and that makes it difficult to serve them. Demand is growing faster than the efficiency improvements that are coming, so according to some analysts, AI companies are adjusting those quotas and in a sense making Their models behave as if they were “dumber” to save. It’s something we’ve seen in the past. hedonic adaptation. 
Psychologists use the term "hedonic adaptation" for the phenomenon by which humans quickly grow accustomed to any level of experience, good or bad, and return to their emotional baseline. Applied to AI, this phenomenon explains why the model that seemed miraculous to us six months ago feels slow and limited today, and why what seemed like science fiction six months ago is now the minimum we demand of these companies. Anthropic and OpenAI did not invent the concept, but they have built it into their roadmaps, like other technology companies before them. As we said above: they sell not only what they have today but (more importantly) what they will have tomorrow.

Mythos will be brutal, and very expensive. Anthropic's draft warns that Mythos will be "very expensive to serve and will be very expensive for our customers." That points to two possibilities. The first is that only users on the Max plans will get access to some queries with this model. The second is the appearance of a subscription even pricier than the current $200 a month, so that we can use Mythos with more headroom. We already had free AI, basic paid AI and high-end paid AI. Now we will also have super-high-end AI.

In Xataka | The hard landing of OpenAI: after years at the forefront, it is discovering that AI is not won only with memes and hype

It should be impossible for an iPhone 17 Pro to run a gigantic 400B AI model. It should

The iPhone 17 Pro has 12 GB of unified memory. That is a very decent figure for a phone, but in theory absolutely insufficient for running large AI models locally. And therein lies the surprise: a new project has made it possible for this phone to run, locally, a model with 400 billion parameters (400B). That opens the door to a promising horizon.

Giant AI model, dwarf memory. A developer named Daniel Woods (@dandeveloper) has created, with the help of AI, a new inference engine called Flash-MoE, whose code has been published as open source on GitHub, accompanied by a study of its behavior. Woods managed to run the Qwen 3.5 397B model locally (the full version, without distillation or quantization) on his MacBook Pro with 48 GB of RAM. He downloaded the model (209 GB on disk) and developed that inference engine to achieve something that seemed almost impossible. Other developers have gone even further and managed to run models like DeepSeek-V3 (671B) or even Kimi K2.5 (1,026B!) on their MacBooks. The speed is low, no doubt, but they work. They really work. It's amazing.

The iPhone 17 Pro can run a 400B model. Another developer, called Anemll, wanted to go a little further and try to run this model of almost 400 billion parameters on his iPhone 17 Pro with 12 GB of RAM… and he succeeded. True, the model responds very slowly (0.6 tokens per second, essentially unusable), but achieving something like this opens the door to a future in which video or unified memory is no longer so critical for using huge AI models locally. A few hours ago he doubled the speed to 1.1 tokens per second by reducing the number of active experts to four (a 2.5% quality loss in responses). Still not really usable, but the technical demonstration is plain to see. Another user preferred a somewhat smaller model (Qwen 3.5 35B), still huge for an iPhone, and has already managed to run it locally at a more than acceptable 13.1 tokens per second.

Why it matters. The AI models we use in the cloud (ChatGPT, Gemini, Claude) are gigantic and run in data centers with thousands of chips and enormous amounts of memory and storage. They are the most powerful because they run on the most powerful machines. Although it is possible to use AI models locally, the models we can run are much smaller, which makes it hard for them to match the cloud in quality of responses, speed or precision. This method opens the door to a future in which even "modest" machines can run giant AI models that give better answers and let us avoid relying on models in the cloud.

Apple already saw it coming. Three years ago a group of Apple researchers published the study 'LLM in a flash', which pointed precisely this way: to run AI models locally, one could take advantage not only of the unified memory of Macs but also of their storage drives. The speed would be low, yes, but this opened up the possibility of running gigantic models locally on machines with far smaller amounts of unified memory. Woods used Claude Code with Claude Opus 4.6 and applied Andrej Karpathy's new "autoresearch" methodology to implement Flash-MoE on the basis of that research. The result is genuinely promising.
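Flash-MoE's actual code is on GitHub, but the core idea the article describes—keep the expert weights on the SSD and page in only the few experts each token activates—fits in a few lines. Below is a minimal, illustrative sketch (toy sizes, invented file name; this is not Woods's implementation):

```python
# SSD-backed Mixture-of-Experts inference, in miniature (illustrative only).
import numpy as np

D, D_FF = 256, 512          # toy dimensions; real models are vastly larger
N_EXPERTS, TOP_K = 16, 4    # each token reads only TOP_K of the N_EXPERTS

# One-off: write dummy expert weights to disk so the demo is self-contained.
w = np.memmap("experts.bin", dtype=np.float16, mode="w+",
              shape=(N_EXPERTS, D, D_FF))
w[:] = np.random.default_rng(0).standard_normal(w.shape).astype(np.float16)
w.flush()

# Inference side: memory-map the file read-only. The OS pages data in from
# the SSD on demand, so RAM only holds the experts the current token selects.
experts = np.memmap("experts.bin", dtype=np.float16, mode="r",
                    shape=(N_EXPERTS, D, D_FF))
router = np.random.default_rng(1).standard_normal((D, N_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token through its TOP_K experts, touching only their weights."""
    scores = x @ router                      # router scores every expert
    top = np.argsort(scores)[-TOP_K:]        # keep the best-scoring ones
    gate = np.exp(scores[top] - scores[top].max())
    gate /= gate.sum()                       # softmax over the selected experts
    out = np.zeros(D_FF)
    for g, i in zip(gate, top):
        out += g * (x @ experts[i].astype(np.float64))  # SSD read happens here
    return out

print(moe_forward(np.random.default_rng(2).standard_normal(D)).shape)  # (512,)
```

Back-of-the-envelope arithmetic also explains the modest speeds reported above: if, say, each token activated 13 billion parameters stored at roughly one byte each, every token would need on the order of 13 GB read from storage; even a ~15 GB/s PCIe 5.0 SSD would then cap generation at about one token per second, unless frequently used experts are cached in RAM.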
Video memory was everything. On my Mac mini M4, for example, I have 16 GB of unified memory. That means that with tools like Ollama I can install and run models like Qwen 3.5 4B locally with some fluidity, but 7B models, or others like gpt-oss 20B, would respond much more slowly (or get stuck altogether). Video memory (unified memory on Apple devices) is the most important parameter when running local models, in both quantity and bandwidth. If you want to use them fluidly, that is the limiting factor. It is possible to fall back on "regular" RAM, but speeds then drop so drastically that it is often better not to use that option at all.

If you have a fast SSD, you have a treasure. Now the limiting factor is the SSD, since the model uses it as a kind of substitute for video memory. And the faster the SSD in our computer, the better. There is good news here, because lately PCIe 5.0 drives are reaching about 15 GB/s without much trouble, and that speed already gives enough headroom to use far larger AI models locally than we could before.

A promising future for local (and more private) AI. This discovery is genuinely striking for anyone who wants to use AI locally, because it allows huge models to be used without a huge investment in latest-generation graphics cards or, for example, in a Mac with lots of unified memory: a Mac Studio M3 Ultra with 512 GB of memory costs more than 10,000 euros. With this new method we could opt for a much cheaper machine that, with a good SSD, would let us use giant models fairly decently. Not as fast as those other options, sure, but still very decent. It is a notable step forward in enjoying the benefits of running AI models locally, including the biggest one of all: privacy. With this type of local execution, our conversations and everything we tell the chatbot stay on our machine; nothing ends up on the servers of companies like Google, OpenAI, Meta or Anthropic.

In Xataka | Jensen Huang believes we have reached the "coming of the AI wolf." It is perfect for feeding a Tamagotchi

Low-cost gas stations take Repsol and Moeve to the CNMC: they say the big oil companies' discounts suffocate their business model

If we stick to average gasoline prices in Spain, the measures imposed by the Government have given Spaniards' wallets a little relief. That, at least, is what the portal dieselgasolina.com, which collects prices from all the gas stations in the country, tells us. The full picture, however, is not that simple. There are nuances, and one of them has low-cost gas stations at odds with the big oil companies.

A meeting with the CNMC. This is the news reported by Vozpópuli, which says that the National Association of Automatic Service Stations (AESAE) will meet with the CNMC to voice its complaints about Repsol's and Moeve's policies of aggressive discounts on fuel purchases. According to the newspaper, the association is in favor of filing a formal complaint because it believes those discounts are intended solely to harm its members' business. At Xataka we have contacted the association, but at the time of writing we have not received a response.

What has happened? On March 20, 2026, the Government reduced VAT on fuel from 21% to 10%. This caused an immediate drop in prices of about 30 cents on average, according to figures collected by dieselgasolina.com. Even so, both diesel and gasoline remain well above the prices we saw on March 1, when the Iran War had just begun. Then, gasoline cost 1.495 euros/liter; today it costs 1.584 euros/liter. Diesel is the hardest-hit fuel: from the 1.447 euros/liter recorded at the beginning of the month, it now stands at 1.783 euros/liter, even above 98-octane gasoline.

Repsol and Moeve squeeze. In this context, Repsol and Moeve have seized the moment to launch aggressive discount campaigns that, of course, are not available to everyone, since they rely on loyalty cards and multi-energy programs to hook consumers with more attractive prices. Repsol leans on Waylet to put discounts of up to 40 cents/liter on the table: with the card the discount is 10 cents/liter, doubled if you have your electricity contracted with Repsol, and reaching 40 cents/liter if you have also taken out home or car insurance. Moeve uses a very similar strategy. Through its alliance with Naturgy, the company offers discounts of 20 cents/liter on each refueling if you have also contracted electricity or gas, plus six cents/kWh on each electric-car recharge. These figures grow if you have contracted other services, even covering your home's consumption with solar panels: in that case the discount reaches 60 cents/liter and 15 cents/kWh per electric recharge.

Low-cost companies complain. These discounts are not going down well with the low-cost companies. Firms of this type already warned the Government in the early days of the war in Ukraine that the State's 20 cents/liter subsidy put their business model at risk. They also pointed to the big oil companies as the architects, through their discounts, of a situation that compromised their economic viability. Hence, according to Vozpópuli, these companies will file a formal complaint with the CNMC against a pricing policy they consider abusive. They are not alone: FACUA also denounces that service stations are absorbing the state aid from the VAT reduction. Back in 2022, the CNMC verified that the discounts applied then were quickly absorbed by the oil companies.
At Xataka we have contacted the Spanish Fuel Industry Association, which defends that its members, including Moeve and Repsol, "have always taken the side of consumers in times of crisis such as the years of Covid-19, the war in Ukraine or in this case."

Same story (more or less). Those 2022 complaints seemed well founded, as time has shown. Only a few months ago the CNMC imposed more than 20 million euros in fines on Repsol for the discounts it applied during the fuel-purchase subsidies four years ago. According to the competition authority, the company ran a two-pronged strategy: it offered extra discounts of five cents/liter to professionals and, at the same time, raised the sale price of its fuel to independent service stations. The objective was to squeeze their profit margin while presenting itself as the company with more attractive prices than the competition.

A question of margins. Big oil companies have an obvious advantage in times of crisis. As was seen with the 2022 discounts and is being seen right now, they can play with their profit margins far more easily than low-cost companies. First because they hold a dominant position, with more establishments in the market; second because they buy fuel in far greater volumes. Low-cost companies can offer more attractive prices in normal circumstances, but they are more sensitive to crises when the price of the product rises. Their purchases are smaller and more frequent, so they live at the mercy of the oil price, reflecting increases sooner than the big companies. And, obviously, they suffer far more when those companies apply aggressive discounts, since their profit margins are narrower and their room for maneuver smaller: as we said, they cannot make the kind of large purchases that lock in a lower price, even for a few days.

Photo | Repsol and Ballenoil

In Xataka | Finding the cheapest gas station in your area is very simple thanks to this very powerful tool
