The world will be short of memory for AI chips until at least 2027. And cell phones and cars are already paying the price

The big bottleneck in the artificial intelligence industry has nothing to do with AI models, GPUs, or data centers. It has to do with memory, and for months we have been immersed in a crisis that manufacturers are now shedding more light on. Three companies—Samsung, SK Hynix and Micron—control 90% of global production, but current estimates indicate that between them they can only cover about 60% of expected demand through 2027. That's terrible news not only for AI, but also for everything non-AI.

The era of memory scarcity. These three manufacturers have prioritized HBM production for AI accelerators because those memories carry better margins. The direct consequence is a shortage of the DRAM used in PCs and mobile phones, and since October 2025 prices in that market have skyrocketed. Betting everything on one segment has left the other dangerously neglected.

Samsung will have new factories. According to Nikkei, Samsung plans to launch its fourth memory manufacturing plant in Pyeongtaek, South Korea, in 2026, although mass production will not begin until 2027 or later. And that plant will not make only conventional memory: a fifth plant is under construction on the same campus, dedicated to HBM chips, and it will not begin operating until at least 2028. The South Korean giant has another ace up its sleeve: the United States.

HBM at full throttle. SK Hynix is the only one of the three with a concrete supply improvement for 2026, because it already started manufacturing HBM chips at its Cheongju plant in February. It is also accelerating construction of a plant in Yongin, near Seoul, with the goal of completing it by February 2027.

Micron also asks for patience. Meanwhile, the American company Micron aims to start production of HBM chips in Idaho and Singapore in 2027, and will build a factory in Hiroshima that is expected to come online in 2028.
It has also just purchased a plant in Taiwan from Powerchip, but the chips coming out of it will not be available before the second half of 2027.

This is not enough. The consulting firm Counterpoint Research estimates that resolving the current DRAM crisis would require an industry-wide production increase of 12% annually through 2027. Current plans, however, add up to growth of only 7.5%, which makes it clear that these manufacturers' expansions are not enough. For Counterpoint's analysts, the consequence is clear: the balance between supply and demand will not normalize until 2028. SK Hynix is already saying that supply limitations for AI chips could last until 2030, and every forecast only confirms that this problem will drag on for years.

We consumers pay the price. Memory is an absolutely transversal product that is everywhere. 80-90% of current memory chips go into computers, mobile phones and servers, and the rest into cars and industrial equipment. The most direct impact is already visible in the entry-level mobile market: memory already represented 20% of the bill of materials for one of these smartphones, but that figure is expected to reach 40% by mid-2026. That leaves manufacturers few (or no) options except to pass that cost on to the price of these devices. And so on with everything. IDC estimates that mobile phone sales will fall by 13% in 2026 because of this.

The danger of cycles. The memory industry has a long history of boom-and-bust price cycles. In 2023 prices collapsed after post-pandemic demand for PCs faded. Several manufacturers recorded historic losses and learned their lesson about overproducing to meet demand. Now that we need more production, manufacturers are being much more cautious about increasing output or investing in new factories.
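Counterpoint's numbers compound quickly. A back-of-the-envelope sketch (the base capacity index of 100 is an arbitrary assumption, not a real industry figure) shows how far the planned 7.5% annual growth falls short of the 12% the firm says is needed:

```python
# Compound the two growth paths Counterpoint contrasts: the ~12%/yr
# the industry would need vs. the ~7.5%/yr currently planned.
# A base capacity of 100 is an arbitrary index, not a real figure.

def capacity(base: float, annual_growth: float, years: int) -> float:
    """Capacity after compounding `annual_growth` for `years` years."""
    return base * (1 + annual_growth) ** years

needed  = capacity(100, 0.12, 3)    # ~140.5 after three years
planned = capacity(100, 0.075, 3)   # ~124.2 after three years
shortfall_pct = (needed - planned) / needed * 100
print(f"{shortfall_pct:.1f}% short of what demand requires")  # 11.6% short...
```

Even a gap of a few percentage points per year leaves supply more than a tenth below what demand would require by 2027, which is why analysts do not expect the market to rebalance before 2028.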
For them, by the way, the crisis is going great: Samsung has earned in three months of 2026 what it earned in all of 2025.

China to the rescue. Although South Korea and the United States dominate global memory production, several Chinese manufacturers are gradually gaining relevance. YMTC and CXMT have been growing their production significantly for some time, and that gives them a golden opportunity to take market share from competitors that once seemed unattainable. Image | Liam Briese In Xataka | The situation with RAM prices is so desperate that some people are already building their own memory at home

12% of French gas stations have run out of gasoline and diesel

12% of French gas stations are running out of fuel. It's a headline that has dominated part of the news this week. Although we might assume it reflects a national shortage, the cause is quite different, and it has to do with discounts.

What has happened. 12% of French gas stations have run out of some type of fuel. The figure, moreover, has kept growing: according to the French government, by Wednesday 18% of the country's stations — almost one in five — were reporting a shortage of at least one type of fuel. Specifically, TotalEnergies announced on Tuesday that 66% of its service stations were running out of fuel, stressing that it was mobilizing to resupply the affected stations.

Why it has happened. Fuel prices in France have skyrocketed above two euros per liter, so TotalEnergies decided to cap prices at 1.99 euros per liter for gasoline and 2.09 euros per liter for diesel. These rates, notably lower than those of the rest of the distributors, triggered demand with an "unusual influx," according to the French government. The result was predictable: queues piled up at Total stations while the rest of the country's gas stations operated normally. Translation? There is no fuel shortage in France: there is a logistics problem concentrated in a single network that could not absorb an extraordinary volume of demand. Given the situation, TotalEnergies has decided to extend the measure until the end of April, although it has adjusted the diesel cap to 2.25 euros per liter.

Why is fuel so expensive in France? Just behind the Netherlands, Denmark and Germany, where climate taxes are especially high, we find France: a country with quite aggressive fuel taxation, and one where the price increase caused by the war in Iran is hitting especially hard. On top of that already high fiscal base, the conflict in the Middle East has acted as an accelerator.
The tensions around the Strait of Hormuz have pushed the price of a barrel up, and France, which produces no significant crude oil of its own, absorbs the increase in full.

What's coming. The French industry expects a rapid drop in fuel prices if the ceasefire in Iran holds, as sector sources anticipated this Tuesday. But until that happens, the French face a scenario of record prices, gas stations with queues and a summer that looks expensive for anyone who depends on a car. TotalEnergies has bought some time with its price cap, but the underlying solution is not in the hands of any oil company. In Xataka | As soon as the war in Iran began, Spanish gas stations had already done something: start raising prices

It should be impossible for an iPhone 17 Pro to run a gigantic 400B AI model. It should be

The iPhone 17 Pro has 12 GB of unified memory. That is a very decent figure for a mobile phone, but in theory absolutely insufficient to run large AI models locally. And therein lies the surprise: a new project has made it possible for this phone to locally run a model with 400 billion parameters (400B). And that opens the door to a promising horizon.

Giant AI model, dwarf memory. A developer named Daniel Woods (@dandeveloper) has created, with the help of AI, a new inference engine called Flash-MoE, whose code has been published as open source on GitHub along with a study of its behavior. Woods managed to run the Qwen 3.5 397B model locally (the full version, without distillation or quantization) on his MacBook Pro with 48 GB of RAM. He downloaded the model (209 GB on disk) and developed the inference engine to achieve something that seemed almost impossible. Other developers have gone even further and managed to run models like DeepSeek-V3 (671B) or even Kimi K2.5 (1,026B!) on their MacBooks. The speed is slow, no doubt, but they work. It's amazing.

iPhone 17 Pro is capable of running a 400B model. Another developer, known as Anemll, wanted to go a little further and try to run this nearly 400-billion-parameter model on his iPhone 17 Pro with 12 GB of RAM... and he succeeded. True, the model responds very slowly (0.6 tokens per second, hardly usable), but achieving something like this opens the door to a future in which video or unified memory is no longer so critical for running huge AI models locally. A few hours ago he doubled the speed to 1.1 tokens per second by reducing the number of active experts to four (at a 2.5% quality loss in responses). It is still not entirely usable, but the technical demonstration stands. Another user preferred a somewhat smaller model (Qwen 3.5 35B), still huge for an iPhone, and has already managed to run it locally at a more than acceptable 13.1 tokens per second.
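To put the "should be impossible" in numbers, here is a quick sketch of the memory that model weights alone occupy at a given precision (the byte-per-parameter figures are illustrative assumptions, and KV cache and activation memory are ignored):

```python
# Rough footprint of model weights alone, ignoring KV cache and
# activation memory. 1 billion params at 1 byte each = 1 GB.

def weights_gb(params_billions: float, bytes_per_param: float) -> float:
    """Gigabytes occupied by the weights at the given precision."""
    return params_billions * bytes_per_param

print(weights_gb(400, 2.0))   # 800.0 GB at 16-bit precision
print(weights_gb(400, 0.5))   # 200.0 GB even at aggressive 4-bit packing
```

Either figure dwarfs the 12 GB of an iPhone 17 Pro, or even the 48 GB of Woods' MacBook Pro, which is why streaming weights from storage is the only way these runs are possible at all.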
Why it matters. The AI models we use in the cloud (ChatGPT, Gemini, Claude) are gigantic and run in data centers with thousands of chips and enormous amounts of memory and storage. They are the most powerful because they run on the most powerful machines. Although it is possible to use AI models locally, the models we can run are much smaller, which makes it hard for them to match cloud models in quality of responses, speed or precision. This method opens the door to a future in which even "modest" machines can run giant AI models that give better answers and let us avoid the cloud.

Apple already warned us. Three years ago a group of Apple researchers published the study 'LLM in a flash', which pointed to precisely this: to run AI models locally, it would be possible to take advantage not only of the unified memory of Macs, but also of their storage drives. The speed would be slow, yes, but it would open up the possibility of running gigantic models locally on machines with far smaller amounts of unified memory. Woods used Claude Code with Claude Opus 4.6 and applied Andrej Karpathy's new "autoresearch" methodology to implement Flash-MoE based on that research. The result is really promising.

Video memory was everything. On my Mac mini M4, for example, I have 16 GB of unified memory. This means that with tools like Ollama you can install and run models like Qwen 3.5 4B locally with some fluidity, but 7B models or others like gpt-oss 20B would respond much more slowly (or get stuck altogether). Video memory (or unified memory on Apple devices) is the most important factor when running local models, both in capacity and in bandwidth: if you want to use them fluidly, it is the limiting factor. It is possible to fall back on "regular" RAM, but speeds drop so drastically that it is often better not to bother. If you have a fast SSD, you have a treasure.
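If storage stands in for memory, SSD bandwidth sets a hard ceiling on generation speed, since every generated token must read all of the model's active parameters from disk. A rough sketch of that bound (the 17B active-parameter count and 1 byte per parameter are hypothetical illustrations, not figures from Flash-MoE):

```python
# Upper bound on tokens/sec when weights are streamed from the SSD:
# each token reads every *active* parameter once, so throughput is
# capped at (SSD bandwidth) / (bytes read per token).

def max_tokens_per_sec(active_params_billions: float,
                       bytes_per_param: float,
                       ssd_gb_per_s: float) -> float:
    """Bandwidth-limited ceiling on generation speed."""
    bytes_per_token = active_params_billions * 1e9 * bytes_per_param
    return ssd_gb_per_s * 1e9 / bytes_per_token

# Hypothetical sparse MoE: 400B total but ~17B active params per token,
# stored at 1 byte/param, read from a 15 GB/s PCIe 5.0 drive:
print(round(max_tokens_per_sec(17, 1.0, 15), 2))  # 0.88
```

The sub-1-token-per-second ballpark lines up with the speeds reported above, and it shows why sparse mixture-of-experts models, where only a fraction of the parameters are active for each token, are the ones that survive this trick: a dense 400B model would be roughly twenty times slower under the same assumptions.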
Now the limiting factor is our SSD, since the model uses it as a kind of substitute for video memory, and the faster the SSD in our computer, the better. There is good news here, because lately PCIe 5.0 drives are reaching about 15 GB/s without much trouble, and that speed already gives enough headroom to use far larger AI models locally than we could before.

A promising future for local (and more private) AI. This discovery is really striking for anyone who wants to use AI locally, because it allows huge models to be used without a huge investment in latest-generation graphics cards or, say, in a Mac with lots of unified memory: a Mac Studio M3 Ultra with 512 GB of memory costs more than 10,000 euros. With this new method we could opt for a much cheaper machine that, with a good SSD, would let us use giant models fairly decently. Not as fast as those other options, sure, but still very decent. It's a notable step forward for enjoying the benefits of running AI models locally, including the biggest of them all: privacy. With this type of local execution, our conversations and everything we tell the chatbot stay on our machine; they do not end up on the servers of companies like Google, OpenAI, Meta or Anthropic. In Xataka | Jensen Huang believes we have reached the "coming of the AI wolf." It is perfect for feeding a Tamagotchi

Thousands of people fell in love with seven abandoned dogs on the run in the middle of China. It was just another AI video

The image was undeniably powerful, almost cinematic. In the freezing darkness of night, with temperatures below zero, a pack of seven dogs walked in formation along the shoulder of a highway. The video, just 11 seconds long and published on the Chinese platform Douyin, showed a motley crew: a German shepherd, a golden retriever, a Labrador, a small corgi, and several mixed-breed dogs. The clip went viral, quickly racking up more than 230 million views. The audience, saturated with news about wars and disasters, found an emotional balm in these animals. But what the internet hailed as a miracle of loyalty and survival, a real-life version of the Disney movie Homeward Bound or the children's series Paw Patrol, turned out to be a completely prefabricated story.

The birth of a viral myth. It didn't take long for the internet machinery to build an epic narrative, and from there, speculation became "truth." Rumor spread that the seven dogs had escaped from a traffickers' truck taking them to a dog-meat slaughterhouse, and it was even claimed that they had walked 17 kilometers together. The anthropomorphization of the pack reached extraordinary levels. As the comments of internet users illustrate, social networks assigned a role to each dog: the injured German shepherd was the "General" whom everyone protected; the golden retriever was the "guard" who positioned himself near traffic to shield the others; the Chinese rural dogs were the "guides" with a sense of direction; and the little corgi was the brave leader and "nurse" who walked 50,000 steps, twice as many as the rest, retracing his steps to make sure no one was left behind.

The truth behind the story. The event, however, was far less romantic and lacked villains. Extensive field reporting by City Evening News dismantled the great-escape theory. There were no meat traffickers, no kidnapping trucks, no 17-kilometer trek. Reporters located the village in Shuangyang district where the animals came from.
Three of the most famous dogs belonged to Mr. and Mrs. Zhang: the corgi, affectionately called "Big Fatty" (Dapang); the German shepherd, "Four Treasures" (Sibao); and the golden retriever, "Long Hair." As the family explained, around March 13 the German shepherd simply went into heat. Since the dogs in the village usually roam freely, the males in the area were drawn to her and began to follow her, straying just 4 or 5 kilometers until they reached the highway. The rescue was not out of a movie either. Although volunteers from rescue bases such as Tong Tong or Bitter Coffee (led by Professor Liu) used drones to search for the pack, the resolution was entirely mundane. As City Evening News detailed, Mr. Zhang had a dream in which he was feeding his dogs. Convinced that they were alive, he went out to look for them in neighboring towns and found them safe and sound in the walled courtyard of a house they had entered to take shelter. The other dogs in the video turned out to be pets of other neighbors in the area, such as Messrs. Guo and Jing, and returned home on their own.

The engine of the deception. If the story was so simple, how did it become a global phenomenon full of false details? The answer lies in technology. According to an in-depth analysis by CNN, although the original clip of the dogs walking on the highway was authentic, the story was hijacked and inflated using artificial intelligence. After the video went viral, AI-generated "spin-offs" proliferated: cinematic posters of the seven dogs, fake trailers showing their "exciting escape" and hyper-realistic images of the animals tearfully reunited with their supposed owners. The reason is purely economic, since "attention is money on the internet," as TJ Thomson, associate professor of digital media at RMIT University, explains. Content creators saw a golden opportunity to capitalize on a trend.
As Tama Leaver, a professor at Curtin University, adds, inventing or embellishing these stories using AI is "a very effective way to increase an account's numbers quickly."

The implications beyond. Although it may seem like an endearing, harmless anecdote, this viral hoax has tangible consequences. On the one hand, it perpetuates stigmas. Although the SCMP contextualizes, citing the Dalian Animal Protection Association, that pet theft for meat is a real problem in some areas of northern China (which prompted genuine concern from many), in this specific case the false narrative fanned the flames of racism. As CNN points out, the invention of the "meat factory" fueled negative stereotypes about Chinese citizens, something especially dangerous in a climate of growing xenophobia. On the other hand, there is the damage to our information ecosystem. Chinese state media and the Jilin tourism office had to step in to deny the rumor. As The Guardian quotes, authorities warned that the incident "reflects deficiencies in the dissemination of information online, where subjective speculation is easily taken as fact." Professor Tama Leaver warns about the danger of complacency: if we let our guard down and accept AI-generated images without questioning them because they are "cute dogs," our critical skills will have atrophied when we face false images about serious topics, such as armed conflicts.

The fragility of our eyes. The ending of "The Adventures of the Seven Dogs" in Changchun did not require an epic soundtrack, but a leash. The owners now leash their dogs during mating season. The trail the story leaves on the network, however, is deep.
In an era dominated by AI and the desperate search for clicks, our need to consume happy endings makes us deeply vulnerable to manipulation. The true story of the German shepherd and the corgi teaches us a hard journalistic and social lesson about the contemporary internet, as Professor Thomson …

Sam Altman's new mission: making sure that OpenAI does not run out of funding

OpenAI's strategy until now had been to shoot into the air and hope that, with luck, a bullet would hit the target. The company has finally realized that this was not the way forward, and for a few days there have been signs that it is beginning to define its priorities once and for all. It plans to double its workforce before the end of the year, it wants to launch a super app to simplify its catalog, and it has even shut down Sora 2. The changes are profound, and they also affect the CEO himself. What is Sam Altman's role in this new OpenAI? Raising money.

The Information reports that Sam Altman has changed his role within the company. Until now, the CEO directly supervised the safety and security teams, but from now on he will focus on securing more investment, managing supply chains and building data centers "on an unprecedented scale."

Why it matters. This change suggests two things: on the one hand, that Altman has distanced himself from strategic issues to get more involved in technical or secondary matters; and on the other, that the situation inside OpenAI is serious enough to move him into a role focused on fundraising. As a consequence of the closure of Sora, OpenAI has lost the 1-billion-dollar agreement it signed with Disney. On top of that, NVIDIA itself recently backed out of its 100-billion-dollar commitment. The situation is, to say the least, delicate.

Saving mode. OpenAI's strategic pivot seeks to save both money and computing resources. The closure of Sora has a lot to do with the latter, since the app consumed a lot of resources and had only launched in the United States. The team that developed it will now focus on robotics-oriented world simulation. Additionally, the applications division led by Fidgi Simo is now called "AGI deployment" and will focus primarily on commercialization and real-world usage.

Spud. That is the internal name of the company's next big AI model.
According to The Information, the pre-training phase has already concluded and the model is expected to launch in the coming weeks. It is unclear what capabilities it will have, but Sam Altman has told employees that it "can really boost the economy." Once again, this confirms that the strategic shift points toward the profitability the company craves.

AI as a consumer product. Throughout 2025, OpenAI launched many very different products on top of those it already had, which were not few. With Sora 2 it wanted to be a social network, with ChatGPT Atlas a browser, there are plans for an adult mode in ChatGPT... Until now, OpenAI's bet has been to turn AI into a mass consumer product, but it has discovered that going viral is not the same as making money, and that having so many eggs in so many baskets is not profitable.

AI as a business product. While OpenAI searched for its identity without a fixed direction, another company had a very clear one: Anthropic. The startup focused primarily on business clients, who have fewer qualms about paying subscriptions of hundreds of dollars a month, and little by little it has been overtaking OpenAI. The figures do not lie: two years ago OpenAI had a 50% enterprise market share and today it has 25%, while Anthropic already has 32%. Image | Xataka with Freepik In Xataka | Sora's closure is a sign: OpenAI takes a step back in the AI race to completely recalibrate

The latest running trend: run 20 km to churn your own butter. We have put it to the test

Just when I thought running culture could not invent any more excuses to go out and devour kilometers, the algorithm has decided to merge training with cooking recipes. To set the scene: I was calmly scrolling Instagram when I suddenly came across what I consider the final frontier of fitness: runners who make butter while they run. They have named it "churning and burning" or, simply, the butter runs.

Can it be real? Apparently, yes. It all started in February of this year with American content creator Libby Cope and her partner, Jacob Arnold. In the video, Cope asks a simple question: "We Googled it and, as far as we knew, there were no previous runners who had successfully made butter. So we said... 'Okay, shall we be the first?'" In the reel she is seen pouring a carton of liquid cream and salt into an airtight bag. "You might be wondering why," Cope says to the camera. "The real question is: why not?" Since then, the phenomenon has exploded globally. A quick look at Instagram reveals an army of runners imitating the feat on accounts like saral.fit, margot_outdoor, lib_claire, rachlzw or alexladikoff.

Gonzo journalism. Faced with such an avalanche of content, at Xataka we couldn't sit idly by, but we didn't want to get dirty either. So we turned to our hero without a cape: my colleague Javier Lacort. Javier, always willing to sacrifice his sports gear for investigative journalism, accepted the challenge without blinking: "I'll do it," he said. We owe him, at the very least, an eternity of breakfasts. The conditions of the experiment were as follows: Javier went out to run 20 kilometers with an entire 500 ml carton of liquid cream on his back. The weather: clear skies, 51% humidity and a temperature of 13ºC, although with a treacherous wind chill of 8ºC. My colleague opted for a pragmatic and very homegrown approach.
While the American pioneers recommend heavy-duty Ziploc airtight bags, Javier simply poured the liquid cream into a regular plastic shopping bag. With a few secure knots, he placed it directly into the pocket of his hydration vest. The goal was to see whether the force of the impacts over 20 kilometers would be enough to whip the cream. But before seeing the result, what does science say? How does running turn a liquid into a spreadable solid? As the Food and Agriculture Organization of the United Nations (FAO) explains, the principle is pure physics: the constant churning causes the fat globules in the cream to collide, clump together and eventually separate from the remaining liquid, known as buttermilk. In short, the same thing nomads did centuries ago by galloping with milk sacks hanging from their pack animals, only now the pack animal wears carbon-fiber shoes. Today, the runner is the human mixer. The results, however, vary greatly. Getting butter depends on several factors: the distance (most run between 5 and 10 kilometers), the intensity of the stride (the more bounce, the better) and, fundamentally, the fat percentage of the cream used.

The process and the verdict. Javier completed his 20 kilometers and, after leaving his vest on a park bench with the air of someone who has survived a true dairy odyssey, the verdict was clear. Upon opening the bag, he confessed: "It smelled wonderful, honestly." In the images he sent us of the process, the evolution is clearly visible. After 20 kilometers of impacts against the asphalt, the macro photos reveal that, without becoming a solid, consistent block of butter, the cream had been whipped into a lumpy, thick texture. Why did Javier end up with thick whipped cream instead of a block of butter like those on TikTok, despite running a considerable distance? The answer is the weather. Scientific American has the key: temperature is crucial.
If it is too cold, the fat molecules harden and fail to clump together into solid lumps; if it is too hot, the mixture turns to soup. The ideal is room temperature. With a wind chill of 8ºC, Javier had the thermometer against him. In fact, other runners who attempted the challenge on snowy days failed in the same way. Given all this, for those who want to replicate it, the pioneers offer some vital advice. Libby Cope recommends running for at least an hour, using cream with 35% fat and, as a rule of thumb, always double-bagging in airtight bags to keep your back from ending up looking like a clandestine cheese factory. Other users recommend loosening the hydration vest a little so that the bag bounces more, or choosing routes with hills, stairs or uneven terrain. And the vital question: is this edible? The short answer is yes. In fact, eating it has become the official goal of the run. The challenge has spawned a small post-workout ritual: open the container to check whether there is butter, and spread the fresh result on a piece of bread as a recovery snack. It's the perfect ending for the social media video. Culinary creativity has not taken long to appear. One of the runners, Irene Choi, is no longer satisfied with the basic recipe and instead practices "habit stacking," creating flavored butters: she adds sea salt, herbes de Provence, garlic or even honey before going for a run. Choi went so far as to make a "honey butter and corn juice" that she called "an excellent use of my time." From a more cynical (and brilliant) perspective, columnist Emma Beddington reflects on the phenomenon in The Guardian: "The couple (Libby Cope and Jacob Arnold) now have more butter than they know what to do with. Do they even know how much butter costs these days? Let them sell it!" Beddington jokes that this trend fits perfectly into …

Data centers have run out of “plugs” in central Europe, so they are migrating north and south

The insatiable appetite of artificial intelligence (AI) is redrawing the map of Europe. Historically, the European data center market has been dominated by a handful of metropolitan areas known in the industry as the "FLAP-D" markets: Frankfurt, London, Amsterdam, Paris and Dublin. The main attraction of these cities was their proximity to large demand centers, which allowed extraordinarily fast data transmission. Current forecasts, however, indicate that this historical dominance is beginning to crumble. Technology developers are packing their bags, and the reason is purely physical: there is not enough energy.

The collapse of the giants. The driving force behind this technological exodus is the sheer congestion of the electrical grid in the traditional epicenters. Unlike a conventional factory, data centers pose a brutal challenge for any infrastructure: they are huge, hyper-localized loads that operate around the clock and can ramp up their consumption faster than almost any other industry. The local impact of these installations is astonishing. According to Greenpeace, in 2023 data centers consumed between 33% and 42% of all electricity in cities such as Amsterdam, London and Frankfurt. The most extreme case is Dublin, where they accounted for almost 80% of electricity consumption. The situation became so critical that Ireland was forced to impose a de facto moratorium on new data centers in its capital until 2028.

The exodus to the north and south. As a direct consequence of this bottleneck, the share of installed capacity in the FLAP-D markets will fall from the current 62% to just 51% by 2035, according to a report by Ember. This drop marks the beginning of a new era in which developers flee the bottlenecks. The new map would look like this. The big winners: the Nordic countries top the expansion list.
They offer some of the least congested grids in Europe, low electricity prices, minimal carbon intensity and cold climates that reduce cooling needs. Demand in the region is expected to increase four- or five-fold. The awakening of the south: on the other side of the continent, countries such as Greece, Italy, Portugal and Spain also project explosive growth, driven by their renewable energy potential. The laggards: some nations, despite strong economies and plenty of IT talent, are falling behind. Poland and Czechia are the clearest example. As Paweł Czyżak, director of the Europe Programme at the analysis center Ember, explains, their electrical systems are still tied to coal and gas (Poland emits about 600 gCO2/kWh and the Czech Republic about 400 gCO2/kWh). With no clean energy to offer, investors prefer to look to their greener neighbors.

Don't underestimate the south. While the north leans on the Scandinavian cold, Spain faces this exodus from a privileged position, breaking daily renewable generation records. However, its electrical grid suffers a serious administrative "thrombosis": there is plenty of clean energy, but not enough cables to transport it, leaving 130 GW of projects trapped in a bottleneck. Faced with the avalanche of data centers that threatened to overwhelm the system, the government and the CNMC have applied emergency surgery. The solution involves pioneering "flexible access permits", which allow these facilities to use residual grid capacity by accepting outages in emergencies, and the non-negotiable requirement that they withstand voltage dips to shield the electrical stability of the entire peninsula.

Planning and more planning. None of this happens by chance. In places where the grid flows smoothly, there are years of work behind it. The Norwegian operator Statnett has long been preparing to absorb three times the current electricity demand from data centers by 2030.
In Denmark, Energinet began building high-voltage substations in 2017 in anticipation of precisely this scenario.

Beyond the cables, the technology inside each facility is decisive. The key indicator is PUE (Power Usage Effectiveness), which measures the technical efficiency of each installation. Paweł Czyżak points out in his newsletter that the difference is abysmal: the leading centers consume 24% less electricity and emit four times less CO2 than an average facility. Google has the best student in the class in Fredericia (Denmark): it averages a spectacular PUE of 1.07 and runs on 91% clean energy.

The technological paradox. There is, however, a fascinating irony here: the same artificial intelligence that today saturates the cables could be the salvation of the electrical system. According to calculations by the consulting firm Deloitte, the efficiency improvements this technology will bring will save more than 3,700 TWh globally by 2030. Put in perspective, the deployment of these algorithms will save almost four times the energy consumed by all the data centers on the planet combined. Examples from other latitudes support this theory: in Southeast Asia (ASEAN), it is estimated that integrating AI into the management of its electrical systems will save more than $67 billion and avoid the emission of almost 400 million tons of CO2 between now and 2035.

Infrastructure decides the future. At the bottom of this complex puzzle of cables and algorithms, what is at stake is pure and simple economic competitiveness. These are not minor figures. In the Netherlands, the data and cloud sector already attracts 20% of all foreign direct investment. In Germany, estimates put the contribution of these centers to GDP jumping from the current 10.4 billion euros to more than 23 billion in 2029. The warning for legislators and regulators is clear: the technology giants have no patience to wait for new cables to be buried.
They will move their billions to where the grid already has room. As Czyżak says, the country that wants to attract the industry must guarantee abundant clean energy and plugs ready to use. In the frenetic race to dominate the technological future, having a ready electrical grid is no longer an advantage; it is the only ticket to entry.

Image | İsmail Enes Ayhan on Unsplash and IRENA

In Xataka | Iran is directing its attacks where it knows it hurts the West: energy and data centers
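As a side note, the PUE metric cited above is simple arithmetic: the total energy drawn by the facility divided by the energy that actually reaches the IT equipment. A minimal sketch; the 1.07 figure is Google's reported Fredericia average, while the other value is an illustrative assumption, not from the article:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy drawn by the facility
    divided by the energy consumed by the IT equipment itself.
    1.0 is the theoretical ideal (zero overhead); lower is better."""
    return total_facility_kwh / it_equipment_kwh

# Google's Fredericia site reports an average PUE of 1.07: for every kWh
# of computing, only about 0.07 kWh goes to cooling and other overhead.
print(pue(1.07, 1.0))

# A hypothetical average facility drawing 1.5 kWh per kWh of computing:
print(pue(1.5, 1.0))
```

The gap between those two numbers is what the article means by "the leading centers consume 24% less electricity" than an average plant.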

If you have run out of free HDMI ports on your TV, this 15-euro switch will solve the problem for you

Although most televisions come with three or four HDMI ports, on many occasions we can fall short if we want to connect one or more consoles, a computer or a Fire TV Stick. For this, HDMI switches are quite practical accessories that allow us to add more devices, and they generally have a fairly reasonable price. The Anker HDMI Switch is one of the brand's most affordable models, and its compact format is designed not to take up space on the TV cabinet. Its price is 14.99 euros and it is a two-in-one switch, which allows us to connect two devices to a single HDMI port on the television. The price could vary. We earn commission from these links

A way to connect multiple devices to one HDMI port

HDMI switches are usually compact accessories that incorporate several HDMI ports: two or more inputs to connect devices and one output dedicated to the TV. This one from Anker allows you to connect two devices, such as a PlayStation 5 and a Nintendo Switch 2, and switch between them by simply pressing the button on the top.

This switch supports resolutions up to 4K/60Hz, is compatible with HDR and works with a wide assortment of devices, allowing us to connect a console such as a PlayStation 4 or PlayStation 5, an Xbox Series, a projector or a computer, among others.

On the other hand, if we are looking for extra convenience, there are many other switches that include more HDMI input ports and a remote control for switching between connected devices. One of the most interesting is the Ugreen HDMI Switch (29.99 euros), which comes with three HDMI input ports and a remote control. If you want to take full advantage of the features of current-generation consoles, Ugreen has another switch (19.59 euros) that supports 8K/60Hz and 4K/240Hz, although in this case it only comes with two HDMI input ports.
Some of the links in this article are affiliated and may provide a benefit to Xataka. In case of non-availability, offers may vary.

Images | Juan Carlos Lopez and Freepik (header), Anker

In Xataka | This is the gaming tower I would buy. The computers with the best quality-price ratio for gaming recommended by Xataka

In Xataka | Best gaming laptops: which one to buy and eight recommended computers from 770 to 3,000 euros

OpenAI's obsession was to train its models like crazy. Now it's to run them faster than anyone else

OpenAI has signed an agreement estimated to be worth more than $10 billion with Cerebras Systems, a startup that designs advanced AI chips dedicated to one thing: running AI models as fast as possible. It is a unique alliance not only because of that change of focus, but because there is a conflict of interest.

What has happened. The firm led by Sam Altman has committed to purchasing 750 MW of computing capacity from Cerebras over the next three years. Sources cited by The Wall Street Journal indicate that this alliance has an estimated value of more than $10 billion. We are therefore looking at an operation extraordinary in size, but peculiar in form and substance.

What Cerebras does. The firm, based in Sunnyvale, California, was founded in 2015 by former engineers from SeaMicro, acquired in 2012 by AMD. The startup designs artificial intelligence chips specifically aimed at the inference stage of AI models, that is, at executing them.

More tokens per second, please. When we use ChatGPT or any AI model, what we are looking at is an AI model performing inference. Some "write" faster than others, and that speed of displaying text in responses is measured in tokens per second. Typically NVIDIA chips are great for the training phase, but not so much for the inference phase. Chips from companies like Cerebras, or those of the well-known Groq, which has just been "bought" by NVIDIA, are designed precisely to run those models at full speed and achieve very high token-per-second rates.

AI is already good. Now it wants to be fast. NVIDIA's recent "purchase" of Groq makes it clear that Jensen Huang's company wanted the ability to offer those ultra-fast inference chips, and now OpenAI seems to want something very similar with its Cerebras deal.
AI models have already achieved remarkable performance in many scenarios, and although they are not perfect, companies now want them not only to work well, but also to work very, very fast, with their responses, even long ones, appearing almost instantly.

OpenAI wants more computing power. This operation also helps Sam Altman's company with another objective: to obtain (and reserve) as much computing capacity as possible in anticipation of demand for these AI models growing non-stop in the coming months and years. According to the WSJ, OpenAI already has more than 900 million weekly users, and its executives have frequently commented that they continue to face computing capacity problems.

Cerebras grows. This agreement reinforces Cerebras' position in a market that clearly demands this type of solution. The firm is negotiating a $1 billion investment round that would bring its valuation to $22 billion, nearly tripling the current valuation of around $8.1 billion. In the past it has raised $1.8 billion, according to PitchBook.

Conflict of interest. This agreement also draws attention for an important reason: Sam Altman, CEO of OpenAI, is also an investor in Cerebras (he is listed at the bottom of this Cerebras page), and in fact his company at one point considered acquiring Cerebras, although that operation never came to fruition. We are therefore looking at a deal that theoretically benefits Altman on both sides, which is worrying.

How will OpenAI pay for all this? This new agreement once again stokes the debate about OpenAI's ability to meet its credit and debt obligations. In 2025 it generated about $13 billion in revenue, but that enormous amount remains minuscule compared with the contracts it signed with Oracle, Microsoft and Amazon, which amount to about $600 billion that will have to come from somewhere. From where? It's a good question. We'll see if they can end up answering it.
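The inference speed the article keeps referring to, tokens per second, is just the number of tokens a model generates divided by wall-clock time. A minimal sketch with hypothetical numbers for illustration only; none come from OpenAI, Cerebras or Groq:

```python
def tokens_per_second(tokens_generated: int, elapsed_seconds: float) -> float:
    """Inference throughput: how many tokens of output a model
    produces per second of wall-clock time. Higher feels 'faster'."""
    return tokens_generated / elapsed_seconds

# Hypothetical example: a 600-token answer generated in 12 seconds
# corresponds to 50 tokens per second. Inference-optimized chips like
# those from Cerebras or Groq aim for rates many times higher than
# general-purpose GPUs achieve for the same model.
print(tokens_per_second(600, 12.0))
```

This is why a deal about "running models faster than anyone else" is really a deal about raising this single number at massive scale.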
In Xataka | The alliance between Oracle and OpenAI is not just about data centers: it is about overtaking Google, Apple and Microsoft on the right

V-16 beacons run the risk of being left without connectivity if their manufacturer goes bankrupt. Don’t worry, there is a solution

You may have read it on social networks: you buy a connected V-16 beacon, you go years without using it and, before you know it, the company that sold it to you has gone bankrupt, has stopped paying for its servers, and now you have a nice paperweight because, without connectivity to DGT 3.0, that beacon has become illegal.

Is it true? No. Plain and simple. When we buy a connected V-16 beacon, the manufacturer assures us that connectivity is guaranteed for at least 12 years. The manufacturer may offer more connectivity time as a purchase incentive, but it cannot offer less. This, like the beacon's luminosity or the minimum 30 minutes it must remain in operation, is one of the requirements the DGT has imposed on manufacturers so they can sell their beacons and we can buy them with enough peace of mind that we are following the rules.

Sure, but... what if the company goes bankrupt? It is one of the questions some users have asked, and it has been answered by accounts on social networks such as Twitter. It is claimed that when a connected V-16 beacon is activated and the required 100 seconds pass, the following process is launched:

Protocol A: the beacon sends the data exclusively to the manufacturer's servers.

Protocol B: the data leaves the manufacturer's servers and is forwarded to the National Access Point for Traffic and Mobility Information, which is where all activations and any other type of emergency are recorded.

The claim goes that, if the manufacturer stops selling the connected V-16 beacon, the connection would be broken and we would therefore be left with a luminous paperweight, because without connectivity that light is not legal.

Guaranteed. To confirm these details, we have contacted some of the companies that manufacture or sell these types of beacons.
César Basterrechea of Atressa Automotive, which makes its own beacons, explains to us that the claim is not true and clarifies what would happen if his company went bankrupt and stopped paying for the beacons' connectivity. First, he points out, the manufacturer has to register with DGT 3.0 and request a connectivity license. When this requirement is met, the following happens:

"My operator sends me the data generated by one of my beacons through an APN, protected within a private VPN; once the information reaches my cloud, we send it through a VPN with a digital certificate to DGT 3.0. If my company closed tomorrow, my operator would redirect the data emitted by my beacons to another APN of its own and, through its own VPN, would send the data to the DGT cloud."

With these words he explains, therefore, that it is the operator that provides the backup if the company stops paying for the servers and can no longer offer the service.

They confirm it to us. Asked about this, the other party gives the same answer. At Xataka we have contacted Orange, an operator that provides connectivity for several connected V-16 beacons on the market. The company confirms the above, although it clarifies that, strictly speaking, the operator does not keep the bankrupt company's servers running; it only guarantees that the signal reaches DGT 3.0.

"The communication architecture has been defined so that there are two ways to send the data to DGT 3.0: through the manufacturer's cloud services (which must always be used if there are no incidents) or directly from the operator if the manufacturer's cloud service is not operational (manufacturer bankruptcy or massive outage of its cloud service)."

It's not easy. The truth is that, although we have confirmation from this beacon manufacturer, finding this spelled out in the regulations is not easy.
The Resolution of November 30, 2021, which details the requirements a connected V-16 beacon must meet to be valid, specifies that the manufacturer must have backup to provide the service if it cannot do so itself, but nowhere does it specify whether that backup should be the operator, as Atressa Automotive tells us. The text explains the above-mentioned details of protocols A and B and subsequently states the following:

The implementation of a device with these characteristics requires having a standard channel and a common language. Additionally, defining this standard also makes it easier for a third party to perform these functions if necessary due to a problem in a manufacturer's information systems. The data model that the messages V-16 devices send to their manufacturers' information services must comply with is defined below.

A hoax. Although the connected V-16 beacons have seen plenty of controversy, and we know there are even those who have demonstrated cybersecurity risks, the truth is that this time we are facing a hoax. The DGT has actively repeated that when we buy a connected V-16 beacon we are guaranteed access to DGT 3.0 for 12 years. And although the protocol does not clearly detail which specific company must take charge (operators, other manufacturers...), it does specify that backup must be guaranteed to keep the service active.

Photo | DGT

In Xataka | V16 beacons without eSIM or connectivity: what the DGT says about them from 2026
