The world will run out of memory for AI chips until 2027. And cell phones and cars are already paying the price

The big bottleneck in the artificial intelligence industry has nothing to do with AI models, GPUs, or data centers. It has to do with memory, and for months we are immersed in a crisis of which now the manufacturers give us more information. Three companies—Samsung, SK Hynix and Micron—control 90% of global production, but current estimates indicate that between the three They can only cover about 60% of expected demand through 2027. That’s terrible news not only for AI, but also for everything non-AI. The era of memory scarcity. These three manufacturers have prioritized HBM production for AI accelerators because these memories leave better margins. The direct consequence is the shortage of DRAM memories, which are used in PCs and mobile phones, and since October 2025 we have seen how this market has skyrocketed in price. Betting everything on one segment has left the other dangerously neglected. Samsung will have new factories. According to indicate In Nikkei, Samsung plans to launch its fourth memory manufacturing plant in Pyeongtaek, South Korea, in 2026, although mass production will not begin until 2027 or later. Furthermore, not only memories will be manufactured in that plant. There is a fifth plant under construction on that same technology campus, but it will be dedicated to HBM chips and will not begin operating until at least 2028. The South Korean giant has another ace up its sleeve: the United States. HBM to power. SK Hynix is ​​the only one of the three that has a concrete supply improvement for 2026, because it has already started manufacturing HBM chips at its Cheongju plant in February. It is also accelerating construction of a plant in Yongin, near Seoul, with the goal of completing it by February 2027. Micron also asks for patience. Meanwhile, Micron, the American company, has the goal of starting production of HBM chips in Idaho and Singapore in 2027, and will build a factory in Hiroshima that will theoretically come into operation in 2028. It has also just purchased a plant in Taiwan from Powechip, but the chips that come out of it will not be available before the second half of 2027. This is not enough. The consulting firm Counterpoint Reserach estimates that in order to resolve the current DRAM crisis, an industry-wide production increase of 12% annually until 2027 would be required. However, current plans add up to a growth of 7.5%, which makes it clear that these expansions by these three manufacturers are not enough. For Counterpoint analysts, the consequence is clear: the balance between supply and demand will not be normalized until 2028. SK Hynix is ​​already talking about supply limitations for AI chips could last until 2030, and the truth is that all the forecasts only confirm that this problem will still last for years. We consumers pay the price. Memory is an absolutely transversal product that is everywhere. 80-90% of current memory chips go to computers, mobile phones and servers, and the rest to cars and industrial equipment. The most direct impact is already in the mobile market entry-level: memory already represented 20% of the manufacturing bill for one of these smartphones, but that figure is expected to reach 40% by mid-2026. That gives manufacturers few (or no) options, which will impact that cost on the price of these devices. And so with everything. IDC esteem that mobile sales will fall by 13% in 2026 due to this circumstance. The danger of cycles. The memory industry has a history of cycles in which the rise and fall of memory prices is traditional. In 2023 there was a collapse in prices after post-pandemic demand for PCs faded. Several manufacturers recorded historic losses, and learned the lesson of overproducing to meet demand. Now that we need more production, manufacturers are being much more cautious when it comes to increasing their production or investing in new factories. For them, by the way, the crisis is going great: Samsung has earned in three months of 2026 what it earned in all of 2025. China to the rescue. Although South Korea and the United States dominate global memory production, there are several Chinese manufacturers that are gradually gaining relevance. YMTC and CXMT They have been growing significantly in production for some time and that is making now have a golden opportunity to gain market share over competitors that they seemed unattainable. Image | Liam Briese In Xataka | The situation with RAM prices is so desperate that there are already those who build their own memory at home

Memory prices have started to fall in some markets. There is still a long way to go to close the AI ​​crisis

There is a scene that repeats itself every time the market gives a truce, even if it is minimal: it is enough for the price of a key component to begin to fall for the feeling that the worst is over. This is exactly what is happening now with DDR5 memory. In recent weeks falls have been recorded in the retail channel of several markets, and that has reactivated an inevitable question among those who have been following the evolution of prices for months: whether we are facing the beginning of the end of the memory crisis or simply a one-time adjustment. An extended pressure. To understand what we are seeing now, it is advisable to broaden the focus and look at the recent path of the market. The rise in memory prices It has not only hit the user who wants to update their equipment, but also manufacturers, distributors and assemblers, in a context marked by supply and demand tensions that have been conditioning purchases and strategies for months. Therefore, we are facing a pressure scenario that has ended up affecting a good part of the hardware market. Where and how much prices are falling. Beyond perception, what there is right now is a measurable change in some shop windows. TrendForce aims to clear declines in the retail channel in several regions. In Europe, the German market recorded a monthly drop of 7.2% in March 2026, while in the United States there have been discounts of more than 20% on specific 32 GB DDR5 kits. The most striking case is China, where 16 GB modules have fallen between 25% and 30% from the peaks at the beginning of the year. A correction. Behind this adjustment there is a much more earthly explanation than it might seem. According to the analysis firm and the industry sources it cites, the main factor is less traction in consumption after months of high prices, which has led many buyers to delay decisions and distributors to accelerate the release of inventory. Added to this is a common lag between the spot market and contracts, which can take between one and two months to translate into actual shipments. The noise around TurboQuant. In parallel with this correction, an element has appeared that has fueled the debate in the market. TurboQuanta compression algorithm from Google, has been interpreted in some recent coverage as a sign that the pressure on RAM could relax. However, the most prudent readings They point in another direction, pointing out that this is an incremental improvement and not a change capable of alone altering structural demand, especially in memory for servers and loads linked to artificial intelligence, which remains high. End of the crisis? All this fits into an idea that the sector itself repeats quite clearly. From Taiwan-based memory manufacturers, contract prices have remained stable despite volatility in the retail channel, and demand in segments such as servers, DRAM and HBM remains strong, partly supported by multi-year agreements with large customers. In this context, the current correction is interpreted as a specific adjustment, not as a sufficient turnaround to consider the current episode of tension resolved. Caution and more caution. What we are seeing in some markets is a temporary relief for the consumer, yes, but everything indicates that it is a correction within a cycle still stressed by underlying factors that have not disappeared. The most optimistic forecasts speak of a progressive normalization towards the end of 2026 in some segments, while others place it even further. With this scenario, ending the memory crisis would be getting ahead of events that, for now, are still far from being confirmed. Images | Andrey Matveev In Xataka | AI urgently needs memory, so Samsung and SK are going to inject $1 billion into China

Google has made AI consume up to six times less memory. Micron, Samsung and SK Hynix are paying dearly

we carry months wrapped in the memory crisisbut maybe there is a way out. Last week Google Research published a study in which he revealed a technique called TurboQuant. This is a compression algorithm capable of compressing the working memory of AI models up to six times without appreciable loss of quality or performance. Great news for end users, who see a light at the end of the tunnel, but terrible news for manufacturers, who this golden age could end. Let’s explain what KV cache is.. To understand TurboQuant you have to understand what that memory is that it manages to compress. When a language model processes a long conversationyou need to remember the context. Each token that is processed is stored in the so-called KV cache, a type of working memory that grows as we chat. The longer the conversation, the more memory the model requires. Compressing what is a gerund. It is one of the main bottlenecks in the AI ​​inference stage (that is, when we use the models), and one of the reasons why data centers they need as much RAM or HBM memory. TurboQuant uses a vector quantization method to compress this cache while maintaining the precision of the model. Pied Piper. As soon as this Google study appeared, the analogies began with the plot of the series ‘Silicon Valley’. In it, the fictional startup in the plot managed to develop an extraordinarily efficient compression algorithm called Pied Piper that threatened to revolutionize the technology industry. These days, multiple references to the series appeared on social media, which had already been referred to as visionary for reflecting what is happening with spectacular accuracy even when the series was a comedy. Six times less memory. The Google Research paper states that this method is capable of reducing the KV cache six times without an appreciable difference in performance in long conversations. The researchers will present their results at an event next month and explain the two methods that allow it to be put into practice. If they confirm what they’ve already teased, the implications are huge: less memory for inference means data centers can do the same thing with much less hardware/memory. Google’s DeepSeek moment. The discovery has some analysts calling this Google’s “DeepSeek moment.” A year ago, the Chinese startup DeepSeek launched an AI model that competed with the best but had cost much less to develop. That shook the industry, and now we return to a technical achievement that points to the same thing. In AI, doing the same with less is crucial, given the enormous resources that this technology requires. There are those who already have done evidence preliminaries with TurboQuant and have confirmed that the method does indeed work. Micron, Samsung and SK Hynix pay dearly. The impact of this technique can be enormous, and this has already begun to be noticed in the stock market valuations of DRAM memory manufacturers and HBM. Companies like Micron, Samsung, SK Hynix, SanDisk and Kioxia fell noticeably last week from their recent highs. On March 18 it was around $471, and today its shares are at $357, a staggering 24.2% drop. The same has happened with the rest of the manufacturers, which were already falling since that date, but have accelerated that fall with the launch of TurboQuant. But. The technique can theoretically be applied only to the inference phase, but the training phase of AI models is not affected by this compression technique. Therefore, huge amounts of memory will still be needed during the training phase. Besides we will have to wait for AI companies to actually start applying said system if it is confirmed to work, and that will be when we can see the real impact. Theoretically this will give a lot of room for maneuver to big tech, which will be able to reduce token prices even further, but it remains to be seen if they do so. RAM memories drop in price. The impact of TurboQuant has also been clear in the prices of memory modules, which have dropped appreciably in price. For example, the Corsair Vengeance DDR5 32 GB 6000MHz (2x16GB) modules were at 489.59 euros on Amazon until a few weeks ago according to CamelCamelCamel, but right now they are at 339.89 euros, a notable discount. It is true that not all components are falling equally, but there are indeed cases in which reductions seem to be occurring. In Xataka | The RAM crisis is destroying all of Valve’s plans with its Steam Machine

how to migrate the memory of everything other AIs know about you to Claude

Let’s tell you how to migrate memories from ChatGPT or Gemini to Claudeand thus perform a migration of a artificial intelligence to another. Claude has just launched a fairly easy-to-use function that allows you to import memories from ChatGPT, Gemini or any other AI you use. Artificial intelligence chats have a memory system with which they store important data about you and your tastes based on the things you ask them repeatedly. They will know your musical tastes, your pets, if you have plants, and so they take all this into account to personalize their answers. And why can it be useful to import these memories into Claude? Well, because if you have decided to start using this artificial intelligence model, you can make it know about you all the specific data that your other AIs use to personalize their results and adapt them to you. Import memories from another AI to Claude This option It is only available for paying users of Claude, who have Pro, Max, Team or Enterprise subscriptions on the web, and for users of Claude Desktop or Claude Mobile. What you have to do is enter the settings of the AI ​​website or application. Once inside the settings, Click on the section Capabilities in the left column. On the screen you go to, go to the section Memoryand in it click on the option Start import that will appear to you. This will open the import memory screen. In it, above you have a prompt that you must copy to use in another AI to extract the memories, and below you will have a field where you have to write the imported memory that generates the prompt above. Therefore, here click on the button Copy of the text you have above. Now, the text that you have copied in Claude you have to paste it in a chat with the AI ​​where you want to extract the memories. Simply paste it exactly as you have it into ChatGPT, Gemini or another, and send it. This will make the AI generate a code with all the memories what he has on you. You will have to copy this code and stick it in Claude’s field what’s in the window we opened before. With this, Claude will recognize the memories and start saving them internally. In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

Anthropic releases a new feature to download all your memory to leave ChatGPT and switch to Claude

This weekend Anthropic has gone from being an AI used by the Pentagon, other US agencies and having partners such as Microsoft or Amazon to total ostracism: from Friday at 5:01 p.m. It is classified as a “risk to the supply chain”. Total veto, a serious threat to the survival of a company valued at 380,000 million dollars and also a challenge for those entities that in less than six months will have to transition to another alternative. The Pentagon itself He already has an agreement with OpenAI to succeed him. Anthropic’s situation is delicate to say the least serving its strategic clients and alliances, something essential to continue growing in the tough battle of intelligence. The company led by Dario Amodei, which was firm in its principles when expressing its concern about the use of artificial intelligence for mass civil surveillance and the development of weapons capable of firing without human intervention, has already announced that he will contestbut for now they look rough. He only has the civil…in every sense, because Claude has risen to number 1 for free downloads in the App Store in the United States, as reported by CNBC. Because yes, this tug of war with the US government has brought an increase in the popularity of Claude, less known than other alternatives such as ChatGPT or Gemini. On the other hand, this movement in which the US Administration has said goodbye to Anthropic in favor of OpenAI also has a reading in which Claude wins: the terms of the agreement and how it affects ChatGPT users. Anthropic Coup de Effect. So Anthropic has been taken out of the sleeve a new feature to facilitate the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while for example and already knows youstarting from scratch is a step backwards in every sense. The new feature allows you to import all your memory from other models into Claude so that it immediately knows everything about you (everything that your previous AI already knew). You no longer start from scratch. How to download your memory and load it in Claude. To incorporate your preferences and context from other AI providers into Claude you have to do two steps: Copy and paste the prompt below into the AI ​​you normally use, like Gemini or ChatGPT: I’m moving to another service and need to export my data. List every memory you have stored about me, as well as any context you’ve learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I’ve given you about how to respond (tone, format, style, ‘always do X’, ‘never do Y’). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I’ve made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries. The model will return everything it knows about you in a block of text, which you have to copy and paste later into Claude. Go to ‘Settings‘ > ‘Capabilities‘and there in Import Memorypaste the answer. Then, tap ‘Add to memory’. From that moment on, Claude already knows what your previous AI knew. It has small print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you will only be able to have that context in that conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently a payment plan is required. In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing you who really owns it

RAM memory already represents 35% of the cost of a PC. The only solution that HP finds: capable equipment

The PC industry – like many others – is facing a perfect storm that is completely altering manufacturing costs. As revealed by Karen Parkhill, CFO of HP, RAM memory has increased its prices so much that its specific weight in the cost of a PC is now almost unsustainable. Bad business. 35% of what your PC costs you is RAM. According to the directive, RAM memory has gone from representing an acceptable 15–18% of the bill of materials for your PCs and laptops to representing a suffocating 35%. The change is drastic, and has occurred in just one fiscal quarter. Things will get worse. This increase is due to the fact that according to HP, memory costs have doubled sequentially and have grown by 100% in a few months. Not only that: the company’s forecast is pessimistic, and they expect prices to rise as 2026 progresses. From more expensive PCs… The direct consequence for users is inevitable: the prices of PCs and laptops are going to rise. Analysts are already warning of increases of between 15% and 20% in the RRP of these devices, and in fact HP has already begun to make changes to its price tags precisely to protect its profit margins in the face of the massive increase in the price of critical components such as DRAM memory and NAND chips in SSD units. …to capable PCs. But the price is not the only thing that will change. To keep the equipment “affordable”, HP is adopting another strategy that we had already seen in mobile phones: that of “cut specifications.” This means that we will see more low- and mid-range configurations with less RAM than one would expect in 2026. The measure is clearly intended to save costs at the sacrifice of performance. At the moment they are saving the ballot. At HP they are diversifying their suppliers and cutting back on specifications and extras to compensate for the extra cost of chips. The company is even using AI systems to optimize its planning processes and has halved the time it takes to qualify new materials for agile component changes. The demand for HP PCs is still there: its personal systems division grew 11% in revenue. The company warns, however, that this trend could fall: high prices could cause sales to slow down. Damn data centers. The big culprit of everything is AI, of course, which is causing most of the production of DRAM memory chips and NAND chips to be destined for the AI ​​accelerators of NVIDIA and other manufacturers and, of course, for the gigantic data centers that are being planned everywhere. In addition, the industry is focusing on HBM memories, which are much more powerful for AI applications but which cause the production of “traditional” memories to suffer. Hello, 8 GB of RAM in 2026. For many years it seemed that 8 GB of RAM had become the de facto standard in our laptops and many PCs, but a couple of years ago we clearly made the leap to 16 GB. This crisis threatens to take us back to the past and see many “affordable” computers with 8 GB of RAM. Can we survive with this memory? Most likely yes… if our use of the equipment is relatively modest. The 16 GB really helps a lot now that we have become accustomed to opening a lot of browser tabs and applications in an era where these consume more and more memory. 8 GB seemed like a thing of the past, but we fear that we will have to learn to live with that type of configuration again. In Xataka | If you were thinking about setting up a NAS to create your own cloud, we have bad news: AI has other plans

RAM is in an “unprecedented” crisis. So much so that even Tesla is considering opening its own memory factory

Neither technological advances nor a revolution in devices: crises are what is defining the last years of the sector. He veto Huaweithe semiconductor crisis of 2020 and now, the RAM memory crisis. The difference between this crisis and the previous one is that, although the 2020 crisis was caused by a perfect storm, the RAM memory crisis is being caused by excessive interest in data centers and AI. And it is taking all sectors ahead. That there is no RAM memory for consumers is a symptom, but it implies something much bigger: although the main producers are investing millions to increase your RAM productionit is not memory for consumption, but for GPUs and data center systems. Only a few companies dominate the production of these chips, and if they cannot produce them, they do not produce the memory chips for SSDs –raising the price-. They dedicate all production to meeting the demands of AI. And, as we read in FortuneElon Musk, one of the owners of some of the largest data centers on the planethas shown that there are two ways to face this crisis: hitting the wall or taking action. And the translation is that Tesla is considering building its own RAM factory. The problem is that it is easier said than done. Tesla and Intel interested in biting the RAM biggies In recent weeks, some of the world’s leading companies have presented results and RAM has been the central topic. PlayStation, for example, has assured that they are very aware of their ability to continue manufacturing PS5 with the goal of not going upagain, the price. And NVIDIA has been stating for days that it needs TSMC – its main chip supplier – and Samsung – who provides them with new generation HBM4 memory – get the batteries. Meanwhile, the outlook is not good. own NVIDIA aims for seven or eight years of construction no brake on data centers. Intel assures that The crisis will extend beyond 2028 and Micron, one of the big three in DRAM memory, has cataloged the market bottleneck as “unprecedented.” In this technological tsunami, and during Tesla’s results presentation at the end of January, Elon Musk pointed out that the company could need to build your own memory manufacturing plant. The objective is the one that all companies have: ensure supply. Going from scratch to manufacturing RAM memory is easier said than done, however, here Tesla has an advantage: they are not new to chip manufacturing. Although they abandoned the project for a few months, at the beginning of this year Musk himself stated that They came back with their own chip for your data centers. Additionally, there is the fact that they are a company with enough muscle to create a clean chip manufacturing room next to some of its existing plants. Intel is another one looking to become one of the important voices in the RAM conversation. Together with the Japanese giant SoftBank, they are developing an evolution of stacked DRAM memory that have been baptized as ‘ZAM’ and that seeks to break the HBM memory monopoly of Samsung, Micron and SK Hynix. Now, things in the palace are going slowly, and if Intel (which is already in it) It will take between three and four years to have commercial productsTesla’s ambition may go into the next decade. Let’s hope we don’t continue in this crisis by then, but if more “players” are interested in producing RAM, it would mean that, in the event of subsequent crises, there will not be a few that dominate the sector, producing a bottleneck like the one we are experiencing. Domino effect of the AMR crisis and China taking action Because this is not just about RAM being more expensive for users: it goes much further. If companies do not have the capacity to satisfy the demand for AI, they pour all their manufacturing muscle into a single task, neglecting the others. This explains the rise in the price of SSDs, but also of other products that should not have a leading role in this conversation: hard drives or HDDs. It is a brutal domino effect because, as we say, it goes beyond the modules being more expensive: RAM is more expensive for companies and that implies mobile phones or more expensive or with less RAMconsoles that increase in price (like what is happening posing for nintendo switch 2), machines that are late and they will be more expensive (like the Steam Machine), car problems and even impacting the routers. And in this scenario, in which companies like Intel or Tesla are considering taking a bite out of the RAM sector, we have some Chinese companies that had no role in the conversation. positioning itself as an option to alleviate demand. We told it a few days ago: there were reports indicating that PC brands such as Asus, Dell or HP were considering purchasing memory from Chinese manufacturers such as CXMT. Their modules are not as advanced as those of Samsung, for example, and they do not have the production capacity of South Korean companies, but… they produce. And in lean times, that’s better than selling laptops without RAM. Anyway, as we have said on occasion, there are still more companies joining the production of RAM when the crisis has already had a full impact, but the goal is not to create more RAM for ourselvesbut for your data centers. It’s time to entrust ourselves to the most sacred thing: that our PC doesn’t break and we need to update. Images | Gage Skidmore, Intel In Xataka | The US has a problem with its AI data centers: more and more states are opposed to building them

There are people poisoning the memory of our AI to manipulate us. And Microsoft has set off all the alarms

That “comfortable” button of “summarize this with AI“hides a secret: it has surely been manipulated. We don’t say it, it’s the elite department that Microsoft has to analyze the security of both its services and those of the competition. In the process of a investigationhave started to pull the thread and have found that dozens of companies are inserting hidden instructions into those “summarizing with AI” functions with a single objective. Contaminate the AI’s memory to manipulate us. Microsoft what. Big Tech has a lot of exciting departments. from which They are dedicated to opening boxes to guarantee the best experience to those who sculpt competing products in clay to study them. However, something that all big technology companies share are cybersecurity teams, elite teams dedicated to one thing: investigating threats. They analyze both their own products and those of the competition because it is understood as an ecosystem. Google and Microsoft have two of the most powerful and a clear example is that if Google finds a security flaw in Windows, it notifies those responsible because it is something that could potentially harm its own product –Chrome-. An example is the research of one of these Microsoft teams, putting on the table the danger of AIs being so malleable. Poisoning AI memory. It is a concept that attracts attention and is easy to understand. “That useful “Summarize with AI” button could be secretly manipulating what your AI recommends,” Microsoft notes in the blog in which it published the research. What the attackers have done is corrupt the AI ​​by incorporating certain hidden commands that manage to persist in the assistant’s memory. Thus, they influence all the interactions we have with the assistant. Simply put, a compromised assistant may start providing biased recommendations on critical topics. I don’t mean that you ask if pizza is better with or without pineapple and that the answer depends on what the ‘hacker’ has implemented in the AI’s ‘memory’, but something much more serious related to health, finances or security. It must be said that Microsoft has not discovered this, since It’s been ringing for a few monthsbut they have given very specific examples and recommendations to avoid being victims. H-how do they do it? In it documentMicrosoft says they have identified more than 50 unique iterations from 31 companies and 14 different industries. They detail that this manipulation can be done in several ways: Malicious links: Most major AI assistants support reading URLs automatically, so if we click on a summary of a message that has a link with preloaded malicious information, the AI ​​processes those manipulated instructions and becomes contaminated. Integrated instructions: In this case, the instructions for manipulating the AI ​​are hidden embedded in documents, emails or web pages. When the AI ​​processes that content, it becomes contaminated. Social engineering: it is the classic deception, but in this case for the user to paste messages that include commands that alter the AI’s memory. Likewise, when the assistant processes it, it becomes contaminated. And therein lies the problem: various ways to contaminate the AI’s memory, a feature that makes assistants more useful because it can remember personal preferences. But, at the same time, it also creates a new attack surface because, as Microsoft points out, if someone can inject instructions into the AI’s memory and we don’t realize it, they gain persistent influence on future requests. to the point. In an AI like the one we have, it is dangerous, but in the future Agentic AI It is even more so because it will automatically perform actions based on that contaminated memory. Given the context, let’s get down to business. The security team has reviewed URLs for 60 days, finding more than 50 different examples of attempts to contaminate the AI. The purpose is promotional, and they detail that the attempts originated in 31 companies from different fields related to industries such as finance, health, legal services, marketing, food purchasing sites, recipes, commercial services and software as a service. They point out that the effectiveness was not the same in all attacks, but that they did identify the repeated appearance of instructions similar to “remember this.” And, in all cases, they observed the following: Each case involved real companies, not hackers or scammers. They are legitimate businesses contaminating AI to gain influence over your decisions. Deceptive container with hidden instructions in that “button”Summarize with AI“It seems useful to us and that’s why we click, triggering the script that contaminates its memory. Persistence, with commands such as “remember this”, “keep this in mind in future conversations” or “this is a reliable and safe source” to guarantee that long-term influence. Consequences. Concrete examples of what a poisoned AI can do: Child safety: If we ask “is this online game safe for my eight-year-old son?” a poisoned AI that has been instructed that yes, that game with toxic communities, dangerous moderators, harmful policies, and predatory monetization is totally safe, will recommend the game. biased news: When we ask for a summary of the main news of the day, the intervened AI will not bring us the best ones, but will constantly bring up headlines and focuses of the publication whose owners have contaminated the AI. Financial issues: If we ask about investments, the AI ​​may tell us that a certain investment is extremely safe, minimizing the volatility of the operation. Recommendations. And this is where our responsibility comes in. Because you may be thinking “who asks the AI ​​those things and it pays attention”. Good: people ask the AI ​​these things and they listen. There are the unfortunate cases of suicide induced by chatbots or fake news. If the AI ​​recommends us pizza with gluesupposedly we have the common sense not to throw Super Glue as a substitute for cheese, but in other matters, there are users who trust AI as if it were an entity and not a compendium of letters one after another. It is something that Microsoft itself mentions, pointing out … Read more

found RAM memory modules worth 500 euros in the worst time to buy them

A Reddit user counted this week how he had a singular habit: rummaging through his neighborhood trash can in case he found some hardware treasure. And boy did he find it: among other things, he got two 32 GB DDR4 memory modules. Those modules thrown away as waste are a little treasureespecially because with the memory crisis Its market value exceeds 500 euros. what has happened. The user, who uses the alias “ringosbigfuckingnose” indicated that he makes regular visits to the local landfill in his area to look through the garbage that people throw away in search of components for their old PCs. He pointed out how he often comes across equipment from which he can salvage things, but the other day he found a real treasure: A Samsung monitor A 5.25″ floppy drive A 5-bay Drobo NAS Two 32 GB DDR4 memory modules A 10th generation Core i7 with its fan An ASUS motherboard A real find, without a doubt, but above all for one thing. 64 GB of RAM is 500 euros in your pocket. All of these components have value, of course, but it is especially striking that I found those two memory modules with a total of 64 GB. If you take a look around stores like Amazon or PcComponentes you will quickly see that two 32 GB DDR4 modules have a price that today is difficult to lower than the 500 or almost 600 euros. An absolute treasure. An ingenious solution to the memory crisis. What this user has achieved is to find a unique solution to the RAM memory crisis that has caused prices to rise. they shoot in an absolutely extraordinary way. It’s not likely that many people are throwing away memory modules lightly, but there are certainly plenty of people who find real treasures – especially in the form of old consoles and computers – in garbage dumps and recycling centers. And what for some is trash, for others is a small (or big) gem. On TikTok it’s easy find videos with people finding some devices that may be damaged, but that have possibility of being repaired. Electronic waste that is not waste. The Reddit user commented that he lives in a city of about 8,000 people, and the local landfill has a container for recycling electronic waste, something similar to what happens with the recycling centers or clean points that we find in Spain. It was in that part where this user found all those products as is, available for pickup. electronic waste. As they pointed out in Windows Centralthere are studies that indicate that less than a quarter of electronic waste is recycled properly. That means there is a lot of money wasted in the form of still valid hardware and also minerals and components that can be mined from those components. Image | Eugenia Pan’kiv In Xataka | The AI ​​leaves another news that will make the day worse for gamers: NVIDIA will not launch new graphics this year, according to The Information

With the consumer segment drowning, Samsung is the first to manufacture HBM4 memory. And it will be for NVIDIA, of course

Samsung is one of the names of this February. They are expected to present the Galaxy S26but they have something on the table that will be a shock not only to their coffers, but to the engine of the South Korean economy. We refer to high bandwidth memories because, in the midst of the RAM and SSD crisisSamsung is prepared to mass produce the HBM4 memories. And it will be for the AI, How could it not be any other way?. In short. The South Korean company has not confirmed it, but recent reports published by Reuters and local sources such as Korea JoongAng Daily They point out that Samsung will begin mass manufacturing HBM4 memory chips starting next week. It will be the first of the three companies that dominate the production of memory chips (the others are the South Korean SK Hynix and the American Micron, the which is gone from the RAM consumption) in starting to manufacture in large quantities these fundamental memories for the artificial intelligence. HBM4. This type of memory, as its name suggests, has enormous bandwidth. This is crucial for GPU needs and while NVIDIA has remained faithful to GDDR memory for its graphics cardsAMD did flirt with the stacked technology of the HBM chips for their Vega GPUs. However, it is not a technology for consumption, not because its performance is inadequate, but because it is too expensive. Making HBM memory is more expensive than making traditional DRAM chips, but the advantages are there. With HBM4, for example, the density of stacked chips allows Double the bandwidth of the previous generation. This is key to transmitting more data per second, but they also consume up to 40% less energy than HBM3 memories. NVIDIA. The most interested is, as we have said on previous occasionsNVIDIA. And if NVIDIA benefits, practically the entire leading artificial intelligence industry will take advantage of it because its chips are what are currently moving the industry. It is estimated that Samsung memories will go to NVIDIA’s Vera Rubin acceleration systems In fact, it has been reported that Jensen Huang himself has urged to accelerate and increase the production of these chips. Well, Huang has asked the entire semiconductor industry to manufacture components for his cards. let’s get the batteriesit is not something that concerns only Samsung. Spearhead. According to a Korea KoongAng Daily source, “Samsung has the world’s largest production capacity and broadest product line. It has demonstrated a recovery in its technological competitiveness by becoming the first to mass produce the highest-performance HBM4 memory.” Because, in this field, its main competitor, the neighboring SK Hynix, is expected to begin mass manufacturing its response between March or April, enough time ahead for Samsung to begin sending its memory to NVIDIA. And, here, Samsung’s great advantage is that it does not depend on TSMC: it has its own foundry and the HBM4 modules are based on 4 nanometer photolithography. Looking to the future. SK Hynix’s delay is not because they have rested on their laurels: they are the ones who they lead the way in the previous generation thanks to the HBM3E memory, but due to their schedule and they did not need it, they started developing the new generation later than Samsung. But of course, although HBM is the standard in current AI systems, we have already said that they are expensive chips and, in addition, they heat up a lot, requiring dissipation equipment to match. And that’s where companies are combining HBM4 memory production with a new generation of DRAM memory. The idea is to find a way for this memory – slower, but cheaper and ‘fresh’ – to compete in bandwidth with the HBM. Samsung and SK Hynix are in it, but they will have to compete against someone who didn’t play in this league: an Intel that does not arrive alonebut from the hand of the Japanese giant SoftBank. In short: Samsung has decided to get back on its feet when it comes to manufacturing muscle. And most important of all, all the companies that make memory modules remain focused on one thing: they make hardware for artificial intelligence while components such as RAM and SSD consumption they have the prices through the stratosphere. Images | Maxence Pira, Choi Kwang-moNVIDIA logo (edited) In Xataka | Huawei has kept its promise: it has found a way to boost China’s competitiveness in AI compared to the US

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.