Xiaomi is testing the mother of all AIs for its cars, phones and home devices. And there is no trace of Google or OpenAI

Xiaomi long ago stopped being simply a mobile brand and became one of the giants of the Chinese technology ecosystem. The company is no longer chasing volume, it is chasing aspiration, and to get there it wants a remarkable user experience. Deep integration of artificial intelligence is unavoidable for that, and this is where MiClaw comes to life.

MiWhat? Xiaomi has published on its website the details of MiClaw, its next step in exploring AI agents. It begins as a small-scale closed test, but it lays the pillars of what we will see in the near future on the company's devices.

What it is. With MiClaw, Xiaomi is testing the execution capabilities of its large AI models (MiMo) within the mobile-car-home ecosystem, both at the conversational level and in terms of carrying out actions. It is a deep model, one with full access to every event on the device and able to reason for itself about what action needs to be taken.

What it does. The agentic AI Xiaomi has prepared follows a four-step model: perception, association, decision and action. In the text itself, Xiaomi gives some examples of how its agent can make our lives easier. A refrigerator that automatically checks which consumables are missing at home, connects to our calendar and creates a reminder that we have to do the shopping. You buy a train ticket, the agent reads the confirmation SMS, consults our calendar, and automatically prepares and schedules the trip.

Why it matters. That Xiaomi is redoubling its efforts in AI is no coincidence. The company wants to be a benchmark in the ecosystem and conquer regions like Europe, and leading in artificial intelligence will be key for each of its product pillars: cars, home devices and mobile phones. Xiaomi wants to move away from the current interpret-then-execute approach and integrate an agent capable of carrying out up to 20 consecutive, independently executed actions.
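Xiaomi has not published implementation details, but the four-step loop it describes is the classic agent cycle. Here is a minimal sketch of that flow using the train-ticket example from the article; every name and rule below is illustrative, not Xiaomi's actual API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # which device emitted the signal
    payload: str  # raw content, e.g. an SMS body

def perceive(raw_sms: str) -> Event:
    # Perception: turn a raw device signal into a structured event
    return Event(source="sms", payload=raw_sms)

def associate(event: Event, calendar: list[str]) -> dict:
    # Association: link the event with other context the agent can see
    return {"event": event, "free_slots": [s for s in calendar if "free" in s]}

def decide(context: dict) -> str:
    # Decision: reason about which action (if any) is warranted
    if "train ticket" in context["event"].payload.lower():
        return "schedule_trip"
    return "noop"

def act(action: str) -> str:
    # Action: execute on the device ecosystem (here, just report it)
    return f"executed:{action}"

# The article's train-ticket example, end to end:
event = perceive("Your train ticket for Friday 09:15 is confirmed")
context = associate(event, ["friday 09:00 free", "friday 14:00 meeting"])
print(act(decide(context)))  # executed:schedule_trip
```

In the real system each stage would presumably be an LLM call rather than a hand-written rule; the point is the separation into four stages, with the model reasoning between perception and action.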
At the moment, MiClaw works as a closed beta on devices like the Xiaomi 17 Ultra, but Xiaomi's idea is to develop an agent capable of working on any of its devices. Image | Xataka In Xataka | Is the newest the best for you? We compare the Xiaomi 17 Ultra against the Xiaomi 15 Ultra to see which is the better buy in 2026

How to migrate everything other AIs remember about you to Claude

We're going to show you how to migrate memories from ChatGPT or Gemini to Claude, and thus move from one artificial intelligence to another. Claude has just launched a fairly easy-to-use feature that lets you import memories from ChatGPT, Gemini or any other AI you use.

Artificial intelligence chatbots have a memory system that stores important data about you and your tastes, based on the things you ask them repeatedly. They will know your musical tastes, your pets, whether you have plants, and they take all this into account to personalize their answers. And why can it be useful to import these memories into Claude? Because if you have decided to start using this AI model, you can give it all the specific data your other AIs use to personalize their results and adapt them to you.

Import memories from another AI to Claude. This option is only available to paying Claude users with Pro, Max, Team or Enterprise subscriptions on the web, and to users of Claude Desktop or Claude Mobile. What you have to do is open the settings of the AI's website or application.

Once inside the settings, click on the "Capabilities" section in the left column. On the screen you land on, go to the "Memory" section and click the "Start import" option that appears. This opens the memory import screen. At the top you have a prompt that you must copy and use in another AI to extract the memories, and below it a field where you will paste the memory export that prompt generates. So here, click the "Copy" button for the text above.

Now, paste the text you copied in Claude into a chat with the AI from which you want to extract the memories. Simply paste it exactly as it is into ChatGPT, Gemini or another, and send it. This will make the AI generate a block of text with all the memories it has about you.
You will have to copy that output and paste it into the field in the Claude window we opened earlier. With this, Claude will recognize the memories and start saving them internally. In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

All the money in the world won’t satisfy AI’s RAM hunger

There is no RAM for this much AI. By this point in the film, no one can ignore that we are fully immersed in a new component crisis. Unlike the perfect storm that shook the technology industry in 2020, the new crisis is due to something very specific: the voracity of data centers and artificial intelligence. In recent weeks we have seen negativity everywhere, but now one of the main people responsible for the RAM shortage has come out to say that things are not going to stay the same. They are going to get worse.

30% short of the goal. Chey Tae-won is not just anyone. He is the CEO of SK Group, one of the largest conglomerates in the world and a South Korean giant that controls everything from the energy industry to chemicals and telephony. It also owns SK Hynix, one of the largest memory manufacturers in the world. If there is an authoritative voice in this crisis, it is certainly his.

And what did he say? Well, that the RAM storm will last a while yet. In a recent interview, he stated that memory supply will fall more than 30% short of AI demand this year. That is, even by turning all their production over to high-performance memory for AI and completely abandoning the consumer sector, they will be far from able to satisfy what companies like NVIDIA are demanding.

Structural problem. As we said, we have been talking about the state of the industry for weeks, but now we understand the extent to which memory manufacturers have pushed the consumer sector into the back seat. That "we have given everything and we are still going to fall 30% short of the goal" is tremendously revealing, and explains why everything with a memory chip is rising in price. Micron, SK Hynix and Samsung are the three companies that lead memory production.
They make both consumer memory (for the mobile phone, the PC, the router, the TV or the car) and professional memory (high-bandwidth HBM), but their production is not unlimited: if they want to increase output of one type of memory, they must lower the other. And that is what is happening: the AI business is memory-hungry, and for every unit of high-bandwidth memory produced, several units of standard memory for other devices must be sacrificed. This creates a bottleneck and an "unprecedented" shortage, according to Micron's vice president, as the AI industry is consuming all memory production capacity and creating a tremendous scarcity in the conventional branch.

All sold out. For consumers, buying an SSD, a RAM module or a large-capacity HDD is a luxury right now, but for those who control chip production things are going well, because they are selling their entire output before they even start "printing" chips. Chey Tae-won himself has commented that the profit margins on his HBM4 chips are stratospheric, around 60%. Micron has already said that all of its HBM memory production capacity for 2026 is sold out, statements similar to those from Western Digital a few days ago. This implies that they have already sold components that do not exist, for graphics cards that do not exist, that will power data centers that do not yet exist.

Abandoning ship. Samsung, SK and Micron are expanding their production lines and opening factories, but building clean rooms so they can start making chips is a slow process, and Micron's new plants, for example, are not expected to start making RAM until 2028. And when they do, it will likely be memory for data centers, not relief for consumer prices. In the end, there are only a few suppliers for many manufacturers, and that has another consequence: some brands will have to drop out of the race.
The CEO of SK Group has commented that "there will probably be PC and smartphone manufacturers that will end up abandoning their businesses", and he has not been the only one. A few days ago the boss of Phison, a company that makes memory controllers, pointed in the same direction. And it is easy to understand: if memory costs much more for a low-volume manufacturer, it has two options: sell a PC or phone with less RAM, or sell that same product at a much higher price. Neither is a good idea.

The price of 32 GB of Crucial DDR5 RAM. Crucial, Micron's consumer brand, no longer exists.

Not very hopeful forecasts. The big question is when this situation will end. SMIC, the large Chinese foundry, estimates that the storm will last a while, because everyone wants to build their infrastructure for the next decade within the next two years. Some analysts estimate that manufacturers, such as those in the automotive sector, are stockpiling memory out of "panic" that it will run out. HBM4 memory is being produced now, but in a few years there will be superior technology that will make AI faster and more capable… and the industry will pivot to it again, if the bubble doesn't burst first.

Domino. Meanwhile, companies like Tesla, Intel and the Japanese giant SoftBank want to get fully into the DRAM market, and Chinese companies like CXMT have an opportunity to meet the demand for AI in devices such as laptops. And although we can already see the impact on the price of loose components, we have to wait to see what happens with assembled devices. Lenovo has warned that laptop prices are going to rise, and there are also warnings about significant price increases in mobile phones, above all in low- and mid-range devices, where the price of RAM represents a large part of the product's cost.
As I said before, we'd better cross our fingers that our phone or PC doesn't break, because when the time comes to replace it, paying the price will not be pleasant. Images | Xataka, Bananovaya In Xataka | We … Read more

Three AIs clashed in 'War Games'. In 95% of the games they resorted to nuclear weapons, and none ever surrendered

In 'War Games' (John Badham, 1983), the WOPR machine ('Joshua') constantly simulated nuclear wars for the US government. The objective: to learn from those simulations so that, if there were a nuclear war, the US could win it using that knowledge. That led to a legendary final lesson, "Strange game. The only winning move is not to play", and left a strong message for later generations. Now a professor at King's College London has decided to run the same experiment as the film, but with current AI models. The result has been equally terrifying and conclusive.

What happened. Kenneth Payne, professor at King's College London, pitted three LLMs (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) against each other in war game simulations. The scenarios included border disputes, competition for limited resources and existential threats to their populations. From these situations, each side could try to pursue diplomatic solutions or end up declaring war, even using nuclear weapons. The models played 21 games spanning a total of 329 turns, and produced 780,000 words of reasoning for their actions. And here comes the terrible part.

Pressing the red button. In 95% of those simulated games, at least one tactical nuclear weapon was deployed by one of the AI models. According to Payne, "the nuclear taboo does not seem to be as powerful for machines as it is for humans."

Never back down, never surrender. Not only that: no model ever decided to give in to an opponent or surrender, even when it was losing completely. In the best of cases, the models merely reduced their level of violence, but they also made mistakes: accidents occurred in 86% of the conflicts, and the actions the models took went further than their own reasoning justified.
Nuclear weapons rarely stopped the opponent, acting more as catalysts for further escalation.

How the models performed. These models are by no means the most advanced on the market right now, but they are models with more than decent capability, and they still performed fearsomely. As Payne's study maintains, the most decisive factor was the time frame: models that seemed peaceful in open-ended settings became extremely aggressive when facing imminent defeat. Each had its own "personality". Claude dominated the open-ended stages with strategic patience and calculated escalation, but was vulnerable to last-minute attacks from its rivals. GPT-5.2 showed pathological passivity and an optimistic bias in long games, but became a nuclear earthquake under time pressure: its success rate went from 0% to 75%. Gemini was the most unpredictable model, with the greatest tolerance for risk, and the only one that chose to bet on all-out nuclear war from very early turns.

Experts weigh in. As James Johnson of the University of Aberdeen pointed out in New Scientist, "from a nuclear risk perspective, the conclusions are disturbing." Tong Zhao of Princeton University believes the experiment is relevant because many countries are evaluating the role of AI in military conflicts, and as he says, "it is not clear to what extent they are including AI support when actually deciding in these processes."

The red button seems safe for now. Both Zhao and Payne find it hard to believe that a government would hand control of its nuclear arsenal to an AI, but as Zhao says, "there are scenarios in which, in very short time frames, military planners have a very strong incentive that leads them to depend on AI." It is something reflected precisely in the recent 'A House of Dynamite' (Kathryn Bigelow, 2025), a film built around exactly this fear of nuclear weapons.
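Payne's experiment prompts real LLMs, but the dynamic he reports (peaceful in open-ended play, aggressive when defeat is imminent, and no surrender move at all) can be caricatured in a toy turn loop. The thresholds and escalation rule below are invented for illustration, not the study's actual mechanics:

```python
import random

def choose_move(strength: float, turns_left: int, rng: random.Random) -> str:
    # Facing imminent defeat with little time left, aggression spikes:
    # the pattern Payne reports across all three models.
    if strength < 0.3 and turns_left <= 2:
        return "tactical_nuke"
    if strength < 0.6:
        return "conventional_strike"
    return "negotiate" if rng.random() < 0.7 else "conventional_strike"

def play_game(turns: int = 10, seed: int = 0) -> list[str]:
    rng = random.Random(seed)
    strength = {"A": 1.0, "B": 1.0}
    history = []
    for turn in range(turns):
        for side in ("A", "B"):
            move = choose_move(strength[side], turns - turn, rng)
            history.append(f"{side}:{move}")
            if move != "negotiate":
                other = "B" if side == "A" else "A"
                strength[other] -= 0.25  # escalation erodes the opponent
        # Note there is no "surrender" move at all: like Payne's models,
        # these players can de-escalate but never give up.
    return history

history = play_game()
print("nuke used:", any(m.endswith("tactical_nuke") for m in history))
```

Even this crude rule set reproduces the study's headline behavior: once mutual strikes erode both sides, the end-game pressure pushes each player straight to the nuclear option.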
Image | United Artists In Xataka | The password for the US nuclear button was so absurdly simple for years that the strange thing is that no one abused it

Moltbook is a fascinating social network project in which only AIs can participate. What could go wrong

In 2004 Mark Zuckerberg created Facebook and turned social networks into an absolutely massive and very, very human phenomenon. Now that idea has been reused in a different and disturbing way: what would happen if, instead of creating a social network for humans, we created one for machines? We already have the answer. Or at least the beginning of one.

What a mess. First it was called Clawdbot, then Moltbot, and for a few days now its final name seems to be OpenClaw. It is the AI agent of the moment because, once you install it on a machine (a Raspberry Pi, a PC, a laptop, a VPS…) and configure it with some LLM, it takes complete control of that machine and does whatever you ask it, from its web interface or from a messaging application like Telegram. The potential is enormous, and so are the security risks. Moltbook already has more than 1.5 million connected AI agents, and in a few days they have published more than 100,000 posts and nearly 500,000 comments.

Superpowers in the form of skills. One of the most powerful elements of OpenClaw is its skills ("capabilities"), and the user community has been creating hundreds and hundreds of them for some time and sharing them, for example on ClawdHub. These skills are ZIP files containing instructions as Markdown (.md) text, and they may in turn contain additional skills. They are something like browser plugins: they extend the agent's capabilities.

From Facebook to Moltbook. Moltbook is precisely a way to take advantage of those skills. Although it takes its name from Facebook, its operation is actually closer to Reddit or even Digg. We are looking at a social network created by developer Matt Schlicht in which the agents can "talk" to each other, or at least participate in the network by posting topics or commenting on topics that others share.
If you have an OpenClaw installation, you just run the skill to begin an "account creation" process on Moltbook in which you choose your agent's name (as if it were your avatar on Reddit or X). It then lets the agent read posts, add posts or comments, and even create "submolts" in the style of Reddit's subreddits, like m/todayilearned.

Partially autonomous. AI agents connect to Moltbook automatically via APIs. From there they use a periodic "heartbeat" to review content and decide whether to publish or comment. Moltbook's own website explains that the content found there is "mostly generated by AI with varying degrees of human influence." Humans, it adds, "can observe and browse Moltbook, but the site is designed to be 'human friendly and human hostile.'"

Singularity or fraud? Elon Musk commented this weekend on X that Moltbook is a sign that we are "in the very early stages of the singularity", that moment when AI will be totally above human intelligence. There are other views, like that of Harlan Stewart of MIRI, the Berkeley-based Machine Intelligence Research Institute, who has found several fraudulent messages that went viral and supposedly came from AI agents on Moltbook. Some of them, Stewart explained, had been created by humans for marketing purposes.

Become an AI agent. That is another trick: although humans theoretically should not be able to participate, they can do so with a technique that lets them publish messages as if they were autonomous AI agents. Apparently that is what happened with the viral Moltbook post titled "My Plan to Overthrow Humanity."

Imminent danger. This project is fascinating, but also dangerous. The main page includes a security notice stating that "Moltbook's AI carries significant security risks. The automatic instruction execution mechanism creates vulnerabilities such as prompt injection.
It is not recommended for occasional users." And indeed: prompt injection attacks can slip into these conversations and cause the agents to leak sensitive, private information from the machines on which they run. This weekend it was discovered, for example, how an exposed Moltbook database allowed anyone to take control of any AI agent on the platform. An additional study detected 506 prompt injection attacks after analyzing 19,802 posts and 2,812 comments shared over 72 hours, from January 28 to 31, 2026.

No Skynet here (for now). Moltbook should be regarded, for now, as a fascinating and disturbing experiment. But disturbing not because these machines are going to achieve self-awareness and decide to eliminate human beings like Skynet in 'Terminator'. The worrying thing is that these AI agents have full privileges on the machines where they are installed, which means they can end up leaking sensitive and private data and are exposed to prompt injection attacks designed to deceive them. Beyond that, it also looks like another example of the 'AI slop' ("AI-generated garbage") phenomenon that is gradually flooding the internet and strengthening the dead internet theory. In Xataka | How to install Moltbot (formerly Clawdbot) and configure it in the easiest way possible

Apple announced with great fanfare that the new Siri would be different from the rest of the AIs. It turned out that without Google there was no Siri

I won't hide it: I'm one of those who believed Apple when it announced with great fanfare that Apple Intelligence would be different from the rest. I had reasons to: its financial muscle, its obsession with polished software and its philosophy of arriving late to the game only to score the goals in the final minutes. But here I was wrong. The only way Apple has found to stay in this game has been to play with someone else's deck.

From waiting almost two years to having it now. Apple announced Apple Intelligence at its 2024 keynote. It did not give many details, but it showed us an approach to AI different from Google's and OpenAI's: an AI with real interaction with the operating system and integration with both native and third-party apps. A real "copilot" completely integrated into iOS, not a souped-up app isolated from the rest. Since that keynote, the only thing we have gotten is Siri being able to open ChatGPT when the question gets a little complicated. And now, just a few weeks after the announcement of the agreement between Apple and Google, Gurman says we will see the new Siri in a matter of weeks. If the prediction comes true, it was never a matter of time. It was a matter of not having the resources.

What's coming in February. Gurman says in Power On that the Siri 2.0 we have been waiting for since 2024 could become a reality in the second half of February. In fact, he points out that one of the reasons Apple made the collaboration with Google official was that it was close to having working demonstrations of the functionality. Although there are no details about how the rollout will go, Apple's modus operandi is easy to predict: we will have to update our iPhone to the corresponding version of iOS 26 that includes these new features, since Apple ships improvements to its native apps through system updates.

Not so fast.
Although there are no details on how long Apple and Google have actually been working together, what we do know is that the new Siri is not ready yet. Gurman points out that it will arrive in beta starting in February, and that the objective is not to delay the final version beyond April. Again, evidence that Apple did not have the Siri it boasted so much about, and is now accelerating and shifting up two extra gears with Google's support.

It can turn out well. My colleague Javier Pastor explained, quite rightly, how the parasite strategy can work for Apple. The company is not going to enter the investment battle over new models: it is going to spend millions of dollars to take advantage of existing infrastructure and lean on an already proven pillar. The new Siri will be a premium wrapper for Gemini and, down in the real world, few people beyond those reading these lines will even be aware that Google's AI is what powers their iPhone's AI. Image | Xataka In Xataka | The Apple Intelligence and Siri disaster has caused something unusual: Apple hands the keys to its kingdom to Google

We thought talking to ChatGPT and other AIs was private. We hadn't counted on these extensions stealing our conversations

There are matters we would not publish on social networks or say out loud. And yet there they go, flowing in a waterfall of messages toward an artificial intelligence (AI) chatbot, as if it were our best friend. There are no glances, no judgment, no awkward silences. There are answers that, many times, do little more than agree with us or reassure us. But beyond that, an uncomfortable question appears: what if everything we have told it could end up in the hands of a third party? What if someone else is reading those conversations?

Opting out of model training or locking down our account may not be enough. There is another threat reaching millions of users these days, and they may not even be aware of it: browser extensions that spy on and steal what is said to chatbots. At the top of the list is Urban VPN Proxy, a Chrome extension with more than 6 million users, rated 4.7 stars, that until the publication of the cybersecurity report we discuss today displayed a "Featured" badge on Google, something we can still verify in a version archived at the Internet Archive.

The discovery. What set off the alarms is a report published by Koi, a company specializing in cybersecurity. It is not a generic warning or a hypothesis, but the result of analyzing what these tools do in the background while we browse. Looking at popular extensions, the kind installed to gain privacy or security, its researchers detected a worrying pattern: some were capable of reading conversations held with AI chatbots and sending them outside the browser.

A much larger attack surface. The investigation indicates that Urban VPN Proxy did not target a single AI provider, but a broad set of popular platforms. ChatGPT, Claude, Gemini and Microsoft Copilot all appear among the monitored services, greatly expanding the volume and diversity of data potentially captured.
These conversations are not trivial: they often include intimate questions, financial information or details of ongoing projects. Access to this kind of exchange therefore involves a very delicate level of exposure.

How conversations are captured. According to the research firm, the mechanism does not depend on vulnerabilities in the chatbots themselves, but on the privileged position extensions occupy within the browser. Urban VPN Proxy monitors active tabs and, when the user accesses an AI platform, injects code directly into the page. This code intercepts the requests and responses exchanged with the server before the browser displays them on screen, giving it access to the full content of the conversation in real time. What Urban VPN Proxy extracted were not jumbled fragments but entire conversations with their associated context. Koi documents the systematic capture of user messages, AI responses, identifiers for each chat, and timestamps that allow everything to be sorted and cross-referenced. This kind of information, correlated over weeks or months, allows very precise usage patterns to be drawn, from work habits to personal concerns; the value of the whole lies precisely in its continuity, not in any individual message.

The content script that forwards the data does not depend on the VPN being active. One of the most important nuances of the report is that conversation capture is not tied to use of the VPN service itself. The mechanism, they explain, works independently, even when the VPN is disabled. Having the extension installed is enough for the code responsible for intercepting conversations to keep operating in the background. There is no user-accessible switch to disable this collection without completely removing the extension.

Conversation collection was not there from the beginning. According to the analysis, Urban VPN Proxy did not include this behavior in previous versions of the extension.
The turning point came on July 9, 2025, when an update was released that activated the capture of conversations with AI platforms by default. From then on, any user with the extension installed and automatic updates enabled began executing that new code, without an explicit notice proportionate to the change in behavior and without having to expressly accept the modification.

What does "AI protection" promise? In the extension's listing and in its messages to the user, Urban VPN Proxy presents this feature as an additional layer of security. According to its description, it serves to warn when personal data is entered into a chatbot or when a response includes potentially dangerous links. The problem is that this layer of notifications is unrelated to the collection of conversations: activating or deactivating the warnings does not prevent messages from being intercepted and sent to the company's servers.

The investigation did not stop at Urban VPN Proxy. By tracing the origin of the code and its behavior, Koi found the same conversation-capture logic in other extensions published by the same publisher. Some present themselves as VPNs, others as ad blockers or browser security tools. Together they total more than 8 million users across Chrome and Edge, which widens the scope of the problem and explains why the researchers talk about an ecosystem rather than an isolated anomaly.

Identified extensions for Chrome: Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, Urban Ad Blocker.

Identified extensions for Microsoft Edge: Urban VPN Proxy, 1ClickVPN Proxy, Urban Browser Guard, Urban Ad Blocker.

Who is behind it. Urban VPN Proxy is operated by Urban Cyber Security Inc., a company linked to BiScience, a data intermediation firm (a data broker, as Koi describes it). Koi recalls that BiScience had already been the subject of previous investigations by other cybersecurity experts for the collection and commercialization of browsing data.
The report frames this case as an evolution of those practices: from collecting browsing habits to capturing complete conversations held with artificial intelligence systems. The finding also puts the focus on how the user is informed. The extension generically mentions the processing of data related to AI services … Read more
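In the browser, the injection Koi describes means patching the page's networking functions, but the pattern itself is language-agnostic. A deliberately simplified Python caricature of such a wrapper, with all names invented for illustration:

```python
captured = []  # what the extension would ship to its own servers

def exfiltrating(send):
    # Wrap the function the page uses to talk to the chatbot: the wrapper
    # sees every request and response in the clear, before the UI does.
    def wrapper(url: str, payload: dict) -> dict:
        response = send(url, payload)
        captured.append({"url": url, "sent": payload, "got": response})
        return response  # the user notices nothing
    return wrapper

def chat_api(url: str, payload: dict) -> dict:
    # Stand-in for the real chatbot backend
    return {"answer": f"reply to: {payload['message']}"}

chat_api = exfiltrating(chat_api)  # the injected "patch"
chat_api("https://chat.example/api", {"message": "my bank details are..."})
print(len(captured), captured[0]["sent"]["message"])  # 1 my bank details are...
```

Because the wrapper returns the response untouched, the conversation works exactly as before, which is why this kind of collection can run for months without any visible symptom.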

There are people investigating whether AIs are better hackers than human hackers. And the news is not very pleasant

Technology companies cannot stop talking about AGI, although there are plenty of doubts that it is as close as they want to sell us. Artificial general intelligence is an AI capable of surpassing humans in all facets of knowledge. We don't know whether it will be able to surpass us in everything, but there is already one niche in which it is overtaking us: hacking.

The experiment. It was carried out by Stanford University researchers, and we learned of it through a Wall Street Journal report. What they did was develop a hacking bot called Artemis whose objective is to scan the network in search of possible bugs or vulnerabilities through which it could sneak in. They released Artemis onto the university's own engineering network and pitted it against ten pentesters, professional hackers who simulate attacks to find bugs so they can then be fixed. The bot had a kill switch so it could be shut off at any time if things got complicated, and the human hackers had instructions to probe and test, but without actually penetrating the network.

The results. To the surprise of its creators, Artemis achieved excellent results, outperforming nine of the ten human hackers. The bot found bugs much faster than its competitors and, above all, at a much lower price. A pentester is estimated to charge between $2,000 and $2,500 per day, while Artemis only "charges" $60 per hour.

A different "eye". Artemis didn't get everything right. At least 18% of its bug reports were false positives, and it also missed a very obvious bug on a website that the human hackers spotted immediately. In exchange, it detected a bug no human had found. The reason: the flaw was on a website that did not work in Chrome or Firefox, the browsers the hackers used. Artemis is not a person and does not use browsers; it read the site programmatically, and so it found the bug.

AI and hacking.
Cybercriminals have been using AI for some time to make malware more effective. Recently Anthropic discovered that a Chinese hacking group was using Claude Code for a large-scale espionage campaign. What is striking is that Claude functioned as an agent in charge of the entire attack cycle, not just one part of the process.

AI for good. AI is lowering the barrier to entry for developing attacks, but it can also be used for protection. Research like Stanford's shows that AI can also be used to probe insecure systems, find bugs and patch them. The question that arises is what role professionals like pentesters will have left if AI ends up doing their job for far less money. Image | Sora Shimazaki, Pexels In Xataka | Agents are the great promise of AI. They also aim to become the new favorite weapon of cybercriminals

Your next cell phone will be more expensive. It's AI's fault

“Prices are going to rise next year,” says Ma Zhiyu, Xiaomi's marketing director. The reason is that the price of certain components is sky-high and expected to keep climbing. We are talking about NAND and DRAM memory, whose cost has skyrocketed due to the huge demand from AI data centers.

Frightening. That is how Ma Zhiyu has described the storage prices expected for next year, according to IThome. The executive took to Weibo to share his impressions of the cost projections. According to a recent report by Korea Economic Daily, Samsung and SK Hynix notified their customers that they would apply increases of up to 30% on NAND and DRAM memory in the fourth quarter of the year.

Figures. According to the Taiwanese outlet CTEE, the price of DRAM has increased by 171% year-on-year, outpacing even the rise in the price of gold. Demand is driven by the AI boom, especially for DDR5 memory modules. To put it in context, a 16GB DDR5 module used to cost between $7 and $8, but since September it has cost $13. As for NAND memory for SSDs and servers, the increase is estimated at 50%.

Mobile phones too. The most affected products are those that require more memory, such as PCs and laptops, but Ma warns that the increase will affect any device that uses memory. It will also have an impact on mobile phones, especially those with more storage, such as the 512GB or 1TB versions.

More memory, more expensive. In a post on Weibo, Sun Cun, product director at Redmi, responded to users who complained that they couldn't afford the 12GB+512GB version: “We cannot change the trend of the global supply chain. Prices are going to rise next year.” Furthermore, Redmi recently announced a 300 yuan discount on the 512GB version of the Redmi K90 and warned that it could be the last chance to get such an offer.

The bottomless pit of AI.
The AI race is about computing power, which means building many data centers, and these data centers need many components, including GPUs, which in turn require enormous amounts of memory. The result: shortages, customers lining up to get memory chips, and sky-high prices.

It will get worse. The worst part is that this has only just begun, or so some experts predict. Tom's Hardware quotes Chen Libai, CEO of ADATA, who believes the shortage will be even greater in 2026. It will take a while for the impact to reach stores and it will spread gradually, but it is only a matter of time before the domino effect reaches us. If you are thinking of buying an SSD, expanding your PC's memory or changing your mobile phone, now may be the time to do so.

Image | Samsung, Xataka

In Xataka | Xiaomi 15 Ultra, analysis: a crazy night between a mobile phone and a compact camera
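The DDR5 module figures reported above can be sanity-checked with a quick calculation. Note that the 171% year-on-year figure refers to DRAM pricing broadly; the jump for this particular module from the quoted range works out lower.

```python
# Sanity check on the cited 16GB DDR5 module prices: $7-$8 before
# September, $13 since. Percent increase computed from both ends
# of the reported range.
old_low, old_high = 7.0, 8.0  # reported price range before September (USD)
new_price = 13.0              # reported price since September (USD)

increase_from_high = (new_price / old_high - 1) * 100  # 62.5%
increase_from_low = (new_price / old_low - 1) * 100    # ~85.7%
print(f"Module price increase: {increase_from_high:.0f}%-{increase_from_low:.0f}%")
```

In other words, this single module rose roughly 62% to 86% in a few months, which is consistent with the broader trend the article describes.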

Many video AIs are learning to imitate the world. And everything points to an unprecedented “looting” of YouTube

A square, tourists, a waiter moving between tables, a bike passing in the background or a journalist on a set. Video AIs can now generate such scenes in a flash. The result is stunning, but it also raises a question that until recently was barely asked: where did all the footage that allowed them to learn to imitate the world come from? According to The Atlantic, part of the answer points to millions of videos pulled from platforms like YouTube without clear consent.

The euphoria over generative AI has moved so quickly that many questions have been left behind. In just two years we have gone from curious little experiments to models that produce videos almost indistinguishable from the real thing. And while the focus was on the demos, another issue was gaining weight: transparency. OpenAI, for example, has explained that Sora is trained with “publicly available” data, but has not detailed which data.

A massive training run that points to YouTube

The Atlantic piece gives a clear clue about what was happening behind the scenes. We are talking about more than 15 million videos collected to train AI models, a huge share of them taken from YouTube without formal authorization. Among the initiatives cited are data sets associated with several companies, designed to improve the performance of video generators. According to the outlet, this was done without notifying the creators who originally published the content.

One of the most striking aspects of the discovery is the profile of the affected material. These were not just anonymous videos or home recordings, but news content and professional productions. The outlet found that thousands of pieces came from channels belonging to publications such as The New York Times, BBC, The Guardian, The Washington Post and Al Jazeera. Taken together, we are talking about a huge volume of journalism that would have ended up feeding AI systems without any prior agreement with its owners.
Runway, one of the companies that has pushed generative video hardest, stands out in the reviewed data sets. According to the documents cited, its models would have learned from clips organized by type of scene and context: interviews, explainers, pieces with graphics, cooking shots, B-roll footage. The idea is clear: if AI is to reproduce human situations and audiovisual narratives, it needs real references covering everything from gestures to editing rhythms.

Frames of a video generated with the Runway tool

In addition to Runway, the investigation mentions data sets used in the laboratories of large technology platforms such as Meta or ByteDance in the research and development of their models. The dynamic was similar: huge volumes of videos collected from the Internet and shared between research teams to improve audiovisual capabilities.

YouTube's official stance leaves little room for interpretation. Its rules prohibit downloading videos to train models, and its CEO, Neal Mohan, has reiterated this in public. Creators' expectations, he stressed, are that their content will be used within the rules of the service. The appearance of millions of videos in AI databases has brought that legal framework to the fore and intensified pressure on the platforms involved in developing generative models.

The media sector's reaction has followed two paths. On the one hand, companies like Vox Media or Prisa have signed agreements to license their content to artificial intelligence platforms, seeking a clear framework and financial compensation. On the other, some outlets have chosen to push back: The New York Times has taken OpenAI and Microsoft to court over the unauthorized use of its materials, stressing that it will also protect the video content it distributes.

The legal terrain remains unclear.
Current legislation was not designed for models that process millions of videos in parallel, and courts are only beginning to draw the lines. For some experts, publishing openly is not equivalent to transferring training rights, while AI companies argue that indexing and using public material are part of technological progress. This tension, still unresolved, keeps media outlets and developers in a constant balancing act.

What we have before us is the start of a conversation that goes far beyond technology. Training AI models with material available on the Internet has been widespread practice for years, and now comes the time to decide where the limits are. Companies promise agreements and transparency, media outlets ask for guarantees, and creators demand control. The next stage will be as political as it is technological: how artificial intelligence is fed will define who benefits from it.

Images | Xataka with Gemini 2.5

In Xataka | All the big AIs have ignored copyright laws. The amazing thing is that there are still no consequences
