Chips connected by lasers instead of cables. It sounds like science fiction, but it aims to revolutionize data centers

If you have ever built a PC, the connections are surely one of the things you had to pay the most attention to. Understanding the power of the processor, the GPU or the speed of the RAM is "easy", but the motherboard is what interconnects all those components with 'highways' on which data can travel at full speed. In data centers and servers it is the same: the better the connections between chips and machines, the lower the latency, the higher the bandwidth and the better the performance. These connections are physical, but there is a French startup that wants to change the rules of the game alongside NVIDIA. How? By connecting the chips with lasers.

Chips connected by laser, and NVIDIA reaching for its wallet

Improving interconnection speed is no small feat, nor a whim. NVIDIA has begun manufacturing its next-generation platform, the one named Vera Rubin. It is a system that can be combined with others to multiply its capabilities. That union, as we said, is physical, but there comes a point at which the physics is no longer enough. When that moment arrives, NVIDIA wants to be ready and, a few days ago, Reuters reported a $4 billion investment by NVIDIA in two companies aggressively researching new technologies to increase that interconnection speed: Lumentum and Coherent.

This is a rack, and the nightmare of those of us who hate cables. Specifically, the Wikimedia Foundation's. Now imagine that a large part of those cables become unnecessary because the systems are connected by light rather than by copper.

Another of the companies they have invested in is Scintil Photonics. It is a French startup that is testing a technology that, if the industry adopts it, will mark a before and after in rack-scale interconnection. Its LEAF Light Evaluation Kit is, as the company details, the first dense wavelength division multiplexing (DWDM) single chip to go from theory to practice.
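The "dense wavelength division multiplexing" idea is simple at heart: many data streams ride a single fiber, each on its own wavelength, so the link's capacity is the per-channel rate times the channel count. Here is a toy back-of-the-envelope sketch; every figure in it (channel count, per-lane rate, baseline energy per bit) is a generic illustration of the concept, not a Scintil Photonics spec:

```python
# Illustrative back-of-the-envelope math for a DWDM optical link.
# Channel count, per-lane data rate and energy figures are generic
# industry-style assumptions, NOT Scintil Photonics specifications.

def dwdm_aggregate_gbps(channels: int, gbps_per_channel: float) -> float:
    """Total throughput of one fiber carrying `channels` wavelengths."""
    return channels * gbps_per_channel

def energy_per_message_pj(bits: int, pj_per_bit: float) -> float:
    """Energy needed to move `bits` across a link at `pj_per_bit`."""
    return bits * pj_per_bit

# 16 wavelengths at 100 Gb/s each -> 1.6 Tb/s on a single fiber.
total = dwdm_aggregate_gbps(16, 100.0)

# The article cites a claimed ~50% energy reduction vs. existing
# photonic links; here we simply apply that factor to an assumed baseline.
baseline_pj_per_bit = 4.0
claimed_pj_per_bit = baseline_pj_per_bit * 0.5

print(total)                                              # 1600.0 Gb/s
print(energy_per_message_pj(8_000, claimed_pj_per_bit))   # 16000.0 pJ
```

The point of the multiplication is why optics scales where copper struggles: adding a wavelength adds capacity without adding another physical cable.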
All that jargon boils down to what we were talking about: an optical chip-to-chip interconnect instead of copper. And that is the main advantage. With copper reaching physical limits of speed and density, optics is emerging as the solution for connecting clusters of thousands of processors. Each chip gets an optical subsystem in charge of emitting and receiving light, and that light carries the data that currently travels over cables. The French company's chip is not the first based on photonic communication, but they claim their technology cuts the energy these links need by 50%, and reduces latency as well. The results? We will see. The startup's CEO, Matt Crowley, has commented that he has "six or seven companies interested in implementing the technology by 2028," but that confidentiality agreements prevent him from naming names.

The Scintil Photonics prototype

The complicated part will be securing supply of the photonic systems, since data center racks are built to be scalable. That is, it is no longer just about raw power, but about how many tens of thousands of units you can interconnect, and a manufacturing bottleneck at any of the parties involved in the optics would translate into a supply shortage for their customers. For now, some prototypes have already been shipped to select companies for testing, but certainly, using pulses of light instead of electrical signals is very attractive for superclusters in huge data centers that want to scale without the limitations of physical connections.

Images | Victorgrigas, M.I.T., GlobeNewswire

In Xataka | Huawei no longer competes: it is building its own parallel reality

Without helium there are no chips or RAM. And the largest producers are caught in the middle of the Iran war

Think of the world as a puppet. It is held up by strings that move it, but when one of those strings breaks, the whole thing wobbles. If several strings break at once, the puppet collapses. In the technological world, 2026 has started on the wrong foot. The main RAM memory manufacturers have pivoted to producing memory for AI, abandoning the consumer market. This has caused an unprecedented rise in prices that affects consumers, but also companies. Right now it is impossible to guess when things will return to normal, because every party involved says something different. And, for a few days now, another of those strings I mentioned at the beginning has been fraying: the Iran war.

The immediate consequences are already visible: the Strait of Hormuz boiling, the barrel of crude oil reaching stratospheric prices, and gasoline (diesel, above all) through the roof. But since everything that goes wrong can always get worse, another crisis is now knocking at the door: helium. And it is the perfect storm, the union of the RAM crisis and the war in Iran, because without helium... well, without helium there are many things that do not work. Artificial intelligence among them.

RAM crisis + Iran war = no helium

For many, helium is the gas that makes our voices sound funny and lets us inflate balloons that float. For the semiconductor industry, helium is a critical and irreplaceable element in the manufacturing process. Being a noble gas, it does not chemically interfere with the materials in the silicon crystal growth process inside the huge machines that companies use to create the wafers that later become chips. It prevents materials from reacting with oxygen or other contaminants, so the results are purer. It acts like a shield, but helium is also essential for dissipating the heat of extreme ultraviolet lithography machines, for purging waste after each manufacturing cycle, and even for detecting leaks in those machines.
Helium atoms are among the smallest that exist, which is what lets them reveal even the tiniest leaks in manufacturing chambers that must be kept under vacuum. In short, it is not an element that can be easily replaced. There are two companies whose dependence on it runs so deep right now that any variation in supply would be fatal. Which companies? Exactly: Samsung and SK Hynix, the same ones that have pivoted to AI and the same ones that do not plan to lift a finger to alleviate the RAM price crisis (and therefore the crisis in SSDs and any device with a NAND chip). Both are deep in manufacturing the sophisticated HBM4 memory, and both need helium.

The problem is that helium is a byproduct of natural gas production, and some of the world's largest facilities are in the Middle East. With the war in Iran, it is clear that the civilian targets are data centers and energy producers: if those infrastructures are attacked, the rest of the West is paralyzed, and kamikaze drones have already been launched against them. There is the Ras Tanura oil facility, but also Ras Laffan, operated by QatarEnergy. It is one of the whales of natural gas production and, therefore, of helium production. And if the refineries close and the ships do not arrive, the foundries' reserves begin to run out. There are already voices pointing to problems in the medium term if the situation persists. SK Hynix claims to have a "diversified supply chain and sufficient helium inventory", something similar to what another of the large chip manufacturers, TSMC, has said. The problem is that these guarantees are short-term. If the situation continues with a prolonged closure of Hormuz, more than 25% of the world's helium supply will be affected. That will leave the companies that consume the most helium watching their reserves deplete faster than they can be replenished.
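That "depleting faster than replenished" dynamic can be put in numbers with a toy days-of-supply model. Every figure below (stock size, daily consumption, resupply rate) is an illustrative assumption of ours, not a real number from Samsung, SK Hynix or TSMC:

```python
# A toy "days of supply" model for a fab's helium inventory.
# All numbers are illustrative assumptions, not real fab figures.

def days_until_empty(stock_m3: float, daily_use_m3: float,
                     daily_resupply_m3: float) -> float:
    """Days before inventory hits zero; inf if resupply keeps up."""
    net_drain = daily_use_m3 - daily_resupply_m3
    if net_drain <= 0:
        return float("inf")
    return stock_m3 / net_drain

# Normal times: resupply matches consumption -> inventory never drops.
print(days_until_empty(90_000, 1_000, 1_000))   # inf

# Hormuz closed: the article cites >25% of world supply affected.
# If a fab loses 25% of its inflow, a 90-day buffer lasts about a year;
# lose half the inflow and it lasts roughly half that.
print(days_until_empty(90_000, 1_000, 750))     # 360.0
print(days_until_empty(90_000, 1_000, 500))     # 180.0
```

Which is exactly why "we have sufficient inventory" is a short-term guarantee: the buffer only buys time proportional to how small the supply gap is.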
The market, always so jittery, has already reacted: shares of both Samsung and SK Hynix have fallen in recent hours on supply concerns. Because we are no longer talking only about the price of RAM and runaway gasoline; we are talking about helium being necessary for manufacturing any advanced chip, but also for quantum computing and for the many space launches. And as long as Hormuz stays closed, many players will be fighting over an essential, irreplaceable and very valuable commodity. Against SK Hynix's measured optimism, more pessimistic voices already see echoes of the component crisis of 2020.

Images | VALGO, ASML

In Xataka | 'Focus: The ASML Way': the book that reveals the secrets of the most powerful European company in the chip industry

Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

Meta has not thrown in the towel with its MTIA (Meta Training and Inference Accelerators) chips. And although things have not gone entirely their way, ending their dependence on NVIDIA is too juicy a prize to give up on early. For that very reason, they have presented a roadmap of four new chips with which the company intends to accelerate both its content recommendation systems and its generative AI capabilities. The first chip is already operational; the other three will arrive before the end of 2027. Below are all the details.

Dependence. For years, Meta has relied almost entirely on NVIDIA and AMD to power its data centers. Developing your own silicon is complicated, but if you pull it off, it can be a very successful financial and strategic bet in times like these. According to its vice president of engineering, Yee Jiun Song, designing its own chips allows the company to "eliminate what we don't need," which translates directly into lower costs. Added to this is greater independence from possible price swings or supply restrictions.

What exactly they have announced. The four new chips are the MTIA 300, 400, 450 and 500, each with a different role:

The MTIA 300 is already in production and is meant to train the algorithms that decide what content Facebook and Instagram users see.

The MTIA 400 (known internally as Iris) has completed laboratory testing and is on its way to data centers. Meta claims it offers performance "competitive with leading commercial products," according to its official statement.

The MTIA 450 (Arke) will double the high-bandwidth memory compared to the 400 and is scheduled for early 2027.

The MTIA 500 (Astrid), the most advanced, will arrive in mid-2027 and will incorporate, according to the company, improvements in low-precision data processing.
The chips are manufactured by TSMC, the world's largest semiconductor producer, and have been developed in collaboration with Broadcom on the open RISC-V architecture.

The pace is the most striking thing. What is unusual is not just that Meta makes its own chips, but the speed at which it plans to do so. The usual industry cycle is one to two years between generations; Meta aims to release new versions every six months. "The pace of AI evolution is so fast that we always want to have the most advanced chip available when we need it," Song said. This accelerated cadence is possible, according to the company, thanks to a modular design that allows components to be reused between generations.

And no, this does not replace NVIDIA. It is important not to lose sight of the context. Meta remains one of the largest buyers of GPUs on the market: just a few weeks ago it signed multi-million dollar agreements with NVIDIA and AMD to supply chips for the next few years, and it has also reached a deal to rent computing capacity on Google chips, as Wired reports. MTIA chips are designed for specific internal tasks (inference and recommendation systems), not for training large language models, so this strategy complements its chip plans with NVIDIA and AMD. Nor should we forget that Meta recently had to abandon its most ambitious training chip, known internally as Olympus, after the project ran into trouble in the design phase, as reported by The Information. Susan Li, Meta's CFO, confirmed at a Morgan Stanley event that the company still aims to develop processors capable of training models, but gave no further details.

And now what. The real test of this bet will come when the chips are deployed at scale. The challenge right now is to guarantee HBM memory supply in the middle of a RAM crisis that is hitting the entire technology sector.
Song himself acknowledged to CNBC that the company "is absolutely concerned" about it, although he stated that supply is assured for their current plans. In the long term, we will see whether Meta can achieve something similar to what Google did with its TPUs.

Cover image | Mariia Shalabaieva and Meta

In Xataka | OpenClaw has caused a real media earthquake in China. The Government has prevented its officials from using it

Spain is betting its future in the semiconductor industry on a single card: gallium chips

SPARC Foundry is one of the best assets Spain can cling to in order to board a train, that of semiconductors, currently steered with a firm hand by the USA, South Korea, Taiwan, China and Japan. This Galician company, however, is not pursuing silicon chip production. In that arena, competing with the five powers I just mentioned is essentially impossible. SPARC's plan involves building a plant in the Valadares Technology Park, in Vigo, to manufacture next-generation photonic semiconductors. The interesting thing is that these chips will not be silicon: they will be made from gallium arsenide (GaAs), indium phosphide (InP) or gallium nitride (GaN), and will most likely play a leading role in telecommunications, defense, automotive, consumer electronics, quantum computing and the aerospace industry.

Be that as it may, SPARC will not tackle the GIGaNTE project alone. Indra leads it with a 37% stake in SPARC Foundry, which makes that group the largest shareholder of the chip-production specialist. According to SPARC and Indra, the Vigo semiconductor plant will be operational during the first half of 2027 and will have the capacity to manufacture up to 20,000 wafers per year once it is running at full capacity. An interesting note: GIGaNTE, the name of the project, is built around the chemical formula of gallium nitride (GaN).

Gallium aspires to star in the next generation of chips

Photonic integrated circuits use photons to process and transmit information. Photons are the elementary particles responsible for all forms of electromagnetic radiation, including visible light. They have no mass and travel through a vacuum at a constant speed: the speed of light.
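Since photonic chips encode data in those photons, their energy can be computed directly from Planck's relation E = h·c/λ. A minimal sketch; the 1550 nm wavelength is our assumption (the usual telecom band), since the article does not specify which wavelengths SPARC's chips will use:

```python
# Energy of a single photon, E = h*c / wavelength.
# 1550 nm is the usual telecom band; this is an illustrative choice,
# not a SPARC specification.

H = 6.62607015e-34    # Planck constant, J*s (exact SI value)
C = 299_792_458.0     # speed of light in vacuum, m/s (exact SI value)
EV = 1.602176634e-19  # joules per electronvolt (exact SI value)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electronvolts for a given wavelength in nm."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(1550), 3))  # telecom infrared, ~0.8 eV
print(round(photon_energy_ev(550), 3))   # visible green, ~2.25 eV
```

The shorter the wavelength, the more energetic the photon, which is one reason different applications (telecom, sensing, displays) want different semiconductor materials with different band gaps.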
However, something worth not overlooking is that although we refer to them as particles, they also manifest as waves, hence the quantum phenomenon known as 'wave-particle duality' that captures the dual nature of light. Although, as we have seen, SPARC will produce photonic chips, the core of its business will revolve around gallium arsenide and gallium nitride. Unlike silicon, these are not elementary semiconductors: an elementary semiconductor is made up of a single chemical element, while gallium arsenide (GaAs) is composed of gallium (Ga) and arsenic (As), and gallium nitride (GaN) of gallium (Ga) and nitrogen (N).

The term semiconductor has appeared many times in this article, so it is worth reviewing what it means before moving on. A semiconductor is an element or compound that, under certain conditions of pressure or temperature, or when exposed to radiation or an electromagnetic field, behaves like a conductor and therefore offers little resistance to the movement of electrical charge; under other conditions it behaves like an insulator, offering great resistance to that movement. In materials capable of conducting electricity, some of the electrons in their atoms, known as free electrons, can move from one atom to another when we apply a potential difference across the conductor. This capacity for electron displacement is precisely what we know as electric current, and we all know intuitively that metals are good conductors of electricity. They are, in fact, because they have many free electrons that can hop from atom to atom and thus transport electrical charge.
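To put rough numbers on the electron mobility and saturation velocity that the discussion below leans on, here is a sketch using commonly cited room-temperature textbook values. These are order-of-magnitude references of ours, not datasheet numbers, and real devices vary with doping, temperature and crystal quality; note that GaN's advantage over silicon lies in saturation velocity and voltage handling rather than in bulk mobility:

```python
# Commonly cited room-temperature textbook values; treat them as
# order-of-magnitude references, not datasheet numbers.

MATERIALS = {
    # name: (electron mobility, cm^2/(V*s); saturation velocity, 1e7 cm/s)
    "Si":   (1400, 1.0),
    "GaAs": (8500, 1.2),  # mobility is GaAs's big win over silicon
    "GaN":  (1000, 2.5),  # bulk value; ~2000 in HEMT 2D electron gases
}

def vs_silicon(name: str) -> tuple[float, float]:
    """(mobility, saturation velocity) of `name` relative to silicon."""
    mob, vsat = MATERIALS[name]
    si_mob, si_vsat = MATERIALS["Si"]
    return mob / si_mob, vsat / si_vsat

for name in ("GaAs", "GaN"):
    mob_x, vsat_x = vs_silicon(name)
    print(f"{name}: {mob_x:.1f}x Si mobility, {vsat_x:.1f}x Si v_sat")
```

These two parameters are what ultimately translate into the very high transistor switching frequencies discussed next.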
Gallium nitride and gallium arsenide are semiconductors, which means that under certain circumstances they can transport electrical charge. Under the right conditions, the mobility of their electrons is much higher than in semiconductors such as silicon or germanium, and that means their capacity to transport charge is also superior. Another very interesting property of these compounds is their high saturation velocity. We do not need to dig into this parameter to the point of overcomplicating the article, but it is worth knowing that it reflects the maximum speed at which electrons can move through the crystal structure of these compounds, a maximum limited by the scattering the electrons suffer as they move.

This property has very important repercussions. One of them is that gallium arsenide transistors can operate at frequencies above 250 GHz, which is a rather impressive figure. In addition, they are relatively immune to overheating and produce less noise in electronic circuits than silicon devices, especially when working at high frequencies. Gallium nitride, for its part, can operate at very high voltages and reach extreme temperatures without compromising its performance or stability. Besides, because it dissipates little energy as heat, it allows the manufacture of compact and efficient power converters, so it will most likely play a fundamental role in the charging infrastructure of electric cars and in 5G base stations.

Image | Generated by Xataka with Gemini

More information | SPARC Foundry

In Xataka | Spain steps on the accelerator in its particular chip race. And it does so with a total commitment to integrated photonics

TSMC is the ‘kingpin’ of chips and Apple has always been its best friend. That just changed

TSMC is the world's foundry. Although others like Samsung have muscle, it is the Taiwanese company that has conquered the high-performance chip segment. It has achieved this through capacity, technology and an alliance: Apple's. For a decade, TSMC was Apple's great friend, the one that manufactured its chips and the one that, with the Apple Silicon designs, revolutionized laptops. Now NVIDIA rules, and it has elbowed its way in.

In short. In the midst of the AI era, with a technological current from which it is impossible to separate NVIDIA, Apple has more than enough reasons to feel jealous. While the mobile segment faces unprecedented cuts due to the RAM and component crisis, and with Tim Cook himself, Apple's CEO, commenting on the difficulties they will face in 2026, artificial intelligence is going like a rocket. The major memory manufacturers have pivoted to high-bandwidth memory for AI GPUs, and companies like NVIDIA, Phison, AMD and even Chinese players like SMIC and Huawei are rubbing their hands. They have made the AI Big Techs dependent on their hardware, and nobody makes that hardware like TSMC. The result? According to the latest reports, NVIDIA will become its largest customer this year.

The importance of 'Customer A'. It may look like a mere swap of positions, but it is more relevant than it seems. The difference between a 'Customer A' and a 'Customer B' is that, when production bottlenecks hit, one of the two gets priority. We already saw this in the 2020 semiconductor crisis when, precisely, half the industry was drowning (cars, cameras, TVs and mobile phones) while Apple's outlook was not so bleak, because it was the darling of a TSMC determined to focus on iPhone chips and consolidate a lucrative relationship that began with the Apple A8 of the iPhone 6. Jensen Huang himself, NVIDIA's CEO, recounted the feat, quite proudly, on a podcast.
"Morris (Morris Chang, founder of TSMC and a friend of Huang's) will be happy to know that NVIDIA is TSMC's largest customer right now," said the CEO. The margin is slim: 19% for NVIDIA versus 17% for Apple, but it is an achievement and a thermometer of where the industry is heading. Last year, NVIDIA's contribution to TSMC's revenue was 12%, so this is a considerable jump in a very short time.

"I need a lot of wafers". Obviously, this does not mean TSMC will stop pampering Apple. Apple holds a huge share of the mobile segment, but NVIDIA is crucial to keep the AI machinery rolling. Despite Google's attempts with its TPUs, OpenAI's agreements with Broadcom, Meta's with NVIDIA and AMD, or xAI manufacturing its own chips, NVIDIA is still the one calling the shots. Even Chinese companies need NVIDIA GPUs and, of course, NVIDIA is more than willing to take its cut. On a recent visit to Taiwan, Huang met with local industry heavyweights and noted that "NVIDIA would need a lot of wafers this year," putting even more pressure on a TSMC that is crucial to the artificial intelligence supply chain.

A synonym for success. Samsung, Huawei and SMIC are fighting to be alternatives in case TSMC collapses. But TSMC has got its act together and has been working on diversifying the business for a few years. Taiwan keeps the heart and the muscle, but the plant in Europe (in Germany) is underway and an American foundry is already operational. In fact, there are plans to expand it, because more and more clients need a very specific product that runs like a Swiss watch. But this has a flip side: all the industry's eggs are in the same basket. If TSMC fails, the house of cards can collapse. There are already reports indicating that the American plant manufacturing for Apple, Intel, NVIDIA and AMD is overwhelmed by a huge volume of orders.
And there, precisely, lies the importance of being Customer A... or Customer B.

Images | TSMC, NVIDIA

In Xataka | SK is one of the chip whales and it is clear about one thing: not all the money in the world will satisfy AI's hunger for RAM

Apple completely changes the architecture of its chips with a textbook “divide and conquer”

The week started with a flurry of news from Apple, something we already expected after Tim Cook's words promising a "great week." And in addition to the new iPhone 17e and iPad Air, today it was the MacBook's turn. In this article we want to focus on what is special about the new M5 Pro and M5 Max processors, the chips that land in the latest MacBook Pro. The company follows the same pattern as always: first comes the base chip, the M5, which we already saw in the 14-inch MacBook Pro, the iPad Pro and the Apple Vision Pro, along with the new MacBook Air; then it takes advantage of its most capable machines to welcome the most powerful variants. But this year there is something different: the company is using a new internal chip architecture that Apple had not used until now in its Mac chips. Here are all the details.

Apple's M5 Pro and M5 Max SoCs, in numbers

| | M5 Pro | M5 Max | M5 | M4 |
|---|---|---|---|---|
| Photolithography | 3 nm (3rd gen) | 3 nm (3rd gen) | 3 nm (3rd gen) | 3 nm (2nd gen) |
| Architecture | Fusion | Fusion | Single die | Single die |
| CPU cores | Up to 18 | 18 | Up to 10 | Up to 10 |
| Super cores | 6 | 6 | 4 | 4 |
| Performance cores | 12 | 12 | 6 | 6 |
| GPU cores | Up to 20 | Up to 40 | Up to 10 | Up to 10 |
| Neural Engine cores | 16 | 16 | 16 | 16 |
| Max unified memory | 64 GB | 128 GB | 32 GB | 32 GB |
| Memory bandwidth | 307 GB/s | 614 GB/s | 153 GB/s | 120 GB/s |
| Ray tracing | Yes (3rd gen) | Yes (3rd gen) | Yes (3rd gen) | Yes |
| Neural accelerator in GPU | Yes (per core) | Yes (per core) | Yes (per core) | No |
| Connectivity | Thunderbolt 5 | Thunderbolt 5 | Thunderbolt 4 | Thunderbolt 4 / USB 4 |
| Codecs | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 |
| Memory Integrity Enforcement | Yes | Yes | No | No |

The big news: the Fusion architecture

Perhaps one of the most striking aspects of these new chips is the so-called 'Fusion' architecture. Apple has designed this SoC (system on a chip) by combining two dies manufactured on TSMC's third-generation 3-nanometer node.
The firm promises that the two dies communicate with each other with very high bandwidth and minimal latency. Why this approach? As chips grow in core count and memory needs, putting everything on a single piece of silicon becomes increasingly complicated and expensive. Splitting it into two interconnected dies allows capabilities to scale without sacrificing efficiency. Each of these dies integrates CPU, GPU, Neural Engine, unified memory controller, Media Engine (the cores dedicated to processing multimedia codecs) and Thunderbolt 5 controllers. It is, in essence, the foundation that lets the M5 Max reach figures we previously only saw in desktop chips.

A new CPU from top to bottom

Both the M5 Pro and M5 Max share the same CPU design: 18 cores organized into two very different types. On the one hand there are the so-called super cores: six high-performance cores that Apple also incorporated into the standard M5. The company claims they are "the world's fastest CPU cores in single-thread performance", thanks to higher bandwidth, a new cache hierarchy and better branch prediction. On the other hand, the chip incorporates 12 completely new performance cores, different from the efficiency cores we have seen in previous generations. They are optimized specifically for multi-threaded workloads that require sustained power without runaway consumption. The combination of both groups of cores allows, according to Apple, a jump of up to 30% in performance for professional tasks versus the M4 Pro and M4 Max, and up to 2.5 times more multi-threaded performance compared to the M1 Pro and M1 Max. It will be interesting to see this improvement in action when we test the devices in depth.

What the M5 Pro promises

Its GPU scales up to 20 next-generation cores, each with an integrated neural accelerator.
Memory bandwidth goes up to 307 GB/s, and the chip can manage up to 64 GB of unified memory. Apple promises up to 20% more graphics performance than the M4 Pro, and up to a 35% improvement in applications that use ray tracing, thanks to the dedicated third-generation engine included in the chip. The shading engine is also updated, incorporating second-generation dynamic caching and hardware-accelerated mesh shading. What this technology basically does is simplify complex geometry into more manageable meshes at render time. In terms of AI, Apple claims the M5 Pro offers more than four times the GPU performance for artificial intelligence of the M4 Pro, and more than six times that of the M1 Pro.

M5 Max: the ceiling of Apple laptops

The M5 Max shares the same 18-core CPU as the M5 Pro, but doubles the graphics and memory resources. Its GPU reaches 40 cores, its unified memory bandwidth reaches 614 GB/s (twice that of the M5 Pro) and it can hold up to 128 GB of unified memory. In graphics performance, Apple claims an improvement of up to 20% over the M4 Max, and up to 30% in ray tracing applications. For AI tasks, the chip promises more than four times the peak GPU performance of its direct predecessor and more than six times that of the M1 Max. With these astronomical figures, Apple puts on the table a tremendously capable chip for all kinds of professionals, from 3D artists to app and AI developers. And in the end, having that much bandwidth in a laptop makes tasks involving large volumes of data much easier to digest. We will see in practice how they perform.

The rest of the package: Neural Engine, Thunderbolt 5 and security

Beyond the CPU and GPU, both chips incorporate a renewed 16-core Neural Engine, which promises a higher-bandwidth connection to memory, ideal for Apple Intelligence features and other local AI applications. In connectivity, the M5 Pro and …
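The "divide and conquer" of the Fusion design shows up directly in the numbers: on the graphics and memory resources, the M5 Max is exactly two M5 Pros' worth, while the 18-core CPU stays the same. A quick check using the figures quoted above (the doubling comparison is our reading of the spec sheet, not an Apple claim):

```python
# Spec figures quoted in the article; the "resources double" framing
# is our reading of the Fusion design, not an official Apple claim.
M5_PRO = {"gpu_cores": 20, "bandwidth_gbs": 307, "max_memory_gb": 64}
M5_MAX = {"gpu_cores": 40, "bandwidth_gbs": 614, "max_memory_gb": 128}

for key in M5_PRO:
    ratio = M5_MAX[key] / M5_PRO[key]
    print(f"{key}: M5 Max / M5 Pro = {ratio:.1f}x")  # 2.0x for each
# GPU cores, memory bandwidth and maximum unified memory all double,
# which is what you would expect from scaling by adding silicon
# rather than redesigning a bigger monolithic die.
```

This is precisely the advantage the article describes: scaling by interconnecting dies instead of fighting the cost and complexity of one ever-larger piece of silicon.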

Meta was building its own AI chips so as not to depend on NVIDIA. It has ended up bowing to the evidence

Meta faces a crucial year. While its competitors were laying the foundations of AI, Meta was burning money on the metaverse. That, along with an approach totally different from what Google or OpenAI were doing with AI, left Zuckerberg's company spending a few years in the wilderness. After putting the house in order and signing the AI 'A-Team', Meta was preparing both a large model and new in-house chips for training. Things... have not turned out as expected.

MTIA. Among Meta's various teams focused on artificial intelligence there is one known as MTIA. The name comes from 'Meta Training and Inference Accelerator' and its goal was to research and design in-house chips for AI training. Having your own chip makes all the sense in the world, since it is designed around your exact needs. And it has another advantage: you do not depend on anyone else. If NVIDIA does not have enough chips, it does not matter, because you have your own and can keep scaling your data center systems (and Meta's are immense) to continue training and inference. Meta was not going to handle manufacturing itself; that would fall to the highly reputable TSMC. But the program got off to a bad start.

This is very difficult. Reuters already reported it last year: after testing its first in-house training chip, Meta realized things were not going well. It was underperforming expectations, and it was also worse than the competition. They did not throw the chips away, but redirected them to other systems (such as the algorithmic recommendation engines for Facebook and Instagram). The problem is that the performance of the training chip, the one that really matters for the AI race, was not enough.

Strategy change.
The Information echoes a statement from Meta saying the company remains committed "to investing in different silicon options to meet our needs, which includes the advancement of our MTIA division," and urging us to stay tuned for news to be shared throughout the year. However, the same outlet notes that Meta has greatly lowered its expectations for its chips. The idea was to have two chips. On one hand, Iris, a single-instruction training chip that is easy to design but hard to squeeze for everything it is worth in AI training tasks. On the other, Olympus, a chip that was to be completed toward the end of this year and become the centerpiece of Meta's training clusters. According to The Information, there were many internal doubts about Olympus: its stability, its intricate design and its profitability, so they have shelved it to focus on simpler chips.

The evidence. In the end, if you cannot beat your "enemy", join him. The sources consulted by The Information point out that, among other complications, the training software was not as stable as the alternatives offered by the likes of NVIDIA. All of this has ended in two multimillion-dollar agreements. In a span of just a few days, Meta signed deals with both AMD and NVIDIA for both to supply it with chips to train its AI. It is a win-win for everyone: Meta gets what it needs, NVIDIA adds another client to a list it dominates, and AMD keeps making a name for itself in the sector thanks to agreements like this one, or the one it signed last year with OpenAI. In addition, Meta secures several sources so as not to depend on a single company. In fact, it is also believed to have signed an agreement to rent TPU capacity from Google.

The competition.
Meta's objective, therefore, is to diversify its portfolio of AI chip suppliers as much as possible while continuing to research its own chips, about which we will supposedly learn more later. They may continue with Olympus, a variant of it, or decide on another approach entirely. What is clear is that they must develop something of their "own". NVIDIA and AMD are suppliers, not competitors as such. The real competition is OpenAI, X and Google, and the last two have their factories at full capacity: Google with its TPUs, processors designed exclusively for AI, and xAI with its own chips, a project it abandoned and has recently picked up again.

Objective: dethrone NVIDIA. And all of this happens in a world in which everyone is a "friend" and an enemy at the same time. As I said, NVIDIA is a hardware supplier, but it practically controls the AI computing market and is moving in both hardware and software. It is logical that other companies are researching alternatives to boost their own AI. To the list we can add Amazon, which is also manufacturing its own chips under the name Trainium3 UltraServer, and OpenAI with its agreement with Broadcom to manufacture chips. It is, as I say, a curious scenario: everyone needs everyone else, hence the "circular economy" of AI, but at the same time everyone wants to be independent. The problem is that NVIDIA has a huge head start here: it has the technology, the contracts with memory companies... and the contacts with the firm that ends up manufacturing the best chips: TSMC.

In Xataka | Trump ordered the Pentagon to stop using Claude for being a "Woke AI." Right after he bombed Iran using Claude

AI has hijacked the chips that made it possible

Lenovo already said it a few days ago: if you want, or need, to buy a device, buy it as soon as possible. The rise of artificial intelligence and Big Tech's fever is causing an unprecedented component crisis. It is not us saying it, but Micron. And who is Micron? One of the three companies that dominate global RAM production. With only a few players in the game, what is happening is that all of them have focused on allocating their resources to manufacturing high-bandwidth memory for AI. For every resource allocated to creating memory for those GPUs, several are pulled from the creation of consumer RAM. And what uses RAM? Absolutely everything. The industry has just sounded the alarm: this is not a temporary squeeze. It is a tsunami. And it is going to take smartphones with it.

The RAM crisis is a tsunami. It may seem like there is a lot of hype in these predictions, but there is a problem: the predictions come from within the industry itself. Talking about memory producers means talking about Micron, Samsung or SK Hynix, but also Phison. This company manufactures the controller chips that allow memory modules to communicate with each other and with other components, and its CEO commented a few days ago that estimates suggest between 200 and 250 million fewer mobile phones will launch this year. It is an enormous figure, but beyond the number, something else stands out: some companies will have to abandon the business. It is logical. An SSD or a "RAM stick" now costs us much more money, and the same is true for companies. It is not that there is no RAM for consumers: it is that there is no RAM for anything other than data centers.
Therefore, if Nothing (to give an example of a brand that has already said it will not launch a high-end product this year) has to buy memory at three times the price, it has two options: sell the phone for much more just because of that component, so that users perceive a brutal price increase without improvements in areas such as the processor or the cameras; or not launch the phone at all. And if your business depends on keeping the annual launch cycle going, you have a problem.

From within the industry, voices such as SMIC, Intel and NVIDIA have already hinted that the crisis will last for a while, but now it is the International Data Corporation that offers another pessimistic outlook for the mobile market. And the interesting thing is that it will not affect the entire sector equally. According to IDC, the smartphone market will suffer the biggest drop in its history this year, sinking to a low not seen in more than a decade. We are not talking about profits, but about units. As Reuters reports, IDC analysts believe that "what we are witnessing is not a temporary squeeze, but rather a tsunami-like shock originating in the memory supply chain". Apple has already said the crisis will impact a key part of its business, but, precisely, Apple cannot complain as much as others. According to the group, the decline will hit low-end Android manufacturers harder than Apple or Samsung. Those two giants fight in another segment and may even benefit, since consumers may opt for their models if they see other brands' prices start to rise. The report points to the same thing as Phison's CEO: some smaller rivals will exit the market entirely. And it is a huge problem for low- and mid-range phones: it is estimated that memory represents 20% of the cost of these devices, so raising prices to compensate would be unfeasible.
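A quick back-of-the-envelope sketch shows why that 20% figure matters so much. The numbers here are the article's own (memory at roughly 20% of a low-end phone's cost, a hypothetical tripling of its price); the function is purely illustrative:

```python
# Rough estimate of how a memory price hike propagates to a phone's total cost.
# Assumptions (from the article, illustrative only): memory is ~20% of the
# bill of materials, and its price triples.

def cost_increase(memory_share: float, price_multiplier: float) -> float:
    """Fractional increase in total cost when only the memory
    component's price is multiplied by `price_multiplier`."""
    other_share = 1.0 - memory_share
    new_cost = other_share + memory_share * price_multiplier
    return new_cost - 1.0

# Memory at 20% of cost, price x3 -> the whole device costs 40% more,
# with zero improvement anywhere else.
increase = cost_increase(memory_share=0.20, price_multiplier=3.0)
print(f"Total cost increase: {increase:.0%}")
```

A 40% jump on a device whose entire appeal is its low price is exactly why passing the cost on is unfeasible for these brands.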
They simply would not buy those phones. IDC expects the average selling price of all smartphones to rise by 14% this year, and even if the market begins to recover between 2027 and 2028, some smaller manufacturers will struggle. As we said, many voices are also pointing to price increases in mobile phones. For now, the phones that have already launched are the Samsung Galaxy S26 models, whose prices have stayed the same as last year's... but with no increase in RAM and few changes in specifications. In the case of the S26 and S26+, they are essentially the same phone as the S25. And that is Samsung, one of the main RAM manufacturers, no less. We will see what happens with other models whose RAM supply was not signed and committed before the crisis hit, but things are not looking good at all. So if you have to buy something, for whatever reason, it is a bad time, but everything indicates that today will still be much better than tomorrow.

Image | Xataka (edited)

In Xataka | We have reached a point where not even the CEOs of Google or Microsoft deny that we have an AI bubble

AMD wants to be the great alternative to NVIDIA in AI chips, and Meta has a plan that involves both

Meta has signed one of the largest contracts in history with AMD for artificial intelligence chips. The agreement represents a boost for AMD in its attempt to stand up to NVIDIA. It also shows how Lisa Su's company intends to push even further into the circular-financing corner that big technology companies have built around AI. There are some nuances worth commenting on, so let's get to it.

The agreement. Meta will purchase enough chips from AMD to power data centers with up to six gigawatts of computing power over the next five years. According to Wall Street Journal estimates, the total value of the contract would exceed $100 billion, since each gigawatt represents tens of billions of dollars in revenue for AMD, according to the company itself. First deliveries will begin in the second half of 2026, with a first gigawatt of AMD's new MI450 chips.

There is more. The agreement is not only about buying chips. As part of the pact, AMD will grant Meta warrants to acquire up to 160 million AMD shares at a symbolic price of one cent per share, which could make Meta the owner of up to 10% of the company. There are conditions, of course: the shares will vest in tranches as certain technical and commercial milestones are met. The last tranche will only be unlocked if AMD stock reaches $600, according to the WSJ. On Monday it closed at $196.60, and after the news broke, AMD shares rose more than 10% in pre-market trading.

AMD seeks its place alongside NVIDIA. The company led by Lisa Su has been trying to gain ground in a market that NVIDIA dominates with more than 90% share. This agreement with Meta, together with the one signed with OpenAI in October on very similar terms, is its most ambitious bet to achieve it. "Meta has a lot of options.
I want to make sure we always have a clear place at the table when they think about what they need," said Su at the press conference prior to the announcement.

Meta does not put all its eggs in one basket. Zuckerberg's company is not betting exclusively on AMD. Last week it also closed an agreement with NVIDIA to acquire millions of its chips for tens of billions of dollars, and it is also in talks with Google over the use of its AI processors. "At the scale at which we operate, there is room for all three," said Santosh Janardhan, head of infrastructure at Meta. The company's strategy involves diversifying suppliers and securing enough supply for its massive expansion. Meta spent $72 billion on data centers last year and plans to spend up to $135 billion this year.

And back to circular financing. Meta pays AMD for chips, and AMD returns part of that money in the form of shares. It is a scheme similar to the one we already saw in the agreement between AMD and OpenAI, and practically identical to those of the rest of the big technology companies around AI. The demand problem is also worth noting. Reuters highlighted the words of Matt Britzman, an analyst at Hargreaves Lansdown, who said that although Meta is securing supply and diversifying, "having to give up 10% of its capital suggests that AMD could have difficulty generating organic demand."

What comes next. The AI race is fought not only in laboratories, but also on the financial front. For AMD, the challenge now is to demonstrate that its chips live up to the demands. For Meta, the goal is to build with them "tens of gigawatts this decade and hundreds of gigawatts or more over time", in the words of Zuckerberg himself. All this while we witness unprecedented spending on infrastructure and energy whose bottom we apparently cannot yet see.

Cover image | AMD and Meta

In Xataka | IBM has been living for decades with the fact that no one could kill COBOL. Anthropic has other plans
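The scale of those warrants is easy to miss in prose. A minimal sketch using only the figures reported in the article (160 million shares, a $0.01 strike, Monday's $196.60 close and the $600 milestone for the final tranche):

```python
# Back-of-the-envelope intrinsic value of the AMD warrants granted to Meta.
# Figures from the article: up to 160 million shares at a $0.01 strike,
# with the final tranche unlocking only if the stock reaches $600.

SHARES = 160_000_000
STRIKE = 0.01

def warrant_value(share_price: float,
                  shares: int = SHARES,
                  strike: float = STRIKE) -> float:
    """Intrinsic value of the full warrant package at a given share price."""
    return shares * max(share_price - strike, 0.0)

# At Monday's close the package is already worth roughly $31.5 billion...
print(f"At $196.60: ${warrant_value(196.60) / 1e9:.1f}B")
# ...and at the $600 milestone it would approach $96 billion.
print(f"At $600.00: ${warrant_value(600.00) / 1e9:.1f}B")
```

In other words, if AMD ever hits the $600 milestone, Meta's one-cent shares would be worth almost as much as the $100 billion chip contract itself, which is what makes analysts call the arrangement circular.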

Seedance 2.0 has surpassed Sora and Veo without NVIDIA chips

Brad Pitt and Tom Cruise fighting on the rubble of a devastated city. A recreation of the most expensive shot in the movie F1 for nine cents. Dragon Ball scenes indistinguishable from the original anime. None of this was filmed by anyone. It was all generated by Seedance 2.0, the ByteDance video model launched a few days ago. It has its seams, because it builds on existing creativity and much is still missing on a narrative level. But the technical leap is impressive.

Why it matters. It is no longer "China is coming, China is coming." It is "this is what China already does." The independent consultancy CTOL places it above Sora 2 and Veo 3.1 on specific improvements: native 2K resolution, synchronized audio as standard, simultaneous input of text, image, video and audio (something no Western rival offers all at once) and 30% faster generation.

Between the lines. The uncomfortable thing for Silicon Valley is not just the quality. The most uncomfortable thing is what it was achieved with. Seedance was built without H100s, the NVIDIA chips banned for export to China. And it still surpasses the models that do have them. Something similar already happened with DeepSeek in LLMs, and now it is happening with synthetic video. The pattern is consolidating: the sanctions are not slowing China down, but accelerating it, because they force it to innovate faster.

In dispute. Disney, Paramount, Warner Bros. and Sony have sent cease-and-desist requests to ByteDance for copyright infringement. And SAG-AFTRA has denounced the use of actors' voices and faces without consent. The trigger was discovering that Seedance was capable of cloning someone's voice from a single photograph. ByteDance has suspended that feature and promised improvements, without specifying which ones.

Yes, but. The studios are more disarmed than they appear: their claims attack the generation of protected content, not training on that content, which could be covered by fair use.
The music industry has already gone through a similar scenario and ended up negotiating. Hollywood is headed for the same fate: unable to stop this, but at least able to get a cut. Disney already does it with OpenAI. But with ByteDance, geopolitics comes into play, and the question is whether Hollywood will be capable of something similar with a Chinese company rather than a Californian one.

The big question. ByteDance has an asset no one can replicate: the largest short-video ecosystem in the world. TikTok and Douyin know, at an unrivaled scale, what makes a video work, and that knowledge is built into Seedance. When it reaches CapCut, the most popular editing app in the world, the impact will go up another level. Right now the question is not whether Seedance is better than Sora. We have already seen that it is. The question is whether the world will be willing to use it.

In Xataka | Seedance 2.0 has flooded social networks with AI-generated videos of Disney content. And Disney has picked up the phone

Featured image | BiliBili – Seedance
