TSMC is the ‘kingpin’ of chips and Apple has always been its best friend. That just changed

TSMC is the foundry of the world. There are others with muscle, like Samsung, but it is the Taiwanese company that has conquered the high-performance chip segment. It has achieved this through capacity, technology and an alliance: the one with Apple. For a decade, TSMC was Apple's great friend, the one that manufactured its chips and the one that, with the Apple Silicon designs, revolutionized laptops. Now NVIDIA rules. And it has elbowed its way in.

In short. In the midst of the AI era, and with a technological current from which it is impossible to separate NVIDIA, Apple has plenty of reasons to feel jealous. While the mobile segment faces unprecedented cuts due to the RAM and component crisis, and with Tim Cook himself, Apple's CEO, commenting on the difficulties the company will face in 2026, artificial intelligence is going like a rocket. Major memory manufacturers have pivoted to high-bandwidth memory for AI GPUs, and companies like NVIDIA, Phison, AMD and even Chinese players like SMIC and Huawei are rubbing their hands. They have made the AI Big Techs dependent on their hardware, and no one makes that hardware like TSMC. The result? According to the latest reports, NVIDIA will become TSMC's largest customer this year.

The importance of 'Customer A'. It may look like a minor reshuffle, but it is more relevant than it seems. The difference between a 'Customer A' and a 'Customer B' is that, when production bottlenecks appear, one of the two gets priority. We already saw this in the 2020 semiconductor crisis when half the industry (cars, cameras, TVs and mobile phones) was drowning, while Apple's forecasts were not so bad because it was the darling of a TSMC that chose to focus on iPhone chips and consolidate a lucrative relationship that began with the Apple A8 of the iPhone 6. Jensen Huang himself, NVIDIA's CEO, commented on the move, quite proudly, on a podcast.
"Morris (Morris Chang, founder of TSMC and a friend of Huang) will be happy to know that NVIDIA is TSMC's largest customer right now," said the CEO. The margin is slim, 19% of TSMC's revenue for NVIDIA versus 17% for Apple, but it is an achievement and a thermometer of where the industry is heading. Last year, NVIDIA's contribution to TSMC was 12%, so this is a considerable jump in a very short time.

"I need a lot of wafers". Obviously, this does not mean that TSMC will stop pampering Apple over other companies. Apple holds a huge share of the mobile segment, but NVIDIA is crucial to keep the AI machinery rolling. Despite Google's attempts with its TPUs, OpenAI's agreements with Broadcom, Meta's deals with NVIDIA and AMD, or xAI manufacturing its own chips, NVIDIA is still the one calling the shots. Even Chinese companies need NVIDIA GPUs and, of course, NVIDIA is more than willing to take its cut. On a recent visit to Taiwan, Huang met with local industry heavyweights and noted that "NVIDIA would need a lot of wafers this year," putting even more pressure on a TSMC that is crucial to the artificial intelligence chain.

Synonymous with success. Samsung, Huawei and SMIC are fighting to be alternatives in case TSMC collapses. But TSMC has seen this coming and has been looking at how to diversify the business for a few years. Taiwan remains its heart and muscle, but the plant in Europe (in Germany) is underway and it already has an operational foundry in the United States. In fact, there are plans to expand it, because more and more clients need a very specific product that works like a Swiss watch. But this has a flip side: all the industry's eggs are in the same basket. If TSMC fails, the house of cards can collapse. There are already reports indicating that the American plant, which manufactures for Apple, Intel, NVIDIA and AMD, is overwhelmed by a huge volume of orders.
And there, precisely, lies the importance of being Customer A… or Customer B.

Images | TSMC, NVIDIA

In Xataka | SK is one of the chip whales and it is clear about one thing: not all the money in the world will satisfy AI's hunger for RAM

Apple completely changes the architecture of its chips with a textbook “divide and conquer”

The week started with a flurry of news from Apple, something we already expected after Tim Cook's words stating that it was going to be a "great week." And in addition to the new iPhone 17e and iPad Air, today it was the MacBook's turn. In this article we want to focus on what makes the new M5 Pro and M5 Max processors special, the chips that land in the latest MacBook Pro.

The company follows the same pattern as always. First comes the base chip, the M5, which we already saw in the 14-inch MacBook Pro, the iPad Pro and the Apple Vision Pro, along with the new MacBook Air; then Apple uses its most capable machines to welcome the most powerful variants. But this year there is something different: the company is using a new internal chip architecture that it had not used until now in its Mac chips. We will tell you all the details.

Apple's M5 Pro and M5 Max SoCs, in numbers:

| | M5 Pro | M5 Max | M5 | M4 |
| --- | --- | --- | --- | --- |
| Photolithography | 3 nm (3rd gen) | 3 nm (3rd gen) | 3 nm (3rd gen) | 3 nm (2nd gen) |
| Architecture | Fusion | Fusion | Single die | Single die |
| CPU cores | Up to 18 | 18 | Up to 10 | Up to 10 |
| Super cores | 6 | 6 | 4 | 4 |
| Performance cores | 12 | 12 | 6 | 6 |
| GPU cores | Up to 20 | Up to 40 | Up to 10 | Up to 10 |
| Neural Engine | 16 cores | 16 cores | 16 cores | 16 cores |
| Max unified memory | 64 GB | 128 GB | 32 GB | 32 GB |
| Memory bandwidth | 307 GB/s | 614 GB/s | 153 GB/s | 120 GB/s |
| Ray tracing | Yes (3rd gen) | Yes (3rd gen) | Yes (3rd gen) | Yes |
| Neural accelerator in GPU | Yes (per core) | Yes (per core) | Yes (per core) | No |
| Connectivity | Thunderbolt 5 | Thunderbolt 5 | Thunderbolt 4 | Thunderbolt 4 / USB 4 |
| Codecs | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 | H.264, HEVC, ProRes, AV1 |
| Memory Integrity Enforcement | Yes | Yes | No | No |

The big news: the Fusion architecture. Perhaps one of the most striking aspects of these new chips is the so-called 'Fusion' architecture. Apple has designed this SoC (system on a chip) by combining two dies manufactured on TSMC's third-generation 3-nanometer node.
The company promises that the two chips communicate with each other with very high bandwidth and minimal latency. Why this approach? As chips grow in core count and memory needs, putting everything on a single piece of silicon becomes increasingly complicated and expensive. Splitting the design into two interconnected dies allows its capabilities to scale without sacrificing efficiency. Each of these dies integrates CPU, GPU, Neural Engine, unified memory controller, Media Engine (the cores dedicated to processing multimedia codecs) and Thunderbolt 5 controllers. It is, in essence, the foundation that lets the M5 Max reach figures we previously only saw in desktop chips.

A new CPU from top to bottom. Both the M5 Pro and the M5 Max share the same CPU design: 18 cores organized into two very different types. On one side are the so-called super cores: six high-performance cores that Apple also put in the standard M5. The company claims they are "the world's fastest CPU cores in single-thread performance," thanks to greater bandwidth, a new cache hierarchy and better branch prediction. On the other side, the chip incorporates 12 completely new performance cores, different from the efficiency cores we have seen in previous generations. They are optimized specifically for multi-threaded workloads that require sustained power without skyrocketing consumption. The combination of both groups of cores allows, according to Apple, a jump of up to 30% in performance for professional tasks versus the M4 Pro and M4 Max, and up to 2.5 times more multi-threaded performance compared to the M1 Pro and M1 Max. It will be interesting to see this improvement in action when we test the devices in depth.

What the M5 Pro promises. Its GPU scales up to 20 next-generation cores, each with an integrated neural accelerator.
Memory bandwidth goes up to 307 GB/s, and the chip can manage up to 64 GB of unified memory. Apple promises up to 20% more graphics performance than the M4 Pro, and up to a 35% improvement in applications that use ray tracing, thanks to the dedicated third-generation engine included in the chip. The shading engine is also updated, incorporating second-generation dynamic caching and hardware-accelerated mesh shading, a technique that basically simplifies complex geometry into more manageable meshes at render time. In terms of AI, Apple claims the M5 Pro offers more than four times the GPU performance for artificial intelligence of the M4 Pro, and more than six times that of the M1 Pro.

M5 Max: the ceiling of Apple laptops. The M5 Max shares the same 18-core CPU as the M5 Pro, but doubles the graphics and memory resources. Its GPU reaches 40 cores, the unified memory bandwidth reaches 614 GB/s (twice that of the M5 Pro) and it can hold up to 128 GB of unified memory. In graphics performance, Apple claims an improvement of up to 20% over the M4 Max, and up to 30% in ray tracing applications. For AI tasks, the chip promises more than four times the peak GPU performance of its direct predecessor and more than six times that of the M1 Max. With these figures, Apple puts on the table a tremendously capable chip for all kinds of professionals, from 3D artists to app and AI developers. And in the end, having that much bandwidth in a laptop makes tasks with large volumes of data much easier to digest. We will see how they perform in practice.

The rest of the package: Neural Engine, Thunderbolt 5 and security. Beyond the CPU and GPU, both chips incorporate a renewed 16-core Neural Engine, which promises a higher-bandwidth connection to memory, ideal for Apple Intelligence features and other local AI applications. In connectivity, the M5 Pro and …
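The Pro-to-Max scaling described above is a clean doubling across the GPU-side specs. Here is a quick sanity check in Python, using only the figures quoted in this article:

```python
# Spec figures for the M5 Pro and M5 Max as reported in the article.
M5_PRO = {"gpu_cores": 20, "bandwidth_gbps": 307, "max_memory_gb": 64}
M5_MAX = {"gpu_cores": 40, "bandwidth_gbps": 614, "max_memory_gb": 128}

def scaling_factor(base: dict, top: dict) -> dict:
    """Return the ratio of each spec between the two chips."""
    return {k: top[k] / base[k] for k in base}

ratios = scaling_factor(M5_PRO, M5_MAX)
print(ratios)  # every value is 2.0: the Max doubles the Pro's GPU resources
```

The CPU side, by contrast, is identical on both chips (the same 18 cores), so only the graphics and memory resources scale.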

Meta was building its own AI chips so as not to depend on NVIDIA. It has ended up bowing to the evidence

Meta faces a crucial year. While its competitors were laying the foundations for AI, Meta was burning money on the metaverse. That, along with an approach totally different from what Google or OpenAI were doing with AI, left Zuckerberg's company spending a few years in the wilderness. After putting its house in order and signing an AI "A-Team," Meta was preparing both a large model and new in-house chips for training. Things… haven't turned out as expected.

MTIA. Among Meta's teams focused on artificial intelligence there is one known as MTIA. The name comes from 'Meta Training and Inference Accelerator', and its objective was to research and design in-house training chips for artificial intelligence. Having your own chip makes all the sense in the world, since it is designed around your own needs. It has another advantage: you do not depend on anyone else. If NVIDIA doesn't have enough chips, it doesn't matter, because you have yours and can keep scaling your data center systems (and Meta's are immense) to continue training and inference. Meta was not going to handle manufacturing itself; that would fall to the highly reputable TSMC. But the program got off to a bad start.

This is very difficult. Reuters already reported it last year. After testing its first in-house developed training chip, Meta realized that things were not going well. It underperformed expectations, and it was also worse than the competition. They did not throw the chips away, but redirected them to other systems (such as the algorithmic recommendation systems for Facebook and Instagram). The problem is that the performance of the training chip, the one that really matters for the AI race, was not enough.

Strategy change.
The Information echoes a statement from Meta saying that the company remains committed "to investing in different silicon options to meet our needs, which includes the advancement of our MTIA division," and urging us to stay tuned for news to be shared throughout this year. However, the same outlet notes that Meta has greatly lowered its expectations for its chips. The idea was to have two chips. On one hand, Iris, a training chip that is easy to design but from which it is difficult to extract all the juice in artificial intelligence training tasks. On the other hand, Olympus, a chip to be completed toward the end of this year that would be the centerpiece of Meta's training clusters. According to The Information, there were many internal doubts about Olympus' stability, its intricate design and its profitability, so Meta has shelved it to focus on simpler chips.

The evidence. In the end, if you can't beat your "enemy," join them. The sources consulted by The Information point out that, among other complications, the training software was not as stable as alternatives such as NVIDIA's. All of this has ended in two multimillion-dollar agreements. In a span of just a few days, Meta signed deals with both AMD and NVIDIA for both to supply it with chips to train AI. It's a win-win for everyone: Meta gets what it needs, NVIDIA adds another client to a list it dominates, and AMD keeps making a name for itself in the sector thanks to agreements like this one, or the one it signed last year with OpenAI. In addition, Meta secures several sources so as not to depend on a single company. In fact, it is also estimated to have signed an agreement to rent TPU capacity from Google.

The competition.
Meta's objective, therefore, is to diversify its portfolio of AI chip suppliers as much as possible while continuing to research its own chips, about which, supposedly, we will learn details later. It may keep working on Olympus, or a variant of it, or decide on another approach. What is clear is that Meta must develop something of its 'own'. NVIDIA and AMD are suppliers, not competitors as such. The real competition is OpenAI, xAI and Google, and the last two have their factories at full capacity: Google with its TPUs, processors designed exclusively for AI, and xAI with its own chips, a project it abandoned and picked up again more recently.

Objective: dethrone NVIDIA. And all this happens in a world in which everyone is 'friends' and enemies at the same time. As I said, NVIDIA is a hardware supplier, but it practically controls the AI computing market and is moving in both hardware and software. It is logical that other companies are investigating alternatives to boost their own AI. Add to the list an Amazon that is also manufacturing its own chips, the Trainium3 for its UltraServer systems, and OpenAI with its agreement with Broadcom to manufacture chips. It is, as I say, a curious scenario: everyone needs everyone else, and there is the "circular economy" of AI, but at the same time everyone wants to be independent. The problem is that NVIDIA has a huge head start and has the technology, the contracts with memory companies… and the contacts with which the best chips end up being manufactured: TSMC.

In Xataka | Trump ordered the Pentagon to stop using Claude for being a "Woke AI." Right after he bombed Iran using Claude

AI has hijacked the chips that made it possible

Lenovo already said it a few days ago: if you want, or need, to buy a device, buy it as soon as possible. The rise of artificial intelligence and the Big Tech fever is causing an unprecedented component crisis. That is not us talking, but Micron. And who is Micron? One of the three companies that dominate global RAM production. With only a few players in the game, what is happening is that everyone has focused on allocating their resources to manufacturing high-bandwidth memory for AI. For every resource allocated to making memory for those GPUs, several are pulled from making consumer RAM. And what has RAM inside it? Absolutely everything. The industry has just sounded the alarm: this is not a temporary squeeze. It's a tsunami. And it is going to take smartphones with it.

The RAM crisis is a tsunami. It may seem like there's a lot of hype in making predictions, but there's a catch: those predictions come from within the industry itself. Talking about memory producers means talking about Micron, Samsung or SK Hynix, but also Phison. This company manufactures the chips that allow memory modules to communicate with each other and with other components, and its CEO said a few days ago that estimates suggest between 200 and 250 million fewer mobile phones will ship this year. It is a staggering figure, but beyond the number, something else stands out: some companies will have to abandon the business. It's logical. Buying an SSD or a RAM stick costs us a lot more money now, and the same goes for companies. It's not that there is no RAM for consumers: it's that there is no RAM for anything other than data centers.
Therefore, if Nothing (to give an example of a company that has already said it will not launch a high-end product this year) has to buy memory at triple the price, it has two options: sell the phone at a much higher price just because of that component, so the user perceives a brutal increase without improvements in areas like the processor or the cameras; or not launch the phone at all. And if your business depends on keeping the annual launch cycle going, you have a problem. From within the industry, voices like those of SMIC, Intel or NVIDIA have already hinted that the crisis is here to stay for a while, but now it is the International Data Corporation that offers another pessimistic outlook for the mobile market. And the interesting thing is that it will not affect the whole sector equally. According to IDC, the smartphone market will suffer the biggest drop in its history this year, sinking to a low not seen in more than a decade. We are not talking about profits, but about units. As Reuters reports, IDC analysts believe that "what we are witnessing is not a temporary squeeze, but rather a tsunami-like shock originating in the memory supply chain." Apple has already said this will hit a key part of its business but, precisely, Apple cannot complain as much as others. According to the group, the decline will hit low-end Android manufacturers harder than Apple or Samsung. Those two giants fight in another range and may even benefit, because consumers can opt for their models if they see other brands' phones starting to rise in price. The report points to the same thing as Phison's CEO: some smaller rivals will exit the market entirely. And it is a huge problem for low and mid-range phones. It is estimated that memory represents 20% of the cost of these terminals, so raising prices to compensate would be unfeasible.
Buyers would simply not pick up those phones. IDC expects the average selling price of smartphones to increase by 14% this year, and even if the market begins to recover between 2027 and 2028, some smaller manufacturers will struggle. As we said, many voices are also pointing to price increases in mobile phones. For now, the ones already out are the Samsung Galaxy S26, whose prices have stayed the same as last year's… but with no RAM increases and few specification changes. The S26 and S26+ are essentially the same phone as the S25. And they come from Samsung, one of the main RAM manufacturers, no less. We will see what happens with other models whose RAM supply was not signed and committed before the crisis hit, but things are not looking good at all. So if you have to buy something, for whatever reason, it is a bad time, but everything indicates that today will still be much better than tomorrow.

Image | Xataka (edited)

In Xataka | We have reached a point where not even the CEOs of Google or Microsoft deny that we have an AI bubble
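As a back-of-the-envelope check of the cost math above (memory at roughly 20% of a low-end phone's bill of materials, bought at up to triple the price; both figures come from the article, the assumption that everything else stays flat is ours):

```python
# Illustrative bill-of-materials math for a low-end phone, using the two
# figures from the article: memory is ~20% of the cost, and some makers
# (like Nothing) now pay up to 3x for it. Everything else is assumed flat.
MEMORY_SHARE = 0.20
MEMORY_PRICE_MULTIPLIER = 3.0

def new_cost_ratio(memory_share: float, multiplier: float) -> float:
    """Total cost relative to before, if only memory gets more expensive."""
    return (1 - memory_share) + memory_share * multiplier

increase = new_cost_ratio(MEMORY_SHARE, MEMORY_PRICE_MULTIPLIER) - 1
print(f"Device cost rises by {increase:.0%}")  # prints "Device cost rises by 40%"
```

A 40% jump in cost with no visible improvement anywhere else is, in a nutshell, the "unfeasible" price increase the article refers to.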

AMD wants to be the great alternative to NVIDIA in AI chips, and Meta has a plan that involves both

Meta has signed one of the largest chip contracts in history with AMD for artificial intelligence. The agreement represents a boost for AMD in its attempt to stand up to NVIDIA. It also shows how Lisa Su's company intends to push its way further into the circle of circular financing that big technology companies have created around AI. There are some nuances worth commenting on, so let's get to it.

The agreement. Meta will purchase enough chips from AMD to power data centers with up to six gigawatts of computing power over the next five years. According to Wall Street Journal estimates, the total value of the contract would exceed $100 billion, since each gigawatt represents tens of billions in revenue for AMD, according to the company itself. First deliveries will begin in the second half of 2026, with a first gigawatt of AMD's new MI450 chips.

There is more. The agreement is not only about buying chips. As part of the pact, AMD will grant Meta warrants to acquire up to 160 million AMD shares at a symbolic price of one cent per share, which could make Meta the owner of up to 10% of the company. There are conditions, of course: the shares will vest in tranches as certain technical and commercial milestones are met. The last tranche will only be unlocked if AMD stock reaches $600, according to the WSJ. On Monday it closed at $196.60, and after the news broke, AMD shares rose more than 10% in pre-market trading.

AMD seeks its place alongside NVIDIA. The company led by Lisa Su has been trying to gain ground in a market that NVIDIA dominates with more than a 90% share. This agreement with Meta, together with the one signed with OpenAI in October on very similar terms, is its most ambitious bet yet. "Meta has a lot of options.
I want to make sure we always have a clear place at the table when they think about what they need," said Su at the press conference prior to the announcement.

Meta doesn't put all its eggs in one basket. Zuckerberg's company is not betting exclusively on AMD. Last week it also closed an agreement with NVIDIA to acquire millions of its chips for tens of billions of dollars, and it is also in talks with Google over the use of its AI processors. "At the scale at which we operate, there is room for all three," said Santosh Janardhan, head of infrastructure at Meta. The company's strategy involves diversifying suppliers and ensuring sufficient supply for its major expansion. Meta spent $72 billion last year on data centers and plans to spend up to $135 billion this year.

And back to circular financing. Meta pays AMD for chips, and AMD returns part of that money in the form of shares. It is a scheme similar to the one we already saw in the AMD-OpenAI agreement, and much like those of the rest of the big technology companies around AI. The demand problem is also worth noting. Reuters highlighted the words of Matt Britzman, an analyst at Hargreaves Lansdown, who said that although Meta is securing supply and diversifying, "having to give up 10% of its capital suggests that AMD could have difficulty generating organic demand."

What's coming now. The AI race is not fought only in laboratories, but also on the financial field. For AMD, the challenge now is to show that its chips live up to the demands. For Meta, the goal is to build with them "tens of gigawatts this decade and hundreds of gigawatts or more over time," in Zuckerberg's own words. All this while we witness unprecedented spending on infrastructure and energy, with no bottom in sight.

Cover image | AMD and Meta

In Xataka | IBM has lived for decades with the certainty that no one could kill COBOL. Anthropic has other plans
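To put the warrant package in perspective, here is a rough intrinsic-value calculation using only the figures reported above (160 million shares, a one-cent strike, Monday's $196.60 close and the $600 final-milestone price); it deliberately ignores vesting conditions and dilution:

```python
# Rough value of the warrant package described in the article: up to 160
# million AMD shares at a symbolic $0.01 strike. Share prices are the ones
# quoted there ($196.60 at Monday's close; $600 unlocks the last tranche).
SHARES = 160_000_000
STRIKE = 0.01

def warrant_value(share_price: float) -> float:
    """Intrinsic value if all tranches vested at the given share price."""
    return SHARES * (share_price - STRIKE)

print(f"At $196.60: ${warrant_value(196.60) / 1e9:.1f}B")
print(f"At $600.00: ${warrant_value(600.00) / 1e9:.1f}B")
```

Roughly $31.5 billion at Monday's price, and close to $96 billion if the $600 milestone is ever hit, which is why that potential 10% stake draws so much attention.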

ByteDance's Seedance 2.0 has surpassed Sora and Veo without NVIDIA chips

Brad Pitt and Tom Cruise fighting on the rubble of a devastated city. A recreation of the most expensive shot in the movie F1 for nine cents. Dragon Ball scenes indistinguishable from the original anime. None of this has been filmed by anyone. It was all generated by Seedance 2.0, the ByteDance video model launched a few days ago. It has its seams, because it builds on existing creativity and much is missing at the narrative level. But the technical leap is impressive.

Why it matters. It is no longer "China is coming, China is coming." It's "this is what China already does." The independent consultancy CTOL places it above Sora 2 and Veo 3.1 for specific improvements: native 2K resolution, synchronized audio as standard, simultaneous input of text, image, video and audio (something no Western rival offers at the same time) and 30% faster generation.

Between the lines. The uncomfortable thing for Silicon Valley is not just the quality. The most uncomfortable thing is what it was achieved with. Seedance was built without H100s, the NVIDIA chips banned in China. And it still surpasses the models that do have them. Something similar already happened with DeepSeek in LLMs, and now it is happening with synthetic video. The pattern is consolidating: the sanctions are not slowing China down, but rather accelerating it, because they force it to innovate faster.

In dispute. Disney, Paramount, Warner Bros. and Sony have sent cease-and-desist requests to ByteDance for copyright infringement. And SAG-AFTRA has denounced the use of actors' voices and faces without consent. The trigger was seeing that Seedance was capable of cloning someone's voice from a single photograph. ByteDance has suspended that feature and promised improvements, without specifying which ones.

Yes, but. The studios are more disarmed than they appear: their claims attack the generation of protected content, not training on that content, which could be covered by fair use.
The music industry has already gone through a similar scenario and ended up negotiating. Hollywood is headed for the same fate: not being able to stop this, but at least getting a cut. Disney already does it with OpenAI. But with ByteDance, geopolitics comes into play, and the question is whether something similar is possible with a Chinese company rather than a Californian one.

The big question. ByteDance has an asset no one can replicate: the largest short-video ecosystem in the world. TikTok and Douyin know, on an unrivaled scale, what makes a video work, and that knowledge is built into Seedance. When it reaches CapCut, the most popular editing app in the world, the impact will reach another level. Right now the question is not whether Seedance is better than Sora. We have already seen that it is. The question is whether the world will be willing to use it.

In Xataka | Seedance 2.0 has flooded the networks with AI-generated videos with Disney content. And Disney has picked up the phone

Featured image | BiliBili – Seedance

what is needed are cheaper chips

Let's face it: I've been using high-end phones for more than a decade, but I test mid-range phones quite frequently, and it's been clear to me for a long time that you can buy a smartphone for 300 euros and get decent performance for standard use. Obviously not for a gamer or a demanding user, but for the average user. Hence, the phones I most recommend cost between 300 and 500 euros. This upward range has two explanations. The first: in addition to performance, "a good camera" commonly appears on many people's wish lists, and here the Google Pixel A series is king. The second is a market where price increases are inevitable because everything is going up, especially components like memory and storage, which can lead to tragic news like going back to 4 GB of RAM.

Qualcomm is the manufacturer that equips most premium Android phones on the market and, according to rumors, its next flagship will arrive in two versions: a Pro version for the ultra-premium range and another for the pure high end. The difference between the two would be the type of RAM supported and the GPU configuration, similar to what Apple does with its iPhones. Their benchmarks will surely be impressive, but more than their advantages, what worries me is the price. Current Snapdragon 8 Elite Gen 5 chips cost around $280, and everything indicates the next ones will break the $300 barrier. This means that for many manufacturers, just purchasing the processor will account for a third of the RRP of their devices.

Google shows that another path is possible. Meanwhile, Google goes its own way within the ecosystem: its Tensor chips are never at the top of the performance charts, and not only does Google not seem to care, it keeps offering seven years of updates even on its A versions. To top it off, its new Google Pixel 10A even repeats its predecessor's processor.
And nothing happens: any mid-range phone from the last three years will let you handle messaging, social networks or web browsing without trouble. It is true that there are specific scenarios where adding more and better hardware can make a difference, such as ray tracing, running games at a certain frame rate, or AI. But on the one hand that is niche, and on the other, current models can still put up a fight. And I'm not just talking about the high end. Google's product manager, Toni Urban, makes quite a statement of intent:

"We had to make difficult engineering decisions to maintain that price of 549 euros, which we have kept for four generations. The chipset is part of that consideration. We knew we could still deliver the best of Google's AI and the best camera experience with the chip we had; we didn't feel we were sacrificing quality, and we still kept incorporating important improvements."

If a mid-range phone from a couple of years ago can still handle normal, current tasks with solvency, a veteran high-end phone can do even better. It is rare to find someone who replaces one high-end phone with another citing performance reasons. The bottleneck is elsewhere: it could be the camera, thermal management, or the battery and its endurance, because raw performance is a problem that was solved years ago on mobile phones. Google's decision seems right not only from a price point of view but also from a balance point of view: performance tests take a back seat when factors such as temperature or battery life act as the real limits. Not obsessing over performance allows manufacturers to differentiate themselves in other areas, or simply to hold their prices. And that is no small thing.

In Xataka | The best mobile phones (2026): we have tested them and here are our reviews

In Xataka | Best mobile phones for the money. Which one to buy based on use, and seven recommended models
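A quick sketch of the processor-cost point made above: if the SoC alone passes $300 and represents a third of the recommended retail price, the implied floor for the phone's RRP follows directly (the $280/$300 figures and the one-third share are the article's estimates; the function is just arithmetic):

```python
# If a flagship SoC costs chip_cost dollars and accounts for a given
# share of the recommended retail price, the RRP that ratio implies is
# a simple division. Figures below come from the article's estimates.
def implied_rrp(chip_cost: float, share_of_rrp: float) -> float:
    """RRP implied when the chip accounts for a given share of it."""
    return chip_cost / share_of_rrp

print(f"${implied_rrp(300, 1 / 3):.0f}")  # a $300 chip at a third of RRP
```

In other words, a phone built around a $300 chip at that ratio can hardly retail below roughly $900, which illustrates the pricing pressure described above.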

While the world fights for the most advanced chips, there is a company making a fortune with the ones inside your washing machine

If you have ever walked through an industrial estate, you have surely come across the typical warehouse with a sign reading "Spare Parts and Bearings (insert name here)". And it's easy, at that moment, to wonder what the hell a bearing is and how, while the other businesses around it are closing, 'Rodamientos Paco' keeps going. Well, in the world of technology there is also a 'Paco Bearings'. It is called Texas Instruments and, in the era of sophisticated chips, artificial intelligence and quantum computing, it is killing it with something very specific: boring chips.

In short. Companies are in the middle of earnings season. In this round, managers inform their shareholders about the direction of the company, while letting us learn about upcoming devices or business plans. Texas Instruments usually goes unnoticed in these more 'techie' times, but it is finishing a fiscal year with very positive numbers. It closed the fourth quarter with $4.42 billion in revenue and expects to grow to $4.68 billion in the first quarter. In the last three months, its share price has risen 18%. Its shares are among the highest-valued among companies in the sector and, as we said, the curious thing is that it is doing all this almost silently.

Living outside the hype. You can constantly read about cutting-edge chips on Xataka. It is true that the current component landscape is marked by the RAM and SSD crisis, but it is the Snapdragons, Apple Silicon and the latest from NVIDIA or AMD that usually drive the conversation. They are the most sophisticated and interesting chips, but a coffee maker does not need a chip like that. That's where Texas Instruments comes into play. Calling its chips "boring" is not an exaggeration. They sit outside the AI hype, the data centers and the most exciting features, because their market is different: sensors, connectivity, controllers. Where are Texas Instruments chips?
In routers, smart refrigerators, washing machines and air conditioners, as secondary chips in televisions, in remote controls, in calculators and in smart smoke detectors. And it does not only make chips: it also makes a whole series of integrated circuits for wireless communications and signal processing in all kinds of devices, and even sensors that detect tire pressure, engine temperature or the state of the air conditioning system. Texas Instruments chips and sensors are in... everything. Even in weapons. One example: a tiny, sophisticated chip in an earbud stick... with just 16 KB of RAM, because it doesn't need more.

Huge investment. And the company is not sitting idly on the huge amount of money its ubiquity strategy brings in. A few days ago, Bloomberg reported on the deal Texas Instruments had reached to buy Silicon Labs. Also American, also with 'boring' chips that sit inside 'things' of all kinds. The deal is not closed yet, but the mere scent of it sent Silicon Labs shares up 51%, to more than $206. The curious thing? Texas Instruments is willing to pay more: up to $231 per share. There is talk of a purchase worth $7.5 billion, well above the $4.5 billion Silicon Labs is nominally "worth."

A great year ≠ a perfect year. All of this is... outrageous, but it points to something very specific: they are spending a lot of money to reinforce a huge, stable market that goes unnoticed at a time when everything revolves around artificial intelligence and sophisticated technology. The purchase of Silicon Labs, at such a high premium per share, shows they know exactly what they are getting into and the value of a market in which they are a key player. But one thing must also be noted: although revenue rose, annual profits did not grow at the same rate.
Total revenue increased by 13%, but since they have also invested more, the higher costs squeezed the profit margin, which grew by a "mere" 4.2%, with some quarters worse than others (in Q4 it fell by 3.5%). They have not had a perfect fiscal year, but one thing is undeniable: they are still the kings of their niche. If being everywhere can be described as a "niche".

In Xataka | While half the world looks for an alternative to Taiwan, Jensen Huang is very clear about the harsh reality: there is none

China has given the green light to buy NVIDIA chips. The problem for its companies is that it will closely monitor every transaction

NVIDIA has hundreds of thousands of H200 chips trapped in limbo. It is one of the company's most powerful chips and the standard for companies training AI. It is the preferred chip for training models, and also the weapon with which the United States sought to push China out of the game. After moves by both countries, the US finally approved (with a 25% commission attached) that NVIDIA could sell the H200 to Chinese companies. China has taken its time, but it now seems it will accept the offer, reluctantly and with an ace up its sleeve: DeepSeek.

The mess. The H200 saga is a soap opera. In the context of the trade and technology war, the United States played one of the best cards it had: preventing one of its most powerful products from reaching Chinese hands. It also blocked European companies like ASML from selling their most advanced semiconductor-making machinery to companies like Huawei or SMIC. China responded, of course. It struck back with rare earths, which it controls almost exclusively, and has been showing, little by little, not only that it can build advanced semiconductors on its own (pushing old technology to the limit), but that it is alive and well in the battle for artificial intelligence. It has also built robotics and aerospace industries practically from nothing, freezing out Western chips, and that has caught the United States on the wrong foot.

The US makes a move. Seeing that China was advancing while the US was not earning a cent, Washington made a move: it opened the door for NVIDIA to sell its H200s to certain Chinese customers. For each sale, the US takes a 25% cut, but that seems to be something the Chinese Big Tech firms were willing to absorb because, at least for now, they need NVIDIA's technology. The GPU company itself ramped up production expecting two million orders above normal. The problem is that everything moved very fast... without China actually having said anything.
Because here it is not just a question of whether the United States lets NVIDIA sell, but of whether China wants its companies to buy. After a tense calm that left orders on hold and thousands of H200s in limbo, China has finally made its move. According to Reuters, and as we reported a few days ago, some companies will be able to place orders for the H200.

There is a "but". It is not carte blanche for anyone to place an order. According to the WSJ, Chinese authorities have indicated that each purchase must be for a use considered "necessary", which includes advanced AI research or development. Two factors come into play here. On the one hand, it seems that some Chinese companies are pressuring the government to let them access the technology. NVIDIA was already allowed to sell the H20 to Chinese customers, but if those customers can now buy the H200, six times more capable, they want to take advantage of it. On the other, China does not want everyone to throw themselves into NVIDIA's arms precisely because it has spent five years building its own semiconductor industry, with SMIC and Huawei in the lead. China's goal is to stop depending on the US, and if everyone starts buying American chips like crazy, the country will not advance along the technological roadmap it laid out long ago. In other words, it seems Chinese regulators will evaluate which companies can or cannot buy the H200 depending on the use they intend to give it. It has been reported that, for example, ByteDance, Alibaba and Tencent will be able to import 400,000 H200 chips. But there is a twist to all this.

DeepSeek. China's quintessential artificial intelligence model is the one that turned both NVIDIA and the United States upside down. The question was how, without access to the latest technology, DeepSeek could optimize its AI so much. On one hand, ingenuity to work around the CUDA standard.
On the other, there are those convinced that DeepSeek was trained on smuggled NVIDIA cards. Smuggling accusations are nothing new in this trade and technology war but, precisely, and according to Reuters, the company joining NVIDIA's massive H200 order alongside ByteDance, Alibaba and Tencent is... DeepSeek. Officially, and without restrictions, it will be able to access the H200.

"We have given China the argument to launch its own industry and, at the same time, we are giving them access to ours again" – Samuel Bresnick

Whiplash. I really liked this term Wired uses to describe American policy on the matter. The US started the conflict, and its position has kept pivoting, on tariffs and on measures that are more or less lax depending on the moment. It now seems clear that they have reached the point of thinking: "if China is going to reach this technology one way or another, we might as well sell it and earn something along the way". Samuel Bresnick, a researcher at Georgetown's Center for Security and Emerging Technology, tells Wired that the worst thing you can do is "come and go", noting that "we have given China the argument to launch its own industry and, at the same time, we once again give them access to ours".

Time to step up. And meanwhile, there is Jensen Huang. NVIDIA's CEO has been basking in the crowds in recent days in both China and Taiwan, where he has met with some of the companies that drive the semiconductor sector. He sat at the same table as TSMC, Foxconn and Asus, and came out, half joking, half serious, with one request: he needs wafers and RAM. As for the H200 purchases, China is walking on eggshells, and it makes perfect sense: it is at a point where it does not want to be left behind, and to avoid that it needs its…

The panic among technology companies about running out of chips has broken the RAM market. Manufacturers have said enough

The RAM market is completely broken. In November last year we talked about a 300% price increase, the result of a perfect storm caused by AI and data centers. Faced with brutal shortages, large companies are trying to grab as much memory as possible, which further destabilizes the market. Now the manufacturers are taking matters into their own hands.

No hoarders, thank you. In an extensive report, Nikkei Asia describes how the big three DRAM manufacturers (Samsung, Micron and SK Hynix) are implementing stricter rules for their customers to prevent them from hoarding memory. The measures aim to ensure that demand is real, that is, that the chips are not going to end up gathering dust in a warehouse "just in case". Manufacturers are asking for details about who the chips are for, in what quantities and what they will be used for.

OpenAI's dirty deal. Rewind to October 1, 2025. OpenAI signed an agreement with Samsung and SK Hynix for a potential demand of 900,000 DRAM wafers per month. The figure is equivalent to 40% of all world production, which is absurd, but the striking part is the "potential". As multiple users on X point out, they are securing a critical product for data centers that have not yet been built, with money they do not have. Some analysts called this agreement "the dirty DRAM deal", whose hidden objective seemed to be a rather dirty move: creating a moat by preventing competitors from accessing critical technology.

Open orders. The AI race is not going to stop just because chips get more expensive, and Big Tech has done what it had to do: everything possible to get chips. At the end of last year, Reuters reported that companies such as Google, Amazon, Microsoft and Meta had even approached Micron with open orders, that is, they were willing to accept all the memory it could supply, with no price cap. Preventive hoarding in its purest form. Compulsive shopping.
AI companies are not the only ones trying to secure their chips: PC manufacturers such as Asus, MSI, Dell and HP also began buying RAM compulsively at the end of 2025 to build up inventory ahead of what was coming. Manufacturers are aware of the over-ordering, and that is why they are now demanding data on the end customer.

The winners. While everyone fights to get their chips, Samsung is getting rich. Not only has it tripled its profits; it is also the tech company whose shares have appreciated the most in 2025, ahead of Alphabet and TSMC. For its part, SK Hynix has doubled its profits, mainly thanks to the boom in demand for high-bandwidth memory (HBM), of which it is a key supplier.

In Xataka | There is a shortage of RAM and Micron is going to spend $1.8 billion to produce more. But not for you

Image | Unsplash, edited
