It will impose tariffs of 25% to 100% on chips manufactured in Taiwan

Donald Trump is keeping his word. During the electoral campaign he promised that he would take whatever decisions were necessary to strengthen the business of US companies inside the US, and he also assured that he would impose tariffs on any country that threatened the interests of the nation he has led since January 20. He has been in government for barely a week, and he is already doing both. "In the very near future we are going to impose tariffs on foreign production of computer chips, semiconductors and pharmaceutical products to return the manufacture of these essential goods to the US (...) they went to Taiwan; now we want them to come back. We don't want to give them billions of dollars in the ridiculous Biden program. 100%," Donald Trump declared yesterday during a conference held in Florida (USA).

This move by Donald Trump's government has not caught TSMC off guard. The explicit mention of Taiwan that the US president made a few hours ago is a very clear allusion to TSMC. There are other semiconductor manufacturers on this Asian island, such as UMC, but their relevance in the chip market is far lower than that of the company currently led by C. C. Wei. TSMC dominates the integrated-circuit market with a share of roughly 60%, so its leadership of the chip manufacturing industry is beyond dispute.

In any case, the step the US administration is about to take will not catch TSMC by surprise. For more than four years this company has been shaping its strategy to extend its semiconductor manufacturing infrastructure beyond Taiwan's borders. And it is doing so for two reasons. On the one hand, it is an effective way to protect its business if a war between China and Taiwan ever breaks out and its plants on the island become unusable.
But, in addition, TSMC is significantly expanding its infrastructure in the US. The plan is for its new Arizona factories not only to shield its business from a possible conflict between China and Taiwan; they will also shield it from the foreseeable US tariffs. The first of these plants is already producing integrated circuits on the N4 lithographic node, which belongs to the 5 nm FinFET family. In fact, it is about to deliver the first chips manufactured there for Apple. The second Arizona factory will be operational in 2028 and will produce integrated circuits on the N3 (3 nm) and N2 (2 nm) nodes. And finally, the third factory will not be fully ready until the end of this decade and will produce chips on the N2 (2 nm) node.

Until now, TSMC's most advanced integration technologies were only available in its Taiwan plants but, as we have just seen, they will soon be available in the US as well. In this way the company kills two birds with one stone: it shields itself from the tariffs the Trump government is going to approve, and it reinforces its production infrastructure beyond its country of origin. One last note: in addition to the US, TSMC is building new plants in Japan and Europe.

Image | TSMC
More information | C-SPAN
In Xataka | Intel's plan against an unattainable TSMC: beat Samsung and consolidate itself as the second-largest chip manufacturer

The DGT will install 122 new radars in 2025: their locations, and when the 24 radars that are already active will begin to issue fines

"Radars save lives, and in 2025 we will continue with the installation of 122 new devices on Spanish roads, of which 17 fixed and 7 section radars entered service this January 21." With these words, Fernando Grande-Marlaska, Minister of the Interior, confirmed that the DGT will install more than a hundred radars on Spanish roads during 2025. The head of the Interior stressed that compliance with speed limits "is key" to avoiding accidents.

The DGT has long argued that installing new radars, as well as using vans and even camouflaged motorcycles equipped with mobile radars, should improve accident figures on the roads. The agency has placed special emphasis on speed, although distractions are the leading cause of accidents with fatalities on our roads. Be that as it may, the fact is that the DGT aims to deploy 122 new radars on Spanish roads.

All the new radars, and where they will start issuing fines. Although the DGT has also made it clear that its intention is to install a greater number of section radars, for now the 24 new radars already operating on Spanish roads are mostly fixed. As you can check in the list below, only seven of the 24 new speed cameras are section radars; the remaining 17 are fixed. The DGT has confirmed this and, in addition to their locations, has confirmed that they came into operation on January 21. That does not mean, however, that they are already issuing fines, which can reach 600 euros and six points on the driving licence. For now, the speed cameras will go through an initial test phase in which drivers who exceed the maximum allowed speed will only be notified. That period lasts one month; after that, drivers will be fined.
As for their locations, the DGT confirms that the 17 fixed radars are found in the following places:

Province | Road | Kilometer point (PK)
Almería | A-1050 | 1+200 decreasing
Almería | AL-3117 | 1+400 increasing
Granada | N-432 | 425+950 decreasing
Málaga | A-7054 | 3+500 increasing
A Coruña | AC-221 | 2+550 increasing
A Coruña | AC-841 | 9+800 decreasing
Asturias | N-634 | 377+850 increasing
Asturias | GJ-10 | 0+250 increasing
Lugo | LU-862 | 74+150 decreasing
Ourense | N-525 | 237+880 increasing
Pontevedra | N-550 | 84+150 increasing
Alicante | N-332 | 89+050 decreasing
Alicante | CV-86 | 13+800 increasing
Alicante | CV-905 | 7+050 decreasing
Alicante | A-77a | 0+420 increasing
Valencia | V-31 | 6+125 decreasing
Valencia | CV-410 | 3+510 increasing

The seven new section radars are found in the following locations:

Province | Road | PK start | PK end
Almería | A-370 | 8+400 increasing | 9+400 increasing
Málaga | MA-20 | 9+300 increasing | 10+300 increasing
A Coruña | AC-552 | 26+275 increasing | 27+300 increasing
Asturias | AS-12 | 2+150 decreasing | 0+850 decreasing
Ourense | OU-536 | 25+850 increasing | 28+040 increasing
Valencia | A-3 | 347+100 increasing | 349+150 increasing
Valencia | V-23 | 3+330 decreasing | 1+600 decreasing

In addition, remember that in Xataka we explain how to find out where all the radars are. That way you can avoid falling foul of one of the 50 radars that issue the most fines in Spain.

Photo | DGT
In Xataka | Spain's roads are riddled with empty radar boxes: according to the DGT, they work

Pebble returns in 2025. The simplest smartwatch on the market goes back to its roots

Pebble's founder, Eric Migicovsky, has announced the return of one of the most influential smartwatches in history. A decade after shaking up the market, it returns with a vision similar to the original, but not identical: do the essential things, do them well... and do it sustainably as a company.

Why it matters. In a market dominated by the Apple Watch and other increasingly complex and complete watches focused on health and sport, Pebble proposes a return to its roots: an always-on e-paper screen, a week of battery life and physical buttons instead of a touchscreen. Not much more.

Behind the scenes. Google, current owner of the brand after buying Fitbit, which in turn bought what was left of Pebble in 2016, has decided to release the source code of PebbleOS. That allows Migicovsky to create a "spiritual clone" of the original watch under another brand.

Original Pebble. Image: Pebble.

The new device will keep the essence of Pebble: notifications, music control, alarms, weather, calendar and basic activity tracking. No electrocardiograms or blood-oxygen measurement, in contrast to the complexity of today's watches. The device still has no name; Migicovsky has only announced a domain: Repebble.com.

Yes, but. The big difference lies in the business model. Migicovsky will finance the project personally, without outside investors or Kickstarter campaigns, thanks to the businesses he has run since leaving Pebble. "The key word is 'sustainable'," he said about it. One of Pebble's mistakes ten years ago was growing too much: when the lean years arrived, its cost structure was unsustainable.

Going deeper. The wearables market has changed a lot since 2012, but there is an active community of Pebble users keeping the original devices alive: Rebble. The question is whether this minimalist approach can find its niche in 2025, when the industry is betting on complexity and advanced features.
The new Pebble is aimed at a specific user niche, not at a mass audience. That, in fact, is where Migicovsky's bet on sustainability is pointing.

Featured image | Repebble
In Xataka | How Pebble failed: "The biggest mistake was not reacting when we knew the Apple Watch was coming"

The Earth's crust is disappearing under California. The proof is in its earthquakes

The boundary between the mantle and the Earth's crust is a region that attracts the interest of many geologists. Being so close and yet so far from the reach of the instruments these scientists use perhaps lends a certain mysticism to this region but, above all, the dynamism of the interactions between the outermost layers of our planet makes this boundary a particularly active region at the geological level.

Scratching the crust. Now a new study has revealed a new aspect of this interaction. It has done so in California's Sierra Nevada, or rather under this mountain range, where researchers have found evidence of how the mantle "peels" the Earth's crust.

Delamination. Geologists believe that, from time to time, fragments of the lithosphere end up detaching and sinking into the upper layers of the Earth's mantle. This process is known as delamination, or lithospheric foundering, and could be responsible for the notable differences in thickness between the oceanic crust and the continental crust, among other features of the planet's geology and geography. This process is generally pictured as a "drip": the heaviest rock of the crust loses consistency and ends up detaching from the lithosphere to sink into the mantle, which is composed of less dense materials. However, this delamination may be more abrupt, as if the terrestrial mantle were "peeling" the crust away.

Seismic waves. As is usual in this type of study, the team analyzed how seismic waves move through the Earth's inner layers in order to study factors such as the composition and density of those layers. The Sierra Nevada is a seismically active region, which makes it easier to compile data this way. The researchers responsible for the study combined several sources of seismic data, starting with this analysis, called the receiver function, which the team combined with data from the exhaustive catalog of the Advanced National Seismic System (ComCat).
In this catalog they detected the presence of a "band of seismicity" in the region, located from 40 kilometers below the surface, which concentrated small earthquakes of magnitudes between 1.9 and 3.2.

Breaking, not dripping. Thanks to the differences detected through the receiver functions, the team was able to identify a differentiated layer in the mantle, one less differentiated as it extends northward, which is consistent with the hypothesis that part of the lithosphere in the southern zone broke away from the crust several million years ago. The small earthquakes, on the other hand, could indicate that this detachment happened by breaking rather than by dripping, according to the study's authors. The details of this analysis were published in an article in the journal Geophysical Research Letters.

Strengthening the hypothesis. The evidence is not yet conclusive, as the team admits, but it adds to the already numerous evidence supporting the hypothesis that the Mohorovičić discontinuity (the boundary between the crust and the Earth's upper mantle) is not abrupt under the Sierra Nevada range, but rather gradual.

In Xataka | We knew Yellowstone hid an immense volcano, but not where it would erupt. Until now
Image | Arttower

DeepSeek matches OpenAI's most advanced models with far fewer resources. The key: reinforcement learning

The whole world is wondering how it is possible that DeepSeek's AI models have become, overnight, the great protagonists of the moment in the field of artificial intelligence. The answer is relatively simple: these models have managed to demonstrate that you can do more with much less. Both DeepSeek V3 and DeepSeek-R1 are comparable to OpenAI's GPT-4o and o1 respectively, but their training is estimated to have been far less expensive, and their inference certainly is: the prices of the DeepSeek API are up to 35 times lower than OpenAI's. That makes one wonder how this is possible. The answer is clear, because we have the technical reports of these AI models at our disposal, and studying them has allowed us to clarify which techniques this Chinese R&D laboratory has used to develop models this efficient and capable.

Many techniques, a single objective: efficiency. Several differences make DeepSeek's new model especially efficient. Its creators explain them in detail in the technical report that is publicly available. Here are the most relevant:

DeepSeekMoE ("mixture of experts"): in models such as GPT-3.5, the entire model was activated both in training and in inference (when we use it). However, not all of the model's components are necessary for every request. The MoE technique, already introduced with DeepSeek V2, divides the model into multiple "experts" and only activates those needed for each request. GPT-4 is already a MoE model, but as we said, DeepSeekMoE went even further and distinguished between even more specialized experts, in addition to using some more generalist experts that can contribute value to certain requests. Managing all those specialized or generalist experts not only benefits inference but also the training phase, making it more efficient.
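The routing idea behind a mixture of experts can be sketched in a few lines. This is a toy illustration, not DeepSeek's actual architecture: the four "experts" and the router weights below are invented for the example; the point is simply that only the top-k experts are executed for a given input.

```python
import math

# Toy mixture-of-experts layer: a router scores every expert for each
# input, and only the top-k experts are actually executed. Illustrative
# sketch only; the expert functions and weights here are hypothetical.

EXPERTS = [
    lambda x: [v * 2.0 for v in x],   # "expert 0": doubles the input
    lambda x: [v + 1.0 for v in x],   # "expert 1": shifts the input
    lambda x: [-v for v in x],        # "expert 2": negates the input
    lambda x: [v * v for v in x],     # "expert 3": squares the input
]

# Router weights: one scoring row per expert (made-up values).
ROUTER_W = [[0.9, 0.1], [0.2, 0.8], [-0.5, 0.3], [0.4, 0.4]]

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, top_k=2):
    # 1. Score every expert for this input (dot product with router row).
    scores = [sum(w * v for w, v in zip(row, x)) for row in ROUTER_W]
    # 2. Keep only the top-k experts; the rest are never executed,
    #    which is where the compute saving comes from.
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    gates = softmax([scores[i] for i in top])
    # 3. Weighted sum of the chosen experts' outputs.
    outputs = [EXPERTS[i](x) for i in top]
    return [sum(g * o[d] for g, o in zip(gates, outputs)) for d in range(len(x))]

print(moe_forward([1.0, 2.0]))  # only 2 of the 4 experts run
```

In a real model the experts are large feed-forward networks and the router is trained jointly with them, but the control flow is the same: score, select, execute only the selected few.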
This technique is similar to so-called "test-time scaling", which also adjusts the size or complexity of a model in pursuit of efficiency.

DeepSeekMLA (Multi-head Latent Attention): this is another substantial improvement, even bigger than the previous one and also introduced with DeepSeek V2, affecting the way memory is managed in these models. Normally it is necessary to load both the model and the entire context window, the one that allows us to write prompts and include long texts, for example. Context windows are especially expensive because each token requires both a key and its corresponding value. The improvement introduced with this technique makes it possible to compress that store of keys and values, dramatically reducing memory use during inference.

Auxiliary-loss-free load balancing: if we imagine a model as a great orchestra, each musician is an "expert" within the model. To play a complex piece, not all musicians are needed all the time. Traditionally, so-called "auxiliary losses" were used to make sure all musicians played enough, but these losses could interfere with the interpretation of the musical piece (the model's training), which could degrade overall performance. With DeepSeek V3 the model is able to balance the workload of each expert dynamically. That makes training simpler, more direct and more efficient by eliminating the "auxiliary losses". Moreover, removing that interference allows the model to learn better with fewer resources... and to get better results.

Multi-token prediction training objective: often, predicting the next word depends on several previous words or on context. With this technique, instead of predicting only the next word, the model learns to predict several words at the same time. That produces more natural, more comprehensible and less ambiguous text, but it also accelerates training by reducing the number of steps needed to generate the complete text sequence.
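To get a feel for why the key-value store dominates inference memory, and why compressing it matters, here is a back-of-the-envelope calculation. All the parameters below (layers, heads, dimensions, latent size) are illustrative round numbers, not the real figures for any DeepSeek model.

```python
# Back-of-the-envelope KV-cache sizing with invented but plausible
# parameters. Full cache: every token stores a key AND a value per head
# and per layer. A compressed latent, as in multi-head latent attention,
# stores one small vector per token per layer instead.

def kv_cache_bytes(layers, tokens, heads, head_dim, bytes_per_elem=2):
    # 2x because each token stores both a key and a value.
    return layers * tokens * heads * head_dim * 2 * bytes_per_elem

def latent_cache_bytes(layers, tokens, latent_dim, bytes_per_elem=2):
    # One compressed latent vector per token per layer.
    return layers * tokens * latent_dim * bytes_per_elem

full = kv_cache_bytes(layers=60, tokens=32_000, heads=128, head_dim=128)
latent = latent_cache_bytes(layers=60, tokens=32_000, latent_dim=512)
print(f"full KV cache: {full / 1e9:.1f} GB")   # ~125.8 GB
print(f"latent cache:  {latent / 1e9:.1f} GB")  # ~2.0 GB
print(f"reduction:     {full / latent:.0f}x")
```

Even if the real ratios differ, the arithmetic shows why a long context window is so expensive and why shrinking the per-token record pays off so dramatically at inference time.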
FP8 mixed-precision training: using FP8 numbers significantly reduces memory consumption and accelerates calculations. Some critical parts of the model continue to use FP32 during training to guarantee precision, but FP8 has an additional benefit: the size of the models is reduced. Other models use techniques such as quantization or parameter pruning. Although OpenAI gives no data on GPT-4 in this respect, the assumption is that it works with BF16, which is more expensive in terms of memory. And although FP8 theoretically leads to less precise models, complementary techniques such as fine-grained quantization are used to reduce the negative impact of out-of-range values, which makes stable training possible.

Cross-node all-to-all communication: during training it is necessary to constantly exchange information between all the nodes (computers) connected in the training data centers. That can become a bottleneck, but DeepSeek V3's new techniques include efficient communication protocols, reduced data traffic and efficient synchronization to accelerate training and, once again, reduce the cost of the process.

Reinforcement learning and "distillation" as keys. But in addition to all these techniques, those responsible for DeepSeek V3 explain how they pre-trained it with 14.8 trillion tokens, a process followed by supervised fine-tuning (SFT) and several stages of reinforcement learning (RL). The SFT phase, which is mentioned in the DeepSeek V3 report, was omitted entirely in the case of DeepSeek-R1. However, reinforcement learning is an absolute protagonist in the development of both models, especially in R1. The technique is well known in the field of artificial intelligence, and it is as if we trained a dog with treats and punishments: the model learns to respond better because it receives rewards when it does well. Over time, the model learns to take actions that maximize the long-term reward.
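The intuition behind fine-grained quantization can be sketched in a few lines: instead of one scale factor for a whole tensor, each small block gets its own scale, so a single out-of-range value only degrades its own block. This is a simplified, hypothetical illustration using int8 as a stand-in; DeepSeek's actual pipeline quantizes to 8-bit floating-point formats on GPU.

```python
# Simplified per-block quantization (int8 stand-in for FP8).
# One outlier only inflates the scale of its own block instead of the
# scale of the whole tensor, preserving precision everywhere else.

def quantize_blockwise(values, block_size=4):
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        # Each block gets its own scale, fitted to its own max value.
        scale = max(abs(v) for v in block) / 127 or 1.0
        q = [round(v / scale) for v in block]
        blocks.append((scale, q))
    return blocks

def dequantize_blockwise(blocks):
    out = []
    for scale, q in blocks:
        out.extend(v * scale for v in q)
    return out

weights = [0.01, -0.02, 0.03, 0.015,   # block of small, normal weights
           0.02, 8.0, -0.01, 0.005]    # block containing an outlier (8.0)
restored = dequantize_blockwise(quantize_blockwise(weights))
# The first block keeps fine resolution despite the outlier next door.
print([abs(a - b) for a, b in zip(weights, restored)])
```

With a single tensor-wide scale, the 8.0 outlier would force every weight onto a coarse grid; per-block scaling confines that damage to the outlier's own block, which is what keeps low-precision training stable.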
In DeepSeek, reinforcement learning is used to break complex problems down into smaller steps. The DeepSeek R1 technical report also indicates how this model applies RL techniques directly on the base model, without the need for supervised training, which saves computing resources. The so-called chain of thought (chain-of-thought), also mentioned in the technical report, comes into play here as well. This refers to the ability of a language model to show the intermediate steps of its reasoning. The model not only...
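The "treats and punishments" idea above can be reduced to a toy loop. This is purely illustrative and has nothing to do with the actual RL algorithms DeepSeek describes: an agent repeatedly picks one of two "answers", one is rewarded more often, and over time its value estimate (and hence its selection rate) grows.

```python
import random

# Toy reinforcement-learning loop (a two-armed bandit): the agent learns
# to prefer the action that earns rewards more often. Illustrative only;
# training a language model with RL is vastly more elaborate.

random.seed(0)

REWARD_PROB = [0.2, 0.8]  # hidden: how often each answer earns a reward
values = [0.0, 0.0]       # the agent's running value estimate per answer
counts = [0, 0]

for step in range(2000):
    # Explore randomly 10% of the time, otherwise exploit the best estimate.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    counts[action] += 1
    # Incremental mean: nudge the estimate toward the observed reward.
    values[action] += (reward - values[action]) / counts[action]

print(f"value estimates: {values[0]:.2f} vs {values[1]:.2f}")
print(f"times chosen:    {counts[0]} vs {counts[1]}")
```

The key property is the same one the article describes: no one tells the agent which answer is correct; it discovers which actions maximize long-term reward purely from the reward signal.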

It flirts with the high end, but has a mid-range price. The Samsung tablet I would want to travel with hits its lowest price

Having a tablet is very useful in all sorts of situations, whether to take on a trip, to watch multimedia content (including streaming) at home, or to study and work. But which one to choose? There are many models, but right now one of the most interesting, and the one I would choose myself, is the Samsung Galaxy Tab S9 FE. Why? Basically because it now has one of its best discounts to date: it can be found at MediaMarkt for 399 euros in its 256 GB Wi-Fi version.

Samsung Galaxy Tab S9 FE (Wi-Fi, 256 GB) * Prices may have changed since the last review

An all-round tablet close to its all-time low price. The Samsung Galaxy Tab S9 FE is a standard-size tablet with some very interesting specifications. Its screen is a 10.9-inch LCD panel with a 90 Hz refresh rate and a resolution of 2304 x 1440 pixels. It also comes with an S Pen, which is quite useful for various tasks. Internally we find the Exynos 1380 processor, accompanied by 8 GB of RAM and 256 GB of internal storage; quite a good figure if we have downloading multimedia content in mind and do not want to depend on cloud services. On the other hand, its battery is 8,000 mAh and supports fast charging. It has Bluetooth 5.3 and Wi-Fi connectivity, but no 5G. Its dual speakers are compatible with Dolby Atmos, and the tablet has IP68 certification for dust and water resistance. On top of all this, on the front there is a 12 MP camera that is ideal for video calls, while on the back there is a single 8 MP camera.
Other tablets with good value for money:

Lenovo Tab P12 – 12.7" 3K tablet (MediaTek Dimensity 7050, 8 GB of RAM, 128 GB expandable up to 1 TB, 4 speakers, Wi-Fi 6 + Bluetooth 5.1, Android 13), with Tab Pen Plus – Gray * Prices may have changed since the last review

Samsung Galaxy Tab A9+ Android tablet, 64 GB storage, Wi-Fi, 11" screen, Gray (Spanish version) * Prices may have changed since the last review

Some of the links in this article are affiliate links and may generate revenue for Xataka. In case of unavailability, the offers may vary.

Images | Samsung
In Xataka | Best tablets: which to buy and 9 recommended models for all pockets and needs
In Xataka | Best Samsung phones: which to buy and recommended models based on budget, tastes and value for money

The EU has finally become independent of Russian gas. Now it faces an equally uncertain dependence: US LNG

In the last five years, the supply of liquefied natural gas in Europe depended mainly on Russian reserves, which represented almost 40% of imports thanks to their competitive prices and an extensive network of gas pipelines. However, Europe has sought to reduce its dependence on Russian gas because of the war in Ukraine, facing an uncertain energy panorama. Even so, it still imports record amounts of Russian LNG by ship, and Hungary and Slovakia oppose restrictive measures. The reserves, which had reached historic levels before winter thanks to storage policies, are now beginning to fall. Given this situation, Europe has chosen to diversify its sources and increase LNG imports from other countries, the United States being one of the emerging suppliers. However, this transition will not be easy.

In short. As soon as he took office, in less than 24 hours, Donald Trump signed an executive order with the various measures he was going to take ipso facto. The issue of gas and tariffs on Europe predates his inauguration, but now the president of the United States has issued a warning to the European Union demanding that it buy more oil and liquefied natural gas or, otherwise, face the imposition of tariffs. This threat comes in a context of commercial and energy tensions in which the US seeks to gain ground in the European market, which has historically depended on Russian energy imports. However, the EU does not have a centralized purchasing power that would allow it to negotiate large-scale contracts, since it is individual companies that decide where to buy the gas.

Evolution of European LNG imports in recent years

The evolution of the gas supply. This graph represents the supply of LNG in Europe, which has experienced notable changes: more than 15 years ago, liquefied natural gas mostly came from countries such as Qatar and other producers. Over time, however, Russia's share kept growing.
The United States' position as a supplier to Europe, however, dates from 2020, from which point its consolidation can be observed. This was due, in large part, to the sanctions and commercial restrictions imposed on the Kremlin, which forced the EU to diversify its sources. In the last year, US imports have reached historic levels, even surpassing traditional suppliers.

Europe's position. Although Ursula von der Leyen, president of the European Commission, has shown her willingness to replace Russian gas with American LNG, the EU does not have large-scale centralized purchasing capacity, so each member country negotiates independently. For their part, Hungary and Slovakia, more aligned with the Kremlin because of their energy deals, may not share these EU measures. Brussels aims to end its dependence on Russian fossil fuels within two years, but the high price of American LNG compared with Russian gas remains a major obstacle. In addition, the EU is struggling to protect its industries and reduce high energy prices, especially in countries such as Germany, whose industry depends on gas.

And Russia? Despite the war in Ukraine and the sanctions imposed by the United States and the EU, Russia remains the EU's largest gas supplier. The reason is that European companies continue to import large volumes of Russian LNG due to its lower prices and the lack of affordable short-term alternatives. For its part, the Kremlin is looking for new markets for its energy and is moving closer to the Asian continent.

Commercial relations. American LNG production capacity is increasing, and more natural gas plants are expected to come into operation in the coming years. By 2026, the United States, Canada and Qatar could meet much of European LNG demand, thus reducing the need for Russian gas. In addition, the EU seeks to reduce its natural gas consumption by 25% by 2030, modifying import and market patterns.
However, prices will remain a considerable obstacle to a complete switch to American LNG.

Image | Unsplash
In Xataka | Russia has managed to dodge Europe's sanctions: it just had to disguise its gas under an Azeri flag

The historic agreement between Spotify and Universal promises a 'streaming 2.0'. But we have already seen changes of this kind

"The next era of streaming innovation." According to those involved, that is where the agreement that Universal Music Group and Spotify have reached is heading. It is a deal that will cover the next several years and will affect both recordings and artists' royalties. It is the first step towards what both companies want to christen the "streaming 2.0 era".

The agreements return. The pact is remarkable for many reasons, but above all because it is Spotify's first agreement with an industry multinational in several years. It is also expected to improve payment conditions for artists, which worsened in April last year because of a platform fee in the United States aimed at audiobook lovers. Taking advantage of a 2023 law, that fee allowed the company to pay lower royalties to composers, which unleashed a bitter response from the music community. This pact with Universal seems intended to solve some of the problems of that fee, but not only that.

Spotify for superfans. The idea of 'streaming 2.0' is not new, of course, and Universal has been toying with it for a while, but this seems to be the first firm step towards it. What could it look like in practice? "Spotify for superfans" is how The Verge defines the Spotify we are going to get (or part of it, at least) once the agreement takes shape. When Universal talked about this 'streaming 2.0' it was thinking, according to its presentation, of "super-premium" subscriptions granting perks such as early access to music, exclusive deluxe editions, high-resolution audio, and Q&A sessions with artists.

More streaming than ever. This 'streaming 2.0' consists of taking the technological capacity of the platforms one step further to offer extras to consumers.
That translates into these platforms needing to reach deals with the majors, because they are the ones who can provide better-quality audio, associated videos, extras no one else has access to, exclusive artist content... That, or else the big labels will simply launch their own streaming services, as we saw happen in the audiovisual world after Netflix's breakout. Is Spotify following in those footsteps?

But... does it make sense? We have heard similar ideas in audiovisual streaming. In fact, that reliance on "extras" is one of the differentiating features of Prime Video, whose "X-Ray" provides information, trivia, filmographies and even e-commerce possibilities for the content of its films and series. However, nobody would say it is one of the tricks that puts Prime Video above its rivals; it is an anecdotal extra. These are times when the catalog rules: people subscribe to and quickly unsubscribe from platforms in search of content to consume, and they do it all the time. The extras are an appetizing gift, but... is that the foundation of the streaming of the future?

In Xataka | The best free alternatives to Spotify to enjoy your music legally without having to pay

The next phase of AI is not about who invests more, but who invests less

The hole that the DeepSeek hype has blown in Nvidia's valuation (half a trillion dollars and climbing) is somewhat deeper than a simple market adjustment: it is the end of an era in the AI industry. Success can no longer be measured in dollars invested.

Why it matters. Until now, the dominant narrative in AI has been very simple: more money = better models. This equation has driven stratospheric valuations and justified massive investments such as the Stargate project and its half a trillion dollars. DeepSeek has just demonstrated that this logic is starting to become obsolete.

The contrast. OpenAI invests hundreds of millions of dollars in each iteration of GPT. Meta has dedicated billions to Llama, also open source (with nuances), without leading in performance. DeepSeek has achieved equivalent or better results with 5.6 million dollars. Efficiency has triumphed over financial muscle. Even if those 5.6 million come with extensive small print and the real cost is higher, that does not cancel out its milestone in efficiency.

Between the lines. The market reaction, with widespread drops in tech stocks beyond Nvidia, reinforces the paradigm shift. DeepSeek has not only built a good model; it has shown that the emperor has no clothes. The enormous investments in AI infrastructure, after all, could be based on erroneous assumptions about the relationship between spending and performance.

Follow the money. DeepSeek's technical innovations, its 'mixture of experts' architecture or its reduced-precision system, are a signal: the future of AI does not lie in ever-larger data centers, but in making them smarter and more efficient. And that leaves Big Tech on the other side of the Pacific in an awkward position: how do you justify multibillion-dollar investments when a rival gets similar results at a fraction of the cost? What happens to valuations based on the assumption that AI requires continuous massive investment? Are Nvidia's margins on AI chips sustainable if the trend is towards efficiency?
Yes, but. Not everything is efficiency. The big players will argue that their massive investments are justified by the need for scale and reliability. Even here, DeepSeek raises uncomfortable questions: are 100,000 GPUs really necessary to train a good model... or have we been wasting resources for lack of innovation?

What's next. The market is going to reassess the entire AI value chain. If models can be trained at a fraction of the expected cost, what does that mean for chip manufacturers such as Nvidia and AMD, for cloud infrastructure providers, for startups that have raised billions on the basis of massive investment projections, and even for projections of energy consumption by AI training?

The next phase of the AI race may not be measured in teraflops or in model sizes, but in innovations that improve efficiency. The race is no longer about who can spend more, but about who can spend less while getting more. The arrival of DeepSeek marks a milestone and the beginning of an era: one in which competitive advantage will come not from having the deepest pockets, but from having the smartest idea. For Wall Street, this reality check has already cost half a trillion dollars. For now.

Featured image | Xataka with Mockuuups Studio
In Xataka | DeepSeek is the model of the moment. The problem is that nobody knows very well what it is doing with our data

Tech billionaires have lost 108 billion dollars in a single day

You only have to glance at the list of the world's greatest fortunes to realize that, in one way or another, AI is the common denominator that has catapulted those fortunes to their current levels. However, the earthquake caused by DeepSeek has led investors to doubt the real value of the projects backed by Big Tech, which has sunk the value of their shares, dragging down with them the fortunes of their CEOs and founders. Specifically, Fortune estimates a combined loss of more than 108 billion dollars in a single day. It is not the first time this has happened.

Jensen Huang and Nvidia in the eye of the hurricane. The news that the new Chinese AI did not use Nvidia's most powerful processors for its development wiped 400 billion dollars off the market valuation of Jensen Huang's company. As a result of that fall, Jensen Huang's fortune lost 20.1 billion dollars in just one day, which represents 20% of his total fortune.

Larry Ellison's infrastructure. Much of Oracle's success in recent years has been based on the infrastructure it offers for AI, so it is not surprising that investors have also expressed their doubts by selling their positions in the company founded and run by Larry Ellison, currently fifth among the world's greatest fortunes. In percentage terms, the New York billionaire lost less than Huang, 12% of his fortune, but in absolute terms Oracle's fall after DeepSeek's presentation cost him an estimated 22.6 billion euros.

Collateral damage of the fall. Together, the tech giants lost about 94 billion dollars directly from the depreciation of their market value due to the mass sell-off of shares. The fall triggered a ripple effect in other indices that indirectly also suffered losses: the Nasdaq fell 3.1% and the S&P 500 shed 1.5%.
For example, the fortune of Michael Dell, the computer giant, shed 13 billion dollars after the presentation of the new Chinese model, while Binance Holdings co-founder Changpeng "CZ" Zhao lost 12.1 billion dollars.

Immune to earthquakes. Although DeepSeek's emergence has been a tsunami for the AI development scene, some leading players seem to have kept their fortunes safe and emerged unscathed from this stock market shake-up. The biggest beneficiary has been Bernard Arnault, who managed to fish in troubled waters, adding 5 billion dollars to his fortune while most tech billionaires were painting their balances red. Worthy of a master of the markets has been the role of Mark Zuckerberg, who not only emerged unharmed from the tsunami coming from China but added 4.2 billion dollars to his fortune. Jeff Bezos, with a more discreet role, added 519 million but, like Zuckerberg, weathered the storm despite being one of the main investors in the development of AI.

In Xataka | The next frontier for the super-rich is no longer being billionaires, it is being trillionaires: Musk, Zuckerberg and Bezos are candidates
Image | Flickr (Fortune Global Forum, Trump White House Archived, Presidency of the Mexican Republic)
