You can make money with your GPU when you're not using it: all you have to do is rent it out to people training AI models

Running and offering tools based on generative artificial intelligence takes a huge amount of computing power (and, with it, a lot of energy). That is why high-end graphics cards and datacenter-specific processors are in such demand today, and why companies like Nvidia, which specializes in this market, are reaping such overwhelming success. And since not everyone can afford a powerful graphics card to experiment with AI, a service is becoming more and more common: renting out your graphics card to earn some extra money. There are several platforms that make it possible, and below we tell you everything you need to know.

How the business works.
The model consists of acting as a host on a marketplace where clients look for GPU instances for their AI projects. You set the price per hour, the platform handles payments, and the client runs their workloads in an isolated container on your machine. You could say it is like an Airbnb, but for computer hardware.

Instances with an RTX 4090 on Vast.ai

Numbers to keep in mind.
An RTX 4090 usually rents for between $0.20 and $0.60 per hour on these marketplaces, depending on demand. In the best theoretical scenario, running 24 hours a day for a full month, a high-end GPU could bill around $240 gross per month. But reality is usually more modest: we have to subtract what we pay on our electricity bill and the platform's commission (which can reach 24% on platforms like RunPod), and, above all, real occupancy is rarely 100%.

A growing market.
The price gap between the traditional cloud giants (AWS, Google Cloud) and these P2P marketplaces is considerable. While renting a GPU on AWS can cost three to six times more, platforms such as RunPod or Vast.ai offer access to very powerful graphics cards, such as the RTX 4090, for a few cents an hour.
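The back-of-the-envelope math above can be turned into a quick estimate. The following Python sketch uses purely illustrative assumptions (an average rate of $0.33/hour, 50% occupancy, a 24% commission, a 450 W draw under load and $0.20 per kWh; none of these are official platform figures):

```python
def net_monthly_income(rate_usd_h=0.33, occupancy=0.50, commission=0.24,
                       power_kw=0.45, kwh_price_usd=0.20, hours_in_month=720):
    """Rough net-income estimate for renting out a single GPU.

    All default values are illustrative assumptions, not platform figures.
    """
    rented_hours = hours_in_month * occupancy
    gross = rate_usd_h * rented_hours                       # what clients pay you
    fees = gross * commission                               # the marketplace's cut
    electricity = power_kw * rented_hours * kwh_price_usd   # power cost while rented
    return round(gross - fees - electricity, 2)

# Best theoretical case from the article: 24/7 occupancy, before any costs
gross_max = 0.33 * 720  # roughly $240 gross per month
```

With these assumptions the net figure lands well below the $240 gross headline number, which is precisely the article's point: commission, electricity and partial occupancy eat most of the margin.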
And of course, these prices are very attractive to developers who want to experiment with artificial intelligence but lack the means to own hardware on par with the projects they work on.

What you should know before starting.
Turning your PC into a rental server is not plug-and-play. In most cases you need to install Linux, configure up-to-date NVIDIA drivers, open network ports and keep your machine running for the hours you have committed to offer it, along with adequate cooling, which will be necessary if your GPU is going to work much harder and for much longer. In addition, your clients expect the machine to be available when they hire it, which means you won't be able to use it for gaming or personal work. Bear in mind as well that the income generated is subject to taxation, and you may be required to register it as an economic activity if it exceeds a certain threshold.

There are certain risks.
Beyond the wear that hardware suffers from constantly running at maximum performance, there are some security concerns. Although the platforms use containers to isolate workloads, some experts warn about possible vulnerabilities in multi-tenant environments (environments that serve several users) that could compromise our data or use the GPU for improper purposes.

Is it worth it?
For most users with a single GPU, the profits are modest once all expenses are deducted. That said, the business makes more sense if your hardware is already amortized, you don't pay too much for electricity and you have some technical knowledge to keep the system stable. Even more so if you have a top-tier or datacenter-class graphics card. As an experiment or a source of supplementary income it can be interesting, but don't expect it to make you rich.

First steps.
If you want to try it, start with "interruptible" offers, which are cheaper but can be cancelled, in order to gauge real demand. Vast.ai and RunPod offer detailed documentation on becoming a host, including step-by-step guides and preconfigured templates. In any case, it is advisable to always monitor real power consumption and set operating limits to prevent your machine from becoming a slave to background processes.

Cover image | She Don
In Xataka | Nvidia, TSMC and SK Hynix are the most powerful chip companies on the planet. None can afford to let any of the others fall

When you are OpenAI and you can't buy enough GPUs, the solution is obvious: make your own

OpenAI will create its own artificial intelligence chips. It is a crucial decision for the future of its business, but so is the ally it has chosen to do it: Broadcom.

Where there's smoke, there's fire.
Rumors have been circulating since the beginning of 2024. Nvidia, absolute ruler of the AI accelerator segment, was too powerful an ally for OpenAI. The solution was clear: develop its own chips to minimize that dependence.

Broadcom puffs out its chest.
Hock Tan, Broadcom's CEO, told investors yesterday that the company had closed a deal with a mysterious client that will invest $10 billion in AI chips. Although Broadcom did not reveal the client's name, sources close to the agreement told the Financial Times that this client is none other than OpenAI. Neither Broadcom nor OpenAI have confirmed it.

XPUs to the fore.
Those chips, referred to as XPUs, are a kind of specialized, customized variant of Nvidia's or AMD's accelerators. The perfect example is the TPUs (Tensor Processing Units) that Google introduced almost a decade ago and has been improving generation after generation (we are already on the seventh, called Ironwood). Broadcom, by the way, has collaborated in the development of those Google chips, so it has ample experience in this area.

Own chips for internal use.
According to sources close to this collaboration, OpenAI aims to use these AI chips internally, and there are no plans to offer them to external clients. That reinforces the theory that OpenAI wants to build data centers with its own chips to avoid (or at least mitigate) its dependence on Nvidia.

Nvidia will have (a lot of) competition.
Nvidia dominates this segment with an iron fist, but its rivals, both in the West and in the East, have long been working to break its near-monopoly. Microsoft has Maia, Amazon its Trainium, Google its aforementioned TPU and AMD, of course, its Instinct.
Meta is at it too.

But CUDA remains Nvidia's "moat".
Of course, the true key to Nvidia's overwhelming success lies not so much in its chips as in the fact that its CUDA architecture is the de facto standard in this market, and virtually all AI developers base their projects on that platform. It is Nvidia's "moat", the ditch that lets it protect its "castle" from rivals and keep dominating the market. Here, too, there are attempts to break free of the company's grip, and among them those coming from China stand out.

And what about TSMC?
The funny thing is that for months it seemed that the ally OpenAI had sought for this project was the world's most important semiconductor manufacturer, TSMC. Earlier this year that collaboration seemed to be on track, and several sources suggested we would see the first OpenAI GPUs in 2026. OpenAI may simply have chosen to prepare a plan B (TSMC) to ease its dependence on Nvidia, and also a plan C (Broadcom).

Image | Qualcomm
In Xataka | China's self-sufficiency test in AI chips is already here: it has not bought a single H20 GPU from Nvidia in the last quarter

Nvidia's world leadership in AI chips is brutal. In gaming GPUs it has simply obliterated the competition

Nvidia dominates the global market for artificial intelligence (AI) chips with a share that over the last three years has oscillated between 80 and 94%, according to FourWeekMBA. Its leadership rests on very competitive hardware and on a software ecosystem in which CUDA (Compute Unified Device Architecture) plays an essential role. This technology brings together the compiler and the development tools programmers use to write software for Nvidia GPUs. Most of the AI projects currently in development are implemented on CUDA, and replacing it in projects already underway is a problem. Huawei, which aspires to a significant slice of this market in China, has CANN (Compute Architecture for Neural Networks), its alternative to CUDA. And Moore Threads and Cambricon Technologies have MUSA and Neuware, respectively. Even so, Nvidia's competitors will find it very hard to break CUDA's leadership.

Nvidia has shipped 94% of gaming GPUs on the market.
During the second quarter of 2025, 11.6 million graphics cards for PC and 21.7 million desktop processors were shipped worldwide, according to the US consultancy Jon Peddie Research. On their own these figures tell us little, but they take on their full relevance when we consider that they mean graphics card shipments grew by 27%, and CPU shipments by 21.6%, compared with the first quarter of 2025.

Shipped units give us a very precise idea of market behavior.
It is important not to overlook that these figures count shipped units, not units sold. However, there is a direct correlation between the two, so shipped units allow us to form a very precise idea of how the market is behaving.
In any case, there is one fact even more striking than everything we have gathered so far in this article: Nvidia shipped no less than 94% of gaming GPUs during the second quarter of 2025, again according to Jon Peddie Research. AMD has had to settle for 6% of shipped units, and Intel does not even appear in the consultancy's report because its presence is anecdotal. That is how things stand in the PC graphics hardware market. One last note to conclude: the rebound in shipments of graphics cards and PC processors during the second quarter of 2025 compared with the first most likely responds to the need for stores and users to stock up before the tariffs approved by the US government came into force.

Image | Xataka
More information | Jon Peddie Research
In Xataka | Nvidia has ready the chip it needs to survive in China. What is not ready to let it sell is the US government

China has not bought a single H20 GPU from Nvidia in the last quarter

Nvidia's future in China grows gloomier by the day. In early October 2024 the Chinese administration sent the country's artificial intelligence (AI) companies a recommendation asking them to use chips produced in China as much as possible. Ten months later, that recommendation has become a requirement: the Chinese government is now forcing state-owned data centers across the country to source at least 50% of the integrated circuits in their servers from China. Nvidia, meanwhile, has obtained the export license it needs to sell its H20 GPU for AI in China, but the Chinese government has effectively vetoed the chip. The Cyberspace Administration of China, the country's main internet regulator, is investigating this GPU thoroughly because it suspects it could incorporate a backdoor that would be difficult for Chinese experts to locate. If that were the case, China could hardly use this chip.

The direct consequence of this unfavorable scenario is that during the last quarter Nvidia has not sold a single H20 GPU in China, as Shaun Rein maintains, an expert on the Chinese economy and founder of the consultancy China Market Research Group (CMR), based in Shanghai. The statement is true, but it comes with a small caveat: for a good part of the last quarter Nvidia did not have the export license it needed to deliver this chip to its Chinese clients. It has it now, and it could have sold thousands of these GPUs during the last few weeks.

China has alternatives designed to compete with Nvidia's chips.
Despite the US government's efforts to prevent it, cutting-edge AI chips have continued to arrive in China. They have done so mainly through secondary markets and parallel import routes running through India, Malaysia or Singapore, where the US's reach is very limited.
In addition, the developers of large AI models with CUDA-based projects have found the right place to get these GPUs: the international second-hand market.

Cambricon Technologies is one of the AI GPU design companies with the greatest growth potential.
In any case, China already has three very clear alternatives to Nvidia. Although it is not as well known as Huawei or Moore Threads, Cambricon Technologies is one of the companies specialized in designing GPUs for AI with the greatest growth potential. In fact, it has received approval from the Shanghai stock exchange to raise $560 million, which it will allocate to the design of four chips for AI training and inference, and also to the development of an alternative to CUDA.

Moore Threads, for its part, has developed several GPUs for AI applications that, on paper, rival some of the advanced solutions that Nvidia, AMD or Huawei have placed on the market. The MTT S4000 and MTT S3000 cards are its most interesting proposals right now, although, curiously, its portfolio also includes the MTT S80, a card for gaming and content creation that, according to Moore Threads itself, delivers 14.4 TFLOPS in single-precision floating-point operations.

The other indispensable player in the Chinese AI chip industry is Huawei. Its most ambitious proposal right now is the Ascend 910D chip, which seeks to beat the performance of Nvidia's H100 GPU. The company has also recently presented its Ascend 920 chip, a solution clearly destined to fill the gap the Nvidia H20 GPU is going to leave in the Chinese market. It will enter large-scale production during the second half of 2025 using 6 nm integration technology that Huawei has presumably developed hand in hand with SMIC (Semiconductor Manufacturing International Corp).
Image | Nvidia | Zhang Kaiyv
More information | Shaun Rein
In Xataka | The US gives Huawei a great opportunity: its new AI chip could take over Nvidia's market in China

China presumably already has its first fully homegrown 6 nm gaming GPU

In China there are dozens of companies designing GPUs for games and artificial intelligence (AI). The Chinese government has supported the proliferation of these firms with very juicy subsidies, in response to US sanctions. Huawei, MetaX, Biren Technology, Moore Threads, Innosilicon, Zhaoxin, Iluvatar CoreX, Deglinai or Vast AI Tech are some of the most important, but there are more. Many more. And among all of them, one has just attracted the semiconductor industry's attention: Lisuan Technology. This emerging company was born in 2021, in the same breeding ground triggered by the conflict between the US and China that has also given rise to other better-known graphics hardware companies, such as Moore Threads or Biren Technology. Although, as we have just seen, it is a very young firm, it is backed by veteran engineers who have spent a good part of their careers in the US, exactly as with Moore Threads or Biren Technology, which count former Nvidia employees in their ranks.

Lisuan has announced a GPU as powerful as the GeForce RTX 4060.
This week the company revealed something important on its WeChat account: it already has a gaming GPU as powerful as Nvidia's GeForce RTX 4060. That is what Lisuan says, so the most prudent thing is to take it with reservations; when this hardware finally becomes available we will check whether the claim holds up. But that is not all. The most relevant thing is that the company asserts that this GPU, which it has christened G100, is the first manufactured entirely in China using 6 nm integration technology.

SMIC is already manufacturing Huawei's Ascend 920 GPU on its 6 nm nodes.
Lisuan has not confirmed which Chinese semiconductor manufacturer is producing this chip, but in all likelihood it is SMIC (Semiconductor Manufacturing International Corp), the largest Chinese maker of integrated circuits.
SMIC is already manufacturing Huawei's Ascend 920 AI GPU on its 6 nm nodes, so it is perfectly credible that it will produce Lisuan Technology's chip on those same nodes. To manufacture 6 and 7 nm integrated circuits using the deep ultraviolet (DUV) photolithography equipment of the Dutch company ASML, SMIC resorts to a technique known as multiple patterning. This strategy consists of transferring the pattern to the wafer in several passes in order to increase the resolution of the lithographic process. It works, but it is also the reason the yield per wafer is clearly improvable.

In any case, Lisuan has revealed a few more details about its G100 GPU. Apparently this graphics processor will work alongside a generous pool of VRAM (it is only a conjecture, but it will possibly be 16 GB), will have moderate power consumption and will be compatible with the DirectX 12, Vulkan 1.3, OpenGL 4.6 and OpenCL 3.0 APIs, so it should be able to handle current video games without problems as long as its drivers are up to the task. Be that as it may, the first graphics cards equipped with this GPU will be available during the third quarter of this year, although large-scale manufacturing will probably not arrive until the beginning of 2026.

Image | Lisuan Technology
More information | Lisuan Technology
In Xataka | We can forget about an AI without hallucinations for now. Nvidia's CEO explains why

Huawei plans to overtake Nvidia in China. It has a new GPU for AI that, in theory, is extremely powerful

Huawei is going all in to absorb as much share of the Chinese GPU market for artificial intelligence (AI) as it can. The entry into force of the latest US sanctions package is, in all likelihood, compromising Nvidia's leadership in China. The US Department of Commerce has imposed restrictions on the export of the H20 GPU to the country led by Xi Jinping, which in practice means that this chip will presumably not reach Nvidia's Chinese clients. Nvidia has announced that the ban will leave a $5.5 billion hole in its accounts due to commitments linked to H20 orders it will no longer be able to fulfill. Some of the Chinese companies that have bought the H20 chip from Nvidia in large quantities, and that presumably planned to keep doing so, are Tencent, Alibaba and ByteDance; in the current situation they will have to turn to an alternative. And Huawei has served one up on a tray.

The Ascend 910D GPU aspires to snatch the performance leadership from Nvidia.
Huawei reacted immediately to the US sanctions. Just a few hours after the new Department of Commerce regulation came into force, it presented its Ascend 920 chip, a solution clearly destined to fill the gap the Nvidia H20 GPU is going to leave in the Chinese market. It will enter large-scale production during the second half of 2025 using 6 nm integration technology that Huawei has presumably developed hand in hand with SMIC.

Until now Huawei wanted its hardware to dominate AI inference.
However, this is not the only asset Huawei has to increase its market share both in China and beyond its home country. According to Reuters, the company is preparing to start the testing and validation phase of a new GPU for AI: the Ascend 910D chip.
Unlike the Ascend 920 GPU which, as we have seen, presumably aspires to compete with Nvidia's H20 chip, the Ascend 910D seeks to beat the performance of Nvidia's H100. If this move is confirmed, and a priori the information is reliable, it will be clear that Huawei has chosen to fight Nvidia in every segment of the AI hardware market. Until now this Chinese company wanted its hardware to dominate AI inference, not the training of models, as Georgios Zacharopoulos, a senior AI researcher working on inference acceleration at Huawei's lab in Zurich (Switzerland), points out in this statement: "Training is important, but it only happens a few times. Huawei focuses mainly on inference, which will ultimately give us access to more customers." Inference is, broadly speaking, the computational process language models carry out to generate the answers to the requests they receive. In any case, the information we have indicates that the Ascend 910D GPU will allow Huawei to compete with Nvidia's most advanced chips both in inference and in training.

Image | Huawei
More information | Reuters
In Xataka | Quietly, China has begun to lift some tariffs on US products. Its concern: chips

Nvidia will be able to keep selling its H20 GPU for AI in China, although it has cost it a dinner at $1 million per diner

The H20 GPU for artificial intelligence (AI) applications is Nvidia's salvation in China. Since the sanctions package formalized by the Joe Biden administration came into force on November 16, 2023, it has been the only AI solution that Jensen Huang's company can sell to its Chinese clients. And, moreover, over the last few months it has been a real success. This chip is, on paper, far less capable than the most sophisticated GPUs Nvidia currently sells; in fact, that is precisely why the US Department of Commerce has allowed its sale in China in recent months. Its limitations initially invited us to assume that its reception in this Asian country would be lukewarm, but that has not been the case at all. According to semiconductor business intelligence, since this chip reached the Chinese market in mid-2024 its sales have grown 50% quarter over quarter, which positions it as Nvidia's most successful product today. By contrast, sales of the more powerful H100 GPU "only" grow 25% quarter over quarter. Despite everything, since the end of 2024 the H20 chip has been under the scrutiny of the US Department of Commerce.

Jensen Huang has pulled off one of his most unlikely successes.
Gina Raimondo, Secretary of Commerce during Joe Biden's term, issued this warning to Nvidia in December 2023: "If you redesign a chip so that it can be used for AI, we will control it the very next day." The direct allusion to Jensen Huang's company is evident. And the return to the US government of Donald Trump and his collaborators, far from calming things down, promised to put an end once and for all to the sale of the H20 GPU in China.

Nvidia's business in China has suffered over the last two years as a result of US sanctions.
For Nvidia this restriction would be a very hard blow.
Its business in this Asian country has suffered over the last two years as a result of the sanctions imposed by the US, and losing the GPU that has sustained the company in China in recent months would leave a wound that is difficult to heal. These are the circumstances in which Jensen Huang has met with Donald Trump, and he has done so with a firm purpose: to make sure the government lets Nvidia keep selling its H20 chip in China. The Department of Commerce, which under Trump's mandate is led by Howard Lutnick, intended, according to several leaks, to prevent this GPU from continuing to reach Xi Jinping's country as early as this week. However, against all forecasts, the US administration has suspended, at least temporarily, its export ban on this chip.

This outcome is surprising, but the circumstances in which Jensen Huang has achieved his goal are even more so. The Nvidia CEO handled this negotiation directly with Donald Trump over a dinner at the restaurant of the Mar-a-Lago resort in Palm Beach (Florida), which is owned by the latter. It has emerged that Huang and the other diners paid a million dollars each to attend. But it has paid off for Huang: Nvidia can keep selling its H20 GPU in China, at least for the moment. Although, yes, he has pledged to invest more money in data centers in the US.

Image | Nvidia
More information | NPR
In Xataka | The standoff between Nvidia and the US administration grows more virulent. The B20 GPUs are in danger

The B300 GPU is Nvidia's new beast for AI. And we already know what it is preparing for 2026 and 2027

Jensen Huang, Nvidia's co-founder and CEO, has not missed the opportunity to unveil the next GPUs for artificial intelligence (AI) his engineers have readied, in the framework of GTC 2025 (GPU Technology Conference). The most spectacular thing this electrical engineer has presented is the DGX B300 platform. This hardware is Nvidia's most powerful for generative AI, although according to the company it is also its most efficient proposal from an energy standpoint. The Blackwell Ultra GPUs on the B300 platform work alongside 2.3 TB of HBM3E memory, delivering, according to Nvidia, 72 PFLOPS in FP8-precision training and no less than 144 PFLOPS in FP4-precision inference. These figures are a real monstrosity: the B300 platform is 11 times faster in inference and 4 times faster in training than its predecessor, the B200.

This is the hardware with which Nvidia wants to keep its leadership.
If we look at the consumption figures announced by Nvidia, the energy efficiency of the B200 and B300 platforms is apparently similar: the first consumes approximately 14.3 kW at peak, and the second 14 kW. However, there is something we should not overlook: the GPUs of both solutions are built on the Blackwell microarchitecture, but they are not the same. The Blackwell Ultra chips of the B300 platform are more powerful than the plain Blackwell chips of the B200.

The B300 platform integrates 50% more memory, allowing it to handle larger AI models.
In addition, the B300 platform integrates 50% more memory, which in theory allows this hardware to handle larger models with more parameters. It will reach the first data centers during the second half of 2025. In any case, Nvidia has not only talked about its current hardware in this edition of its AI conference; it has also previewed what its engineers are working on for 2026 and 2027.
The microarchitecture that will replace Blackwell is known as Rubin and, as expected, it will be even more powerful than its predecessor. An interesting detail is that Rubin will be compatible with Blackwell at the infrastructure level, which will let Nvidia's customers combine both solutions. In any case, Rubin will deliver 1.2 exaflops in FP8-precision training, compared with 0.36 exaflops for the B300 platform, and it will arrive during the second half of 2026. Then, during the second half of 2027, Nvidia will launch Rubin Ultra, a revision that according to the company will reach 5 exaflops in FP8-precision training tasks, so its performance in this scenario will be around four times greater than Rubin's. One last interesting note: Rubin will use HBM4 memory, while Rubin Ultra will have HBM4E.

Image | Nvidia
More information | Nvidia
In Xataka | AI is already our best ally for solving mathematical problems that seem impossible
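As a quick sanity check on those roadmap figures, here is a minimal Python sketch comparing the FP8 training throughput Nvidia quotes for each platform. These are peak vendor figures, not measured benchmarks:

```python
# FP8 training throughput quoted by Nvidia, in exaflops (peak vendor figures)
FP8_TRAINING_EFLOPS = {
    "B300 (Blackwell Ultra)": 0.36,
    "Rubin": 1.2,
    "Rubin Ultra": 5.0,
}

def speedup(newer: str, older: str) -> float:
    """Ratio of quoted peak FP8 training throughput between two platforms."""
    return FP8_TRAINING_EFLOPS[newer] / FP8_TRAINING_EFLOPS[older]
```

With these numbers, `speedup("Rubin", "B300 (Blackwell Ultra)")` comes out at roughly 3.3x, and `speedup("Rubin Ultra", "Rubin")` at about 4.2x, which matches the generation-over-generation claims in the text.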

The Singapore government has revealed which companies are involved in the shipment of Nvidia GPUs to DeepSeek

DeepSeek is still the artificial intelligence (AI) sensation of the moment, five weeks after its irruption. And the debate about the hardware this Chinese company used to train its AI model remains on the table. High-Flyer, its parent company, is a quantitative hedge fund specialized in algorithmic trading; put simply, it uses advanced mathematical models and computational algorithms to make investment decisions with the greatest possible guarantee of success. DeepSeek was born as a High-Flyer side project to take advantage of its computing resources and get a foot in the door of the AI industry. Its creators say that they used only 2,048 Nvidia H800 chips to train their model. However, some analysts argue that its infrastructure actually brings together 50,000 H100 GPUs bought through intermediaries. And this is the problem: High-Flyer could legally buy H800 chips before the US sanctions of November 16, 2023 came into force, but the H100 GPUs should not be in its possession.

Singapore is the entry point into China for Nvidia's most advanced chips.
The US government has suspected for many months that Chinese AI companies and research centers acquire the most advanced Nvidia GPUs through intermediary companies in Singapore and Malaysia. This possibility is no longer just a hypothesis: the Singapore government has confirmed that it has identified those responsible for diverting to China, and presumably toward DeepSeek's parent company, servers containing the high-performance GPUs produced by Nvidia.

The US now has the opportunity to tighten its noose around China a little more.
This information was revealed last week by the television channel Channel News Asia, and today Singapore's Minister for Home Affairs and Law, K. Shanmugam, has confirmed it.
Interestingly, he did not specify which GPUs these machines incorporate, but he did make public a very important fact: the names of the companies that manufactured the servers. And they are two very important Nvidia customers: Dell Technologies and Super Micro Computer. If it is finally confirmed that DeepSeek, or any other Chinese AI company, is getting Nvidia's cutting-edge GPUs by acquiring these companies' servers in Singapore or Malaysia, the US will have the opportunity to tighten its noose a little more. However, that would not prove Dell and Super Micro guilty, only indirectly involved in the trafficking of Nvidia chips. This circumstance would put on the table the need to track more precisely where their servers end up, something that, on the other hand, is not easy. Be that as it may, the Singapore government says it is willing to collaborate with its American counterpart to end the illegal traffic in cutting-edge GPUs.

Image | Nvidia
More information | Reuters
In Xataka | We can forget about an AI without hallucinations for now. Nvidia's CEO explains why

OpenAI is finishing the design of its own GPU for AI. And we already know what agreement it has reached with TSMC

Sam Altman and the rest of OpenAI's leadership are determined to stop using Nvidia's artificial intelligence (AI) GPUs in the medium term. We have known it with certainty since January 2024: on that date Altman began a journey in search of investors with the necessary muscle to help his company develop its own AI chip. And, apparently, he had a good reason that goes beyond reducing his dependence on Nvidia hardware. Just a few weeks earlier, in December 2023, Pat Gelsinger, the former CEO of Intel, declared that the AI industry is determined to leave CUDA (Compute Unified Device Architecture) behind. This technology brings together the compiler and the development tools programmers use to write software for Nvidia GPUs, and replacing it in projects already underway is a problem. "The entire industry is determined to eliminate the CUDA market (...) We see it as a shallow, small moat, so we are motivated to propose a broader set of technologies to address training, innovation and data science," Gelsinger argued during the "AI Everywhere" event held in New York. He also asserted that Google and OpenAI are two of the companies with great weight in the AI industry that want to leave CUDA behind.

TSMC is the ideal ally for OpenAI.
We are not the ones saying it; it is evident Sam Altman believes it, judging by the steps he has taken over the last few months. A little over a year ago he began talks with TSMC, the largest semiconductor manufacturer on the planet with a market share close to 60%. Altman needed to explore the possibility of this Taiwanese company manufacturing his GPU for AI. After all, TSMC produces the chips designed by Nvidia or AMD, among other companies, for this use case.

TSMC will manufacture the AI chips designed by OpenAI on its 3 nm node.
That negotiation has evidently concluded successfully.
TSMC will produce the GPUs for AI designed by OpenAI. But that is not the only thing we know. According to SCMP, these chips will be manufactured on TSMC's 3 nm node, currently its most advanced integration technology (in 2025 it will begin large-scale production of integrated circuits on its 2 nm node). Moreover, OpenAI has already started the final design stage of its own GPU for AI, according to Reuters. This last piece of information fits with the date on which the company led by Sam Altman presumably began the project. The two outlets I have just mentioned report that over the coming months OpenAI will send the preliminary design of its GPU to TSMC in order to start the first validation and production tests; this project phase is known in English as tape-out. For the moment neither OpenAI nor TSMC have made official statements about the progress of their collaboration, but all the information we have just gone through is consistent enough to take it as good. After all, according to these sources, the Taiwanese chipmaker will begin large-scale production of the OpenAI GPU in 2026.

Image | TSMC
More information | SCMP | Reuters
In Xataka | Some researchers claim to have created an AI as good as those of OpenAI and DeepSeek for $50. And the data is real
