A quarter of a century ago, a student wired together 32 GeForce graphics cards to play Quake III. CUDA was born from that experiment

In the year 2000, Ian Buck wanted to do something that seemed impossible: play Quake III at 8K resolution. Young Buck was studying computer science at Stanford, specializing in computer graphics, when a crazy idea occurred to him: chain together 32 GeForce graphics cards and render Quake III across eight strategically placed projectors. "That," he explained years later, "was beautiful."

Buck told that story in 'The Thinking Machine', the book Stephen Witt published in 2025 tracing the history of NVIDIA. One of the fundamental threads of that story is the origin of CUDA, the architecture that AI developers have turned into a gem and that has propelled the company to become the most valuable in the world by market capitalization. And it all started with Quake III.

The GPU as a home supercomputer

In 2006 the GeForce 8800 GTS (and its higher-end version, the GTX) began the CUDA era.

That, of course, was just a fun experiment, but for Buck it was a revelation: he discovered that perhaps specialized graphics chips (GPUs) could do more than draw triangles and render Quake frames. To find out, he delved into the technical details of NVIDIA's graphics processors and began researching their possibilities as part of his Stanford PhD. He gathered a small group of researchers and, with a grant from DARPA (the Defense Advanced Research Projects Agency), began working on an open-source programming language he called Brook. That language allowed something amazing: it turned graphics cards into home supercomputers. Buck demonstrated that GPUs, theoretically dedicated to graphics work, could solve general-purpose problems, and do so by exploiting the parallelism those chips offer. Thus, while one part of the chip was shading triangle A, another was already rasterizing triangle B and another writing triangle C to memory.
It wasn't exactly the same as today's data parallelism, but it still offered amazing computing power, far superior to any CPU of the time. That specialized language ended up becoming a paper titled 'Brook for GPUs: stream computing on graphics hardware'. Suddenly parallel computing was available to anyone, and although the project barely received public attention, one person knew it was important: Jensen Huang.

Shortly after the study was published, NVIDIA's founder met with Buck and signed him on the spot. He realized that this capability of graphics processors could and should be exploited, and began dedicating more and more resources to it.

CUDA is born

When Silicon Graphics collapsed in 2005 – squeezed in part by an NVIDIA that had become unbeatable in workstations – many of its employees ended up working for the company. In fact, 1,200 of them went directly to the R&D division, and one of that division's big projects was precisely to push this capability of the cards forward.

John Nickolls / Ian Buck.

As soon as he arrived at NVIDIA, Ian Buck began working with John Nickolls, who before joining the firm had tried – unsuccessfully – to get ahead of the future with his bet on parallel computing. That attempt failed, but together with Buck and a few other engineers he launched a project to which NVIDIA gave a somewhat confusing name: Compute Unified Device Architecture. CUDA was born.

Work on CUDA progressed rapidly, and NVIDIA released the first version of the technology in November 2006. The software was free, but it was only compatible with NVIDIA hardware. And, as often happens with revolutions, CUDA took a while to gel. In 2007 the platform was downloaded 13,000 times: the hundreds of millions of NVIDIA graphics users only wanted their cards for gaming, and it stayed that way for a long time.
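To give a sense of what that programming model looks like in practice, here is a minimal sketch (not taken from Brook or any NVIDIA material, just an illustrative example) of the data-parallel style CUDA exposes: one thread per array element, with the hardware scheduling thousands of them at once.

```cuda
#include <cstdio>

// Each thread handles exactly one element: the data-parallel model CUDA exposes.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the sketch short; real code often manages
    // host/device copies explicitly with cudaMemcpy.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();               // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

A million additions are expressed as a single kernel launch; the GPU decides how to spread the work across its cores, which is the leap Brook first demonstrated.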
Programming to take advantage of CUDA was difficult, and those early years were very hard for the project, which consumed a lot of talent and money at NVIDIA without producing any real returns. In fact, the first uses of CUDA had nothing to do with artificial intelligence, because AI was barely talked about at the time. Those who took advantage of the technology were scientific departments, and only years later would the revolution it could trigger take shape.

A late (but deserved) success

In fact, Buck himself pointed this out in a 2012 interview with Tom's Hardware. When the interviewer asked what future uses he saw for the GPGPU technology CUDA offered, he gave some examples. He talked about companies using CUDA to design next-generation clothes or cars, but he added something important: "In the future, we will continue to see opportunities in personal media, such as sorting and searching photos based on image content, i.e. faces, location, etc., which is a very computationally intensive operation."

Buck knew what he was talking about, although he did not imagine that this would be the beginning of the true CUDA revolution. In 2012, two young doctoral students named Alex Krizhevsky and Ilya Sutskever developed a project under the guidance of their supervisor, Geoffrey Hinton. That project was none other than AlexNet, the software that made it possible to classify images automatically, something that until then had been an intractable challenge because of the computing it required. These academics trained a neural network with NVIDIA graphics cards and CUDA software. Suddenly AI and CUDA were starting to make sense together. The rest, as they say, is history.

A single man wrote a quarter of the entire Encyclopédie

Writing requires, above all, patience and perseverance. Facing a blank page or screen is, on many occasions, a fight against physical and mental fatigue, and many give up before their time. So when it comes to writing "a lot", in great quantity over long stretches of time, the list shrinks. There are notable cases, like Dickens, one of the most prolific authors of the 19th century, or Asimov, with more than 500 books and thousands of letters. However, none compares to the story of the man who wrote much of the Encyclopédie alone.

Louis de Jaucourt. Born in Paris in 1704 into a Protestant noble family, Jaucourt showed from a young age a deep inclination for knowledge, which led him to study theology in Geneva, physics and mathematics in Cambridge, and medicine in Leiden. Besides mastering five modern languages, he had advanced knowledge of Latin, Greek and numerous disciplines, from literature to the exact sciences, a reflection of the encyclopedic spirit of the Enlightenment in which he lived. If history remembers him for anything, though, it is for his contribution to a titanic work that was beginning to take shape among the French elites: the Encyclopédie.

First came the Enlightenment. We are talking about one of the most ambitious intellectual projects of the 18th century, created at a very special moment of cultural and philosophical effervescence in Europe known as the Enlightenment. Its aim was to free knowledge from the restrictions imposed by religion and absolutist monarchy, promoting the use of reason as the way to understand the world and improve society. In France in particular, this intellectual impulse gained great strength, confronting the authoritarianism of Louis XV's monarchy and the influence of the clergy, who saw enlightened ideas as a threat to their power.
In this context, intellectuals such as Voltaire, Rousseau and Montesquieu challenged traditional beliefs and promoted the critical thinking that would lay the foundations of the Encyclopédie.

Creation and development. Also known as the Dictionnaire raisonné des sciences, des arts et des métiers, the megaproject began in 1751 under the direction of Denis Diderot and Jean le Rond d'Alembert, always with the aim of compiling all human knowledge in an accessible work. Inspired by Ephraim Chambers's Cyclopaedia, the Encyclopédie was initially planned as a simple translation, but it soon evolved into an original and far more ambitious project. Across its 35 volumes, the work compiled more than 70,000 articles and 3,000 illustrations, ranging from the natural sciences and the arts to philosophy and artisanal techniques (a novelty at the time). Diderot and d'Alembert were supported by 146 collaborators, including prominent Enlightenment thinkers, who worked on compiling and reviewing articles across disciplines. And, above all, by one man: Jaucourt.

A quarter. Louis de Jaucourt, fervent contributor to the Encyclopédie, wrote no fewer than 17,200 articles, around a quarter of the work's total, and he did so, crucially, producing up to eight a day without receiving any financial compensation. Extensively trained and resourceful, he dedicated much of his life to the project, even selling properties to finance it. He wrote about everything, covering topics such as democracy, freedom, equality and science. Jaucourt's dedication was such that Diderot affectionately dubbed him the "slave of the Encyclopédie", given his commitment to a work in which he invested decades and much of his assets. A single man, in short, who helped expand the scope of the work and guarantee its success.

Extra ball. A fact to put the man's titanic work in context.
Before the Encyclopédie, he dedicated 20 years of his life to writing another gigantic work, a medical treatise in six volumes (and in Latin). After two decades of work, he traveled to Amsterdam to have it printed beyond the reach of French censorship. Bad luck meant that the ship sank with the complete work, the only copy in existence, aboard. A tragic event that, far from discouraging him, seems to have left him wanting more.

The legacy. The Encyclopédie was a revolutionary work that, in addition to disseminating knowledge, promoted equal and accessible education. Its most notable contribution was its inclusive approach to knowledge, encompassing both academic subjects and practical know-how, reflecting the spirit of the Enlightenment by erasing the barriers between elitist knowledge and applied or "useful" knowledge. This approach inspired future encyclopedic works and left a deep mark on modern philosophy and education. The Encyclopédie also encouraged the questioning of absolute power and intellectual emancipation, and it is considered one of the fundamental pillars of Enlightenment thought, influencing later movements like the French Revolution. In short, it was an entire political and social manifesto that challenged the power structures and religion of its time, and it had in a single man the ability to bring together a quarter of humanity's knowledge. That he did it while living modestly and selling off part of his assets makes it even more extraordinary.

Image | PXHere

China has not bought a single Nvidia H20 GPU in the last quarter

Nvidia's future in China grows gloomier by the day. In early October 2024, the Chinese administration sent the country's artificial intelligence (AI) companies a recommendation asking them to use chips produced in China as much as possible. Ten months later, that recommendation has turned into a requirement: the Chinese government is now forcing state-owned data centers across the country to source at least 50% of the integrated circuits in their servers from Chinese suppliers.

Nvidia, meanwhile, has obtained the export license it needs to sell its H20 AI GPU in China, but the Chinese government has vetoed the chip. The Cyberspace Administration of China, the country's main Internet regulator, is investigating this GPU thoroughly because it suspects it could incorporate a backdoor that would be hard for Chinese experts to locate. If that suspicion were confirmed, it would rule out any possibility of China using the chip.

The direct consequence of this unfavorable scenario for Nvidia is that during the last quarter it did not sell a single H20 GPU in China, as Shaun Rein maintains, an expert on the Chinese economy and founder of the Shanghai-based consultancy China Market Research Group (CMR). The statement is true, but it comes with a small catch: for a good part of the last quarter Nvidia did not have the export license it needed to deliver this chip to its Chinese clients. It has it now, and it could have sold thousands of these GPUs in recent weeks.

China has alternatives designed to compete with Nvidia's chips

Despite the US government's efforts to prevent it, cutting-edge AI chips have kept arriving in China, mainly through secondary markets and parallel import routes running through India, Malaysia or Singapore, where US enforcement capacity is very limited.
In addition, developers of large AI models whose projects depend on CUDA have found the right place to get these GPUs: the international second-hand market.

In any case, China already has three very clear alternatives to Nvidia. Although it is not as well known as Huawei or Moore Threads, Cambricon Technologies is one of the specialized AI GPU design companies with the greatest growth potential. In fact, it has received approval from the Shanghai Stock Exchange to raise 560 million dollars, which it will allocate to the design of four chips for AI model training and inference, and also to the development of an alternative to CUDA.

Moore Threads, for its part, has developed several AI GPUs that, on paper, rival some of the advanced solutions Nvidia, AMD or Huawei have placed on the market. The MTT S4000 and MTT S3000 cards are its most interesting proposals right now, although, curiously, its portfolio also includes the MTT S80, a card for gaming and content creation that, according to Moore Threads itself, delivers 14.4 TFLOPS of single-precision floating-point compute.

The other indispensable actor in the Chinese AI chip industry is Huawei. Its most ambitious proposal right now is the Ascend 910D chip, which seeks to surpass the performance of NVIDIA's H100 GPU. The company has also recently presented its Ascend 920 chip, a solution clearly destined to fill the gap the NVIDIA H20 GPU will leave in the Chinese market. This chip will enter large-scale production during the second half of 2025 using 6 nm integration technology presumably developed hand in hand by Huawei and SMIC (Semiconductor Manufacturing International Corp).
Image | Nvidia | Zhang Kaiyv

More information | Shaun Rein

Some researchers created a company where all the employees were AI agents. They couldn't complete even a quarter of the work

With generative AI already showing signs of slowing down, the next great leap is visible on the horizon: AI agents. Unlike chatbots, an AI agent can be given a complex task and will act independently, making decisions on the fly to achieve its goal. Everything pointed to 2025 being the year of AI agents, and to put that to the test, some researchers ran a curious experiment: they put several of these agents to work in a fictitious company. It didn't go very well.

A fictitious company. The study was conducted by Carnegie Mellon University researchers and sought to measure the effectiveness of AI agents. They created an environment that pretended to be a small software development company, which they baptized TheAgentCompany. The company had 18 employees and a plan of objectives for the quarterly sprint. It also had plenty of internal documentation, such as an employee manual, human resources policies and a best practices guide. Employees communicated with each other through a Slack-style chat program.

The staff. The AI agents put to work at TheAgentCompany included models from Google, OpenAI, Meta and Anthropic. They were assigned roles such as financial analyst, project manager or software engineer. A chief technology officer and a human resources manager were also created, whom each agent could contact if needed. Among the tasks they had to perform were writing code, searching the Internet, opening programs and organizing data in spreadsheets. Pretty typical for a company of these characteristics.

The problems. The agents started working, and at first everything went well, but problems and misunderstandings soon appeared. One agent needed to access some information, but a popup appeared on the screen and blocked it from view.
Although it could have closed the popup by clicking the X in the upper right corner, it instead asked human resources for help and was told that the IT department would soon contact it. No one ever did, and the task was never completed. The agents also developed a curious behavior when the steps to follow were unclear: sometimes they cheated, inventing shortcuts to skip the hard part of a task. For example, one agent could not find the person it was supposed to ask a question. Its solution was to rename another user to the name of the person it needed to ask.

The results. The employee-of-the-month medal went to Anthropic and its Claude 3.5 Sonnet model. But even as the best performer it only managed to complete 24% of the tasks assigned to it. Gemini 2.0 Flash and ChatGPT completed only 10% of their tasks, and the worst employee was Amazon's Nova Pro v1, with 1.7% of tasks completed. The most common failures stemmed from a lack of social skills and a poor ability to search the Internet.

The threat of AI agents. According to the latest World Economic Forum report, AI will destroy more than 90 million jobs in the next five years (although it is also expected to create almost twice as many new positions), and AI agents pose a threat to many jobs. However, experiments like this one show that the technology is not yet ready to replace a human employee outright. AI agents currently make many mistakes and, as with Tesla's Autopilot, for now it is better not to take your hands off the wheel.

Image | Gemini
