Elon Musk says AGI will arrive in 2026. He said the same thing last year

Artificial general intelligence, or AGI, is the great goal that AI gurus keep invoking: Sam Altman, Mark Zuckerberg and his superintelligence team, and of course Elon Musk. The problem is that they are already beginning to repeat themselves, and the whole thing increasingly sounds like one huge déjà vu.

"No credit today; come back tomorrow." You have surely come across that classic sign in one of those old-school bars or shops. AGI is starting to sound exactly the same. Gizmodo reports that Elon Musk has set a date for the arrival of the long-awaited AGI: 2026. He recently said that Grok 5, due to launch next year, had a 10% chance of achieving AGI, and now he seems to be upping the ante. During an xAI meeting, Musk said he is confident that the company's ability to scale its computing power will help it achieve AI that surpasses human intelligence. All well and good, except that in 2024 he said AGI would arrive in 2025.

The hype and the calm. What has happened with Musk is further proof of the disconnect between the discourse of AI's "salesmen" and the experts who actually build AI. Altman, Musk and Zuckerberg start from the idea that the more AI scales (that is, the more money is invested), the sooner AGI will arrive. Hence the exorbitant investment in data centers, some the size of entire cities. On the other side we have AI researchers and developers, whose message is far more sober. Yann LeCun, considered one of the godfathers of AI, recently said that the path to AGI runs not through language models but through world models. Research points in the same direction: we recently discussed how language is not the same as intelligence, so the current path looks more like a dead end. Andrej Karpathy, co-founder of OpenAI, has also weighed in: in his opinion AGI will arrive, but it will take at least another decade.

Musk's other predictions.
According to Business Insider, in the meeting with xAI employees Musk also talked about building data centers in space, an idea several companies are flirting with in view of the energy problem. Musk tied this infrastructure to his plans to colonize Mars and suggested that Tesla's Optimus robots could be the operators of these facilities.

He hasn't always been so optimistic. In 2017 he issued a warning: "It is urgent to regulate artificial intelligence before it becomes a danger to humanity." Maybe 2017 sounds like a long time ago, but we don't have to go back that far. In 2023 he signed a letter, along with other figures from the tech world, calling on AI labs to pause model development for at least six months because of the imminent danger of humanity being replaced. Today he believes AGI is imminent and argues that AI will do everything for us and that "working will be optional." Musk's discourse on AI has taken a radical turn, especially now that he owns an AI company. Funny how things change.

Image | Gage Skidmore (Flickr)

In Xataka | Implanting a chip in your hand to perform magic tricks sounded spectacular. Until you forgot your password

OpenAI has turned the global economy into Russian roulette with a single bullet: AGI

2025 is the year in which OpenAI has ceased to be a technology company and become a black hole that attracts capital, expectations and the fate of companies that move billions, with a "b". Sam Altman has designed a scenario with only two possible outcomes: AGI for them, or collapse for everyone.

Why it matters. OpenAI's valuation has reached $500 billion as an unlisted company. It has moved more than a trillion dollars (yes, with a "t") in deals in recent weeks. Those figures only make sense if it achieves AGI (artificial general intelligence). If not, everything blows up.

The big picture. A year ago, a $6.6 billion round seemed like an astronomical figure. Nine months later, $40 billion. Now we are talking about $100 billion with NVIDIA. And counting. When we reach these magnitudes (and they keep repeating) we stop talking about simple capital injections and start talking about binary bets on the future of the world economy. The problem is that these figures have dragged other giants to the same precipice.

The backdrop. Microsoft was the first to get hooked. Then it considered divorce, and since then the two are still together, but sleeping in separate beds. Furthermore, OpenAI has achieved something more dangerous: chaining itself to Oracle, AMD and above all NVIDIA, the most valuable listed company on the planet. If OpenAI so much as clears its throat, NVIDIA's knees tremble. And if NVIDIA falls, it drags down the S&P 500. The domino effect would reach pension funds, corporate spending and US GDP. And from there, a chain reaction through the rest of the world economy.

Behind the scenes. NVIDIA is not only funding OpenAI; it is also guaranteeing part of the debt the startup needs to build its own data centers. It is circular money: NVIDIA sends money in exchange for shares. OpenAI uses it to rent chips from NVIDIA. And those contracts allow NVIDIA to take on more debt to keep financing OpenAI.
A loop that only works as long as the music keeps playing. When the Titanic began to sink, the orchestra's musicians were forced to keep playing.

Yes, but. AI already works. It is already transforming sectors. Nobody doubts it. It doesn't need to be AGI to have value. The problem is that OpenAI does need AGI to justify these insane valuations. It has set up a structure in which any slowdown, any sign of doubt, will trigger panic.

The money trail. Altman has found in Masayoshi Son the perfect partner. The SoftBank founder has a history of big bets blowing up and of miraculous saves (Alibaba, ARM). The Altman-Masa combination is a capital cannon pointed skyward. But it is also a detonator: if they fail, the explosion will be proportional to the ambition. According to Altman's analysis, OpenAI has to beat Google before the latter's TPUs hit the market and change the rules of the game. Hence the rush. Hence Atlas. Hence the agreements with Broadcom, the conversations with Intel, the promises to AMD. It's not just about building the best AI; it's about surviving until you get there.

The big question. What if some macroeconomic event stops everything before superintelligence arrives? OpenAI is racing against the clock: it needs AGI before the economy trips over its own shadow. Meanwhile, the market rewards these alliances with instant gains. Oracle has multiplied its value just by announcing agreements with OpenAI. The capitalism of expectations: profits are no longer needed, only promises of a future that does not yet exist. The same happens to others, because OpenAI is the new King Midas.

Decisive moment. This is no longer a bubble that can burst. It is a bet that can fail. And the difference matters: a failed bet drags down everything around it. OpenAI is already too big to fail without causing a cataclysm, which makes an Intel-style state bailout likely if things go wrong. Altman knows that many AI companies will disappear when the euphoria ends.
Only the largest will survive. OpenAI is playing at being so big that it has to be rescued. It already happened with the dotcoms. It can happen again. OpenAI has forced a binary scenario: either it achieves AGI or we face a brutal recession. AI works, transforms, improves processes. But that is no longer enough. Trillions in created value are needed. And if they don't arrive in time, the collapse will be swift. And ugly.

In Xataka | AI is giving a second youth to unexpected actors: the old guard of enterprise software

Featured image | OpenAI, Alexander Gray

OpenAI has become the “Fast Food” of AI. And that means that for Sam Altman the business is attention, not AGI

It was an open secret that OpenAI was going to launch its browser, so yesterday's launch of the Atlas browser didn't take us much by surprise. What matters is that the company does not stop constantly releasing products and services. The pace is the most extraordinary we have seen in recent years, and the obvious question is: what is OpenAI pursuing with this strategy?

OpenAI is the world's great AI-product churn machine. In recent weeks we have seen OpenAI launch one new AI service or product after another, flooding the market. And that's not counting the recently announced agreements with NVIDIA, AMD and Broadcom, which make it clear that the pace of OpenAI announcements is absolutely dizzying: too many new things, too often. Why?

The hype race as a business priority. That extraordinary flurry of releases suggests that OpenAI's big corporate priority is not so much the vaunted pursuit of AGI as dominating the conversation and, above all, the attention economy. What OpenAI wants is for us to be constantly talking about it, and the truth is that these launches are not exactly small: all of them represent notable changes in its ecosystem and in the technology industry itself.

Smokescreen. Such a frenzy also acts as a strategic smokescreen. With this bombardment of releases (browser, applications, SDKs, improved models), Altman and his team not only generate more hype; they saturate the competitive space. Rivals barely have time to assimilate or replicate one feature before the next has already been announced.

Toward an operating system. The launch of Atlas is an especially significant move. With it, it seems clear that OpenAI no longer wants to be a simple layer, the engine of AI, but a complete operating environment in the style of WeChat or the App Store. In fact, it wants to be the Windows of AI. Either it works out, or it is going to be the mother of all bubbles.
Expectations attract new users (and investors). These constant moves also generate new expectations, even if only temporarily. OpenAI has managed to partly conquer the attention economy with launches such as Studio Ghibli-style images or, more recently, Sora. This has allowed it to attract millions more free users, which the company then tries to convert into paying ones. Not only that: its growth also helps attract investors eager to participate in the company's multimillion-dollar funding rounds.

And what about AGI? While all these launches take place, the holy grail of AI, achieving artificial general intelligence (AGI), seems to take a back seat. It is as if that discourse had become an empty mantra, or a long-term goal that is not credible in the middle of this chaos. Altman has managed to replace the philosophical conversation (the one prompted by the hypothetical arrival of AGI) with a consumer conversation.

The fast food of AI. The AI ecosystem OpenAI is creating has adopted a consumption pattern similar to the one we experience on social networks: fast and ephemeral, driven by the latest viral hit. The Studio Ghibli-style visuals were exciting for a couple of weeks, and the same happened with Sora 2, but that "wow" effect fades quickly. What does OpenAI do to revive the hype? Launch a new product. Atlas is the latest example.

Seeking a de facto monopoly. With all these moves, OpenAI keeps attracting more and more users, dominating the conversation and gaining attention. That may not get it what it really needs right now (revenue), but it solidifies its position as the absolute benchmark and helps it toward what it is really after: the de facto monopoly of AI.

Image | Mariia Shalabaieva

In Xataka | ChatGPT will let you have erotic conversations. Welcome to emotional intimacy with an AI

AI companies tell us that they want to achieve AGI. What they are really conquering is the attention economy

Sora 2 is already here, and with it a tool that will let anyone create impressive AI-generated videos. The problem is that it will deepen the problem of "AI slop" (AI-generated garbage): the internet is on track to become an even more gigantic content farm for the attention economy.

What's happening. In recent days we have seen Meta launch Vibes and OpenAI launch Sora 2. Both applications are actually platforms on which to discover, create and share AI-generated videos. Both democratize access to generating video content easily through their AI models: we no longer need to be experts in Premiere or DaVinci, or dedicate hours to scripting, recording, editing and publishing, because AI makes everything much easier. That doesn't mean this content will be better: there will just be more of it. A lot more. Sora's website greets us with an "Explore" section through which we can scroll vertically, seeing content designed for one thing: keeping us trapped in doomscrolling.

Trapped in doomscrolling. The result of these initiatives can be seen quickly on the Sora website, in the iOS application (for now, US-only) or in the Meta AI app and Vibes. Here we have infinite content whose production cost for the user is zero euros (for one video; if you want to create many, things change and you will have to pay for a subscription) and just a few seconds. The incentive to keep creating videos is huge, because the promise is that "anyone can create viral videos."

But. What these tools propose is to turn virality into a formula. They offer simple, direct emotional templates that appeal to our attention. We have filters of nostalgia, indignation and tenderness on demand, for example, and the risk is the death of authenticity: everything seems designed by an algorithm. Quantity (and immediacy), not quality.
These platforms (and those yet to come) are like gigantic automated content farms, designed for user retention, not for providing value. Content is the new slot machine, and "AI slop" exploits our cognitive biases better than ever. It is synthetic dopamine on an industrial scale.

More creators = more serfs for the platforms. Facebook built its empire by convincing users to create content for it. Suddenly we were creators and consumers, and social networks seized on that goldmine and set us to publishing reflections and links and sharing photos and videos of all kinds. For some that wasn't enough of a draw, but being able to do it all with AI may attract new audiences (creators?). The user is, more than ever, the unpaid worker and the product. And these platforms want to attract the maximum possible number of users for the same reason Facebook did it 20 years ago: to monetize that attention.

Will we end up believing nothing? In that deluge of content, AI pollution promises to be so overwhelming that it will be harder and harder to distinguish the real from the generated. A couple of years ago it was merely difficult with images, but today it is almost impossible. There is an obvious risk that goes beyond deepfakes and possible fraud and scams: the threat is that we will not even believe that real images and videos are indeed real.

The AI slop era has begun. "AI slop" is AI-generated content that is technically impressive but probably lacks meaning, authenticity or purpose beyond grabbing users' immediate attention. We have already seen how these tools are used to generate comments, texts and images, but video is even more powerful.

OpenAI wants to kill TikTok.
In fact, with Sora 2 and its new website and app, the objective, indirect or not, is to take down TikTok, lord and master of the short-video format. That users can now generate content starring themselves, doing all kinds of impossible things thanks to AI, is an extraordinary draw for many traditional TikTok users. "Cameos" are a possibly addictive, brilliant product for achieving that goal. Before we upload a video from which to create cameos or remixes, OpenAI warns us of the implications. Is that enough to stop us from creating them?

But. Of course, there are important security and privacy risks. Fraud and scams with increasingly credible deepfakes will be difficult to detect, and one also has to ask where privacy ends if a platform ends up holding our images, videos and audio, which can be edited and remixed if we allow it. OpenAI offers configuration options, parental controls and warnings when uploading content, but it is not clear whether that will be enough.

In Xataka | Differentiating AI content on the internet is increasingly difficult. The solution involves something similar to watermarks

The US pursues AGI as if it were the Holy Grail. China is more pragmatic and is applying AI to planting tomatoes

The long AI race continues its course, and although it seemed the United States had taken the lead, China has managed to recover lost ground and stand up to Big Tech. The funny thing is that the approaches of the two countries are totally different, and that creates big winners and losers here, in both the short and the long term.

The US goes for AGI. The North American country has a very different strategy from China's when it comes to artificial intelligence. Its large technology companies are investing billions of dollars in pursuit of that holy grail called AGI (artificial general intelligence).

China, more pragmatic. On the other side, China has adopted a different, much more pragmatic strategy. Rather than pursuing grand objectives that a priori are far from being achieved, the Chinese government, led by Xi Jinping, is prioritizing the development of practical AI applications that are above all efficient, with limited and, if possible, low-cost deployment.

Promises, promises. The difference between the two visions is huge, and it highlights the mentality with which each country approaches its efforts. US companies believe AGI is close, even though some experts are clear that generative AI is not the way.

The Manhattan Project of AI. The visionary theorists chasing AGI don't seem to care, because according to them AGI could confer a decisive military advantage. For certain political sectors in the US, developing an AGI is comparable to what the Manhattan Project and the construction of the atomic bomb were during World War II. But as some experts explain, that project was not three years of work; it rested on studies and research that had been running for three decades in a US that at the time thought long-term.

China wants to be useful today. That way of viewing the AI race contrasts with China's.
Its leader, Xi Jinping, has shown no special interest in AGI, and his approach is much more pragmatic: he seeks to focus AI on applications with practical purposes. That has led AI models developed in China to already be put to work on everyday tasks.

Practical applications. For example, The Wall Street Journal points to the grading of high-school entrance exams, the improvement of weather forecasts, and assistance to agriculture with methods to optimize crop rotation. It should be noted that the US also applies AI in these areas, at least in the form of projects such as Google's Weather Lab or AlphaFold 3 for drug development with AI.

Chinese government support. Although both countries make efforts in that practical direction, the difference is that in China there is very strong government backing. Beijing is investing heavily in that vision, with an $8.4 billion investment fund to support new startups, and both local governments and Chinese state banks have launched their own investment programs.

And open models. Another key difference between the two strategies is the closed, proprietary nature of the models from the large US companies versus the open, open-source vision of the Chinese models. These open models can be downloaded and freely modified, which also reduces the cost of deploying the technology for companies that want to adapt it to their needs.

The trade war conditions everything. It is also true that the trade restrictions imposed by the US constrain the development of AI chips and software in China. That has led the Asian giant to adopt a curious tactic: let the US assume the enormous cost of exploring new paths for developing AI, and then follow in its footsteps as quickly as possible without having to shoulder those huge investments.

Risk aversion.
Although Xi Jinping might eventually embrace a strategy of pursuing AGI, experts say he will only do so when he sees sufficient guarantees of success. Kendra Schaefer, of the China-focused consultancy Trivium, explained that the Communist Party does not want to be threatened by an AGI that could condition its future. According to her, the Chinese government is "one of the most risk-averse governments on the planet."

Featured image | Xataka with Midjourney

In Xataka | China has declared war on private tutoring: why it banned the prolific "cram schools"

Google's new AI generates interactive worlds from a prompt. DeepMind believes it is a step toward AGI

The Google DeepMind team has announced its new AI model for generating interactive worlds. At the end of last year we were surprised by what Genie 2 could do, and the new version is an important leap, one that for Google is an advance toward the creation of artificial general intelligence, or AGI, the kind that could match the abilities of the best humans.

Genie 3. It is DeepMind's new world model. It can create interactive worlds that we can explore, all from a text prompt. The previous model was very limited and could only be used for a few seconds, but with Genie 3 DeepMind promises it can be explored for "several minutes." In addition, the resolution has improved to 720p at 24 fps. The model builds on Genie 2 and Veo 3.

It has memory. This is the new model's most important improvement. The world is generated by the AI as we explore it, but if we turn around and look at something we had already seen, it remains the same. We can also change something, such as painting on a wall, and it stays as we left it the whole time. This did not happen in previous versions, and its creators say they did not explicitly program it to do that. As explained in a TechCrunch article, Genie 3 is able to remember what it has already generated and feed that back in; in this way it learns how the world and its physics work.

Interactive. DeepMind also emphasizes that events can be added with additional prompts. In its article, DeepMind shows several interactive examples, such as a meadow in which we can choose whether a tractor, a bear, a horse or hot-air balloons will appear. They call these "promptable world events," and they also let you change aspects such as the weather.

Why it matters. World models are useful in different scenarios, such as the creation of environments for real-time games, in education, or in the training of AI agents.
Google presents it on its blog as a key step toward AGI, that superior artificial intelligence so many companies are racing to achieve. These worlds can serve as a training ground for other AIs, including robots, cases in which simulating real scenarios is a challenge. In the presentation, the DeepMind team explained how they placed an agent in a scene simulating a warehouse and asked it to approach certain elements, such as a green garbage bin. It succeeded in every test; according to the DeepMind team, "the fact that (the agent) is able to achieve this is because Genie 3 remains coherent."

The competition. The biggest AI competition, at least at the level of end-user products, is in chatbots and, to a lesser extent, in video and audio generators. World models are less well known to the public, and there is not much competition yet. NVIDIA presented Cosmos at the beginning of the year, and some companies such as World Labs offer similar proposals. We would like to end this text with a link so you can try it, but Genie 3 is only available in beta to a very limited group of academics.

Image | DeepMind

In Xataka | Some researchers created a company where all the employees were AI agents. They didn't complete a quarter of the work

Microsoft and OpenAI's AGI clause

Microsoft is losing its edge in AI to its own partner. OpenAI, the company in which it invested $13.75 billion, now negotiates from a position of strength to weaken the most lucrative relationship in the technology sector. It is a marriage of convenience in which both parties have growing misgivings about each other.

Why it matters. A clause buried in the contract between the two companies could dynamite the alliance. If OpenAI declares it has reached artificial general intelligence (AGI), Microsoft would lose access to all future models. It would be anchored to obsolete technology while its partner conquered the market with superior tools.

The big picture. What began as a marriage of convenience has become a battle for control: Microsoft needs OpenAI's technology to compete with Google and Amazon. OpenAI needs Microsoft's servers to train its models. But each company now seeks to reduce its dependence on the other while renegotiating a relationship valued in the hundreds of billions.

In detail. The AGI clause works like a three-part time bomb. First, OpenAI's board can unilaterally declare that AGI has been reached. Second, there is an economic threshold: if OpenAI shows its models can generate more than $100 billion in profits, it can cut off Microsoft's access. Third, Microsoft is prohibited from developing AGI on its own until 2030. When the agreement was signed, Microsoft thought AGI would take decades to arrive. Sam Altman, OpenAI's CEO, has been shortening the deadlines. It sounds as if Altman were moving the goalposts, ready to call a merely better model AGI in order to get rid of Microsoft.

What has happened. The problems began in November 2023, when OpenAI's board fired Altman without warning Microsoft. Although he was reinstated days later, Microsoft lost confidence in its partner. In March 2024, Microsoft hired Mustafa Suleyman, co-founder of DeepMind, to lead its internal AI division.
OpenAI responded by diversifying its cloud suppliers with agreements with Oracle and Google.

The figures. OpenAI doubled its annualized revenue in 2024, from $5.5 billion to $10 billion. This year it is already at $12 billion. But the company continues to lose money: in 2024 it registered losses of almost $5 billion. Microsoft, meanwhile, generates $75 billion annually with Azure, and an important part of that comes from OpenAI.

The context. OpenAI wants to restructure its business model to eliminate the cap on profits and allow investors and employees to hold direct shares. Microsoft must approve this change, which gives it the negotiating power to review the entire relationship.

Behind the scenes. Tensions have escalated beyond the financial. Some senior OpenAI researchers resist handing over their developments to Microsoft, despite the company's contractual rights until 2030. OpenAI has begun offering its services directly to business clients, short-circuiting Microsoft as an intermediary.

Yes, but. Both companies remain deeply interconnected. Microsoft has built its entire AI ecosystem (Copilot, Azure OpenAI Service) around OpenAI's technology. ChatGPT has 900 million downloads to Copilot's 100 million, making OpenAI the real winner of the consumer market. By a landslide.

And now what. Negotiations are advancing toward an agreement that could be completed by the end of summer. Microsoft seeks to keep access to OpenAI's technology even after AGI and to obtain a 30-35% stake in the restructured company. OpenAI wants the freedom to choose cloud suppliers and to offer services through AWS and Google Cloud.

Turning point. If they don't reach an agreement, Microsoft could hold on to the current contract until 2030, but it would run the risk of losing access to the most important advances in AI. OpenAI, meanwhile, needs Microsoft's approval to complete its restructuring and access $40 billion in new financing.
The big question is whether one of the most powerful alliances in technology can survive when both partners have become rivals. The answer will also determine who controls much of the future of AI.

Featured image | Xataka

In Xataka | All AI companies promise that AGI will arrive very soon. The problem is that ChatGPT is not the way

All AI companies promise that AGI will arrive very soon. The problem is that ChatGPT is not the way

In December 2022, ChatGPT left us all speechless. However, two and a half years later we have a problem: it does not seem that, after all this time, it can go much further. It has improved, yes, but meanwhile we are drifting away from the great promise of AI, which is none other than going beyond this and reaching what is known as artificial general intelligence. And it seems clear that this path, ChatGPT's path, is not the one that will get us there.

Promises, promises. A few months ago Sam Altman called the president of the United States, Donald Trump, and told him that AGI would arrive before the end of his term. It is a message he has been repeating for months, although earlier he spoke of "a few thousand days." Dario Amodei, CEO of Anthropic, believes it could arrive sooner, in 2026. Elon Musk, who promised we would have a fully autonomous Tesla in 2016, agreed and pointed to 2026 as the year we will have AGI.

They are all hype-optimists for one simple reason. Money. Like Altman, all those who champion the rise of AI and the imminent arrival of AGI do so in order to raise more and more money for their companies. We know that developing, training and running AI models costs a true fortune, and yet progress in this field seems to be slowing.

Doubts about scaling. Many believe that the current strategy of scaling up models (throwing more GPUs and more training data at them) no longer pays off as it once did. The latest versions of the great foundational models do beat their predecessors, yes, but not in a striking way. It is as if we had hit a ceiling.

This is not the way. For months now, expert voices have been making it clear that other solutions must be sought. Nick Frosst, a protégé of Geoffrey Hinton and founder of Cohere, is clear that current technology is not enough to reach AGI.
What generative AI does is "predict the next most likely word," but that is very different from the way humans think.

LeCun believes it will take a long time to achieve AGI. Respected personalities in the world of AI, such as Yann LeCun, head of Meta's AI division, are clear: models like ChatGPT will not be able to match human intelligence. He also maintains that achieving human-level AI will take a long time: nothing like the "few thousand days" Altman spoke of.

And Sutskever agrees. This OpenAI co-founder is likewise skeptical about the potential of generative AI, which according to him is barely improving. His new startup, Safe Superintelligence, aims to create a superintelligence with "nuclear-grade" safety, although for now there have been no details about the strategy it is following to achieve it. It is certainly not the one he followed when he helped create ChatGPT. A recent survey of an academic association of experts in this field found the same: three quarters of respondents do not believe current methods will end up producing an AGI.

Generative AI is not a miracle. As The New York Times points out, what chatbots like ChatGPT and other developments in this field do is one thing very well, "but they are not necessarily better than humans at others." There is a certain temptation to think of these chatbots as something magical, but "these systems are not a miracle. They are very impressive gadgets."

ChatGPT does not challenge what it knows. Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, is clear that generative AI is very good, but far from taking us to AGI. What we have, he explained a few weeks ago, is like "a country full of yes-men." ChatGPT does not challenge us, but neither does it challenge what it knows.
"We need a system that is able to ask itself questions that nobody had thought of, or that nobody had dared to ask," he said. Many challenges ahead. Among the differences between AI and human intelligence is that the latter is tied to the physical world: part of our intelligence is knowing when to flip the toast, for example. Advances in robotics and sensors may help solve such problems, but this is a good example of how many challenges remain before we reach a general artificial intelligence that is supposed to match (or surpass) human intelligence in every discipline. What about the AIs that reason? Generative AI companies have found a small respite in the reasoning modes of their chatbots. This is a notable advance that allows the AI to answer more precisely and in more detail by "thinking through" its answers and following a "reasoning" process that tries to imitate the human one. However, this does not seem to take us toward AGI either; these reasoning modes are more a way of trying to make the answers somewhat better and of reducing chatbot "hallucinations." Despite everything, ChatGPT and its rivals keep making mistakes in this mode and all the others. Alternatives. Some possibilities appear on the horizon. One combines the current neural-network approach with symbolic, rule-based systems, which could bring capabilities such as deductive reasoning or the handling of abstract knowledge to current models. Work is also underway on training models in physically accurate virtual environments, and on so-called meta-learning systems, which make it possible to train new neural networks quickly with a limited data set. But companies need products to sell us. These new research avenues are there, but the problem …
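The claim that generative AI just "predicts the next most likely word" can be illustrated with a toy sketch. This is nothing like a real large language model (which uses a neural network over huge corpora, not raw counts); the tiny corpus and the function names here are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration: count which word follows which in a tiny corpus,
# then always emit the statistically most frequent continuation.
corpus = "the cat sat on the mat and the cat ate the fish".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # how often `nxt` followed `prev`

def most_likely_next(word):
    # Greedy choice: the single most frequent follower seen in training.
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # "cat": it followed "the" most often
print(most_likely_next("sat"))  # "on"
```

The point the critics make is visible even here: the predictor can be fluent within its statistics, but it has no model of cats or mats, only of which word tends to come next.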
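The neural-plus-symbolic combination mentioned above can be sketched, very loosely, as a deterministic rule layer handling exact deductive steps while a statistical layer serves as fallback. Everything here is hypothetical scaffolding: the stubbed "statistical" function stands in for a language model, and the single arithmetic rule stands in for a real symbolic reasoner:

```python
import re

def statistical_guess(question):
    # Stand-in for a language model: fluent, but unreliable on exact facts.
    return "Probably around 56?"

def symbolic_solver(question):
    # Symbolic rule: if the question is plain multiplication, compute it exactly.
    match = re.fullmatch(r"What is (\d+) \* (\d+)\?", question)
    if match:
        a, b = map(int, match.groups())
        return str(a * b)
    return None  # rule does not apply; defer to the statistical layer

def answer(question):
    exact = symbolic_solver(question)
    return exact if exact is not None else statistical_guess(question)

print(answer("What is 7 * 8?"))   # deterministic rule fires: "56"
print(answer("Tell me a story"))  # falls back to the statistical guess
```

The design intuition is the one the researchers describe: rules contribute guaranteed deduction where they apply, and the statistical model covers everything the rules cannot express.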

While OpenAI insists on being the new Google, DeepSeek says it has higher goals: AGI

Sam Altman has been very clear about his way of seeing things at OpenAI from the beginning. He has always been in a hurry to launch his company's AI models, market them, and chase investment rounds to sustain that frantic pace of spending. The funny thing is that one of his greatest rivals, DeepSeek, is pursuing the exact opposite strategy. Revenue is growing. As they point out in the Financial Times, the company led by Liang Wenfeng has become one of the favorites of the Chinese tech scene. Its paid services, consisting of the API for DeepSeek V3 and R1, have worked very well and last month allowed revenue to cover operating costs for the first time. But DeepSeek isn't after money. Industry experts point out in the FT that Liang has no intention of seizing the moment to maximize financial returns by marketing more and more of its (efficient) products. No investment. Nor does it seek investors to back its goals, and in fact there is talk of how difficult it is even to speak with the company's founder. An investor from a multi-billion-dollar fund in China noted: "we leveraged high-level government connections and only managed to sit down with someone from their finance department, who told us they were sorry but they were not seeking investment rounds." All in on AGI. Instead, the company is focused on developing its current models and on pursuing artificial general intelligence (AGI). It is the same challenge its rivals pursue, but it is not clear what path each one is following to reach systems with abilities superior even to human intelligence in all areas. Totally different from OpenAI. DeepSeek's approach is the opposite of OpenAI's, which from the first moment leveraged ChatGPT to build a commercial business around that chatbot. Then came the successive (and colossal) investment rounds.
The rumored SoftBank round, estimated at $40 billion, would push its valuation to $260 billion. A small startup by comparison. The Financial Times also notes that DeepSeek has 160 employees, which contrasts with OpenAI's more than 2,000. That has let others like Alibaba or Tencent win over business clients such as Apple, which will use Qwen (from Alibaba) as an option for iPhone users in China. And with a "small" infrastructure. According to sources close to the company, Liang bought 10,000 NVIDIA H800 GPUs and 10,000 A100s in recent years, before access to them was banned. It is not an excessively high amount if we compare it, for example, with what Elon Musk acquired for xAI, and both chips are somewhat less powerful than the H100 and, of course, than the modern B200. DeepSeek R2 and V4 are on their way. The new versions of its AI models are in full development. They were expected to launch in May, but the company may decide to accelerate that launch and make them available to the general public even earlier to capitalize on its momentum. Image | Solen Feyissa. In Xataka | DeepSeek, in the spotlight of European regulators: Italy and Ireland act on privacy concerns

There is no clear or agreed plan for reaching AGI

The so-called Stargate Project is going to give us plenty to talk about in the coming years. This colossal bet to make the US a leader in AI will focus on building data centers in the country. And yet the proposal faces enormous challenges. Money galore. The investment of $500 billion over the next four years is simply overwhelming. That figure represents approximately 30% of Spain's GDP in 2023 ($1.62 trillion), and it certainly represents spectacular backing for the country's ambitions. Objective: achieve AGI. In the official announcement, project participants explained how "all of us hope to continue developing AI, and AGI in particular, for the benefit of all humanity." Artificial general intelligence (AGI) is the holy grail of the discipline, and OpenAI, which will have operational responsibility for Stargate, has been pursuing it for some time. The diffuse meaning of AGI. The problem with this objective is that it is very diffuse. To clarify matters a little, OpenAI and Microsoft chose to define it in economic terms, indicating that a system will count as AGI when it generates $100 billion in profits. Theoretically, these systems will equal or surpass human intelligence in all fields, and the social and economic implications could be colossal. But we don't know if we will achieve it. Far more important than defining that artificial superintelligence is actually achieving it, and here there is a critical problem: nobody knows how. Technology companies and AI startups, such as the one created by Ilya Sutskever or that of François Chollet, are following different paths toward that goal, but it is not at all clear that any of them holds the key to an achievement of this kind. And we don't know how they want to get there.
None of the companies working on the development of an AGI clarify how they plan to reach that objective, and the feeling is that they are experimenting without knowing very well whether the chosen path will get them there. Meta made its intention clear a year ago, OpenAI and especially Altman are particularly optimistic about it, and the same goes for Musk and xAI. Mustafa Suleyman, head of AI at Microsoft, is more cautious and prefers not to predict when we will reach it, although he sees it as feasible. Anthropic, Apple and Google seem equally reserved on the issue, but it is inevitable to think that they too are working so as not to be left behind. Hyperinvestment for hyperpromises. This gigantic investment is somewhat contradictory, especially when several experts warn of a certain AI slowdown: scaling (more compute and more data to train models) no longer seems to work, or at least not as well as before. There are certainly promising trends, such as AI agents or the models that "reason," but is building beastly data centers really what we need? Is brute force enough? All this also implies equally enormous energy requirements, and it will be interesting to see how the US resolves those new needs. But that large investment will let companies keep talking about how close AGI is, when we have no idea whether it is close or simply is not and never will be. An American AGI with Japanese and Arab money. Especially curious is that the whole project aims to turn the US into an AI leader and to develop an AGI, yet the money comes partly from other countries. SoftBank, led by Masayoshi Son (on the right in the image), will be the main initial backer and will immediately invest $100 billion; it is Japanese.
And MGX is an investment fund from the United Arab Emirates that participated in OpenAI's recent investment round (as did SoftBank) and which also has an alliance with Microsoft. That means these companies (and perhaps their countries) will certainly have a prominent role in the project and its potential benefits. Image | Wikimedia. In Xataka | OpenAI presents its new for-profit structure. Its mission: raise tons of money to develop AGI
