NVIDIA and OpenAI’s relationship is disintegrating

We have to talk. It's not you, it's me. Our love is broken. That seems to be what's happening between NVIDIA and OpenAI, who just four months ago were living an idyllic moment. The former announced a mammoth $100 billion investment in the latter, and everything suggested we might be witnessing the birth of a new great technology empire. It was the most ambitious marriage in the history of tech, but that marriage is now failing.

A decade of love. It was August 2016, and everyone knew NVIDIA but almost no one knew OpenAI. Jensen Huang, NVIDIA's CEO, saw clearly that the company had potential, so he gave Elon Musk a DGX-1 server, NVIDIA's first "desktop supercomputer" for AI. OpenAI went on to use increasingly advanced NVIDIA GPUs for its work, and with the explosion of ChatGPT in 2022, OpenAI became one of the largest buyers of NVIDIA GPUs, while NVIDIA in turn bought shares in OpenAI. The quid pro quo had begun.

August 2016. The idyll began when Jensen Huang handed a DGX-1 to Elon Musk, then a co-founder and still a member of OpenAI's board of directors.

Where I said one thing… In September 2025, NVIDIA announced a "strategic investment" of up to $100 billion in OpenAI. It was one more gigantic case of circular financing that apparently made these two companies stronger and everyone else weaker. For a few days, however, there has been talk that the announcement is unraveling: according to The Wall Street Journal, the agreement is frozen. The Journal reports that, per Huang, the deal was never binding, and that he privately criticized OpenAI for a lack of discipline in its business strategy.

…now I say another. At a meeting with journalists in Taipei on Saturday, Huang indicated that NVIDIA will "absolutely be involved" in the new funding round OpenAI is carrying out.
In fact, he assured that "we will invest a large amount of money, probably the largest investment we have ever made," but when asked whether that investment would exceed $100 billion, he said "No, no, nothing like that." Furthermore, as shown in the video in the embedded tweet, he clarified that "we never said we were going to invest $100 billion in a single round" and stressed that "there was never a commitment." "We were invited to invest up to $100 billion and we were honored," he explained, adding that "we will consider each round of financing separately."

Narrative clash. Huang's statements prompted Sam Altman to quickly downplay the matter, saying that "we expect to be a gigantic customer (of NVIDIA) for a very long time" and adding "I don't know where all this madness is coming from." Still, the statements from both sides suggest differences of opinion and a latent tension around that hypothetical commitment, one that perhaps was not communicated or clarified adequately back in September.

OpenAI has its own complaints about NVIDIA. Reuters reports that OpenAI is "dissatisfied" with some of NVIDIA's AI chips: while they are great for model training tasks, preparing models before we use them, they are not so great for inference. OpenAI is said to be looking for alternatives for inference chips and is in talks with Cerebras and Groq to supply them. Bonus chapter: NVIDIA reached an agreement with Groq to license ("pseudo-acquire") the company's technology for $20 billion, which has blocked OpenAI's talks with Groq.

And it's looking for other girlfriends. Sam Altman doesn't hesitate when it comes to seeking alternatives in order to prosper. He did it when the relationship with Microsoft broke down, courting others like SoftBank, Oracle and NVIDIA itself.
But in reality he plays several sides, because he has become a shareholder in AMD, one of NVIDIA's biggest rivals. And there is more. A lot more.

Polyamory. Not to mention that Amazon is in talks with Sam Altman to close an investment of up to $50 billion in OpenAI. Or that Altman is also negotiating with SoftBank for an additional $30 billion from the Japanese company, which had already committed a $40 billion investment a year ago. The amounts are dizzying, but OpenAI handles them as if nothing were happening.

Dependencies and reverse lock-in. Typically, companies fear being locked into dependence on a vendor like NVIDIA. Here NVIDIA seems to be suffering the opposite: being trapped by a client (OpenAI). If NVIDIA invests $100 billion, it becomes too dependent on OpenAI's success. If Altman's company fails or changes course, the hole in NVIDIA's balance sheet would be catastrophic. It is mutual assured destruction.

Image | Hillel Steinberg | Village Global

In Xataka | The leaks are shaping OpenAI's physical device: headphones that sit behind the ears

OpenAI's obsession was to train its models like crazy. Now it's to run them faster than anyone else

OpenAI has signed an agreement estimated at more than $10 billion with Cerebras Systems, a startup that designs advanced AI chips dedicated to one thing: running AI models as fast as possible. It is a unique alliance, not only because of that change of focus, but because there is a conflict of interest.

What has happened. The firm led by Sam Altman has committed to purchasing 750 MW of computing capacity from Cerebras over the next three years. Sources cited by The Wall Street Journal put the value of the alliance at more than $10 billion. We are therefore looking at an operation extraordinary in size, but peculiar in form and substance.

What Cerebras does. The firm, based in Sunnyvale, California, was founded in 2015 by former engineers from SeaMicro, which AMD acquired in 2012. The startup designs artificial intelligence chips aimed specifically at the inference stage of AI models, that is, at executing them.

More tokens per second, please. When we use ChatGPT or any AI model, what we are watching is a model performing inference. Some "write" faster than others, and that speed of displaying text in responses is measured in tokens per second. NVIDIA chips are typically great for the training phase, but not so much for inference. Chips from companies like Cerebras, or from the well-known Groq, which NVIDIA has just "bought", are designed precisely to run those models at full speed and achieve very high tokens-per-second rates.

The AI is already good. Now it wants to be fast. NVIDIA's recent "purchase" of Groq makes it clear that Jensen Huang's company wanted to be able to offer those ultra-fast inference chips, and now OpenAI seems to want something very similar with its Cerebras deal.
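Tokens per second is a simple throughput metric: count the tokens in a response and divide by the elapsed time. A minimal sketch; the figures below are invented for illustration, not benchmarks of any real chip:

```python
# Tokens per second = tokens generated / elapsed seconds.
# The numbers below are illustrative, not real benchmarks.

def tokens_per_second(tokens_generated: int, elapsed_seconds: float) -> float:
    """Throughput of an inference run, the figure chatbot UIs display."""
    return tokens_generated / elapsed_seconds

# Hypothetical comparison: the same 600-token answer streamed by a
# general-purpose GPU vs. a dedicated inference chip.
gpu_tps = tokens_per_second(600, 12.0)             # 50 tok/s
inference_chip_tps = tokens_per_second(600, 0.4)   # 1500 tok/s
```

This is why inference-specialized silicon is marketed on tokens per second: halving the elapsed time for the same answer doubles the metric.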
AI models have already achieved remarkable performance in many scenarios, and although they are not perfect, companies now want them not only to work well but to work very, very fast, with responses, even long ones, appearing almost instantly.

OpenAI wants more computing power. This operation also serves another of Sam Altman's objectives: obtaining (and reserving) as much computing capacity as possible, in anticipation that demand for these AI models will keep growing in the coming months and years. According to the WSJ, OpenAI already has more than 900 million weekly users, and its executives have frequently said they still face computing capacity problems.

Cerebras grows. The agreement strengthens Cerebras' position in a market that clearly demands this type of solution. The firm is negotiating a $1 billion investment round that would bring its valuation to $22 billion, nearly tripling the current figure of around $8.1 billion. It has raised $1.8 billion in the past, according to PitchBook.

Conflict of interest. The agreement also draws attention for an important reason: Sam Altman, CEO of OpenAI, is also an investor in Cerebras (he appears at the bottom of this Cerebras website), and indeed his company at one point considered acquiring Cerebras, although that operation never came to fruition. We are therefore looking at a deal that theoretically benefits Altman on both sides, which is worrying.

How will OpenAI pay for this party? This new agreement once again triggers the debate about OpenAI's ability to meet its credit and debt obligations. In 2025 it generated about $13 billion in revenue, but that enormous amount remains minuscule next to the roughly $600 billion in contracts it has signed with Oracle, Microsoft and Amazon, money that will have to come from somewhere. Where from? It's a good question. We'll see if they can ever answer it.
In Xataka | The alliance between Oracle and OpenAI is not just about data centers: it is about overtaking Google, Apple and Microsoft on the right

OpenAI's biggest fear is not that the bubble will burst. It's that it bursts ahead of time

Sam Altman has admitted, in an internal memo published by The Information, that Google is catching up technologically with Gemini 3. That's a real problem for OpenAI, but it is not OpenAI's real concern. What OpenAI needs is for the party to last long enough to give it time to build its own infrastructure.

Why it matters. OpenAI plans to burn more than $100 billion in the coming years pursuing AGI. But it is completely dependent on Microsoft for servers, NVIDIA for chips, and external investors for financing. Google, on the other hand, already has its own TPUs and generates $70 billion a year in free cash flow thanks to Search, YouTube and Google Cloud. If the music stops early, one survives and the other doesn't.

The timing paradox. OpenAI faces a peculiar race against time. If investment in AI slows in 2026 or 2027, it will have spent tens of billions without completing its own infrastructure. It will remain tied to expensive suppliers. It will not be able to compete with Google on costs. Getting stuck halfway is the worst possible scenario. If, instead, the bubble lasts until 2030 or beyond, OpenAI will probably have reached the threshold of self-sufficiency: its own chips, its own data centers, economies of scale. It will be able to survive even when the investment tap is turned off. It's like building a bridge: it doesn't matter how much you've spent. If you only get halfway, it's of no use.

The absence of a moat. OpenAI cannot protect itself with a sustainable technological advantage. In AI there are no real defensive moats. Every time OpenAI or any other lab makes a breakthrough, the rest replicate it within months. The only sustainable advantage left to OpenAI is cost. If it controls its infrastructure, it can offer prices no one else can match. If it does not, it becomes a dispensable intermediary between the end customer and whoever does have the chips and servers.

The context of the memo.
The document published by The Information reveals that Altman anticipated turbulence after the launch of Gemini 3. Google's new model stands out precisely in the areas that generate the most revenue for OpenAI: web design automation and programming. Altman acknowledged to his team that "Google has been doing an excellent job lately" and warned that he expects "the environment to be tough for a while." But he urged them to stay focused on "achieving superintelligence", admitting this would mean being left "temporarily behind in the current regime".

The figures. OpenAI went from almost non-existent revenue in 2022 to projecting $13 billion this year, one of the fastest-growing businesses in history. But it plans to reach $200 billion in 2030. To achieve this, it will need to multiply its current revenue roughly fifteenfold in less than five years. Meanwhile, it plans to spend $90 billion on R&D alone through 2030. That represents 45% of its projected revenue. Large technology companies allocate between 15% and 30% of their gross profit to research, not of their total revenue. If OpenAI falls short of its revenue goal, that percentage will be even higher.

Yes, but. Google has structural advantages that are hard to overcome. It generates a huge cash flow thanks to consolidated and very profitable products. It can afford to burn money on AI for years without too much trouble. And it already has its own infrastructure after a decade developing TPUs. OpenAI, in contrast, lives off external funding. Its recent agreement with Oracle to design data center components in the United States is an attempt to build that self-sufficiency. Altman presented it as "a step to ensure that the core technologies of the AI era are built here."

At stake. OpenAI's technological advantage over rivals such as Google and Anthropic has narrowed.
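The revenue gap described above is easy to check with the figures the article gives: $13 billion today versus a $200 billion target for 2030, and $90 billion of planned R&D against that target:

```python
# Back-of-envelope check on the figures cited above (billions of USD).
revenue_2025 = 13
revenue_target_2030 = 200
rd_spend_through_2030 = 90

growth_multiple = revenue_target_2030 / revenue_2025    # ~15.4x in five years
rd_share = rd_spend_through_2030 / revenue_target_2030  # 0.45, the 45% cited
```

Note that the 45% R&D figure assumes OpenAI hits its 2030 target; at any lower revenue, the share is proportionally higher.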
Investors have sunk more than $60 billion into OpenAI, recently valuing it at $500 billion, betting that it will continue to dominate the market for AI that creates content and reasons like humans. That bet is faltering. Anthropic, founded four years ago by former OpenAI employees, is seeing its valuation skyrocket and aims to generate more revenue than its former home by selling AI to developers and companies. Its models specialize in generating computer code. ChatGPT is still far ahead of Gemini in usage and revenue, but the gap is narrowing.

Between the lines. Altman concluded his memo by acknowledging the pressure: "It sucks that we have to do so many hard things at the same time: the best research lab, the best AI infrastructure company, and the best AI platform/product company. But it's our destiny in life. And I wouldn't trade positions with any other company." The question is not whether OpenAI can technically compete with Google. It's whether it can hold on financially long enough to stop depending on others.

Featured image | Xataka

In Xataka | There is a generation working for free as documentarians of their own lives: they are not influencers, but they act as if they were.

Sam Altman doesn't take kindly to being asked about OpenAI's astronomical losses

OpenAI has a serious liquidity problem. It earns a lot, but its earnings are crumbs compared to what it needs to bring in. The numbers don't add up, but that hasn't stopped it from signing billion-dollar agreements. Brad Gerstner, an OpenAI investor and podcaster, asked Sam Altman about this problem, and it seems Altman wasn't amused.

Defensive. As recounted in Futurism: "How can a company with $13 billion in revenue commit to spending $1.4 trillion? You've heard the criticism, Sam," Brad Gerstner asked on his podcast, with Satya Nadella also present and listening intently to the exchange. Altman's response was to get defensive: "If you want to sell your shares, I will find a buyer for you. Enough is enough." The interviewer laughed it off, and Altman continued in a soft but clearly sarcastic tone: "There are many people who speak with great concern about our products and who would be happy to buy shares."

Figures. OpenAI recently achieved a $500 billion valuation, becoming the most valuable private company in the world. Not only is it the most valuable: it has signed agreements with some of the most important tech companies, such as NVIDIA, AMD, Broadcom and, just yesterday, Amazon. The tech industry has tied its destiny to OpenAI's. If it fails, the consequences could be catastrophic.

Losses. Brad Gerstner is not at all wrong to ask Altman about the mismatch between his company's expenses and profits. A few days ago, Microsoft presented its results and, given that it owns 27% of OpenAI, The Register calculated how much money Altman's company had lost in the last quarter. The figure is dizzying: $11.5 billion in just 90 days. It's something to be worried about.

For-profit. After months of rumors about an impending divorce, Microsoft and OpenAI finally signed a kind of separation of assets.
In parallel, OpenAI finally achieved its desired goal: becoming a for-profit company. This gives it more flexibility to collaborate with third parties and raise new investment rounds.

More wood. Despite the more than justified doubts about astronomical AI spending, the big technology companies announced a few days ago that they were going to spend even more than planned. Investors are worried; just ask Zuckerberg, who despite posting record revenue saw Meta's shares fall 8%.

A question of faith. Sam Altman shares that same optimism and, responding to Gerstner, stated that "revenues are growing rapidly (…) we are making a bet that they will continue to grow." Curiously, he gave no figures to back it up.

Image | TechCrunch, Flickr (License CC BY 2.0)

In Xataka | The world of AI has a problem: there is no energy for so many chips
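The Register's quarterly-loss estimate follows from equity-method arithmetic: Microsoft books its proportional share of OpenAI's losses, so dividing its reported hit by the 27% stake yields the implied total. The ~$3.1 billion input below is our reconstruction from the article's own figures (11.5 × 0.27), not a number stated in this text:

```python
# Equity-method back-of-envelope (figures in billions of USD).
# microsoft_booked_loss is reconstructed from the article's numbers
# (11.5 * 0.27 ≈ 3.1), not reported in this text.
microsoft_stake = 0.27
microsoft_booked_loss = 3.1

# Implied OpenAI loss for the quarter: ~11.5 billion, the figure cited.
implied_openai_quarterly_loss = microsoft_booked_loss / microsoft_stake
```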

Microsoft had a secret up its sleeve. Its new AI models for Copilot are the clearest statement against OpenAI's dominance

Since the generative AI fever broke out, Microsoft has relied on OpenAI models to power key functions in some of its most important products. That is no surprise if we remember that Redmond invested more than $10 billion in the startup led by Sam Altman. However, this alliance of convenience has been showing fissures for months and, as time goes by, the rivalry between the two parties becomes more evident. It was Microsoft itself that, a year ago, included OpenAI in its list of competitors, together with Amazon, Apple, Google and Meta. And it is OpenAI which, according to The Information, insists on not wanting to share its cutting-edge technology when AGI arrives, if it arrives. Even if they try to paper over it, the bond is no longer as solid as in its early days. And now there is another chapter underway.

Microsoft AI begins to show its own cards. Mustafa Suleyman, CEO of Microsoft AI, who took the position when the partnership with OpenAI had been consolidated for years, has just presented two internally developed models. They are advanced proposals that users can already try, and they reflect the company's ambition: "to create applied AI as a platform for products." One is a real novelty; the other we already knew. Let's look at the details.

MAI-1-Preview. This is the big novelty. It is a mixture-of-experts model, in the style of GPT-4o or GPT-5, designed to offer strong instruction-following and useful responses for everyday queries. According to Suleyman, it is the first model trained end to end in Microsoft AI's own labs. The most striking thing is that it can already be tested: just go to LMArena, select Direct Chat and choose MAI-1-Preview. In the coming weeks it will also arrive in Copilot, although only "for certain text use cases." It is paradoxical: the text functions of Microsoft's chatbot currently run on OpenAI technology, but they will now begin to coexist with Microsoft's own.
Developers can also request early access to the API.

MAI-Voice-1. This is a voice generation model that stands out for its expressiveness and naturalness. For some time it has powered functions such as Copilot Daily (news summaries) and Copilot Podcasts, although only in English. It is also available in Copilot Labs, where you can try different voices, styles and narration tones. One of its strengths is efficiency: it can generate a full minute of audio in less than a second using a single GPU. That makes it one of the fastest and most efficient voice systems in existence today. Microsoft argues that voice will be the interface of future AI assistants, and it wants to get ahead with a high-fidelity solution capable of responding in different scenarios. Could it have used OpenAI and the audio version of GPT-4o? Yes. Does it want to? Everything indicates not.

"Much more to come. We have great ambitions for what comes next: advances in the models, an exciting roadmap in compute capacity, and the opportunity to reach billions of people through Microsoft products. We are building an AI for everyone," Suleyman said on X.

It remains to be seen what course the relationship between Microsoft and OpenAI will take. What is clear for users is that there will be more variety and more tools to experiment with.

Images | OpenAI

In Xataka | Meta wants us to use AI when we don't know what to say on WhatsApp: this is how its new writing-assistance option works
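The headline efficiency claim, a minute of audio in under a second on one GPU, is usually expressed as a real-time factor: seconds of audio produced per second of compute. A quick check of the implied figure:

```python
# Real-time factor (RTF) = seconds of audio produced / seconds of compute.
# Microsoft's claim: >= 60 s of audio in < 1 s on a single GPU.

def real_time_factor(audio_seconds: float, compute_seconds: float) -> float:
    """How many times faster than real time the synthesis runs."""
    return audio_seconds / compute_seconds

claimed_rtf = real_time_factor(60.0, 1.0)  # at least 60x real time
```

An RTF of 1.0 means audio is generated exactly as fast as it plays; anything well above that leaves headroom to serve many simultaneous users from one GPU.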

OpenAI's plan to make more money with GPT-5 has blown up in its face: a complete U-turn

Enshittification doesn't only affect streaming. We are beginning to see signs that AI chatbots are no longer so generous to users, for one simple reason: they must be monetized. That is what OpenAI has just done with the launch of GPT-5, a model that promised to be easier to use and more powerful than ever, but has ended up going back to what worked with GPT-4.

The controversial router. When OpenAI launched GPT-5, it did so with a big novelty: pitching it as a single model that adapted automatically to each user's needs depending on the question asked. The router detected whether the question was more or less complicated and theoretically activated the most appropriate operating mode in each case, but there was a problem: it kept choosing the cheapest mode.

Reversal. People, especially heavy ChatGPT users, quickly criticized both that decision and the removal of older models such as GPT-4o. The mutiny had its effect: OpenAI has brought back the old models it had killed off (such as the aforementioned GPT-4o), although only for paying users, and it has enabled free users to choose their GPT-5 variant. It is a spectacular reversal, just when OpenAI had sold us that single, all-terrain model (and its router) as a differentiator.

Options are good. Altman himself explained that from now on ChatGPT users can choose between "Auto", "Fast" and "Thinking" when using GPT-5. "Most users will want Auto," he said, "but the additional control will be useful for some." He also noted that GPT-4o is available again for paying users and clarified that "if we ever deprecate it, we will give plenty of notice."

Less bland, more customizable. The OpenAI CEO also addressed another heavily criticized GPT-5 problem: it was too neutral. Too cold and robotic.
That could change very soon because, as he said, "we are working on an update to GPT-5's personality that should feel warmer than the current one, but not as annoying (to many users) as GPT-4o." OpenAI has understood something important: people love being able to customize everything they use… even if many never do.

A GPT-5 hypothesis. SemiAnalysis has a curious theory that would explain the way OpenAI launched GPT-5. According to them, the point of the model is not the model, but the router. This component was intended to monetize the service much more and convince free users to move to one of the paid subscriptions.

Altman already hinted at that approach. In fact, Sam Altman shared some revealing data on Sunday. The percentage of free users using the "reasoning" variant of GPT-5 had gone from less than 1% to 7%, while for ChatGPT Plus users that percentage went from 7% to 24%. That may imply that the basic model was not as good as users expected and they preferred to make it think, but there is another striking figure.

More subscriptions. According to SemiAnalysis, the router and the better behavior when the model "thinks" seem to have convinced many more users: subscriptions, they note, have multiplied by 3.5. The router may have drawn criticism from intensive ChatGPT users, but it also seems to be a key element in achieving something OpenAI needs like water: converting free users, about 700 million of them, into paying users.

Enshittification. OpenAI's tactic, if this is really it, is not new. Degrading the free service relative to the paid one usually pushes more users to pay (initial criticism aside). We saw it with Netflix: when it began closing shared accounts and introducing ads, everyone piled on and the service seemed to face a complicated future. Today Netflix is more of a benchmark than ever, and the enshittification of its service has worked perfectly.
OpenAI may want to copy that playbook. In Xataka | Sam Altman and Elon Musk hate each other publicly, so Altman has attacked where it hurts most: Neuralink
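OpenAI has not published how the GPT-5 router works; as a purely illustrative sketch, a prompt router boils down to classifying each query and dispatching it to a cheap "fast" variant or an expensive "thinking" variant. Every heuristic and threshold below is invented for illustration:

```python
# Toy prompt router, purely illustrative: OpenAI has not disclosed its
# router's design. It classifies a prompt by crude heuristics and picks
# a model variant; a real router would use a learned classifier.

HARD_MARKERS = ("prove", "step by step", "debug", "derive", "optimize")

def route(prompt: str) -> str:
    """Pick a GPT-5 variant name for a prompt (invented heuristic)."""
    text = prompt.lower()
    # Long prompts or prompts with reasoning markers go to the slow,
    # expensive "thinking" variant; everything else stays cheap.
    if len(text.split()) > 80 or any(m in text for m in HARD_MARKERS):
        return "thinking"
    return "fast"
```

The monetization angle SemiAnalysis describes lives in exactly this branch: tilting the threshold toward "fast" cuts serving costs for free users, while exposing "thinking" to paying users makes the subscription tangibly better.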

How to install OpenAI's new GPT-OSS models on your computer and have your own ChatGPT at home

OpenAI has announced new open-weight models that anyone can download and install on their computer: GPT-OSS. With them now out in the wild, it is an excellent opportunity to start tinkering with local models, that is, models executed on our own computer, so today we are going to show you how to install and use them.

Differences between the two models

Although their names are similar, GPT-OSS-120B and GPT-OSS-20B are not the same, nor do they have the same requirements. The first, GPT-OSS-120B, reaches near-parity with OpenAI's o4-mini and requires at least 60 GB of graphics memory.

Having your own ChatGPT at home is easy, but it requires hardware that's up to the task | Image: Xataka

Its little brother, GPT-OSS-20B, is somewhat less capable (similar to o3-mini, according to OpenAI), but can run on edge devices. In other words, it can run on your own computer as long as it has at least 16 GB of memory, preferably graphics memory. In summary:

GPT-OSS-120B: large model, needs at least 60 GB of VRAM or unified memory, not suitable for consumer computers.

GPT-OSS-20B: smaller model, needs 16 GB of VRAM or unified memory, suitable for consumer computers.

The one we are going to use, for obvious reasons, is GPT-OSS-20B.

Considerations to keep in mind

Running an AI like this locally is an intensive process that can, and surely will, slow your computer down considerably. Although you could run it with 16 GB of RAM, ideally your machine should have a high-end GPU. What happens if your computer has less than 16 GB of VRAM? The tool will fall back to system RAM, of which you need 16 GB or more; otherwise, it will not work properly. As a general recommendation, dedicate as many of your computer's resources as possible to running the model, so close everything that is not strictly necessary.
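The memory requirements above follow mostly from parameter count and numeric precision: weights dominate an LLM's footprint, at roughly (parameters × bits per parameter) / 8 bytes. A rough check of why the 20B model fits a 16 GB machine while a half-precision version would not. This is a simplification: GPT-OSS actually mixes precisions, which is why the real download (~12.8 GB) is larger than the uniform 4-bit lower bound:

```python
# Approximate weight size in GB: params (billions) * bits per param / 8.
# Simplified model: real checkpoints mix precisions and add metadata,
# so actual files are somewhat larger than this estimate.

def weight_size_gb(params_billions: float, bits_per_param: int) -> float:
    return params_billions * bits_per_param / 8

fp16_20b = weight_size_gb(20, 16)  # 40 GB: too big for consumer GPUs
q4_20b = weight_size_gb(20, 4)     # 10 GB: fits alongside overhead in 16 GB
```

The same arithmetic explains the 120B model's 60 GB requirement: even heavily quantized, its weights alone occupy tens of gigabytes.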
Install Ollama on your computer

Ollama installation | Image: Xataka

For this tutorial we will use a well-known application: Ollama. It is an open source platform that greatly simplifies the installation, access and use of LLMs (Large Language Models). Think of it as a model runner. ChatGPT is an online platform through which we interact with a model, such as GPT-4o; Ollama is the same thing, but at home and with the models we have installed on our computer. It is free, open source software available for Windows, Mac and Linux.

Download GPT-OSS

Once we have downloaded and installed the program, we are greeted by an interface like this. If you prefer, you can also use Ollama through the command-line interface, but the truth is that the graphical interface is much more pleasant.

Main interface of Ollama | Image: Xataka

If we look closely, we will see a drop-down in the lower right corner with the name of the model we are using or, rather, about to use.

Access to the different AI models from Ollama | Image: Xataka

Clicking on the drop-down gives access to a good handful of models, such as DeepSeek R1, Gemma or Qwen. In our case, we want to select "gpt-oss:20b".

Downloading the model; arm yourself with patience | Image: Xataka

Having selected "gpt-oss:20b", sending a message in the chat is enough to start the model download. Be patient: it weighs 12.8 GB and can take a while.

Talking to GPT-OSS-20B through Ollama | Image: Xataka

Once it is installed, you can start talking to the AI as if it were ChatGPT. Of course, if your GPU does not meet the minimum requirements, you will notice it is much slower than ChatGPT. Not surprising: you are running the model on your own computer, not in a macro data center full of NVIDIA's latest dedicated GPUs.

Another option: LM Studio

LM Studio | Image: Xataka

Ollama has the advantage of being intuitive, simple and direct.
If we want more options, a much more complete program is LM Studio. It is available for Windows, Linux and Mac and, like Ollama, can manage several models, gpt-oss:20b among them. It is a more advanced application that lets us fine-tune both our computer's behavior and the model's, although squeezing the most out of it requires more advanced knowledge.

Cover image | Xataka

In Xataka | How to move from image to video using artificial intelligence: 14 essential free tools
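Beyond the GUI, Ollama also serves a local REST API (default port 11434), so the same gpt-oss:20b model from the tutorial can be scripted. A minimal sketch using Ollama's documented /api/generate endpoint; it assumes Ollama is running locally and the model has already been downloaded:

```python
import json
import urllib.request

# Ollama's documented local endpoint; assumes the daemon is running
# and "ollama pull gpt-oss:20b" (or the GUI download) has completed.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "gpt-oss:20b") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one final JSON object instead of token chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str) -> str:
    """Send a prompt to the local model and return its full answer."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `ask("Explain tokens per second in one sentence.")` blocks until the whole answer is generated; setting `"stream"` to `True` instead delivers the reply token by token, which is how chat UIs achieve their typewriter effect.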

OpenAI's hypothetical social network doesn't want to connect people. It wants your data to train its AI

At OpenAI they are not content with being the absolute benchmark in artificial intelligence. The latest rumors suggest they intend to create a social network that would go beyond ChatGPT. The reason, mind you, is not to compete with Facebook, Instagram or X. At least, not directly.

Social network = data to feed the AI. The move undoubtedly responds to the voracious hunger for new data that AI models have, data that lets them improve and polish their behavior in different scenarios. A social network would allow OpenAI to use everything its users post to train its models.

X already discovered that trend. The merger between X and xAI was already a clear demonstration of that strategy: suddenly xAI had a perfect system for training its AI model, Grok, on all the posts of X users. And Meta, of course, too. Meta has long been doing the same with Facebook and Instagram, although the EU forced it to slightly change its plans, and it also collects data when we use Meta AI in WhatsApp, although you can avoid it. All with the same goal: having "fresh food" for its artificial intelligence models.

A social network to share images. Apparently the prototype already in development focuses on ChatGPT's ability to generate images. It would therefore be a social network more like an Instagram full of AI-generated images, which we imagine users themselves would generate.

Altman already hinted at it. The funny thing is that the rumor comes weeks after Sam Altman himself, CEO of OpenAI, joked about that very possibility. When news broke that Meta was preparing a standalone app for Meta AI, Altman replied, "ok fine, maybe we'll do a social app."

This is not about connecting people.
Perhaps the project already existed and it was not a joke, but although Altman suggested it could let him "get back" at Facebook, the intention would not be to compete with Facebook at connecting people, that social network's original purpose, at least, but to collect more and more data for its AI, which is what social networks themselves have also ended up doing.

And what about AGI? The problem with this theoretical project is that it would partly be a distraction for OpenAI. It would certainly provide more data for training its AI models, but it is not clear that such scaling is the right path to OpenAI's ultimate goal: achieving artificial general intelligence, or AGI.

Too many parallel projects. Altman is famous for generating excessive expectations around AGI. However, that challenge is tarnished by his latest releases, especially GPT-4.5. The company seems somewhat scattered across gigantic projects like Stargate, the development of its own chips, or the mysterious hardware project with Jony Ive. Too many apples in the basket? We shall see.

Image | Xataka with ChatGPT

In Xataka | ChatGPT is already the most downloaded app in the world. Its only problem is that it doesn't know how to make money with it

The AI startup of one of OpenAI's co-founders has no product. Even so, it is valued at $32 billion

Promises and expectations can be worth a fortune. It is the only thing that justifies that a startup about which almost nothing is known is worth a whopping $32 billion. That is more than eBay, Endesa or Hyundai, but with the difference that those companies spent years, even decades, working to reach that figure. But we are in the AI era, and here, we insist, promises and expectations are worth a lot.

Right now that seems to be about all Safe Superintelligence (SSI) has to offer, the AI startup co-founded by Ilya Sutskever, who was a co-founder of OpenAI and left its ranks less than a year ago. According to the Financial Times, Sutskever has managed to raise a $2 billion financing round for his startup, which takes its valuation to the aforementioned $32 billion.

The figure is all the more surprising given the economic moment we are living through, with tariffs threatening everything and putting an important brake on investment. In September SSI had already raised $1 billion, which put its valuation at $5 billion. That figure has since multiplied by six, which seems to make clear that they have something striking in hand.

In an interview last year Sutskever proposed an AI with "nuclear safety," clarifying that "we mean safe as in nuclear safety, as opposed to safe as in 'trust and safety'." In a later interview in September, the engineer and entrepreneur indicated that he and his team had "identified a new mountain to climb that is a bit different from what I was working on previously."

Sources close to SSI have indicated that the company is working in very particular ways to develop and scale AI models. If true, the milestone would certainly be intriguing, especially now that scaling (more GPUs and more data to train AI models) is being criticized for no longer delivering such striking improvements.
Be that as it may, former OpenAI employees seem to be doing very well after leaving the company. Another example is Mira Murati, who has also launched a startup called Thinking Machines Lab. She too is working to raise a major investment round... and she too is doing so without any product to show. Such are these times.

Image | OpenAI

In Xataka | There are too many AI models. That poses a true death sentence for Anthropic and Claude

Custom GPTs are one of OpenAI's great inventions. Now Google has just released its own in Gemini

One of the most interesting features of ChatGPT is GPTs. In a nutshell, they are customized versions of ChatGPT created for specific purposes. We could have a GPT focused on correcting our texts, solving math problems or planning trips. It is a really useful feature, but for now only paying users can create them: anyone can use them, but only premium users can create them. Well, Google has decided to take a different path with Gemini and its Gems. And yes, users win.

Gems? That is the name the GPT equivalents receive in Google Gemini. For all intents and purposes, they are exactly the same. Instead of using the "general" version of Gemini, Gems let us use a version specialized in certain tasks. It is a feature that, given the necessary time and care, can be very useful.

For everyone. Until now, only paying users could create and use Gems. That is, the only way to access this feature was to pay the 21.99 euros per month that Gemini Advanced costs. That is over. As planned, Google has opened up access to Gems and, as of today, creating and using them is completely free.

Gems creator | Screenshot: Xataka

Options. Google provides five predefined Gems focused on brainstorming, career guidance, programming, learning and writing review. The fun, however, is creating our own. To do so, just go to the Gems manager and start the process (or click directly on this link, which is a shortcut). Important: although Gems can be used from the mobile app (the rollout is progressive), they can only be created in the web version.

Some keys. When creating a Gem it is important to be clear, concise and descriptive. Here are some tricks for getting the perfect prompt.
For example, if we want our Gem to help us correct texts written in English, we could write something like this: "You are a reviewer of texts written in English and you help people detect and correct errors in their writing. Your job is to analyze the texts, find all the errors, explain to the user why something is badly written and suggest improvements. Use a friendly tone. Give your explanations in Spanish. Be patient."

The result will be something like this: when given the badly written phrase "I are not feeling lots well", the Gem returns a correction along with an explanation.

Example of use of a Gem created by us | Screenshot: Xataka

Models. In our Gems we can use the models we have access to in Google Gemini. In the free version we can use Gemini 2.0 Flash and Gemini 2.0 Flash Thinking, which is experimental. With the paid version we could use the most advanced models. Using the reasoning model can be really useful if we create a very specific Gem focused on answers that need precision.

Limitations. Gems are very useful, but they have an important limitation: they do not support uploading documents, at least in the Spanish version and for now. In the English version they seem to support it. Being able to upload documents is a very interesting feature: consulting a bibliography, interacting with a PDF or an Excel sheet, and so on. Think of the potential this has for analyzing data, extracting trends or digesting large amounts of information more easily. The problem is that, for the moment, we do not have it available.

Cover image | Xataka

In Xataka | Google's results with generative AI arrive in Spain. And with them, an elephant in the media room
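The example prompt above follows a repeatable structure: role, task, tone and output language. As a minimal sketch (the function name and parameters are our own, not anything Gemini exposes), that structure could be assembled programmatically, which helps keep several Gems consistent:

```python
# Hypothetical helper (not part of any Gemini API): assembles a
# Gem-style instruction from the parts the article recommends.
def build_gem_instruction(role: str, task: str, tone: str, language: str) -> str:
    """Compose a clear, concise, descriptive Gem instruction."""
    return (
        f"You are {role}. "
        f"Your job is to {task}. "
        f"Use a {tone} tone. "
        f"Give your explanations in {language}. "
        "Be patient."
    )

# Rebuilding the English-correction Gem from the example:
instruction = build_gem_instruction(
    role="a reviewer of texts written in English",
    task="analyze the texts, find all the errors, explain why they are wrong and suggest improvements",
    tone="friendly",
    language="Spanish",
)
print(instruction)
```

Keeping the four parts separate makes it easy to vary just one of them, for example swapping the explanation language, without rewriting the whole prompt.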
