One night in 2000, Jennifer Lopez debuted a historic dress. And then Google changed the internet forever

If you have a moment, go to Google and type something like “Jennifer Lopez 2000 Grammy dress.” Leave the new AI Mode section aside and tap the ‘Images’ tab to find a green Versace dress with a jungle print that caused a real sensation, both in the fashion media and in the world of technology. In fact, that dress marked a before and after on the internet. Because before February 23, 2000, when we wanted to see what clothes a star had worn to an event (to give an example), we had to wait for the news to appear on TV, browse through magazines or go online and Google it. And there you didn't find the photo: you had to wade through a sea of blue text links. There was no Google Images. We're not even talking about videos. Before JLo's Grammys dress, the internet was all plain text.

Why it is important. Google's decision to organize information around images and not only text changed the world of fashion, as the work of a European brand went from being seen on the catwalks and little else to reaching the entire world. It also changed the way we access information, laying the foundations for an internet (and later, social networks) focused more on audiovisual content than on plain, simple text. These were the dawn of the internet of content. What started in July 2001 with an index of 250 million images grew to one billion images in 2005 and exceeded 10 billion by 2010. Later, Google stopped publishing that figure to focus on quality over quantity. Paradoxically, in 2025 it is following the opposite path, massively deindexing images it considers low quality or AI-generated.

The context. In the year 2000, the Google search engine was not what it is now: the undisputed leader with almost 90% market share. And the "almost" is a recent, post-ChatGPT development: Google had been above that barrier for more than a decade.
In fact, with just a couple of years of life, Google was beginning its rise at a time when there was no hegemony like the one it would later impose, and rivals like Yahoo! and AltaVista carried greater weight. And then Jennifer Lopez arrived on the red carpet at the 42nd Grammy Awards, nominated that year for Best Dance Recording for “Waiting for Tonight.” She wore a semi-transparent green dress with a dizzying V-neckline that fell to her navel. If you were around at the time and old enough to watch TV, you surely saw it, because her dress went viral even before that concept was used for matters other than biology. Seeing it once wasn't enough, so people went online en masse to look for it. “People wanted more than text (…). At the time, it was the most popular search we had ever seen,” recounted Eric Schmidt for Project Syndicate. The former Google CEO explained that at the time “we didn't have a sure way to get users exactly what they wanted: J.Lo wearing that dress.”

Between the lines. That's when Google Images began to take shape. According to Cathy Edwards, director of engineering and product at Google Images, it wasn't something that happened overnight, but JLo lit the fuse. There were few employees, but as Edwards explained in 2020, it was clear to everyone that they needed to build a photo-centric search engine. The question was what priority to give it. That same summer, Google hired a newly graduated engineer, Huican Zhu, and put him to work with Susan Wojcicki, who would later be CEO of YouTube and who at that time was responsible for product. The two worked hand in hand and, according to Edwards, practically developed the tool alone, launching Google Images in July 2001.

In Xataka | People are so fed up with the current internet that they are returning to MySpace. Not out of nostalgia, but out of rebellion

In Xataka | All the times that, throughout the 20th century, we imagined ourselves on the internet

The Earth has been giving off heat for millions of years, and now Google wants it for something very different from heating

The race for artificial intelligence is no longer fought only in laboratories or chip factories. It is moving to much more basic and, at the same time, more critical terrain: electricity. At a time when data centers are ramping up their energy consumption and the electrical grid is beginning to show signs of saturation, an American geothermal startup has just closed one of the largest financing rounds in the sector. It is called Fervo Energy, it has raised $462 million and, among its investors, is Google. It is not just another financial move. It is a clear sign of where big technology companies are looking to sustain their ambitions in artificial intelligence.

First commercial project. The company closed this financing in a Series E round (one of the last phases of private investment before a possible IPO) aimed not at research but at the deployment of large-scale energy infrastructure. The round, with B Capital as lead investor, will serve to accelerate the construction of Cape Station, its geothermal plant in Utah, and advance the development of other projects. In other words, moving from technology demonstration to commercial production of firm electricity for the grid. The round has also attracted a broad group of industrial, financial and technological investors. Among the new names are AllianceBernstein, Mitsui, Mitsubishi Heavy Industries, Breakthrough Energy Ventures and, most significantly, Google. As reported by TechCrunch, Fervo has raised nearly $500 million in equity and debt in the last year alone, reflecting an unusual investment appetite for a technology that for decades was considered marginal.

The Google entry. Fervo is not just a climate bet or an impact investment: it is a direct energy supplier for data centers. The company already has an agreement with Google to supply geothermal electricity to its facilities, making the technology company a client and an investor at the same time.
This move fits a broader trend: big tech companies no longer rely solely on the traditional electricity market. The explosion of generative AI has multiplied the demand for continuous, stable, emission-free energy, a profile that neither solar nor wind power alone can guarantee without massive battery backup. Geothermal energy, on the other hand, offers firm electricity 24 hours a day.

How the Fervo bet works. Fervo's key lies in its Enhanced Geothermal Systems (EGS) technology. Unlike traditional geothermal energy, which depends on natural hot aquifers, Fervo drills into hot rock, injects water and creates artificial reservoirs that allow steam to be generated in a controlled manner. It is a direct adaptation of the hydraulic fracturing and directional drilling techniques developed over decades by the oil and gas industry. That is no coincidence: many Fervo engineers come from that sector. The flagship project is Cape Station, located in Beaver County, Utah. According to the company's plans, it will begin supplying 100 megawatts in 2026 and will reach 500 megawatts in 2028. One of the key factors is speed: the company has drastically reduced the drilling time for its wells, from about a month in its first projects to a current average of about 15 days. As Sarah Jewett, senior vice president of strategy, explained to TechCrunch, approximately half of the cost of a well depends on drilling time. Reducing it is synonymous with economic viability.

AI as the engine of the new energy map. The rise of Fervo cannot be understood without the pressure that artificial intelligence puts on energy infrastructure. According to the International Energy Agency, the electrical consumption of data centers could double before 2030. An analysis by the Rhodium Group goes further and estimates that advanced geothermal could cover up to two-thirds of the new energy demand of these centers in the United States. Google is not alone in this race.
The company is simultaneously exploring the reopening of nuclear plants, the development of small modular reactors (SMRs) and even experimental projects such as solar-powered orbiting data centers. The logic is the same in all cases: securing its own stable, long-term electricity supply. In the words of Fervo CEO Tim Latimer: “There is a huge appetite to understand how the story of electricity demand is going to be resolved.” The answer, increasingly, lies in energy sources that previously seemed secondary.

A sector that matters again. For years, geothermal energy was relegated behind wind and solar. Today, the United States is experiencing a true renaissance of the sector. The combination of new technologies, private capital, institutional support and demand from Big Tech is changing the landscape. Fervo is considered a pioneer within this new ecosystem. According to TechCrunch, the company is focused for now on the western United States, where the hot rock is closer to the surface, but does not rule out expanding to other states or abroad once its technology is further optimized.

The subsoil as a competitive advantage. While artificial intelligence is presented as the most ethereal technology of our time, its expansion depends on something deeply physical: constant, cheap and clean megawatts. In this context, Fervo represents more than just an energy startup: one more piece, but a key one, in the new infrastructure that supports the digital age. Google didn't get here by chance. It has been exploring every possible avenue for some time to ensure stable power for its AI. And in that strategy of not closing any doors, while some look to the sky, others, like Fervo, look kilometers underground, where the planet's heat is beginning to emerge as one of the most solid answers.

Image | Fervo Energy and Freepik

In Xataka | The United States may win the AI race, but its problem is different: China is winning all the others

MediaMarkt allows you to get the latest Google Pixel at an unbeatable price and with interest-free financing

Google has managed to sneak in among the most desired phones for those looking for a high-end terminal. Now MediaMarkt has a 15% discount code available for the Google Pixel 10 series. One of the standout models is the Google Pixel 10 Pro, which you can now get for 764.15 euros by applying the code ‘MM15GOOGLEPIXEL’. You will have to enter the code during the purchase process, and you will see the price of the phone change in the cart: it drops from 899 euros to 764.15 euros thanks to this MediaMarkt promotion. Plus, you can finance it for up to 18 months without interest, paying 42.45 euros per month. This 15% discount applies to all models of the Google Pixel 10 series, leaving the other two alternatives at the following prices with the same code (‘MM15GOOGLEPIXEL’):

Google Pixel 10 for 594.15 euros: with a 6.3-inch Actua OLED screen, Android 16, Google Tensor G5 and 128 GB.

Google Pixel 10 Pro XL for 934.15 euros: with a 6.8-inch Super Actua OLED screen, Android 16, Google Tensor G5, 16 GB of RAM and 256 GB.

Google Pixel 10 Pro, 128 GB. The price could vary. We earn commission from these links.

A top phone now at an unbeatable price. There are many characteristics of this Google Pixel 10 Pro that make it one of the brand's most interesting models in this promotion. Its screen is a 6.3-inch OLED with a 120 Hz refresh rate. Its brain is the Google Tensor G5 chip, accompanied by 16 GB of RAM and 128 GB of internal storage. It has IP68 certification and runs Android 16, with software updates guaranteed for seven years. Another of its great assets is its photographic system, with a 50 MP main sensor, a 48 MP wide-angle lens and a 48 MP 5x telephoto lens. As for its battery, it supports wired charging at 30 W and wireless charging at 15 W. Finally, it has Wi-Fi 7, Bluetooth 6, NFC and 5G.
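As a quick sanity check of the promotion's arithmetic, here is a minimal Python sketch. The 899-euro list price and 18-month financing come from the text above; the undiscounted prices of the other two models (699 and 1099 euros) are not stated in the article and are inferred here from their discounted figures, so treat them as an assumption.

```python
# Sanity check of the MediaMarkt 15% promotion figures quoted above.

def discounted_price(list_price: float, discount_pct: float) -> float:
    """Price after applying a percentage discount, rounded to cents."""
    return round(list_price * (1 - discount_pct / 100), 2)

def monthly_installment(price: float, months: int) -> float:
    """Interest-free monthly payment, rounded to cents."""
    return round(price / months, 2)

pixel_10_pro = discounted_price(899, 15)
print(pixel_10_pro)                           # 764.15 euros
print(monthly_installment(pixel_10_pro, 18))  # 42.45 euros/month over 18 months

# Same code applied to the other two models (list prices inferred):
print(discounted_price(699, 15))   # Pixel 10: 594.15 euros
print(discounted_price(1099, 15))  # Pixel 10 Pro XL: 934.15 euros
```

The computed values match the discounted prices and the monthly fee quoted in the article.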
The best accessories for this Google phone:

Google Pixel Watch 2 with Fitbit and Google. The price could vary. We earn commission from these links.

Google Pixel Buds A-Series, truly wireless earbuds. The price could vary. We earn commission from these links.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of non-availability, offers may vary.

Images | Álex Alcolea (Xataka) and Google

In Xataka | The best mobile phones: we have tested them and here are their reviews

In Xataka | The best value-for-money phones. Their reviews and videos are here

Google just changed the rules of the lightweight model game

Now, in the race to lead the development of artificial intelligence, something unusual has just happened. Gemini 3 Flash, Google's new model, has surpassed GPT-5.2 Extra High, OpenAI's top reasoning variant, in several performance tests. And that forces us to rethink some rules we took for granted.

A fast model that also reasons. Google's new model comes with a very specific promise: to demonstrate that “speed and scalability do not have to come at the expense of intelligence.” Although it has been designed with efficiency in mind, both in cost and speed, Google insists that Gemini 3 Flash also excels at reasoning tasks. According to the company, the model can adjust its thinking effort. It is able to “think” for longer when the use case requires it, but it also uses 30% fewer tokens on average than Gemini 2.5 Pro (measured with typical traffic) to complete a wide variety of tasks with high precision and without penalizing response times.

The truth is in the benchmarks. Are benchmarks perfect? No. But they are still one of the most useful tools we have for comparing AI models, pitting them against each other and detecting the scenarios where they perform better or worse. And in this area, Gemini 3 Flash comes out well. In SimpleQA Verified, a test that measures reliability in knowledge questions, Gemini 3 Flash achieves 68.7% compared to 38.0% for GPT-5.2 Extra High. In multimodal reasoning, on MMMU-Pro, Google's model scores 81.2% compared to OpenAI's 79.5%. In Video-MMMU, Flash achieves 86.9% compared to 85.9% for GPT-5.2 Extra High. If we look at multilingual and cultural capabilities, Flash is again ahead, with 91.8% compared to 89.6% for GPT-5.2 Extra High. In Global PIQA, focused on common sense in 100 languages, the difference holds: 92.8% for Flash versus 91.2% for the OpenAI model. Everything indicates that Gemini 3 Flash is specially optimized to capture nuances outside of English and reason more fluently in global contexts.
It also excels in the use of tools and agents. In Toolathlon, Flash scores 49.4% compared to GPT-5.2 Extra High's 46.3%. In the FACTS Benchmark Suite, the difference is tighter, but still in favor of Google: 61.9% versus 61.4%. In long-running tool execution tasks, Flash appears to show greater consistency.

But it is not the king of pure reasoning. Now, it is worth looking at the full picture. Although Gemini 3 Flash outperforms the best OpenAI model in several tests, if you are looking for “pure” reasoning, the balance changes. In the most demanding tests in this area, GPT-5.2 Extra High continues to set the benchmark. OpenAI's model leads ARC-AGI-2, focused on visual puzzles, with 52.9% compared to Flash's 33.6%. In AIME 2025, with code execution, it reaches 100% compared to 99.7%. And in SWE-bench Verified, aimed at software engineering, it obtains 80.0% compared to 78.0% for Gemini 3 Flash.

What exactly is GPT-5.2 Extra High. The name GPT-5.2 Extra High appears several times throughout the article, and it is natural to wonder whether it is something new or little known. In reality, it is not a model usually mentioned to the general public. Google uses this designation in its comparison table to refer to the maximum level of reasoning available in the OpenAI API for GPT-5.2 Thinking and Pro. In the official OpenAI documentation it is identified as “xhigh”.

Where you can use Gemini 3 Flash. Access to Gemini 3 Flash is not country-dependent. If you have access to the Gemini app, you are already using this model, which has become the default option. It is also reaching developers through the API, AI Studio and Vertex AI. In the United States, the deployment goes a step further, as Gemini 3 Flash has become the default model of the AI Mode in Google's search engine.

The price of using Gemini 3 Flash. For those who want to integrate Gemini 3 Flash into their applications, the model costs $0.50 per million input tokens and $3 per million output tokens.
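To make those per-million-token prices concrete, here is a minimal Python sketch that estimates the monthly bill for a hypothetical workload. The per-token rates are the ones the article cites (the new Gemini 3 Flash prices and the previous Gemini 2.5 Flash prices); the workload size of 200M input and 50M output tokens is purely an illustrative assumption, not a real benchmark.

```python
# Estimate API cost from per-million-token prices, as cited in the article.

def api_cost(input_tokens: int, output_tokens: int,
             in_price: float, out_price: float) -> float:
    """Total cost in dollars; prices are expressed per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical monthly workload: 200M input tokens, 50M output tokens.
flash_3 = api_cost(200_000_000, 50_000_000, 0.50, 3.00)   # new Gemini 3 Flash rates
flash_25 = api_cost(200_000_000, 50_000_000, 0.30, 2.50)  # older Gemini 2.5 Flash rates

print(f"Gemini 3 Flash:   ${flash_3:,.2f}")   # $250.00
print(f"Gemini 2.5 Flash: ${flash_25:,.2f}")  # $185.00
```

Note that output tokens dominate the bill at these rates, so Google's claim that the new model uses fewer tokens on average could offset part of the higher unit price in practice.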
This is a slight increase over Gemini 2.5 Flash, which cost $0.30 per million input tokens and $2.50 per million output tokens.

An increasingly tight race. Gone are the days when Google tried to take on ChatGPT with Bard, or when OpenAI seemed years ahead of the rest. Today, the distances between the big players in AI have shrunk drastically. The competition is more direct, more technical and, above all, much closer.

Images | Google

In Xataka | Amazon is preparing a $10 billion investment in OpenAI, because if you can't beat your enemy, the best thing is to join him

What we use ChatGPT for most looks nothing like Google searches

During these three years of living with ChatGPT, there has been a certain feeling that the usual search engine is no longer essential. The chatbot responds in natural language, allows follow-up questions and, in many cases, saves time compared to a list of links. But that convenience does not necessarily mean it is doing the same job as Google. Searching is not always about getting a closed answer: it is also about exploring sources, comparing and deciding for yourself which information to trust. To understand what is really changing, it is worth looking at how each tool is used, not just how they are talked about. Before moving on to the study, we can ask a specific question: when we open ChatGPT, are we searching for information in the classic sense or are we doing something else? The nuance matters because “searching,” as we have said, mixes very different actions.

What studies say about the true relationship between ChatGPT and Google. A paper from the National Bureau of Economic Research, prepared with data provided by OpenAI, is the starting point for grounding this topic. It is built from messages sent to ChatGPT, automatically classified to detect patterns without anyone reading the content. The objective is not to evaluate the quality of the responses, but to measure why we use the chatbot in practice and how that use changes over time. The first picture the paper offers is clear and should be given with temporal precision. In June 2025, 73% of messages were considered non-work-related, compared to 27% linked to work tasks. This distribution has also shifted with respect to earlier periods that the study itself compares, suggesting that personal use is gaining weight over time. The data matters because it questions a widespread idea: that the chatbot is above all a professional tool. When the analysis goes into detail, activity is concentrated in three large categories.
Practical guidance: when we want to understand something, clarify concepts or see options more clearly.

Information seeking: investigating specific facts, topics or questions (this is the category closest to the traditional web search pattern).

Writing: everything from drafting text to structuring ideas and planning tasks.

This translates into very recognizable gestures that do not depend on a list of results. We sometimes use ChatGPT to clarify ideas or ask for guidance. Other times we delegate work, from polishing an email to organizing a document or preparing a plan. And, to a lesser extent, it is also used as a space to think out loud and organize concerns. In all of those cases, the value is not in reaching a page, but in receiving a response tailored to the immediate context and in the form of usable output. That's where the comparison with Google becomes more precise. The search engine is designed to show a map of links, letting us explore sources and decide which ones to enter, at the cost of reading, comparing and synthesizing information scattered across the web. ChatGPT, on the other hand, concentrates part of that work into a single answer and adjusts it to what we have asked, which shifts the effort from navigation to interpretation. This coexistence fits well with what the Nielsen Norman Group describes in one of its studies. Its main conclusion is that search habits are surprisingly persistent and that we tend to start with what is familiar, even when we have already incorporated AI tools into other daily tasks. We often use the search engine as a mental and practical shortcut to reach destinations we already know. Instead of typing “youtube.com” directly, we type “YouTube” into Google and from there access the site. Under this scheme, the search engine continues to operate as a great gateway to the web ecosystem, rather than as a pure discovery engine.
The result is neither a clean substitution nor an imminent replacement, but rather a more fragmented and functionally distributed ecosystem. We alternate between traditional search engines and chatbots depending on the moment and the task, and that redistributes the effort between finding information, understanding it, making decisions and producing content. Even so, it is advisable to handle these data with caution. The ecosystem is still moving and habits are still adjusting, so we should not read these results as definitive. Meanwhile, Google has been incorporating layers of generative AI, from AI Overviews to the so-called AI Mode. However, for now the link-based model continues to set the pace of the experience. And the service also remains a dominant source of traffic for the web, although its own AI integration is already starting to reduce the need to click in many cases.

Images | Berke Citak | Firmbee.com | sarah b

In Xataka | Microsoft has reduced its ambition with AI. It has realized that almost no one uses Copilot, they say in The Information

Google is serious about putting data centers in space. Elon Musk and Jeff Bezos rub their hands

While some municipalities are debating whether to let big technology companies install data centers in their territory, Google wants to go a step further: taking data centers to space.

Google. The company revealed its intentions a few weeks ago, and its Project Suncatcher aims to put two prototype satellites in orbit before 2027. Curiously, Elon Musk and Jeff Bezos are more than delighted with their rival's idea.

Project Suncatcher. Pushing the capabilities of artificial intelligence requires training it, and that requires huge data centers with spectacular computing power. The problem is that the energy needs of these facilities are astronomical. They have become resource sinks, leading oil companies to set aside their renewable energy plans and even prompting the opening of “private” nuclear power plants. Suncatcher couldn't have a more appropriate name. In space, without the influence of the atmosphere, solar panels capture the light spectrum differently, enough to feed those seemingly insatiable data centers. What Google proposes is to build constellations of dozens or hundreds of satellites orbiting in formation at about 650 kilometers of altitude. Each of them would be equipped with Trillium TPUs (processors specifically designed for AI calculations) and connected to each other via optical laser links.

Pichai brings it up everywhere. Although 2027 is the key date, it is evident that Google is very interested in airing its plans, because they are both a sign of technological power and an invitation for interested parties to invest in the process (and a way to keep inflating everything around AI). And the person rehearsing this speech the most is the company's CEO himself, Sundar Pichai. Since we learned of Google's plans, Pichai has brought up the topic in every interview he has given.
He does not say anything new beyond the hope of having TPUs in space in 2027 and the ambition that in a decade extraterrestrial data centers will be the norm.

Musk and Bezos: competitors, but allies. And if Google is interested in selling its narrative, two of its most direct competitors are just as interested: Elon Musk and Jeff Bezos. Both Musk, with several of his companies, and Bezos, with Amazon Web Services, are in the race for data centers and artificial intelligence. They have some of the largest on the planet, but they also have something the rest of the competitors don't: the ability to launch things into space. Musk with SpaceX and Bezos with Blue Origin have the tools to put satellites into orbit, charging for each kilo they send up. And the more credible it seems that the future of computing lies in low Earth orbit, the more economic and political sense their launch businesses will make.

Both SpaceX and Blue Origin compete with Google, but they are also Google's best option for achieving its objective. Once again, we keep seeing rival companies renting services from each other.

Data center fever in space. The truth is that, at first, building these extraterrestrial data centers sounds like a crazy plan, but from the most pragmatic point of view (setting aside logistics and the money that development and each launch will cost), it is a plan that makes sense. In space, a panel can produce up to eight times more than on the Earth's surface, in addition to generating electricity continuously by not depending on day/night cycles. That would eliminate the need for huge batteries, but also for complex water-based cooling systems. And, as we said, Google is not alone in this: there is currently a fever for space data centers, with big technology companies in the spotlight.

Considerable challenges. Now, Google itself admits that it will not be easy to carry out this strategy. On the one hand, the costs.
The company estimates that launch prices could fall from several thousand dollars per kilo to just $200/kg by the mid-2030s if the industry consolidates. In that case, it notes, the price of launching and operating a space data center could be comparable to the energy costs of an equivalent terrestrial data center. Another difficulty will be maintaining a close orbit between the satellites: they would have to stay within 100-200 meters of each other for the optical links to be viable. And most importantly: the radiation tolerance of the TPUs. Google has been experimenting with this for years, but it must still test the effects of radiation on sensitive components such as HBM memory. Astronomers, surely, will be delighted with this strategy, just as they were with Starlink.

Image | ESA

In Xataka | We are launching more things into space than ever before. And the next problem is already on the table: how to pollute less

Europe was happy with the changes in the App Store, but not with those in Google Play. There is a historic fine at stake

Google is in the crosshairs of the European Commission. A few days ago the Commission announced a new investigation into monopolistic practices with AI summaries, but that is not the only front Google has open. The company has already paid historic fines and faces a new one if it doesn't make changes to Google Play, its app store.

What has happened. Reuters reports that the European Commission is not satisfied with the changes Google has made to its app store to comply with the Digital Markets Act (DMA). Regulators consider that two points do not comply with the rules: there are technical restrictions that make it difficult for developers to direct users to external channels with better prices, and Google continues to charge developers a commission even if the user buys the app from their website, on the grounds that Google has “facilitated” the purchase.

Why it is important. If Google does not make the necessary changes to comply with the DMA, it faces a fine that could amount to 10% of its total revenue. In 2024 it posted revenue of $350 billion, so the maximum fine would amount to $35 billion, the highest to date. Google can still offer to apply changes to avoid paying the fine.

The Apple case. The company the Commission is satisfied with is Apple. In fact, it is using Apple's case as an example of what needs to be done. It was not a bed of roses: Apple was fined 500 million euros for not complying with the DMA, and it had to remove restrictions that prevented redirection to alternative offers.

The Epic trial. The European Commission is not the only one with Google Play in its sights. In the United States, the judge in the Epic vs. Google case made a historic decision: Google would have to allow rival stores within the Play Store. Recently, Google and Epic reached an agreement under which Google undertakes not to charge commissions of more than 20% on in-game purchases and 9% on the rest.
In addition, developers will be able to offer other payment systems alongside Play Billing. The agreement must still be approved by the judge, but it seems Google will have no option but to comply with what both the judge and the EU ask of it.

What Google says. The company announced changes to Google Play last August to avoid the fine; it is those changes that the Commission now considers insufficient. Google competition lawyer Clare Kelly said the company was “concerned that these could expose Android users to harmful content.” This is the usual position of American companies under the scrutiny of the European Commission. Mark Zuckerberg called the DMA “censorship,” and there has also been harsh criticism and tariff threats from the Trump administration. Recently, a national security strategy document claimed that European laws could mean an “erasure of American civilization.”

The fruits of the DMA. The European Union's supposed overregulation draws plenty of criticism, but it also has a good side. European regulation has made USB-C mandatory for all manufacturers, forcing Apple to abandon its proprietary connector. It has also brought us universal AirDrop and the changes in the app stores that give us more freedom over where to download our apps.

Image | Xataka, Pexels

In Xataka | Europe wants to protect itself against Huawei, but the energy sector knows something uncomfortable: it cannot move forward without it

OpenAI and Google deny that they are going to put ads in ChatGPT and Gemini. The reality is that the numbers don't add up on subscriptions alone

That AI has a profitability problem is well known. Just look at OpenAI's accounts: in the last consolidated quarter it lost a whopping $11.5 billion. Subscriptions were presented as the way to monetize chatbots, but barely 5% of ChatGPT's total users are on one of its paid plans. The numbers don't add up and, although the companies deny it, the shadow of advertising hangs over AI.

What's happening. Rumors that some very popular chatbots will integrate ads have intensified in recent days. First, alleged screenshots of an ad in ChatGPT began to circulate, and later a media outlet specialized in advertising claimed that Gemini will have ads in 2026.

Companies deny it. Google has been quick to deny the information, stating that Gemini has no ads and that “there are currently no plans in place to change it.” What stands out above all is that “currently,” which leaves the door open to including advertising in the future. For its part, OpenAI has denied it too, stating that what appeared in that screenshot “was either not real or it was not an advertisement.” What was seen was a suggestion to connect an account with Target, the popular American hypermarket chain.

Where there's smoke… Despite the forcefulness of the denials, a few days ago we learned that OpenAI is preparing the ground to include advertising in ChatGPT. The ChatGPT beta version for Android includes explicit references to an ad feature and tags like “content bazaar” and “ad carousel.” Additionally, the company is hiring experts in advertising platforms, so the appearance of ads looks less like a question of “if” and more of “when.” In the case of Google, we haven't seen any screenshots or traces in the code, so there isn't that sense of imminence. However, there are rumors of ads coming to AI Overviews and, considering that advertising is the company's main business, it doesn't sound crazy that it would end up integrating ads into its chatbot.
Investment vs. return. The gap between what tech companies are spending on AI and what they are earning from it is enormous. Big tech firms like Google are growing their revenue, but not thanks to AI: the growth comes from their cloud services. In the case of OpenAI, without another business to cushion the impact, the disconnect between expenses and income is brutal.

Subscriptions are not enough. AI has reached the general public: according to the consulting firm Menlo Ventures, it already has 1.8 billion users around the world. The problem is that only 3% pay for any kind of subscription. OpenAI currently has 5% paying users and expects the figure to rise to 8.5% by 2030. That is still not enough to reach the desired profitability. According to a JP Morgan study, for the AI industry to achieve a 10% return on everything it has spent, it would need $650 billion a year in revenue, which is the same as saying that 1.4 billion people would each have to pay more than $400 a year to use AI. It may happen, but for now ads look like a faster way to generate income.

Image | Generated with Gemini

In Xataka | AI has become the best example that if you don't pay for the product, you are the product
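The JP Morgan figure above can be sanity-checked with trivial arithmetic. A minimal Python sketch (the dollar amounts and user counts are the article's; the variable names and derived per-user figure are ours):

```python
# Back-of-the-envelope check of the profitability figures cited above.
# All inputs come from the article (JP Morgan and Menlo Ventures estimates);
# only the division is ours.

revenue_needed = 650e9        # $650B/year for a 10% return, per JP Morgan
hypothetical_payers = 1.4e9   # 1.4 billion paying users in that scenario

per_user_per_year = revenue_needed / hypothetical_payers
print(f"${per_user_per_year:.0f} per user per year")  # ~$464, i.e. "more than $400"

# Versus today: ~1.8 billion AI users (Menlo Ventures), only ~3% paying.
current_payers = int(1.8e9 * 0.03)
print(f"{current_payers:,} paying users today")       # ~54 million
```

The gap between roughly 54 million paying users today and the 1.4 billion the scenario requires is what makes subscriptions alone look insufficient.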

Google replaced the news with AI-made summaries. Now the European Commission has something to say about it

In March of this year an earthquake shook European publishers: Google rolled out AI Overviews in its search engine. This means that, where links to news articles previously appeared, an AI-generated summary now appears, with the damage that entails for media outlets, which in some cases have lost up to 50% of their traffic. Now the European Commission has taken action.

What has happened. The European Commission has formally opened a new antitrust investigation against Google. The reason this time is the use of content from media outlets and YouTube creators to feed its AI summaries, all without compensating the creators. The investigation will try to determine whether Google is distorting competition by imposing unfair terms on media outlets, and whether its access to content (especially in the case of YouTube) puts rival AI companies at a disadvantage.

In the words of Teresa Ribera, Executive Vice President for a Clean, Fair and Competitive Transition at the European Commission: "AI is bringing remarkable innovation and many benefits to people and businesses across Europe, but this progress cannot come at the expense of the fundamental principles of our societies. That is why we are investigating whether Google has imposed unfair conditions on publishers and content creators, while putting developers of rival AI models at a disadvantage, in breach of EU competition rules."

Why it is important. The investigation questions the model Google has built around its generative AI, but it also calls into question the broader issue of these tools' use of third-party content. It opens the door to reconfiguring the AI market, imposing limits and compensation for original content creators.

The impact. As we said, the arrival of AI summaries has had a huge impact on media traffic. If readers get the answer without making a single click, that traffic is lost; and not only that, it is unrecoverable.
The worst part is that, to give that answer, Google draws on the information published by those very outlets. In the case of YouTube, creators are required to accept a clause allowing their content to be used for various purposes, including training its AI.

Consequences. The investigation has only just begun and there is no set date for its conclusion, which could take years. It will examine whether Google has violated Article 102 of the Treaty on the Functioning of the EU and Article 54 of the Agreement on the European Economic Area, which prohibit the abuse of a dominant position. If Google is eventually found to have breached these rules, the Commission could force it to take measures to comply with the law, such as compensating creators, allowing them to opt out of having their content appear in summaries, or even removing summaries across the EU, in addition to a possible fine.

Not the first time. This is not the first time Google has faced monopoly accusations in the EU; in fact, it is the tech company with the largest accumulated fines. The biggest was €4.3 billion for abuse of a dominant position with Android, followed by €2.95 billion for abuses in the advertising market. It also had to pay €2.42 billion over Google Shopping and €1.49 billion over AdSense.

Images | Unsplash | European Commission

In Xataka | The EU has spent years fiercely fighting monopolies. Teresa Ribera has other plans for telcos

Amazon and Google are already armed

Let's go back in time for a moment. By mid-2023, ChatGPT had been with us for half a year and artificial intelligence was experiencing a huge boom. For some it was a fad; for others, one of the most relevant technological disruptions of recent times.

OpenAI had the lead. From being practically unknown, it went on to dominate headlines. It achieved this by putting a product as fascinating as it was imperfect in the hands of the public, the kind of launch a Big Tech company would hardly have dared to make. In its early months, ChatGPT had few guardrails and made mistakes quite frequently. Such stumbles would have meant a huge reputational blow for any tech giant, but a startup could afford that risk.

NVIDIA, the winner of the AI race, faces a challenge

The basis for competing head-to-head with ChatGPT was, and continues to be, developing increasingly advanced language models. After launching GPT-3.5, the model that gave rise to ChatGPT, OpenAI moved quickly to introduce GPT-4 in March 2023. From that moment on, the old and new players in the sector had no alternative but to enter the game, unless they wanted to stay out of what was already shaping up to be the next technological revolution.

And whatever the company's name, they all depended on one key player: NVIDIA. The reason? Jensen Huang's firm had the best AI-specialized GPUs on the market, with the H100 as standard-bearer. And we talk about GPUs because they are much better suited than CPUs to the massively parallel computations modern AI requires. The approach was simple: if you wanted to compete in generative AI, and hopefully aspire to lead it, you needed to buy NVIDIA GPUs and upgrade your data centers or open new ones. All of it against the clock. The result was massive demand that at times led to shortages.
An illustrative example: at the beginning of 2024, Meta announced that its renewed generative-AI infrastructure would reach 350,000 NVIDIA H100 GPUs by the end of the year, with total compute equivalent to about 600,000 H100s. By June 2024, NVIDIA was already the most valuable listed company on the planet.

It is often said that the hard part is not reaching the top, though that is hard too, but staying there. That is exactly the challenge NVIDIA faces now. Its products remain at the top, and its strategy covers much more than hardware: it includes CUDA, a software stack designed to make the most of its architecture. But the competition is knocking loudly on the door, just as it is for OpenAI. The leadership of these years is under pressure from increasingly fierce and global competition.

Amazon and Google's bet on AI chips

The most recent move comes from Amazon, which has just introduced the Trainium3 UltraServer, a system powered by its Trainium3 AI chip manufactured on a 3-nanometer process. According to the company, these chips and systems are 40% more energy efficient than the previous generation. And it doesn't end there: Amazon also showed its roadmap for Trainium4, already in development, promising support for NVLink Fusion, NVIDIA's high-speed chip interconnect technology, which opens the door to interoperable systems.

Trn2 Ultra Cluster

Google, which has been developing its TPU chips for AI for a decade, is starting to bet on its own hardware to power Gemini. There are even rumors of a multibillion-dollar move by Meta to buy AI chips from Google, which would take Google squarely into NVIDIA's territory. It would also mark a quite notable change of course for the search company, which until now had limited the use of its chips to its own data centers.

Ironwood by Google

Do you remember that Meta is one of NVIDIA's big clients? Well, it is also developing, and testing, its own AI chips, such as the MTIA.
Even OpenAI has decided to partner with Broadcom to design its own hardware for its data centers. In China, the scenario does not favor NVIDIA either: the trade war with the United States has narrowed its room for maneuver in the country, while local players such as Huawei push their own chips, with the Ascend 910D already in production and the next-generation Ascend 920 on the way.

Ultimately, many of the players who turned to NVIDIA in the early stages of the AI race are beginning to follow their own path. The reasons are multiple, but one stands out above all: the need for independence in a technological competition that only keeps intensifying.

Images | World Governments Summit | Amazon | Google

In Xataka | If you think the internet was much better before AI, congratulations: someone has created an extension for you
