What is SynthID: the Google technology to mark and detect content made with artificial intelligence

Let's explain what SynthID is and how it works: the tool developed by Google to mark and detect content made by AI. In a world like the current one, in which the creation of texts and images with artificial intelligence is a reality, and where video is the next step, being able to make this content identifiable is important. So that you understand what it is about, we will start by telling you what exactly this technology is and what it offers. Then we will explain a bit how it works, and we will finish by talking about its arrival and implementation.

What is SynthID

SynthID is a technology that marks content that has been generated by artificial intelligence. What it does is add a watermark to these published contents so that they are easier to detect by other systems, and so that there is more transparency and trust in them.

We live in a world in which it is already difficult to know whether a text has been created by an AI or by a human. The creation of images is also improving so much that it is sometimes hard to tell when one has been created, or simply modified, using artificial intelligence. And the next step will be video.

This leaves us in a panorama in which the proliferation of false and misleading news seems set to reach a new level, something that will make users distrust even more of what is published. And it is in this context that SynthID arrives, a technology created by Google that the company hopes others will also use. It is a technology that adds a watermark to content created by AI, and then offers another technology to detect it.

Google will start using this technology right away in its generative AI systems. Therefore, when you generate an image with Gemini, to give an example, that image will carry a watermark that you will not see, but that other algorithms can detect to know it was made by AI. Google's idea is that other companies with AI systems for generating content also adopt this standard. That way, it will be much easier for the rest of the platforms to know whether an image or a video was made by an algorithm or really by a person.

How SynthID works

What SynthID does is embed digital watermarks directly in images, audio, text or videos generated by artificial intelligence. These watermarks are imperceptible: rather than being a fragile note in the file's metadata, they are woven into the content itself, so you will not notice them, but a detector that knows what to look for can find them (a toy sketch of this principle appears at the end of this article).

However, just as Google has this system for applying watermarks, it also has a version of SynthID to detect them, called SynthID Detector. In this way, its solution covers the two necessary steps: putting an identifier on the content, and allowing it to be detected. The company's idea, therefore, is that its chatbots and content-generation tools use this technology. Then, other companies will be able to use SynthID Detector to know whether a given piece of content has been created by a Google AI. In fact, this can work as an independent platform to which you simply upload images, texts, videos and more, and it will tell you whether they were created with SynthID.

And once this infrastructure is in place, Google also expects other companies to join in. In this way, it hopes to make SynthID a standard, so that many use it to apply watermarks, and many platforms also have the detector.

When SynthID will arrive

Although SynthID is a promising proposal with many striking characteristics, at the moment it is not a universal standard. This means that it is only one of several proposals.
Google is already betting on it in its own products, but for the moment nothing more. The next step will be to convince other AI companies to use this technology, and thus build an ecosystem that turns it into a standard. But for now this has not happened, and no matter how much Google marks its images, it is useless if nobody else does. Therefore, we still do not know to what extent SynthID will end up being used. The coming months and years will give us a clue as to whether other companies join this technology or not. For the moment, Google has opened a waiting list for companies to start trying it. In Xataka Basics | Gemini 2.5 Pro and Flash: what's new in Google's artificial intelligence models capable of thinking and speaking
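To make the embed-and-detect loop mentioned above tangible, here is a toy sketch in Python. Google's real SynthID algorithm is far more sophisticated (it is designed to survive crops, filters and recompression, which this toy would not) and is not fully public, so everything below, from the least-significant-bit trick to the key and function names, is invented purely to illustrate the keyed-watermark principle the article describes.

```python
import numpy as np

KEY = 1234  # shared secret between embedder and detector (illustrative only)


def embed_watermark(image: np.ndarray, key: int = KEY) -> np.ndarray:
    """Overwrite each pixel's least-significant bit with a keyed pattern.

    A toy stand-in for SynthID's imperceptible embedding: the change is at
    most one intensity level per channel, invisible to the naked eye.
    """
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern


def detect_watermark(image: np.ndarray, key: int = KEY) -> float:
    """Return the fraction of least-significant bits matching the keyed pattern.

    A marked image scores ~1.0; an unmarked one ~0.5, because random bits
    agree with the pattern only half the time. That statistical gap is what
    lets a detector separate marked from unmarked content.
    """
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == pattern))


if __name__ == "__main__":
    original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(original)
    print(f"marked score:   {detect_watermark(marked):.2f}")    # ~1.00
    print(f"unmarked score: {detect_watermark(original):.2f}")  # ~0.50
```

The role of the key is the same as in the real system: without it, a watermark can neither be forged nor reliably detected, which is why SynthID ships as a pair of tools, one to mark and one to verify.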

Google has hit the accelerator in the AI race. That multiplies the value of Android

A pile driver. That is what Google looked like yesterday in the opening keynote of its Google I/O event. The company left us an avalanche of announcements related to advances in its artificial intelligence models and, of course, to the practical applications those advances will bring. Google's ambition is clear, and yesterday it took a giant step towards becoming the great winner of the AI race.

Hello, this is the future. Although some of the novelties will not be available immediately, Google's message was clear: this can be the future. One in which we will not touch the phone so much, but will constantly talk to it. Gemini's progress (it now has a "Deep Think" reasoning mode) is clear, and Google has hit the accelerator to apply it everywhere. And especially in one place.

AI Mode vs the traditional search engine. It is likely that searching the Internet has changed forever. What we already saw with Perplexity now arrives, much more clearly, in the Google search engine thanks to its "AI Mode", in which the search engine no longer delivers results but talks with us. That will increasingly leave the traditional search engine behind, although for the moment it remains the one selected by default. For the moment.

Advances everywhere. The improvements in Project Astra demonstrated how the AI assistant is already prepared to "infiltrate" our lives. They have already integrated it with Chrome, but it also gains ground with Search Live, the operating mode in which our phone serves as its eyes, ears and mouth to interact with the world. Advances in Imagen 4 and Veo 3, and in Flow, join others such as real-time translation in Google Meet, which will also reach the connected glasses.

An AI more practical and useful than ever. But the most relevant message was precisely the one Google grounded in its product demos. The AI that until now seemed limited to very specific tasks demonstrated in these releases how it could help solve everyday problems. Fixing a bike with the assistant, trying on new clothes virtually, having the AI monitor the price of a product, notify you and buy it (with your prior confirmation), or traveling to any country and making yourself understood without speaking the language (an old promise, now closer than ever) were some of those demonstrations. And in all of them, Google's AI showed unstoppable progress. One that, by the way, will boost its other great product.

Android with superpowers. Google's mobile platform remains a fundamental pillar of these advances, because the AI available through models such as Gemini does not act alone: it is a complement, almost a "plugin" for Android, whose inertia and influence in the mobile market are evident. And for its variants, too. Although in many cases the solutions created by Google are used on other platforms (Jules on PCs and laptops; Veo and Imagen too), there is an intimate relationship with Android and, of course, with its variants. A few days ago Google already made clear its plans to integrate Gemini into the car, the watch and the television. All of them run Android operating systems, but there is another especially promising one.

Glasses. At Xataka we have been able to try the new Google glasses with Gemini, and the experience, even limited by being based on a prototype, is encouraging for this type of product. The integration of AI functions is something the Ray-Ban Meta had already pointed to, and it is clear that this market promises to be especially important in the coming years.
And underneath it all again, Android XR, which may end up becoming a true pillar of Google's strategy.

Faced with Google's ambition, Apple's indifference. Google's staging yesterday (and Microsoft's at its Build conference the day before) demonstrated the ambition of these companies in the field of AI. They have bet everything on this technology and seem especially well prepared to squeeze all the juice out of this revolution. Meanwhile, Apple keeps missing trains and delivering disappointing news in this segment.

Waiting for WWDC 2025. We will be able to take Apple's pulse much better in less than a month. On June 9 the opening keynote of WWDC 2025 takes place, and it will be then that we find out whether Apple has something real to show us in the field of AI, or whether it has fallen even further off the front of the race. A priori it does not look like it can make a move soon, and the company's approach seems much more cautious and gradual than the competition's. Apple prefers to wait until its proposals are much more polished before presenting them, but there is also the other option: that it is really in trouble, and that this takes a long-term toll on it. In Xataka | Everything we have seen of Apple Intelligence places Apple far behind in AI. Even its employees believe it

Gemini 2.5 Pro and Flash: what's new in Google's artificial intelligence models capable of thinking and speaking

Let's go over what's new in Gemini 2.5, the new version of Google's family of artificial intelligence models. This model comes in two different flavors: Gemini 2.5 Pro, with all the functions, and Gemini 2.5 Flash, a lighter version. In this new launch, the standouts are its "thinking" mode and its ability to speak, generating audio natively for its answers. We will explain all of this next.

As always, these two versions are available to all users, both free and paying. However, paying users will be able to use the versions without limitations, and to interact with them for longer, instead of having limited interactions like free users.

What's new in Gemini 2.5

The main novelty of Gemini 2.5 is its advanced "thinking" capacity. With it, before sending you the answer, the model re-generates it, as if it were thinking, and analyzes the data better so that the answer is as accurate as possible. This increases the time it takes to respond, but reduces errors. There are differences depending on the model. The Gemini 2.5 Flash model simply incorporates an internal reasoning process before generating the answer. In addition, both the Flash and the Pro versions let you access a summary of the internal reasoning process that has been carried out. Gemini 2.5 Pro, on the other hand, has a deeper reasoning capacity called Deep Think, which allows it to tackle more complex problems more precisely, especially when writing code or dealing with mathematical and logical problems.

A native audio output is also added, allowing it to respond by voice, in the style of Google Assistant. This is available in the Pro and Flash versions, offering a more natural conversation experience, since it can answer you aloud. You can also adjust the accent or tone, such as asking for a dramatic voice when it tells you a story. With this move, we can see that Gemini's future seems to be to replace Google Assistant, since the new capabilities it has gained point to better interactions with users.

Another novelty is a more affective dialogue, which means that it can detect emotions in your voice and adjust its answers to this context in a more natural way. In addition, it can also ignore background conversations when you talk to the AI, so it knows it should answer only you.

As usual with the launch of a new artificial intelligence model, the context window has also been expanded, so it can analyze very long documents, complete code repositories or very long conversations. And security has also been significantly improved, to resist indirect prompt injection. In Xataka Basics | Gemini Advanced functions that became free in March 2025
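For developers, the "thinking" mode is also exposed through the Gemini API. Below is a minimal sketch using the google-genai Python SDK as documented around launch; the exact field names (thinking_budget, include_thoughts) are the ones we understand the SDK to use, but check the current documentation, since they may change between versions.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How many prime numbers are there below 100?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap the tokens spent on internal reasoning
            include_thoughts=True,  # also return a summary of that reasoning
        ),
    ),
)

# Parts flagged as "thought" carry the reasoning summary; the rest is the answer.
for part in response.candidates[0].content.parts:
    label = "summary" if getattr(part, "thought", False) else "answer"
    print(f"[{label}] {part.text}")
```

The thinking budget is the knob behind the trade-off the article describes: more internal reasoning means slower but more accurate answers, and setting it low approximates the quick, Flash-style behavior.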

I have tried the new Android 16 design. If this is what awaits the Google system… they have my sword

Yesterday was one of those days that get marked forever on the technological calendar. We may not be aware of the impact that everything presented yesterday will have, but Google marked a before and after, not only in the future of Gemini and Android, but in how we relate to consumer electronics. While showing us how the Android XR glasses, the Project Beam holographic calls and the AI Mode in search will work, the company launched one of Android's greatest redesigns to date. Eligible Google Pixels can already enjoy Android 16 with this design change, and we have tried it.

The launcher. The launcher is one of Android's most iconic parts, and Google never dares to change it too much. It remains almost identical to what we had seen so far, with some small aesthetic changes to the search, voice and Google Lens icons of the app drawer.

The notification bar. Android (at least in Google's native interpretation) does not give in to the iPhone's Control Center approach, unlike the rest of the customization layers. We still have a quick-access bar above and, just below, the notifications. This panel has been completely redesigned: we have new icons, a new cleaner design, and easier access to the notification history and the notification configuration menu. And yes, the mythical battery icon is now horizontal; Android has lost one of its greatest hallmarks and now looks more like iOS.

The color palette. Android 16 does a better job with complementary colors. Something that can be seen instantly with Material 3 is a better integration of automatic theming. Since the arrival of Material You, the Android color palette adapts to the colors of the wallpaper. With Android 16 this remains so, but the integration is even deeper and (for this writer's taste) more aesthetic. The icons, the transparency of the notification bar… absolutely every element of the system blends with the colors of the wallpaper so that everything looks good.

Multitasking. At least in this beta, it is much easier to interact with the apps we have in multitasking. We still lack an accessible button to clear all apps, but there are interesting options. We can quickly select the contents of an app in multitasking without even having it open in the foreground: we go to multitasking, tap the image button, and we get quick access to Google Lens, copy, share and save. We can also take a screenshot or select a specific area of the app. In the drop-down, we have the split-screen options, and we can pause the app, capture, select or close it.

The settings. This is, by far, the part of Android 16 and Material 3 that I liked least. I am crossing my fingers that it is just a beta issue and that there will be changes in the final version. The settings are worse than what we had before, to put it quickly and bluntly. Even having tried this beta on a Pixel 9 Pro, everything is too compact and small. It is difficult to distinguish between the submenus, and it becomes quite tedious to find settings that we used to spot at a glance. The design does not win me over either. It is now simpler and more minimalist, but there is neither an especially clear order, nor does it seem to have been given much aesthetic care.

The best part is missing: apps. I was left wanting to try one of the raisons d'être of the new version of Material 3: the apps. This redesign makes sense when apps adapt to it, both aesthetically and functionally. Live activities are not available in this beta, and there is still work to do.
Be that as it may, this new version is good news. Android is more consistent than ever at the aesthetic level (waiting to see how they solve the design of the settings), and it is getting ready for a revolution led by Gemini. It will not be the one left off this ship. Image | Xataka In Xataka | Google has put a price on the future of AI: $250 per month

AI Mode, Project Beam, Veo 3, the Project Aura glasses, Jules and everything presented at a Google I/O 2025 loaded with ambition

Google has brought out the heavy artillery in the war to lead the development, and the business, of artificial intelligence. The American giant did so at its annual developer conference, which this year was more a show of strength than anything else: a showcase where it presented some of its most innovative advances. Below, we review all the products Google presented this Tuesday, May 20. If you want to dig deeper into any of them, next to each name you will find a link with all the information.

Gemini Ultra, Veo 3 and Imagen 4: a subscription for those who want everything. Gemini Ultra is Google's new most complete artificial intelligence subscription. It costs $249.99 per month and, for now, is only available in the United States. It includes access to tools such as the Veo 3 video generator, the Flow editing app and the Deep Think mode of Gemini 2.5 Pro, which has not yet been officially released. Subscribers also get improvements in NotebookLM and Whisk, up to 30 TB of cloud storage, YouTube Premium and access to the Gemini chatbot directly from Chrome. Some of the most advanced functions are driven by the technology of Project Mariner, which gives it agentic capabilities.

Deep Think: an AI that takes its time to answer better. Deep Think is a new reasoning mode of the Gemini 2.5 Pro model that allows the AI to consider several possible answers before settling on one. It seeks to improve accuracy in complex tasks and advanced benchmarks. It is currently only available to a small test group through Gemini's API. Google states that it is carrying out safety evaluations before launching it publicly.

AI Mode and Search Live: this is how Google wants to redesign search. AI Mode is a new experimental feature for Google Search that lets you ask complex, multi-part questions from an AI-based interface. It launches this week in the United States and is able to handle sports and financial data and offer options such as virtual try-on. Over the summer, Search Live will arrive: a feature that will allow asking questions based on what the phone's camera detects in real time.

Gemini in Chrome and detection of synthetic content. Gemini is integrated into Chrome as a browsing assistant to help understand the content of web pages and execute tasks. Gmail incorporates personalized smart replies and a new function to clean up the inbox. Google has also launched SynthID Detector, a verification system that uses invisible watermarks to identify content generated by AI.

Beam: 3D video calls with simultaneous translation. Beam, formerly known as Project Starline, turns video calls into almost face-to-face conversations thanks to a six-camera array and a light-field display. It offers millimeter-accurate head tracking and video at 60 frames per second. It includes real-time translation in Google Meet, preserving the voice, tone and expressions of the original speaker.

Jules: Google's agent for programming without touching the keyboard. Jules is Google's new assisted-programming agent, designed to compete with platforms such as Cursor, Windsurf or Codex. It is able to generate tests, update dependencies, write audio changelogs and fix bugs while the user keeps working on other things. It works without plugins or additional installs and is available in public beta for US users.

Android, also present at the event. Android premieres new tools to find lost phones and objects, and also a new design language called Material 3 Expressive.
Google showed its new glasses live, developed with Xreal and based on Android XR. The demo included voice interaction with Gemini, simultaneous translation and the overlay of real-time information. The project is called Project Aura and seeks to bring Android to the XR world with a practical, no-frills approach. Images | Google In Xataka | Smart glasses do not have to be a hulking contraption. Google has it clearer than ever

The next milestone for the AIs that generate video was to make them with audio. Google has achieved it with Veo 3

A great day for Google. We are in the middle of I/O 2025, the American company's most important software event. Interestingly, Android is one of the least mentioned names: this year the only thing that matters is AI. And, related to AI, Google has been working on a model that generates video from text. That model is Veo, and in its new update it is able to generate these videos… with audio.

Veo 3. Google has three tiers for its generative video artificial intelligence: Veo 1, Veo 2 and the new Veo 3. Yes, these are much simpler names than we are used to. Veo 3 is the most powerful model, capable of generating 4K video with an advanced understanding of cinematography. At this Google I/O it gains a key capability: generating video with audio.

From ambient sounds to dialogue. Google is going all in with Veo 3. This model not only offers higher quality than Veo 2: it is the only Google model capable of generating videos with audio. For example, if in the prompt we specify that we want an urban scene, it will be able to recreate some of the corresponding sounds (people walking, traffic, bustle, etc.). Google goes further, and promises it can even create dialogue between characters. This was one of the last barriers keeping text-to-video from becoming a practically science-fiction feature. With Veo 3 it will be possible to do all of it.

Improvements in Veo 2. Although Veo 3 is the absolute protagonist, Veo 2 is updated with new functions. Among them, it premieres much more precise camera controls for traveling and zoom movements, outpainting options to expand the framing (to go from vertical to horizontal video or vice versa), as well as the possibility of adding or removing elements of the video.

Flow arrives. Related to Veo, Imagen and Gemini comes Flow, the new Google tool to create cinematic videos through AI. It is a new work environment to give free rein to our creations with Veo: a video editor with which we can create with both Imagen and Veo. Besides functioning as an editor, it will have a social side. Through Flow we can access Flow TV, a feed in which we will see content, channels and creators who are generating videos with Veo.

Ahead of OpenAI. ChatGPT's creators surprised the world with Sora, their artificial intelligence for generating video from a prompt. The problem? At least as we write these lines, it is not able to generate audio. In December 2024 Google already overtook Sora on the right, showing off the capabilities of Veo 2, which quadrupled the video output resolution of the OpenAI model. It also allowed creating longer videos, with a spectacular "understanding" of physics, something that makes the difference when creating a natural-looking video.

Its rivals. Rival video generators such as Runway, Luma AI or Pika Labs let you add external audio, but in no case do they generate sound when delivering the final video. Google has just pounded the table with Veo 3, keeping first place in the race and further complicating things for giants like OpenAI. For the moment, these functions will be available to Gemini Ultra subscribers in the United States through the Gemini app and Flow, as well as to companies through Vertex AI. Image | Google In Xataka | 14 tools to create free images
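For those who want to experiment through the API rather than the Flow app, video generation is an asynchronous job. Here is a hedged sketch with the google-genai Python SDK: the polling pattern mirrors the SDK's published Veo examples, but the Veo 3 model identifier is our assumption, so verify both against the current documentation.

```python
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Kick off generation; the audio cues live in the prompt itself.
operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",  # assumed Veo 3 identifier
    prompt=(
        "A busy city street at dusk: footsteps, traffic, distant chatter, "
        "and two friends greeting each other with a short line of dialogue."
    ),
)

# Video generation is a long-running operation, so we poll until it is done.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save the first generated clip.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("street_scene.mp4")
print("Saved street_scene.mp4")
```

Note how the ambient sounds and dialogue are simply described in the prompt: per the article, Veo 3 derives the soundtrack from the same text that drives the visuals, with no separate audio track to supply.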

Google already offered SynthID to mark AI-generated content as such. Now it will help us identify it

When Pope Francis appeared in a striking white puffer coat, many probably believed that the image was real. It was not, but that image, like the one of Donald Trump's arrest, made one thing clear: it is increasingly difficult for us to tell images created by an AI apart from those that are not. How to solve it?

Watermarks. Among the possible solutions there was a fairly obvious one: identifying AI images as generated by AI. That is: every time someone asked ChatGPT for a Studio Ghibli-style image, or asked Grok for one of Bill Gates with a gun, those AI models should include certain metadata in the image file. Basically, it would be like putting a stamp on these images so they can be identified in case of doubt.

Enter SynthID. There are several efforts in that direction, and among them is SynthID. This watermarking technology was presented in 2023, and a year later it was offered as a free tool so that anyone could implement it.

Fighting deepfakes. As the company's representatives point out, since its launch in 2023 SynthID has watermarked more than 10 billion images, videos, audio files and texts. These can be identified as generated by AI, reducing the chances of misinformation and misattribution. In addition, Google stresses that the results generated by Veo 3, Imagen 4 and Lyria 2 will continue to carry SynthID watermarks.

The AI content detector arrives. The company announced during the Google I/O 2025 event the launch of SynthID Detector, a verification portal to help users identify content generated by AI. Its operation is simple: just upload any content (text, image, video, audio), and SynthID Detector will identify whether the entire file, or only a part of it, contains SynthID.

Preparing for the good and the bad of AI. This type of tool can be especially useful for artists and creators, who can thus defend their work as legitimate, but it is above all a useful option that helps protect us from deepfakes and misinformation attempts.

But. As we mentioned months ago, SynthID is a promising proposal with many striking characteristics, but it has a problem: (for the moment) it is not a universal standard. Google must not only use it on its own platforms, but also reach a consensus with the rest of big tech so that there is a single open and interoperable system that solves the watermarking problem universally. In Xataka | Ilya Sutskever's new company has a clear objective: to create a superintelligence with "nuclear" security
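The free tool mentioned above is SynthID Text, which Google open-sourced and upstreamed into Hugging Face transformers. A minimal sketch of generating watermarked text follows; the watermark keys are made-up example values, the model choice is arbitrary, and the API is the one we understand recent transformers releases (4.46+) to expose, so double-check against the library's documentation.

```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

model_name = "google/gemma-2-2b-it"  # any causal LM works; this is one example
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The watermark is keyed: only a detector sharing these keys can verify it.
watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # illustrative keys
    ngram_len=5,
)

inputs = tokenizer("Write a short note about watermarks.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    watermarking_config=watermarking_config,  # subtly biases token choices
    do_sample=True,       # the watermark works by nudging the sampling step
    max_new_tokens=100,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The resulting text reads normally; the mark lives in statistically skewed token choices that a matching keyed detector can later pick up, which is exactly the upload-and-verify flow the SynthID Detector portal offers for Google's own models.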

Smart glasses do not have to be a hulking contraption. Google has it clearer than ever

Meta is showing the market the way on what smart glasses should be with its Ray-Ban collaboration. And Google, which has spent years developing projects, from the Cardboard VR glasses to a Vision Pro-style device (its project with Samsung and Android XR), seems to have realized what the winning format is. At Google I/O 2025 it showed us what is new in Android XR. Far from helmets, far from devices we can only use at home for a few hours because of their weight and dimensions. Its words make it clear: "We know that glasses can only be really useful if you want to wear them all day."

That was the key. The small format. Glasses should be glasses. Everything else is headsets or augmented/mixed-reality devices for more complex use scenarios: games, video projection or, why not, completely replacing the workstation. Meta has done it well this year with its Ray-Ban Meta, glasses that to the naked eye look completely normal. Only up close do the camera system, the LED and the thickness of the temples give away that we are looking at an electronic device.

Gemini sees everything. The focus of this I/O has been clear: Gemini wants to be able to see everything. On both Android and iOS, it will be able to see both what the camera shows and the content shared through our screen. The only things it needs are "eyes and ears", so nothing prevents taking this concept to smart glasses.

Not only headsets. Until now, mixed-reality headsets seemed like the only devices capable of offering capabilities beyond the basics: calls and photo/video capture. Google has just demonstrated otherwise at I/O. Google wants to integrate Gemini into traditional-format glasses, so that the assistant is right in our field of view. The goal? To free ourselves from the phone and interact with the environment in a much more natural way.

Ten years of work. According to Google, the company has been working on this type of smart-glasses concept for more than a decade. It has only become possible now, with Android XR. Camera, microphone, speakers and a phone connection: that is all Android XR needs to work together with our smartphone and access its applications. Likewise, the most advanced devices will fit a display into the lens to show the user information privately.

What can be done. Paired with the phone, the glasses can already work with Gemini. The idea is clear and Google does not hide it: "It sees and hears the same as you." With Android XR on glasses you can send messages, ask for Google Maps directions, take photos that will be stored in Google Photos, and even translate a conversation in real time.

With whom. The success of Android XR on glasses will depend on Google's ability to convince its partners. As of today, the company announces collaborations with Gentle Monster and Warby Parker to create stylish glasses with Android XR. In the future, it hopes to collaborate with more partners. Likewise, it states that the collaboration with Samsung will go beyond headsets: the goal is to bring Android XR to traditional-format glasses. Image | Xataka In Xataka | Xiaomi's glasses are being a brutal success in China. And they have not even gone on sale yet

Google has just announced the closest thing to science-fiction holography: Project Beam

Video calls have been useful for years. A solution that works, albeit with obvious limitations. Seeing and hearing the other person is fine, but the sensation of real closeness is still far off. Google has been trying to solve that problem for some time, and has now decided to take a more determined step to achieve it. That step is called Beam. It is the new name of a technology we already knew as Project Starline, an experimental proposal that sought to recreate the experience of a three-dimensional face-to-face conversation, and which we had the opportunity to try last year. Now that idea evolves into a platform. Beam is born as a communications system designed to integrate into real environments, supported by Google Cloud infrastructure and enhanced with advanced artificial intelligence models.

A conversation with volume, not just with image. The key to Google Beam is its volumetric video model: an AI-based system that transforms a 2D video signal into a realistic three-dimensional representation, visible from any angle. Combined with a light-field display, it achieves a sense of depth that allows maintaining eye contact, reading expressions and generating more natural communication. According to Google, this helps build trust and understanding, as if the conversation were face to face. The company's declared objective is to create more meaningful connections between people, wherever they are. To achieve this, Beam relies on two fundamental pillars: the reliability and scalability of Google Cloud, and its accumulated experience in artificial intelligence (AI). Everything is designed to integrate without friction into existing workflows.

Real-time translation without giving up naturalness. Beam does not focus only on the image. It also wants to facilitate understanding. One of the most striking functions is real-time voice translation, available today in Google Meet. It allows a fluid conversation between people who speak different languages, preserving the tone, cadence and expressions of each speaker. The result is a more natural conversation, where the technology is noticed less and the connection between people, more. For Google, this functionality is just the beginning. Its long-term vision is clear: to ensure that anyone, anywhere in the world, can be seen and understood with total clarity.

Beam comes to work. For now, Beam targets the professional environment. Google has announced an agreement with HP to launch the first compatible devices, which will reach selected customers this year. It should be noted that it does not work with just any setup: these devices will carry several cameras to capture the subject from different angles. In addition, the company is collaborating with companies such as Zoom, Diversified and AVI-SPL to integrate this technology into different corporate environments. Large organizations have already shown interest, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. Deloitte, for example, emphasizes that Beam is not only a technological advance, but a way of rethinking how we connect in the digital age.

A clear promise. Being there without being there. This is Beam's central idea. It is not just a technical improvement, but an evolution in the way we communicate. Beam wants talking to someone at a distance to feel not like a video call, but like a face-to-face conversation.
Images | Google In Xataka | Google has put a price on the future of AI: $250 per month

Google has put a price on the future of AI: $250 per month

It has not been the first, but it has been the most forceful. OpenAI took the first step in the autumn, asking $200 a month for ChatGPT Pro. Google has responded by raising the bet: $250 for its Ultra plan. A subscription that does not just monetize capabilities. It also marks hierarchies. Do you want the most capable one? Pay.

And that is not a figure of speech. At that price you get access not just to a smarter chatbot, but to the hard core of the near future:

- Deep Think, the new reasoning mode of Gemini 2.5 Pro.
- Preferential access to video and audio generation tools (Veo 3, Imagen 4).
- Project Mariner: agents that understand, plan, act, execute.
- Flow, the cinematic creation tool with camera control and 1080p video generation.
- Whisk Animate, a tool to turn images into 8-second animated videos.
- NotebookLM with higher limits and advanced versions of the model.
- Gemini in Chrome, with page context, in early access.
- Gemini integrated into Gmail, Docs, Chrome and Search, with persistent context and priority usage.
- 30 TB of storage across Drive, Photos and Gmail.
- And a YouTube Premium subscription.

Gemini is no longer just conversation. It is the interface of the world. Image: Google. And the most important thing is not what Ultra includes. It is what it leaves out. Google has taken everything that defines this new stage of AI (agency, autonomy, deep reasoning, extended multimodality) and has locked it behind a paywall. In that gesture there is not just a business model. There is vision. Intelligence becomes a product, but also a frontier. By the way, there is a 50% discount for the first three months. And the previous plan is now called 'Google AI Pro'.

On the other side remains Flash. The free (or low-cost) version designed for the majority. Fast, competent, useful. Like a car without a steering wheel. An AI without memory, without tools, without hands. It serves to respond, not to act. It does not create flows, it does not automate anything, it does not think beyond a few seconds. Flash is the promise of democratization that is still kept. Ultra, the real pilot.

Google's move does not surprise; it confirms. The phase of mass access and open experimentation is over. What is being built now is an economy of computational performance. Whoever wants more context, more persistence, more power, will have to pay for it. And soon. Because if 2022 was the year of dazzlement, and 2024 that of the copilots, 2025 will be that of digital classes. And this time the divide will not be technical. It will be economic.

What Google is doing is not just a commercial move. It is structural. It institutionalizes restricted access. If for years knowledge tended to open up (Wikipedia, Google, YouTube, MOOCs), now it begins to fold itself into high-end products. And with it, productivity, creativity and the ability to compete fold up too. The digital elevator keeps going up, but it costs more and more to get on it. Because you are not paying just for technology. You are paying for an advantage that is not seen but that decides: the right to think with more help, in more directions, with less friction. The right to automate before the others. To have an assistant that does know how to program, that does understand video, that does remember, that does act.

As always, intelligence (human or artificial) tends to concentrate where capital accumulates. And the rest? The average user is on the other shore. With an AI that responds but does not decide. That assists but does not anticipate. That summarizes but does not build.
That is the new gap: not between those who use AI and those who do not, but between those who have it working in their favor and those who only see it from behind a pane of glass. Google has made official what OpenAI had already hinted at: access to automated knowledge will have a price, a threshold and an owner. And if the future is made of intelligent interfaces, reasoning engines and agents that execute for us, then the future costs $250 per month. In Xataka | The new big generative AI models keep getting delayed. It is a dangerous sign that we have hit the ceiling. Featured image | Solen Feyissa on Unsplash
