AI Mode, Beam, Veo 3, the Project Aura glasses, Jules and everything else presented at a Google I/O 2025 loaded with ambition

Google has brought out the heavy artillery in the war to lead the development, and the business, of artificial intelligence. The American giant did so at its annual developer conference, which this year was above all a show of strength, a showcase for some of its most innovative advances. Below, we review all the products Google presented this Tuesday, May 20. If you want to dig deeper into any of them, next to each name you will find a link with all the information.

Gemini Ultra, Veo 3 and Imagen 4: a subscription for those who want everything. Gemini Ultra is Google's new, most complete artificial intelligence subscription. It costs $249.99 per month and, for now, is only available in the United States. It includes access to tools such as the Veo 3 video generator, the Flow editing app and the Deep Think mode of Gemini 2.5 Pro, which has not yet been officially released. Subscribers also get improvements in NotebookLM and Whisk, up to 30 TB of cloud storage, YouTube Premium and access to the Gemini chatbot directly from Chrome. Some of the most advanced functions are powered by Project Mariner technology, which provides agentic capabilities.

Deep Think: an AI that takes its time to answer better. Deep Think is a new reasoning mode for the Gemini 2.5 Pro model that lets the AI consider several possible answers before settling on one. It aims to improve accuracy on complex tasks and advanced benchmarks. For now it is only available to a small test group through the Gemini API; Google says it is carrying out safety evaluations before launching it publicly.

AI Mode and Search Live: this is how Google wants to redesign Search. AI Mode is a new experimental feature for Google Search that lets you ask complex, multi-part questions from an AI-based interface. It launches this week in the United States and can handle sports and financial data and offer options such as virtual try-on. Over the summer, Search Live will arrive: a feature that will let you ask questions based on what the phone's camera detects in real time.

Gemini in Chrome and detection of synthetic content. Gemini is being integrated into Chrome as a browsing assistant to help understand the content of web pages and execute tasks. Gmail gains personalized smart replies and a new function to clean up the inbox. Google has also launched SynthID Detector, a verification system that uses invisible watermarks to identify AI-generated content.

Beam: 3D video calls with simultaneous translation. Beam, formerly known as Project Starline, turns video calls into almost face-to-face conversations thanks to a six-camera array and a light field display. It offers millimeter-accurate head tracking and video at 60 frames per second, and includes real-time translation in Google Meet that preserves the voice, tone and expressions of the original speaker.

Jules: Google's agent for programming without touching the keyboard. Jules is Google's new assisted-programming agent, designed to compete with platforms such as Cursor, Windsurf or Codex. It can generate tests, update dependencies, write changelogs in audio and fix bugs while the user keeps working on other things. It works without plugins or extra installs and is available in public beta for US users.

Android, also present at the event. Android premieres new tools to find lost phones and objects, plus a new design language called Material 3 Expressive. Google also showed its new glasses live, developed with Xreal and based on Android XR. The demo included voice interaction with Gemini, simultaneous translation and real-time information overlays. The project is called Project Aura and seeks to bring Android to the XR world with a practical approach.
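For developers, access to models like Gemini 2.5 Pro (the gate behind which Deep Think currently sits) goes through the Gemini API's generateContent endpoint. Below is a minimal sketch of what such a request looks like; it only builds the JSON payload rather than sending it, the model name is an assumption, and no public flag for enabling Deep Think has been documented yet:

```python
import json

# Build the JSON body for a Gemini API generateContent request.
# The endpoint shape follows the public Generative Language REST API;
# "gemini-2.5-pro" is assumed here, and Deep Think itself has no
# publicly documented toggle yet, so none is included.
MODEL = "gemini-2.5-pro"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

payload = {
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "Compare two proofs of the AM-GM inequality."}],
        }
    ]
}

body = json.dumps(payload)
print(ENDPOINT)
print(body)
```

Actually sending the request would require an API key (for example in the `x-goog-api-key` header); the point here is only the payload shape.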
Images | Google
In Xataka | Smart glasses don't have to be a hulking contraption. Google has it clearer than ever

Search as we knew it is over. Google's AI Mode no longer delivers results: it converses

In recent times, Perplexity has done something that seemed unthinkable: make Google Search feel old. With its conversational interface, direct answers, linked references and a relentless update pace, it has shown that a different kind of search engine is not only possible, but desirable. It is faster, clearer, more useful. And increasingly popular. Many people already keep it as a pinned tab.

Google has taken note. And it has responded. The most important announcement of I/O 2025, although somewhat camouflaged, was not a new model or an ultra-intelligent agent. It was this: AI Mode arrives in Search and Gemini. In other words: Google has begun to transform its search engine into something that looks a lot like Perplexity. For now it is only for users in the United States (argh), but the direction is clear.

When AI Mode is activated in the Gemini app, the user stops doing classic-style searches and starts receiving direct answers generated by the model, with links to sources, relevant context and the ability to go further: compare, ask for explanations, keep asking. The search engine no longer delivers lists of blue links, not even a summary on top. It delivers conversation.

Seen this way, Gemini is not just a conversational model. It is an active knowledge engine, a synthesis of LLM, browser and assistant, with the ambition of replacing the habit of "googling" with that of "asking." You can search for flights, understand documents, ask for contrasting opinions or compare articles. And all of that without touching an external page.

This is not the generative results we saw arrive in Spain a couple of months ago. This goes much further. Those were generative answers placed on top of classic results. AI Mode is something else: it is more Perplexity, more direct, more useful. And more dangerous for the web ecosystem. Because here is the turn nobody should overlook: in Perplexity, at least for now, the sources are visible, prominent and central to the experience.
In AI Mode, on the other hand, the ambition seems different: to answer so much and so well that the user never feels the need to leave. A closed, polished, self-sufficient experience. That changes things. Not only for the user, who may stop distinguishing between answer and source. Also for media outlets, creators, forums and specialists: everything that today feeds Gemini from the web becomes less visible in the process. Knowledge is preserved, but it loses authorship along the way.

On the surface, Perplexity forced Google to move forward. But in doing so, Google has changed certain rules. It has taken what works (the synthesis, the natural language, the speed) and integrated it into a broader, more fluid, also more opaque ecosystem. If Perplexity was a pioneer in experience, Google now counterattacks with total integration.

That is why AI Mode in Gemini is not just a technical novelty. It is a paradigm shift in how we search, how we read, how we inform ourselves. The user no longer consults a database: they interact with a system that interprets, selects, synthesizes and responds. Google has seen where search is heading. And it has decided to move. But in its own style.

In Xataka | Google has put a price on the future of AI: $250 per month
Featured image | Google

What it is, how it is used and what you can do with this Google artificial intelligence mode

Let's explain what Gemini Live is, a mode of Google's artificial intelligence that acts as an assistant. It is the most direct way to interact with Google's AI, and it has several interesting functions, including one that lets it see what you have in front of you. We will start by explaining what exactly Gemini Live is and how this function differs from the other AI ones, and then we will explain what you can do with it.

What is Gemini Live. Gemini Live is a Gemini function in which you can speak normally with the artificial intelligence. You enter a screen where Gemini is always listening: you talk out loud, the AI responds automatically, and it then keeps listening in case you continue talking.

The normal Gemini mode is like a textual chat. You can write or send voice messages, but the interaction happens in distinct turns: you send a question, you get the answer, you can attach things, and so on. Gemini Live, in contrast, is like having a natural conversation. There is no button to send your query; you simply speak naturally, and when Gemini detects that you have finished speaking, it generates the answer. It is, in short, more like having a human assistant.

This lets you hold more natural, "normal" conversations for as long as you keep the function active. You won't even have to touch the phone: you can set it down and do other things while conversing with it.

Gemini Live is available for personal accounts, both free and paid, although it is in Gemini Advanced where you get more functions and a higher interaction limit. Business accounts, even paid ones, still have no access.

How to use Gemini Live. To use Gemini Live, tap its icon to the right of the app's writing field: a button with three vertical bars and a star. This opens the Live screen. Once there, you don't have to do anything else: just talk and ask whatever you want in a natural way. Gemini Live will understand your tone, what you are asking and when you stop talking, and that is when it generates the answers. Below you have the controls, with a pause button and another to leave Live. Some users are already receiving two more options: the first, with a camera icon, activates the camera so Gemini Live can see what is in front of you; the second lets you share your screen with Gemini.

What you can do with Gemini Live. With Gemini Live you can ask the same questions you would ask the normal Gemini, only the interaction is more natural and "human." You ask a question about whatever you want, and Gemini answers. Then you can keep asking. As if it were a person, Gemini remembers what you have talked about so far and uses it as context: you don't need to repeat what you asked before in order to ask a different question on the same topic, since it remembers the conversation within the session.

In cases where the answer requires generating text or code, Live will take you to the normal Gemini screen to show you the answer there. Otherwise, almost everything works the same.

If you use the camera option, you can ask about things related to what it is seeing. This is very similar to sending Gemini a photo and asking questions about it; the difference is that it happens in real time: you don't have to take the picture or send it, you just activate the camera and let it look.

And if you decide to share your screen with Gemini Live, the AI can see what is on your phone's screen and you can ask questions about it. For example, if an application shows you something, you can ask about it.
In Xataka Basics | How to use Gemini to check your Gmail email: configuration and what you can do

The phenomenon of the year on TikTok Spain is an influencer who dresses like a nineteenth-century maiden

The phenomenon of Inés de Robles (better known as Inesdrobles) is peculiar for very different reasons. On the one hand, she is a fashion TikToker who nevertheless stays faithful to a style that cannot even be described as vintage, but rather embraces the old-fashioned and the unflashy as a mark of identity. On the other, there is what her commenters have built around her. And now she is riding the wave of TikToker fame.

The template. Inés de Robles' videos are always the same, which has undoubtedly helped her establish a defined style. She never speaks; she simply poses to unusually current background music (Quevedo, Mar Lucas, J Balvin, Ozuna… the playlist of a young woman of mainstream tastes), which she accompanies with often dreadful lip-syncs. Dressed in vintage clothes that sometimes verge on the directly last-century (although, as we will see, not quite), she always makes a characteristic gesture: she bends one foot up to the knee of the opposite leg, then stretches it out and rests it on the ground as if it were a ballet step.

The comments. What has made her go viral, however, are her commenters: with a very gentle, entirely inoffensive sense of humor, they joke about the outdated aesthetics of the videos. "She doesn't do history exams, she does storytimes," "I've scrolled so far down I've reached the Renaissance," "The lady's phonograph?", "This video reminded me of the summer of 1874," "Is that the freshly signed Treaty of Tordesillas?" The result: 300,000 likes and several thousand comments.

Fame (and promotions) arrive. Fame is knocking on Inés de Robles' door in the form of celebrity mingling, with collaborations in which other people pose with her and imitate her famous foot gesture. Among them: Beéle, Violeta Mangriñán, Omar Montes and even Iker Casillas.
And of course, promotions have arrived, some more incongruous (Grefusa!, a futuristic optician's) and others more fitting (Carolina Herrera, an online print shop where Gutenberg's Bible would not look out of place). Her last nine videos, all from April, are guest appearances or paid promotions.

An inimitable appeal. The result of this whole mixture is a fascinating account, since Inés de Robles never speaks, which makes her look like a young woman trapped in a bubble. The curious use of artists as incongruous as La Zowi or Bad Gyal in the background of her videos contrasts with the video descriptions, halfway between naivety ("enjoying the sunset," "excursion day") and the consciously old-fashioned ("appointment for Thursday tea," "errands Tuesday"). Either it is a very careful performance or one of the last traces of spontaneity left on TikTok. And if so… what exactly is she trying to tell us?

@inesdrobles Recording some tracks to the rhythm of @beéle 🤘🎶 #inesdrobles #classicgeneration ♬ Sobloove – Beéle

Inés: origins. To find the answer you only have to go back to the beginning of her account, not that far back in time (July of last year), where we see elements such as fashion and trap music on display, but along unquestionably more modern lines, accompanied by her two sisters. Some photos linked to rhythmic gymnastics also make clear where the famous and enigmatic leg gesture comes from. The outfits, especially the latest ones, look straight out of a period drama, perhaps to keep up the joke of the time traveler.

The Old Money style. Since absolutely everything can be categorized, Inés de Robles' aesthetic can be framed within a recent trend known as Old Money, a style with points of contact with preppy, "cayetana" and posh fashions, inspired by the lifestyle of wealthy American families and their displays of luxurious leisure: golf, horse riding or tennis. Pleated skirts, polo shirts, pearls… everything that implies a social category you don't simply enter, but that has run in the family for generations (hence "old money," a not especially fortunate literal translation). Of course, here the account's witty comments have twisted the concept and pushed Inés to dress in clothes that go beyond the merely aristocratic and into the directly nineteenth-century.

Header | TikTok
In Xataka | We have spent years arriving at airports hours early. TikTok's "airport theory" believes it has all been a mistake

It has rained 143% more than normal and Spanish agriculture has suddenly entered crisis mode. It is not short of reasons

After more than 20 days of continuous rainfall and four huge high-impact storms, the soils of the peninsula are practically saturated. And it should come as no surprise: the amount of water that has fallen is 143% above normal. And although that may sound like good news, it doesn't always rain to everyone's taste.

Isn't it good news? At the farming and livestock level, there are many operations for which this festival of storms has been very good: vineyards, olive groves, nut trees, dryland cereals and, in general, livestock that feeds on pasture. But the countryside is much bigger than that.

Andalusia is a good example. In the province of Seville alone, sunflower, chickpea, pea, cabbage and grelo crops have been affected. And there are more: the red fruits of Huelva; the pepper, cucumber, watermelon and melon of Almería; the lettuce, broccoli and cauliflower of Murcia. Granada's asparagus will join the list if the situation persists.

But why? A concatenation of factors, of course. The floods have drowned many sown fields (with "root asphyxiation, rot and fungal problems"); low temperatures are slowing the development of (when not outright damaging) numerous crops; and the lack of labor (or the impossibility of working the land) is preventing necessary fieldwork, or even the harvest itself.

In figures, according to Noticias Cuatro, "farmers are finding only 15% of production." So much so that a few days ago Andalusian farmers were sighing for just 15 days of sun.

And what happens now? If everything goes well (that is, as expected), nothing should happen. The situation has been damaging, but if the weather returns to normal and there are no water restrictions, the campaign can still be saved. If the "anomalous" situation drags on, we will have a problem. Either way, it is very possible that we will notice the hit at the supermarket. A couple of years ago, Europe ran out of red peppers because of a cold snap. What has happened now is very similar.
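One note on the headline figure: "143% more than normal" means the accumulated rainfall is the normal amount plus an extra 143% of it, i.e. roughly 2.4 times the usual total, not 1.43 times. A minimal sketch of that arithmetic, using a hypothetical 100 mm baseline (real climatological normals vary by station):

```python
# Hypothetical 20-day normal accumulation, in millimetres.
normal_mm = 100.0

# "143% more than normal" = the normal amount plus an extra 143% of it.
observed_mm = normal_mm * (1 + 1.43)

print(observed_mm)  # 243.0, i.e. about 2.4x the usual rainfall
```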
Right now, there are a dozen products flirting with supply shortages. As if climate change did not pose enough problems already, it now insists on scrambling the seasons. And that, of course, is a challenge for one of Spain's key sectors.

Image | Markus Winkler | Chandler Cruttenden
In Xataka | "Not a single crop is saved": Spain is about to discover first-hand the effects of water scarcity

OpenAI brings its 'Live Camera' mode, which sees the world in real time, to the EU

"I see that you're wearing a hoodie." That is a fragment of a conversation between ChatGPT and an OpenAI employee just over eight months ago. The artificial intelligence (AI) company led by Sam Altman was presenting its new GPT-4o model and showing us how it could improve its famous chatbot. At some point, ChatGPT would be able to use our device's camera to see its surroundings and, consequently, offer a deeper experience. Something like what we saw in the movie 'Her', with due distance. This mode landed first in the United States, and now it is rolling out in the European Union.

Hello, ChatGPT. What do you see? The novelty, known as 'Live Camera', is a complement to ChatGPT's Advanced Voice mode, a very promising combination of technologies. In other demonstrations shared by the company we have seen the chatbot meet a puppy named Bowser and help some users learn Spanish. It is also a tool that can be very useful for people living with blindness or visual impairment: since the vision mode can identify objects, it can describe in natural language what is in front of the camera. The underlying technology, GPT-4o, also powers the Be My Eyes app.

ChatGPT's real-time vision. It is worth noting that the feature being rolled out now also lets us share what is on our screen with ChatGPT, so the model can help us solve, for example, math problems, as we saw in other videos OpenAI shared last year during the tool's initial announcement.

The activation icon and the welcome screen of the Live Camera mode

Now that we know everything we can do with the latest from ChatGPT, the question is how to start using it. First of all, keep in mind that it is a paid feature: it is available to ChatGPT Plus ($20 per month) and ChatGPT Pro ($200 per month) users.

Theodore falls in love with Samantha in 'Her', a movie released in 2014

If we meet that requirement, we just have to make sure the app is up to date. We open the application and tap the button in the lower right corner to activate Advanced Voice mode. There we will find the camera button at the bottom left, which we press to get started. ChatGPT will remind us that the option is in beta, and the system will likely ask for permission to access the camera. If we agree, we accept and start interacting with this more advanced version of the chatbot. To share the screen in real time, tap the three dots and then the corresponding option.

Keep in mind that paying for ChatGPT Plus does not grant unlimited access to these functions, so we will have to adapt to the daily usage limits set by OpenAI or, if our budget allows it, pay $200 per month for ChatGPT Pro to use them without limitations.

Images | OpenAI
In Xataka | DeepSeek, in the spotlight of European regulators: Italy and Ireland act on privacy concerns
