What's new in Google's new artificial intelligence model?

Let's go over the main news about Gemini 3, the new version of Google's artificial intelligence model, which has just been announced. We already have the first data on its main characteristics. As always, the spotlight goes to Gemini 3 Pro, which will be the most advanced version. One thing worth knowing up front: you will notice few of these changes if you use Gemini in a conventional way, since most of them are aimed at advanced users.

These are the Gemini 3 highlights:

A step forward across the board: Google has presented its model's results in various types of benchmarks, comparing it with the previous version and its direct competitors. It comes out ahead of everyone in everything, from mathematics to understanding what is happening on screen or writing code.

Reasons "at the doctoral level": That is what the test results indicate, although where it advances most is in mathematics, with a score of 23.4% on the MathArena Apex test compared to 1.0% for GPT-5.1 and 1.6% for Claude Sonnet 4.5.

Integrates with Google Search: Gemini 3 is linked to Google's AI Mode, integrating directly into the search engine.

Generates visual elements: Gemini 3 can create interactive visual elements, such as calculators, simulations or widgets, in real time. This is especially useful when it is integrated into the search engine: sometimes it may respond not with text but with an interactive web app.

More direct answers: Google has fine-tuned the way its model responds, offering more concise answers that carry more valuable information and less flattery and fewer clichés.

Improvements in Deep Think: Another of the most notable improvements is in deep reasoning, along with advances in code execution, abstract reasoning and visual understanding.

Larger context window: The model has a context window of up to one million tokens, so it can analyze large code repositories or very long texts that you can then work with.
Better contextual reasoning: Reasoning is improved, especially over long contexts, to reduce hallucinations.

Parallel reasoning improvements: Its ability to reason over visual and textual data at the same time is improved, increasing accuracy when interpreting tables, diagrams and interfaces.

Improvements in multimodal mode: The analysis of all types of information is improved. For example, it can decipher and even translate handwritten recipes in different languages and use them to create a cookbook you can share. It can also analyze sports matches, scrutinize research data and generate code from it.

Programming improvements: As we said at the beginning, one of the biggest improvements in this model is its ability to program.

Improved agent mode: Its ability to use tools and operate a computer through the terminal in agent mode has also been improved. Agents built with Gemini can now autonomously plan and carry out more complex software tasks.

Gemini 3 will begin to roll out in the coming days, although, as we said at the beginning, you may not notice many of the differences unless you try to take advantage of them in an advanced way.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
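To get a feel for what the one-million-token context window mentioned above means in practice, here is a rough estimate. The 4-characters-per-token ratio is a common rule of thumb for English text, not an exact figure for Gemini's tokenizer, and the average file size is an assumption of ours:

```python
# Rough illustration of a one-million-token context window.
# CHARS_PER_TOKEN is a heuristic, not Gemini's actual tokenizer ratio.

CHARS_PER_TOKEN = 4          # common rule of thumb for English text
CONTEXT_TOKENS = 1_000_000   # Gemini 3's advertised context window

max_chars = CONTEXT_TOKENS * CHARS_PER_TOKEN

# Assume a typical source file is ~3,000 characters (~60-80 lines):
files_that_fit = max_chars // 3_000
print(f"~{max_chars:,} characters, roughly {files_that_fit:,} average source files")
```

Under these assumptions, well over a thousand source files fit in a single request, which is why "large code repositories" is a plausible claim.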

AI data centers consume too much energy. Google’s ‘moonshot’ plan is to take them to space

Training models like ChatGPT, Gemini or Claude requires ever more electricity and water, to the point that AI's energy consumption threatens to exceed that of entire countries. Data centers have become real resource sinks. According to estimates by the International Energy Agency, the electricity consumption of data centers could double before 2030, driven by the explosion of generative AI. Faced with this outlook, technology giants are desperately looking for alternatives. And Google believes it has found something straight out of science fiction: sending its artificial intelligence chips into space.

Conquering space. Google has revealed Project Suncatcher, an ambitious experiment that sounds like science fiction: placing its TPUs, the chips that power its artificial intelligence, on solar-powered satellites. The chosen orbit, sun-synchronous, guarantees almost constant sunlight. In theory, panels there could work 24 hours a day and be up to eight times more efficient than the ones we have on Earth. Google plans to test the technology with two prototype satellites before 2027, in a joint mission with the company Planet. The objective will be to check whether its chips and communication systems can survive the space environment and, above all, whether it is feasible to perform AI calculations in orbit.

The engineering behind the idea. Although it sounds like science fiction, the project has a solid scientific basis. Google proposes building constellations of small satellites, dozens or even hundreds, orbiting in compact formation at an altitude of about 650 kilometers. Each would carry Trillium TPU chips on board, connected to each other by optical laser links. These light beams would allow the satellites to "talk" to each other at speeds of up to tens of terabits per second, an essential capability for processing AI tasks in a distributed manner, as a terrestrial data center would.
The technical challenge is enormous: at these distances, the optical signal weakens quickly. To compensate, the satellites would have to fly just a few hundred meters apart. According to Google's own study, keeping them so close will require precise maneuvering, but calculations suggest that small orbit adjustments would be enough to keep the formation stable. In addition, engineers have already tested the radiation resistance of their chips. In an experiment with a 67 MeV proton beam, Trillium TPUs safely withstood a dose three times higher than they would receive during a five-year mission in low orbit. "They are surprisingly robust for space applications," the company concludes in its preliminary report.

The great challenge: making it profitable. Beyond the technical problems, the economic challenge is what is in the spotlight. According to calculations cited by The Guardian and Ars Technica, if the launch price falls below $200 per kilogram by the mid-2030s, an orbital data center could become economically comparable to a terrestrial one. The comparison is made in terms of energy cost per kilowatt per year. "Our analysis shows that space data centers are not limited by physics or insurmountable economic barriers," says the Google team. In space, solar energy is practically unlimited: a panel can produce up to eight times more than on the Earth's surface and generate almost continuous electricity. That would eliminate the need for huge batteries or water-based cooling systems, one of the biggest environmental problems of today's data centers. However, not everything shines in a vacuum. As The Guardian recalls, each launch emits hundreds of tons of CO₂, and astronomers warn that the growing number of satellites "is like looking at the universe through a windshield full of insects." Furthermore, flying such compact constellations increases the risk of collisions and space debris, an already worrying threat in low orbit.

A race to conquer the sky.
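The energy-cost-per-kilowatt-per-year comparison can be sketched with a back-of-the-envelope calculation. Only the $200/kg threshold comes from the article; the satellite mass per kilowatt, the mission lifetime and the terrestrial electricity price are illustrative assumptions of ours, not Google's figures:

```python
# Back-of-the-envelope sketch of the launch-cost break-even discussed above.
# Only the $200/kg figure is from the article; everything else is assumed.

def orbital_cost_per_kw_year(launch_price_per_kg: float,
                             kg_per_kw: float = 10.0,    # assumed satellite mass per kW
                             lifetime_years: float = 5.0) -> float:
    """Launch cost amortized per kilowatt of capacity per year of service."""
    return launch_price_per_kg * kg_per_kw / lifetime_years

# At the ~$200/kg threshold cited in the article:
orbital = orbital_cost_per_kw_year(200.0)      # $400 per kW-year
# Illustrative terrestrial cost: $0.10/kWh running around the clock
terrestrial = 0.10 * 24 * 365                  # $876 per kW-year

print(f"orbital launch amortization: ${orbital:,.0f} per kW-year")
print(f"terrestrial electricity:     ${terrestrial:,.0f} per kW-year")
```

Under these assumed numbers the orbital amortization drops below the terrestrial electricity bill, which is the kind of crossover the mid-2030s projection points at; the real analysis naturally involves many more cost terms.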
Google's announcement comes in the midst of a fever for space data centers; it is not the only company looking up. Elon Musk recently said that SpaceX plans to scale its Starlink satellite network, already more than 10,000 units strong, to create its own data centers in orbit. "It will be enough to scale the Starlink V3 satellites, which have high-speed laser links. SpaceX is going to do it," Musk wrote on X. For his part, Jeff Bezos, founder of Amazon and Blue Origin, predicted during Italian Tech Week that we will see "giant AI training clusters" in space in the next 10 to 20 years. In his vision, these centers would be more efficient and sustainable than terrestrial ones: "We will take advantage of solar energy 24 hours a day, without clouds or night cycles." Another unexpected actor is Eric Schmidt, former CEO of Google, who bought the rocket company Relativity Space precisely to move in that direction. "Data centers will require tens of additional gigawatts in a few years. Taking them off the Earth may be a necessity, not an option," Schmidt warned in a hearing before the US Congress. And Nvidia, the AI chip giant, also wants to try its luck: the startup Starcloud, backed by its Inception program, will launch the first H100 GPU into space this month to test a small orbital cluster. Its ultimate goal: a 5-gigawatt data center orbiting the Earth.

The new battlefield. The Google project is still in the research phase. There are no prototypes in orbit and no guarantee that there will be any soon. But the mere fact that a company of such caliber has published orbital models, radiation calculations and optical communication tests shows that the concept has already moved from the realm of speculation to that of applied engineering. The project inherits the philosophy of the company's other moonshots, like Waymo's self-driving cars or its quantum computers: explore impossible ideas until they stop being impossible.
The future of computing may not be underground or in huge industrial warehouses, but in swarms of satellites shining in the permanent sun of space.

Image | Google

In Xataka | While Silicon Valley seeks electricity, China subsidizes it: this is how it wants to win the AI war

These are the usage limits of Google's artificial intelligence in its free and paid versions

Let's go over Gemini's usage limits, as announced by Google itself. With this, you can know how much you can use each of its functions depending on whether you are on the free version of Gemini or on one of the paid tiers. That covers everything from the requests you can make per day to Gemini 2.5 Pro, to the Deep Research reports or the image generation and editing available to you, because how many times you can do each thing depends on which version of this artificial intelligence you have.

These are Gemini's limits, for free Gemini, Gemini with Google AI Pro and Gemini with Google AI Ultra:

Gemini 2.5 Pro requests: up to 5 per day (free), up to 100 per day (Pro), up to 500 per day (Ultra).
Gemini 2.5 Flash: unlimited use on all three tiers.
Context size: 32,000 tokens (free), 1 million tokens (Pro and Ultra).
Audio summaries: up to 20 audios per day on all three tiers.
Deep Research: up to 5 reports per month with 2.5 Flash (free), up to 20 reports per month with 2.5 Pro (Pro), up to 200 reports per month with 2.5 Pro (Ultra).
Deep Think: not available on free or Pro; up to 10 requests per day with a 192,000-token context (Ultra).
Image generation and editing: up to 100 images per day (free), up to 1,000 images per day (Pro and Ultra).
Scheduled actions: not available on free; limited access (Pro and Ultra).
Video generation: not available on free; up to 3 videos per day with Veo 3 Fast (Pro), up to 5 videos per day with Veo 3 (Ultra).
Early access to features: none on free; priority access to some new functions (Pro and Ultra).
Price: free, €21.99/month (Pro), €274.99/month (Ultra).

As you can see, there are some functions you cannot find in the free version of Google Gemini, such as scheduled actions, video generation with Veo 3 or the Deep Think functions. Nor will you receive new functions as soon as you would with a paid version.
But there is also good news: Google is quite generous with image generation and editing. Free users can create or edit up to 100 images a day, which gives a lot of room to play with the functions where Gemini stands out. For the rest, as expected, the free version has a smaller context window, that is, less capacity to remember things when responding to you, although it is enough for most tasks. Audio summaries are the same on all tiers.

In Xataka Basics | Gemini image editor: 16 ways and tricks to squeeze the most out of Nano Banana with Google's artificial intelligence
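The per-day limits listed above can be expressed as simple data if you want to keep track of your own usage. This is an illustrative sketch of ours, not any Google API; the tier and feature names are our own labels, while the numbers come from the list:

```python
# Daily limits from the article, encoded as data. Structure and names are
# illustrative; only the numeric quotas come from Google's announcement.

LIMITS = {
    "free":  {"pro_requests": 5,   "images": 100,  "videos": 0},
    "pro":   {"pro_requests": 100, "images": 1000, "videos": 3},
    "ultra": {"pro_requests": 500, "images": 1000, "videos": 5},
}

def remaining(tier: str, feature: str, used_today: int) -> int:
    """Quota left today for a feature on a given tier (never negative)."""
    return max(LIMITS[tier][feature] - used_today, 0)

print(remaining("free", "images", 40))   # 60 image edits left on the free tier
print(remaining("pro", "videos", 3))     # 0: the daily Veo 3 Fast quota is used up
```

The `max(..., 0)` clamp just keeps the answer sensible if you have already gone over a limit.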

16 ways and tricks to squeeze the most out of Nano Banana with Google's artificial intelligence

Let's give you a few tips to squeeze the most out of Gemini's image editing, now that it integrates the Nano Banana model for free for all users. This is the new evolutionary leap of this artificial intelligence, with which you can create all kinds of edits without using any external app. Many of these things, from improving colors to restoring photos or changing elements, used to require dedicated apps, which were sometimes paid. Now you can do all this and much more for free with Google's AI. We are going to walk you through its main possibilities, and you can even combine them to get the results you are looking for. For each of these functions we will offer an example prompt and explain what you can do with it. Remember that these prompts, or commands for the AI, are just a starting point that you can customize to your needs.

Edit and improve photos. You no longer need Photoshop, because Gemini lets you improve many aspects of a photograph. You can ask it to increase the contrast, enhance the colors, or make the photo richer. You can also make more specific requests about technical editing aspects.

EXAMPLE PROMPT: This photo is a bit dull. I want you to increase the contrast and boost the colors.

Apply filters to photos. Do you remember those apps that let you apply filters to photos? Well, Gemini has made them history, because you can apply any filter to a photo just by asking for it and specifying the type of filter you want. It doesn't matter if you want the photo in black and white, looking like a drawing or done in charcoal: Gemini will apply it.

EXAMPLE PROMPT: I want you to make this photo look like a charcoal portrait.

Change the colors of elements. Do you want to know how your hair would look in another color? Or how a piece of furniture or an object would look in another color too?
Gemini also lets you change the color of any element in a photo you upload; you just have to specify what you want to change and to what color.

EXAMPLE PROMPT: Change the color of the vinyl record to dark green.

Remove elements from a photo. Remember when the circle-to-erase function in mobile apps seemed like science fiction? Well, now this AI can do it too. You can delete any element you want, and you don't need to circle or mark it: just explain what you want to disappear.

EXAMPLE PROMPT: I want you to remove the cars that appear in this photo.

You can also remove people. Not only objects: you can also delete people from a photo. In fact, the AI will not only fill in the background the person was covering, but will also remove other things such as their shadow. In short, there will be no trace left.

EXAMPLE PROMPT: I want you to remove the person on the right in the photo.

You can also replace one element with another. Objects can not only disappear, they can also be changed into something else. You can swap a cat for a dog, a car for a tractor, or hide that inappropriate drink so you seem to be drinking something much healthier.

EXAMPLE PROMPT: In this photo, I want you to swap the bottle of beer for a bottle of water.

Restore old photographs. If you like restoring old photographs with artificial intelligence, you can use specific applications... or ask Gemini to do it. The results of restoring photos can depend on the original photo, and also on how specific you are. In the example prompt we have only asked the AI to take care of everything, but you can specify the colors of elements, clothes or whatever you want.

EXAMPLE PROMPT: I want you to restore this old photograph, removing the cracks and imperfections that appear in it. I also want you to colorize it, giving everyone a natural look, and to improve the clarity and sharpness of the photo so it looks better.
You can zoom out of a photo. If you have a photo that only shows your face, or only shows you from the waist up, you can ask Gemini to zoom out so you appear in full. Of course, remember that this means generating content, and the more it has to generate, the lower the quality and the less realistic the photo will be, although you can always be very meticulous in describing what you want to appear.

EXAMPLE PROMPT: Zoom out so that my whole body is visible. I am sitting on the ground, and behind me there is a festival barrier fence, and behind that there is an empty stage.

Change facial expressions. Another thing you can change is a person's facial expression. This is quite simple, and the results will be quite realistic, since it will use the person's features so that everything else stays the same.

EXAMPLE PROMPT: Change this person's expression so they look sad and moved.

Dress up a person however you want. When you upload a photo of yourself or another person, you will be able to dress them in other ways or create complex scenes using their face. You can ask it to change everything and use the face as a reference for the scene you want to create. Here, be thorough when describing what you want; the results can surprise you.

EXAMPLE PROMPT: Draw the portrait of a person dressed as an executive. They must have the same facial features as in the photo. It must be a photo like that of a great CEO, with a dark background and a confident look.

Add elements based on others. And if you have an image of you... Read more

Google’s AI summaries are already beginning to include scams

Even the most tech-savvy people can fall for an online scam. One of them is Alex Rivlin, a Las Vegas real estate agent who unknowingly became the victim of a new and sophisticated scam. His case, uncovered by The Washington Post, raises alarms about how new generative artificial intelligence tools, such as Google's AI summaries, are being exploited to give new life to old deceptions.

A scam that began with a simple task. It started with a simple search to book transport for a European cruise. When Alex Rivlin looked up Royal Caribbean's customer service number on Google, the answer was given directly by the search engine, which served him a phone number on a platter. What he did not know is that this number did not belong to the shipping company, but to scammers.

A perfectly orchestrated hoax. When he called the number the AI had offered, a supposed customer service agent answered and gave him precise details about the transport he needed in Venice. He informed him of the rates and even negotiated waiving some surcharges that appeared in the company's service portfolio. Finally, they reached an agreement: it would cost him $768. And to pay, he handed over his credit card. The scam came to light the next day. The victim noticed suspicious charges appearing in his account, and the name of the company charging him was not 'Royal Caribbean'; that is when Rivlin realized the deception. Fortunately, he was able to cancel his credit card and the bank returned the money, although he was surprised at how well put together the scam was.

Old tricks, new and powerful ammunition: AI. The Rivlin incident is not an isolated case, but the tip of the iceberg of a growing problem. The technique of publishing fake customer service phone numbers is not new, but the arrival of generative AI has catapulted it to a new level of effectiveness.
Until now, these scams relied on paying to appear among the first search results, but now AI is their best weapon. This technology collects the information it can find on the web, but does not verify that it is authentic. So if a phone number is repeated across several websites and forums, the AI can interpret it as a credible source and serve it directly to the user seeking help. In fact, the investigation by The Washington Post found that this same phone number was associated with large cruise lines such as Disney.

The response from tech companies and criticism from experts. Google has told The Washington Post that its AI summaries and search results are usually effective at directing users to official information, and that specific fraudulent examples have been removed. However, critics argue that this is not enough. Lily Ray, vice president of SEO strategy at the digital marketing firm Amsive, points out that allowing AI summaries to provide phone numbers "opens a new opportunity for scammers, and one that they are clearly taking advantage of." Security experts underline that Google already has databases of verified business information that it could prioritize, instead of depending on broader web content that is susceptible to manipulation.

Blindly trusting AI is not a good idea. This story clearly shows that scammers will find a way to get our data at all costs. In this case they have taken advantage of the trust we place in the results of Google's AI because it is the first thing we see. The same used to happen with sponsored search results placed in the top positions, which played on our assumption that the first link is always the best. That is why the recommendation is very clear: we must always verify information like a phone number against the official source, such as the company's website or its own application.

There are more AI-related scams.
This example is not isolated, since we have seen studies suggesting that when we ask an AI for a company's web address, it may well get it wrong, something that plays right into phishing. But it does not stop there, since the scams can also reach Spotify playlists or arrive in a simple PDF.

Images | Firmbee.com, Nordwood Themes

In Xataka | Call-center operators are running into a curious problem: many customers believe they are an AI

Google's new AI generates interactive worlds from a prompt. DeepMind believes it is a step toward AGI

The Google DeepMind team has announced its new AI model for generating interactive worlds. At the end of last year we were surprised by what Genie 2 could do, and the new version is an important leap, one that for Google is an advance toward artificial general intelligence, or AGI: an intelligence that can match the abilities of the best humans.

Genie 3. It is DeepMind's new world model. It allows the creation of interactive worlds that we can explore, all from a text prompt. The previous model was very limited and could only be used for a few seconds, but with Genie 3 DeepMind promises it can be explored for "several minutes." In addition, the resolution has improved to 720p at 24 fps. The model builds on Genie 2 and Veo 3.

It has memory. This is the most important improvement in the new model. The world is generated by the AI as we explore it, but if we turn around and look at something we had already seen, it remains the same. We can also change something, such as painting on a wall, and it stays as we left it the whole time. This did not happen in previous versions, and its creators say they did not explicitly program it to do that. As explained in a TechCrunch article, Genie 3 is able to remember what it has already generated and feed it back in; in this way it learns how the world and its physics work.

Interactive. DeepMind also emphasizes that events can be added with additional prompts. In its article, DeepMind shows several interactive examples, such as a meadow in which we can choose whether a tractor, a bear, a horse or hot air balloons will appear. They call these "promptable world events," and they also let you change aspects such as the weather.

Why it matters. World models are useful in different scenarios, such as creating settings for real-time games, in education, or in the training of AI agents.
Google points to it on its blog as a key step toward AGI, that higher artificial intelligence so many companies are racing to achieve as soon as possible. These worlds can be used as a training ground for other AIs, including robots, cases where simulating real scenarios is a challenge. In the presentation, the DeepMind team explained how they placed an agent in a setting simulating a warehouse and asked it to approach certain elements, such as a green garbage bin. It succeeded in all the tests; according to the DeepMind team, "the fact that (the agent) is able to achieve this is because Genie 3 remains coherent."

The competition. The biggest AI competition, at least at the level of end-user products, is in chatbots and, to a lesser extent, in video or audio generators. World models are less well known to the public, and there is not much competition. Nvidia presented Cosmos at the beginning of the year, and some companies like World Labs offer similar proposals. We would like to finish this text with a link so you can try it, but Genie 3 is only available in beta to a very limited group of academics.

Image | DeepMind

In Xataka | Some researchers created a company where all employees were AI agents. They did not manage a quarter of the work

Google's summaries are cutting clicks in half. And that only points in one direction: collapse

There is a murmur running through digital newsrooms around the world. Editors look at their traffic metrics with concern as they watch Google, which had mainly served to attract readers through SEO and Discover, become their greatest rival. And now a Pew Research Center study puts figures on what many already sensed: Google's AI summaries are sweeping away a good part of web traffic. What's more, it is not recoverable traffic; it is traffic that will never return.

Why it matters. We are facing the end of the balance that has sustained the Internet for decades. Google sent traffic to websites... which, in return, created the content that fed the search engine. It was a symbiotic ecosystem: the search engine made sense because of the websites, and the websites received traffic from the search engine. Now only Google wins. When AI Overviews appear, 26% of users directly end their search session, compared to 16% in traditional searches. The AI becomes the final destination, not the starting point. The result will be paradoxical: many digital media outlets and independent websites will close for lack of traffic. That will leave Google with less content for its AI summaries and for training future models. The goose that laid the golden eggs, pecked to death.

In figures. The numbers are devastating. Only 8% of searches with an AI summary generate clicks to web pages, compared to 15% when Google shows only traditional results. And just 1% of users click on the sources cited within the AI summary itself. The study tracked the online activity of 900 American adults during March 2025. More than half (58%) ran into a search that produced an automatically generated summary.

The context. AI Overviews appear in one in five Google searches. Long queries, formulated as questions or written in complete sentences, are more likely to trigger these automatic summaries.
The sources the AI quotes most remain the usual ones: Wikipedia, YouTube and Reddit concentrate 15% of all citations.

Between the lines. Google faces an inevitable strategic dilemma. If it does not evolve toward smarter experiences, it risks losing relevance to Perplexity, ChatGPT and other competitors that give direct answers. But this evolution creates a paradox: the company feeds on the content that others create, while its tools erode the economic incentives to keep creating that content. Web publishers report traffic drops of 15% to 35% since these summaries became widespread. Until now there was a balance, but it is increasingly broken.

The big question. How will the web content creation ecosystem sustain itself when the traditional incentives disappear? AI summaries need up-to-date information to work, but they are eroding the business models that make that creation possible. Corrective measures are already beginning to appear: OpenAI has signed licensing agreements with various media outlets, and Google is weighing similar formulas. But the consequences of the problem arrive much faster than these solutions. A foreseeable scenario: formulas will eventually be found to rebalance the scales, but they cannot prevent the ecosystem from shrinking, with outlets closing or cutting their staff. It's the market, friend.

Yes, but. Google doesn't see it that way, and insists that AI Overviews "help understand complex topics faster" and continue "directing billions of daily clicks." It has described Pew's study as "methodologically flawed." However, the trend is perceived unanimously across the industry. Google is completing its transformation: from being the great distributor of web traffic to becoming the final destination where information is consumed without ever leaving its domains. It is the logical evolution of a search engine in 2025, but also the end of an era for the web ecosystem as we knew it.

In Xataka | Google continues to redesign its search engine with AI.
Its new feature talks on the phone with businesses on your behalf

Featured image | Xataka, Mockuuups Studio
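The Pew percentages above can be turned into a quick back-of-the-envelope calculation to see where the headline's "half" comes from. The search volume below is a made-up round number; only the click-through rates come from the study:

```python
# Back-of-the-envelope check on the Pew click-through figures.
# `searches` is illustrative; the two rates are the study's 15% and 8%.

searches = 1_000_000                 # hypothetical searches reaching a results page
ctr_traditional = 0.15               # 15% click a result in traditional searches
ctr_with_overview = 0.08             # 8% click when an AI summary appears

clicks_before = searches * ctr_traditional     # 150,000 clicks
clicks_after = searches * ctr_with_overview    # 80,000 clicks
drop = 1 - clicks_after / clicks_before

print(f"{drop:.0%} fewer clicks when AI Overviews appear")
```

The drop works out to roughly 47%, which is the "cutting clicks in half" of the headline; the extra 1% of users who click a cited source barely moves that figure.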

For years, we were the ones who called to ask for information. Now Google's AI calls for us

Imagine you are looking for a groomer for your dog. You don't have time to call, compare prices or go through reviews one by one. But Google does. Its artificial intelligence takes care of it: it calls for you, checks availability, asks for prices and gives you the answer, all without you saying a single word. That is exactly what Google has begun to roll out: a new function in its search engine that lets the system contact local businesses on your behalf. It is not a standalone app or an isolated experiment: it appears directly in the search results, under a button that says "Have AI check pricing."

An AI that acts on your behalf, without changing the way you search. There is no need to learn commands or configure anything special. Press the button, answer a series of questions and the system takes over. It speaks for us, collects data on prices and availability and returns it organized. We keep using the search engine as always, but with an AI that is no longer limited to answering: it starts doing things. An interesting point is that companies can control this function from their business profile, which guarantees that they decide whether or not to receive these calls. For now, this capability is only available in the United States; Google has not mentioned concrete plans for its deployment in the European Economic Area, nor given dates for its arrival in other regions. If one thing is clear, it is that Google is no longer content with offering links. It wants to execute tasks: organize data, reason, research... and now also contact third parties. Google is also updating a parallel feature called AI Mode. It now runs on Gemini 2.5 Pro, its smartest model, and includes the Deep Search tool, capable of performing deep searches. Both improvements are only available to those who have signed up for the AI Mode experiment through Search Labs, and they require a subscription to the Pro or Ultra plans.
These new features arrive just as OpenAI launches ChatGPT Agent, its new system that operates on a virtual computer and executes real tasks step by step. And they join proposals such as Comet, from Perplexity, or Computer Use, from Anthropic. What do they have in common? Beyond their differences, they are all solutions that seek to do things on the user's behalf.

Images | Freepik | Google

In Xataka | Google is determined to win the smartwatch war with its AI. To achieve this, it will have to offer us a disruptive experience

Google's compact phone makes a very good case for itself, although not everything shines equally

What if they told you that the cheapest current-generation Google phone has the same chip as its flagship, and that it will receive updates until 2032? On paper that sounds very good, but living with it for weeks is another matter. In a new 24/7 video on the Xataka YouTube channel, we have tested the Pixel 9a to see how it performs, how it behaves day to day and whether its camera remains its biggest selling point.

The first week makes it clear where things are headed. "The experience is one of the most comfortable I've had with a phone in a long time," says our colleague. It is a manageable, light phone, designed to be used with one hand. The 6.3-inch OLED screen runs at 120 Hz with good brightness for outdoor use. But one design decision stands out: "It seemed we had left the big bezels behind with the Pixel 9, but they have returned in this mid-range model."

The Pixel 9a repeats the processor: it mounts the same Tensor G4 as the more expensive models. So why does it sometimes seem slower? "I have noticed some occasional stutters and a certain sluggishness when unlocking the phone or opening some apps." The device also ships with 8 GB of RAM, compared to the 12 GB of the higher-end models. Is this combination a problem? The video tests can help us draw a conclusion.

Where there are no surprises is the operating system, Android 15. "It is the software closest to stock Android that we can experience," he points out. No aggressive skins or bloatware, and with a promise worth highlighting: "We have seven years of updates, which would cover us until 2032." Our colleague reflects on whether the phone will hold up that long.

And then there is the AI. Circle to Search, Gemini, Gemini Live... features that are already integrated into the system. But in practice, they have not played a big role. "I have not felt the need to use any of the artificial intelligence tools." Is it a matter of habit, or of real utility?
In any case, we will learn more in the video. In photography, the Pixel 9a bets on balance. "It doesn't want to be the best, but it does want to give us very good results," he explains. The main sensor delivers natural images, with balanced dynamic range and pleasant background blur even without using portrait mode. That is also where some weak points appear that could be decisive for some users when deciding whether or not to choose this phone.

The battery improves on previous generations. It now has 5,100 mAh, and it shows. "In most cycles I got around 7 hours of screen-on time." In practice, it is a phone that reaches the end of the day with some margin, though without beating the best rivals in battery life. Nor does it in fast charging: 23 W and a bit more than an hour to charge it completely.

So where does this Pixel 9a fit? Is it a recommendable phone? It all depends on the user's needs, but to find out you can watch the video we have prepared at Xataka, already available on our YouTube channel. We invite you to take a look and leave your opinions in the comments.

Images | Xataka

In Xataka | I had no idea why my phone got so hot in summer. And finding out did not amuse me one bit
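As a quick sanity check on the review's charging figures (a 5,100 mAh battery on a 23 W charger taking "a bit more than an hour"), here is a minimal back-of-the-envelope sketch. The nominal cell voltage (~3.85 V) and average charging efficiency (~75%, since charging slows near full) are assumptions for illustration, not figures from the article.

```python
def full_charge_minutes(capacity_mah, charge_watts,
                        nominal_volts=3.85, efficiency=0.75):
    """Estimate minutes for a full charge from empty.

    Assumed values: nominal_volts and efficiency are rough,
    illustrative defaults, not manufacturer specifications.
    """
    energy_wh = capacity_mah / 1000 * nominal_volts   # stored energy in Wh
    hours = energy_wh / (charge_watts * efficiency)   # effective charge power
    return hours * 60

minutes = full_charge_minutes(5100, 23)
print(round(minutes))  # → 68 minutes, consistent with "a bit more than an hour"
```

Under these assumptions the estimate lands around 68 minutes, which lines up with the review's observation that a full charge takes somewhat more than an hour.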

Search as we knew it is over. Google's AI Mode no longer delivers results, it converses

In recent times, Perplexity has done something that seemed unthinkable: make Google Search feel old. With its conversational interface, direct answers, linked references and a relentless update pace, it has shown that a conversational search engine is not only possible, but desirable. It is faster, clearer, more useful. And increasingly popular. Many people already keep it as a pinned tab.

Google has taken note. And it has responded. The most important announcement of I/O 2025, though somewhat camouflaged, was not a new model or an ultra-intelligent agent. It was this: AI Mode arrives in Search and Gemini. In other words: Google has begun to transform its search engine into something that looks a lot like Perplexity. For now it is only for users in the United States, but the direction is clear.

When AI Mode is activated in the Gemini app, the user stops doing classic-style searches and begins to receive direct answers generated by the model, with links to sources, relevant context and the ability to go further: compare, ask for explanations, keep asking. The search engine no longer delivers lists of blue links, not even a summary on top. You find a conversation.

Seen this way, Gemini is not just a conversational model. It is an active knowledge engine, a synthesis of LLM, browser and assistant, with the ambition of replacing the habit of "googling" with that of "asking." You can search for flights, understand documents, ask for cross-referenced opinions or compare articles. And all of that without touching an external page.

This is not the generative results we saw arrive in Spain a couple of months ago. This goes much further. Those were generative answers placed on top of classic results. AI Mode is something else: it is more Perplexity, more direct, more useful. And more dangerous for the web ecosystem. Because here is the turn that nobody should overlook. In Perplexity, at least for now, the sources are visible, prominently displayed, and central to the experience.
In AI Mode, on the other hand, the ambition seems different: to answer so much and so well that the user does not feel the need to leave. A closed, polished, self-sufficient experience. That changes things. Not only for the user, who may stop distinguishing between answer and source. Also for the media, creators, forums and specialists. Everything that today feeds Gemini from the web becomes less visible in the process. Knowledge is preserved, but on the surface it loses authorship.

Perplexity forced Google to move forward. But in doing so, Google has changed certain rules. It has taken what works, the synthesis, the natural language, the speed, and integrated it into a broader, more fluid, and also more opaque ecosystem. If Perplexity was a pioneer in experience, Google now counterattacks with total integration.

That is why AI Mode in Gemini is not just a technical novelty. It is a paradigm shift in how we search, how we read, how we inform ourselves. The user no longer consults a database. They interact with a system that interprets, selects, synthesizes and responds. Google has seen where search is going. And it has decided to move. But in its own style.

In Xataka | Google has put a price on the future of AI: $250 per month

Featured image | Google
