Google already offers SynthID to mark AI-generated content as such. Now it will help us identify it

When Pope Francis appeared in a striking white puffer coat, many probably believed that the image was real. It was not, but that image, like the one of Donald Trump's arrest, made one thing clear: it was becoming increasingly difficult to tell images created by an AI from those that are not. How to solve it? Watermarks.

Among the possible solutions there was a fairly obvious one: labeling AI-generated images as generated by AI. That is, every time someone asked ChatGPT for a Studio Ghibli-style image, or asked Grok for one of Bill Gates with a gun, those AI models should include certain metadata in the image file. It would basically be like putting a seal on these images so they can be identified in case of doubt.

Enter SynthID. There are several efforts in that direction, and among them is SynthID. This watermarking technology was presented in 2023, and a year later it was offered as a free tool so that anyone could implement it.

Fighting deepfakes. As the company notes, since its launch in 2023 SynthID has watermarked more than 10 billion images, videos, audio files and texts. These can be identified as AI-generated, reducing the chances of misinformation and misattribution. Google also points out that the results generated by Veo 3, Imagen 4 and Lyria 2 will continue to carry SynthID watermarks.

The AI content detector arrives. The company announced during the Google I/O 2025 event that it is launching SynthID Detector, a verification portal to help users identify AI-generated content. Its operation is simple: just upload any content (text, image, video, audio), and SynthID Detector will identify whether the entire file or only a part of it contains SynthID.

Preparing for the good and the bad of AI.
This type of tool can be especially useful for artists and creators, who can thus defend their work as legitimate, but above all it is a useful option that helps protect us from deepfakes and misinformation attempts.

But. As we noted months ago, SynthID is a promising proposal with many striking features, but it has a problem: (for the moment) it is not a universal standard. Here Google must not only use it on its own platforms, but reach a consensus with the rest of the big tech companies so that there is a single open and interoperable system that solves the watermarking problem universally.

In Xataka | Ilya Sutskever's new company has a clear objective: to create a superintelligence with "nuclear-grade" safety
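The portal's verdict described above (whether the whole file or only a part carries a watermark) can be illustrated with a toy sketch. Everything here is an illustrative assumption: the function name, the idea of per-segment confidence scores and the 0.9 threshold are hypothetical, not SynthID's real API or internals.

```python
# Toy model of a watermark-detection verdict, inspired by the portal's
# "entire file vs. only a part" report. All names and thresholds are
# hypothetical assumptions, not Google's implementation.

def classify_watermark(segment_scores, threshold=0.9):
    """Given per-segment watermark confidence scores in [0, 1],
    report whether the file is fully, partially, or not watermarked."""
    flagged = [score >= threshold for score in segment_scores]
    if flagged and all(flagged):
        return "whole file watermarked"
    if any(flagged):
        return "part of file watermarked"
    return "no watermark detected"
```

For example, a file whose segments score [0.95, 0.2] would be reported as partially watermarked, which is the distinction the portal draws for mixed content such as an edited image.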

An unscripted live demo and a long-term bet

Google's story with glasses is not over. On the contrary. After years of silence, the company has put them on stage again. And it has done so with a project that recovers a familiar name: Project Aura. It is not the first time we have heard it. The name has been around since the days of Google Glass, which ended up in a dark corner of technological history. But this time, the approach is very different.

During Google I/O 2025, where we also saw proposals such as AI Mode and Beam, the company showed live the state of development of its new mixed reality glasses. A proposal born in collaboration with Xreal and that, according to those responsible for the project, wants to bring the Android experience to the XR universe with naturalness, context and real-time responses.

A live demo, with no tricks or retouching. It all started when Shahram Izadi, head of the device area, threw a question to the audience, captured in a video posted on YouTube: "Who wants to see an early demo of the Android XR glasses?" The answer came from backstage. Nishtha Bhatia, part of the team, appeared on the scene remotely and began to show the real operation of the glasses.

The first thing we saw was an interface superimposed in real time over the environment. Through the integrated camera, the glasses showed what she had in front of her while Bhatia received messages, played music, asked for directions or interacted with Gemini, the conversational assistant, all through voice commands. Without taking out her phone. Without tapping anything.

In one of the most striking moments, the demo showed how she could ask which band was behind a picture she was looking at. Gemini responded, albeit with the occasional delay attributable to connection problems. She also asked it to play a song by that band on YouTube Music, which happened without manual intervention. Everything was shown in the image shared in real time.

Live translation and a small failure on stage.
The final test consisted of a conversation between Izadi and Bhatia in different languages. She spoke in Hindi, he in Farsi. The glasses, through Gemini, offered simultaneous translation with voice interpretation. The system worked correctly for a few seconds, but those responsible decided to cut the demo short when they detected a failure.

Despite the stumble, the message was clear: Google wants to play again in the field of connected glasses, this time with a more mature base, supported by its service ecosystem, by Gemini and by collaborations with key players in the XR world. The difference, at least for now, is in the approach: practical experiences, in real time, without embellishments or long-term promises.

Images | Google

In Xataka | Google already has an agentic AI capable of programming for you: it's called Jules and it seeks to stand up to OpenAI

Europe is closing its doors

The Covid-19 pandemic generated a great migration of US digital nomads toward countries with laxer mobility measures and a much more affordable cost of living. Many European countries, including Spain, received them with open arms. Them and their investments. However, that was a long time ago, and the geopolitical scenario has changed radically. What has not changed is Americans' desire to migrate to Europe in search of stability, security and a better quality of life than in the US. The difference is that now Europe is closing its doors. So much so that even US citizens will need an entry authorization (ETIAS) to visit Europe.

The American exodus. Americans' interest in obtaining residence in European countries and in the United Kingdom has been growing. Last year Ireland received 31,825 citizenship applications from Americans. In February of this year alone, 3,692 citizenship applications from the US were submitted. In statements to Euronews, Arielle Tucker, founder of the migration advisory platform Connected Financial Planning, said that after Donald Trump's arrival in the presidency, "many people feel that the longer they remain in the United States, the more insecure they are about how their quality of life will be and how that could affect their financial well-being."

Kelly Cordes, founder of Irish Citizenship Consultants, told Bloomberg that she had also noticed a notable increase in citizenship requests from Americans to Ireland. "It is definitely different from everything we have seen. People are very worried; they feel an urgency to obtain citizenship." From an average of 10 weekly citizenship applications that this agency processed in 2024, it has gone to between 20 and 25 applications a week.

Europe and the United Kingdom close their doors. Despite the increase in residence and citizenship applications, the authorities of Europe and the United Kingdom are responding with stricter regulations.
Even so, 6,100 US citizens applied for British citizenship, according to data from the UK Home Office, taking advantage of their British roots. For its part Italy, which previously allowed those who demonstrated family ties to obtain residence with relative ease, has carried out an emergency reform to stem the avalanche of applications, and now requires more rigorous evidence and longer processes to protect the interests of the local population.

Golden passports. Another option used by Americans with greater economic resources is to obtain the so-called "golden passports" or golden visas. These programs allow residence or citizenship to be obtained in exchange for significant investments, generally in real estate or in setting up local businesses. Portugal and Spain were very popular destinations among Americans looking for this type of visa, but the situation has changed in recent months due to the negative impact on the real estate market and the local economy.

New strategic opportunities. Faced with this panorama, some European countries are taking advantage of this migratory current of Americans to attract qualified talent. France, for example, has launched the "Choose France for Science" campaign, with the aim of attracting highly trained international researchers and professionals by speeding up the processing of their residence permits. In contrast, DOGE's cuts, with Elon Musk at the helm, are putting the work of researchers in the US in serious trouble, thus simplifying the immigration decision for these scientists.

In Xataka | Of course digital nomads love Oviedo. It is not for the way of life: it is because they earn 90,000 euros

In Xataka | Digital nomad visas: the hook countries use to attract the best digital talent without paying the cost of keeping it

Image | Unsplash (Global Residence Index)

Search as we knew it is over. Google's AI Mode no longer delivers results, it converses

In recent times, Perplexity has done something that seemed unthinkable: make Google Search feel old. With its conversational interface, direct answers, linked references and a relentless update pace, it has shown that an AI-first search engine is not only possible, but desirable. It is faster, clearer, more useful. And increasingly popular. Many already keep it as a pinned tab.

Google has taken note. And it has responded. The most important announcement of I/O 2025, although somewhat camouflaged, was not a new model or an ultra-intelligent agent. It was this: AI Mode arrives in Search and Gemini. Or in other words: Google has begun to transform its search engine into something that looks a lot like Perplexity. For now it is only for users in the United States (argh), but the direction is clear.

When AI Mode is activated in the Gemini app, the user stops doing classic-style searches and begins to receive direct answers generated by the model, with links to sources, relevant context and the ability to go further: compare, ask for explanations, keep asking. The search engine no longer delivers lists of blue links, not even a summary on top. It delivers conversation.

Seen this way, Gemini is no longer just a conversational model. It is an active knowledge engine, a synthesis of LLM, browser and assistant, with the ambition to replace the habit of "googling" with that of "asking." You can search for flights, understand documents, ask for contrasting opinions or compare articles. And all that without touching an external page.

This is not the generative AI results we saw arrive in Spain a couple of months ago. This goes much further. Those were generative answers placed on top of classic results. AI Mode is something else: it is more Perplexity, more direct, more useful. And more dangerous for the web ecosystem. Because here is the twist that nobody should overlook. In Perplexity, at least for now, the sources are visible, prominent, and central to the experience.
In AI Mode, on the other hand, the ambition seems different: to answer so much and so well that the user does not feel the need to leave. A closed, polished, self-sufficient experience. That changes things. Not only for the user, who may stop distinguishing between answer and source. Also for the media, creators, forums, specialists. Everything that today feeds Gemini from the web becomes less visible in the process. Knowledge is preserved, but loses its authorship on the surface.

Perplexity forced Google to move forward. But in doing so, Google has changed certain rules. It has taken what works (the synthesis, the natural language, the speed) and integrated it into a broader, more fluid, also more opaque ecosystem. If Perplexity was a pioneer in experience, Google now counterattacks with total integration.

That is why AI Mode in Gemini is not just a technical novelty. It is a paradigm shift in how we search, how we read, how we inform ourselves. The user no longer consults a database. They interact with a system that interprets, selects, synthesizes and responds. Google has grasped where search is going. And it has decided to move. But in its own style.

In Xataka | Google has put a price on the future of AI: $250 per month

Featured image | Google
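The shift this piece describes, from ranked links to a single synthesized answer that carries its sources and invites follow-ups, can be sketched schematically. This is a purely illustrative toy: the class names and the trivial keyword matching are our own assumptions, not Google's or Perplexity's actual systems.

```python
# Schematic contrast with classic search: instead of a ranked list of
# links, the response bundles one answer, its sources and follow-ups.
# Hypothetical structure for illustration only.
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str

@dataclass
class ConversationalAnswer:
    answer: str
    sources: list = field(default_factory=list)   # kept attached to the answer
    follow_ups: list = field(default_factory=list)

def answer_query(query, index):
    """Pick matching docs and synthesize a single cited reply from them."""
    hits = [doc for doc in index if query.lower() in doc["text"].lower()]
    text = " ".join(doc["text"] for doc in hits) or "No answer found."
    return ConversationalAnswer(
        answer=text,
        sources=[Source(doc["title"], doc["url"]) for doc in hits],
        follow_ups=[f"Tell me more about {query}"])

# Tiny demo corpus standing in for the web.
index = [{"title": "Xataka piece", "url": "https://example.com/ai-mode",
          "text": "AI Mode turns search into conversation."}]
reply = answer_query("ai mode", index)
```

The design point the article makes lives in the data structure: whether `sources` stays prominent (Perplexity's approach) or fades behind the synthesized `answer` is a product decision, not a technical necessity.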

Smart glasses don't have to be a bulky contraption. Google has it clearer than ever

Meta is showing the market the way on what smart glasses should be with its Ray-Ban collaboration. And Google, which has been developing projects for years and years, from cardboard VR glasses to a Vision Pro-like device (its project with Samsung and Android XR), seems to have realized what the winning format is. At Google I/O 2025 it showed us what's new in Android XR. Far from headsets, far from devices that we can only use at home for a few hours because of their weight and dimensions. Its words make it clear: "We know that glasses can only be really useful if you want to wear them all day." That is the key. The small format. Glasses should be glasses. Everything else is headsets or augmented/mixed reality devices for a more complex use scenario: games, video projection or, why not, the complete replacement of a work setup.

Meta has done it well this year with its Ray-Ban Meta, glasses that to the naked eye look completely normal. Only when we get close to them do the camera system, the LED and the thickness of the temples give away that we are looking at an electronic device.

Gemini sees everything. The focus of this I/O has been clear: Gemini wants to be able to see everything. On both Android and iOS, it will be able to see both what the camera shows and what we share through our screen. The only thing it needs are "eyes and ears," so... nothing prevents this concept from reaching smart glasses. Not just headsets. Until now, mixed reality headsets seemed like the only devices capable of offering capabilities beyond the basics: calls and video/photo recording. Google has just proven otherwise at I/O. Google wants to integrate Gemini into traditional-format glasses, so that the assistant is right in our field of view. The goal? To free us from the phone and let us interact with the environment in a much more natural way.

Ten years of work.
According to Google, the company has been working on this type of smart glasses concept for more than a decade. It has only become possible now, with Android XR. Camera, microphone, speakers and phone connection: that is all that is needed for Android XR to work alongside our smartphone and have access to its applications. Likewise, the most advanced devices will fit a display in the lens to show information to the user privately.

What can be done. Paired with the phone, the glasses can already work with Gemini. The idea is clear and Google does not hide it: "They see and hear the same as you." With Android XR on glasses you can send messages, ask for directions from Google Maps, take photos that will be stored in Google Photos, and even translate a conversation in real time.

With whom. The success of Android XR on glasses will depend on Google's ability to convince its partners. As of today, the company has announced collaborations with Gentle Monster and Warby Parker to create stylish glasses with Android XR. In "the future," they hope to collaborate with more partners. Likewise, they say that the collaboration with Samsung will go beyond headsets: the goal is to bring Android XR to traditional-format glasses.

Image | Xataka

In Xataka | Xiaomi's glasses are a runaway success in China. And they haven't even gone on sale yet
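The hands-free interaction model described in this piece (a voice command routed to messages, directions, photos or live translation, with the assistant as fallback) can be sketched as a toy dispatcher. The intents and returned strings are our own illustrative assumptions, not Android XR's real architecture.

```python
# Toy voice-command router for the glasses scenario described above.
# Purely hypothetical: real intent recognition would use a speech and
# language model, not keyword matching.

def handle_voice_command(utterance):
    """Route a spoken request to a simulated glasses action."""
    text = utterance.lower()
    if "message" in text:
        return "sending message"
    if "directions" in text or "maps" in text:
        return "showing Google Maps directions on the lens"
    if "photo" in text:
        return "taking photo, saving to Google Photos"
    if "translate" in text:
        return "starting live translation"
    # Anything else falls through to the conversational assistant.
    return "asking Gemini: " + utterance
```

The fallback branch captures the article's point: everything that is not a canned action becomes a Gemini query, which is why "eyes and ears" plus a phone connection are enough for the format.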

In 1997 a construction company had the delirious idea of building the Simpsons' house and raffling it off. It turned out so-so

Building a house identical to that of the protagonist family of 'The Simpsons' seemed like a masterful promotional move. And giving it to a fan to live in, the icing on the cake. However, neither Fox nor the unsuspecting viewers of the series who passed through this house seemed to grasp an obvious fact: the Simpsons are cartoon characters, and they don't work in the real world.

A harebrained idea. The initial idea came from the construction company Kaufman & Broad, based on the 3D designs being created for the 1997 video game 'Virtual Springfield'. The intention was to create a house identical to the original, for which a hundred episodes of the series were analyzed. The problems started from the first moment: the house in the series lacks something as essential as load-bearing walls. Even so, the builders ended up producing a design that was safe and that fit with what was seen on television.

Mutant house. Leaving aside the fact that the house has changed multiple times over the course of the series (for example, the shape, size and spacing of the windows), the designers focused on two well-known rooms: the TV room and Bart's bedroom. And they worked outward from there, in a sort of cartoon version of "building the house from the roof down." The result: four bedrooms, two floors and, outside, a tree house and a backyard. In total: 200 square meters painted a garish yellow, with orange, phosphorescent green and pink rooms. The designers' idea was for the house to be 90% normal, 10% cartoon.

The devil, in the details. The final touches were added by Rick Floyd, a Hollywood production designer who included thousands of details for the most die-hard fans. Doors taller than normal so Marge's hair could pass through, identical dresses and costumes in each character's closet, holes near the floor for mice, dozens of Duff beer cans in the fridge, a sax in Lisa's room and a painting of Bart done by Matt Groening himself.
And also an absolutely useless chimney in the desert of Henderson, Nevada, where the house is located.

Pepsi gives it away. The home found an owner through a Pepsi and Fox contest launched in 1997: 15 million people sent in proofs of purchase of the brands' products to participate, and the winner would take the house or $75,000 in cash (although the value of the house was estimated at double that). The winner also had to commit to painting the facade in accordance with the rules imposed by the neighborhood. The winner was a 63-year-old retiree from Kentucky who decided to take the money, because she had no intention of moving from her home. The house became an attraction for the curious.

Pillaging. An attraction that, by the way, had to be guarded 24 hours a day because of the looting of the unique objects inside. Over time, however, surveillance relaxed and the house ended up becoming a curiosity of little interest. In 2001, already converted into a reasonably normal house, it was sold to a private buyer, a neighbor who had been a secretary at the construction company. She had to carry out renovations, because the interior, with all the bright cartoon colors, was unlivable. Today its facade remains a magnet for traveling fans, and the project one more proof that we can't have nice things.

All promo. The real Simpsons business is in merchandising: during its first year it generated $2 billion, and to date it has brought in $4.7 billion. It is a phenomenon that, adding licenses and collaborations, amounts to a value of $13 billion. But no merchandising artifact is as special as Simpsons objects in the real world: Lard Lad Donuts, Duff beer cans, Apu's stores and Krusty Burger restaurants. None, however, as delirious and special as the family home.

Header | Fox

In Xataka | The Simpsons are a Black family: the latest theory that radically changes what we thought we knew about the series

Its name is Jules and it seeks to stand up to OpenAI

Google describes it as an "asynchronous programming agent," but it is much easier to define Jules as what it is: an AI system that helps you program more and better. But it is also something else.

The "vibe coding" war. Cursor and Windsurf have become the standard-bearers of a new fever for AI-assisted programming: that of "vibe coding," in which you talk to the machine so it does the work, or ask it to autocomplete the code by pressing the Tab key over and over ("tab-tab-tab"). In fact, OpenAI just bought Windsurf for $3 billion, and the reason is evident: to win over the developer community so it uses their AI programming tool and not another's. And that is exactly where Jules comes in.

The machines that program for us. Jules was presented in December in Google Labs as a preview, with one objective: to offer not just a copilot for programming or a code autocomplete tool, but an autonomous agent that reads the code, understands what it is meant to do and gets to work solving the problem.

Jules "sneaks into" your repository. Like its rivals, one of Jules's keys is that it is able to integrate itself ("sneak") into your code repository in order to analyze it and help you with your project. It clones all the code into a virtual machine on Google Cloud, studies that code to understand it, and from there it can perform various tasks. For example:

- Design and run tests
- Create new features
- Provide audio summaries of changes
- Fix bugs
- Update dependency versions

An assistant in the background. Jules does all this while you focus on any other task, and when it finishes "thinking" through what it has to do, it presents the log of changes (diff) it has made.

Your code is yours. As Google explains, Jules is private by default. Not only that: Google's model will not use that code as training data, and all the data it uses is kept isolated in the execution environment created when using this powerful tool.

Integrated with GitHub.
Another of Jules's key elements is that it works completely transparently with GitHub, the code repository par excellence. You won't have to install anything extra, which in theory makes for a seamless workflow.

Public beta. Google stressed that Jules is now available to all United States users at the URL jules.google. That will allow anyone to try it for free (and without limits, as long as usage is not excessive) during this preliminary phase of development and deployment. This is good news for developers who want to try it, including those who do not live in the United States, who can do so with the traditional workaround: use a VPN to "pretend" they are in the US and then connect to that website.

In Xataka | OpenAI has just launched its new programming agent. The interesting thing is what it can do when nobody's looking
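The loop this piece describes (clone the repository into an isolated workspace, apply changes, hand back a diff) can be modeled with a toy sketch. To be clear, this is not Jules's real API: the function, the task name and the sample "bug" are our own illustrative assumptions; only the clone-analyze-diff shape comes from the article.

```python
# Toy model of an agent pass over a repo: copy ("clone") the files into
# an isolated workspace, apply a trivial fix, and report the changes as
# a unified diff, mirroring how the article says Jules presents results.
import difflib

def run_agent_task(repo_files, task):
    """Simulate one agent run and return (workspace, diff lines)."""
    workspace = dict(repo_files)  # isolated copy, like the cloud VM clone
    if task == "fix bug":
        # Toy 'fix': correct an off-by-one in the sample file.
        workspace["math_utils.py"] = workspace["math_utils.py"].replace(
            "range(n - 1)", "range(n)")
    diff = []
    for name in workspace:
        diff += difflib.unified_diff(
            repo_files[name].splitlines(), workspace[name].splitlines(),
            fromfile=f"a/{name}", tofile=f"b/{name}", lineterm="")
    return workspace, diff

# A one-file sample repo with a deliberate off-by-one bug.
repo = {"math_utils.py": "def total(n):\n    return sum(range(n - 1))\n"}
workspace_after, changes = run_agent_task(repo, "fix bug")
```

The point of the shape is the last line: the caller's `repo` is never mutated, and the only thing handed back for review is the workspace plus the diff, which is the "your code is yours, you review a changelog" contract the article describes.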

Google has just announced the closest thing to science-fiction holograms: Project Beam

Video calls have been useful for years. A solution that works, albeit with obvious limitations. Seeing and hearing the other person is fine, but the sensation of real closeness is still far off. Google has been trying to solve that problem for some time, and has now decided to take a more determined step toward it. That step is called Beam. It is the new name of a technology we already knew as Project Starline, an experimental proposal that sought to recreate the experience of a three-dimensional face-to-face conversation, and that we had the opportunity to try last year. Now that idea evolves into a platform. Beam is born as a communications system designed to integrate into real environments, supported by the infrastructure of Google Cloud and enhanced with advanced artificial intelligence models.

A conversation with volume, not just with image. The key to Google Beam is its volumetric video model: an AI-based system that transforms a 2D video signal into a realistic three-dimensional representation, visible from any angle. When combined with a light field display, a sense of depth is achieved that makes it possible to maintain eye contact, interpret expressions and generate more natural communication. According to Google, this helps build trust and understanding as if the conversation were face-to-face. The company's declared objective is to create more meaningful connections between people, wherever they are. To achieve this, Beam relies on two fundamental pillars: the reliability and scalability of Google Cloud, and the company's accumulated experience in artificial intelligence (AI). All of it designed to integrate without friction into existing workflows.

Real-time translation without giving up naturalness. Beam does not focus only on the image. It also wants to facilitate understanding. One of its most striking functions is real-time voice translation, available starting today in Google Meet.
It makes it possible to hold a fluid conversation between people who speak different languages, preserving the tone, cadence and expressions of each speaker. The result is a more natural conversation, where the technology is noticed less and the connection between people, more. For Google, this functionality is just the beginning. Its long-term vision is clear: to ensure that anyone, anywhere in the world, can be seen and understood with total clarity.

Beam comes to the workplace. For now, Beam is aimed at the professional environment. Google has announced an agreement with HP to launch the first compatible devices, which will reach selected customers this year. It should be noted that it does not work with just any setup: these devices will have several cameras to capture the subject from different angles. In addition, the company is collaborating with companies such as Zoom, Diversified and AVI-SPL to bring this technology into different corporate environments. Large organizations have already shown interest, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. Deloitte, for example, stresses that Beam is not only a technological advance, but a way of rethinking how we connect in the digital age.

A clear promise: being there without being there. This is Beam's central idea. It is not just a technical improvement, but an evolution in the way we communicate. Beam wants talking to someone at a distance to feel not like a video call, but like a face-to-face conversation.

Images | Google

In Xataka | Google has put a price on the future of AI: $250 per month

Google has put a price on the future of AI: $ 250 per month

It was not the first, but it has been the most forceful. OpenAI took the first step in the autumn, asking $200 a month for ChatGPT Pro. Google has responded by raising the bet: $250 for its Ultra plan. A subscription that does not just monetize capabilities. It also marks hierarchies. Do you want the most capable one? Pay.

And that is not a figure of speech. At that price you get access not just to a smarter chatbot, but to the hard core of the near future:

- Deep Think, the new reasoning mode of Gemini 2.5 Pro.
- Preferential access to video and audio generation tools (Veo 3, Imagen 4).
- Project Mariner: agents that understand, plan, act, execute.
- Flow, the cinematic creation tool with camera control and 1080p video generation.
- Whisk Animate, a tool to turn images into 8-second animated videos.
- NotebookLM with higher limits and advanced versions of the model.
- Gemini in Chrome, with page context, in early access.
- Gemini integrated into Gmail, Docs, Chrome and Search, with persistent context and priority use.
- 30 TB of storage across Drive, Photos and Gmail.
- And a YouTube Premium subscription.

Gemini is no longer just conversation. It is the interface to the world.

Image: Google.

And the most important thing is not what Ultra includes. It is what it leaves out. Google has taken everything that defines this new stage of AI (agency, autonomy, deep reasoning, extended multimodality) and locked it behind a paywall. In that gesture there is not just a business model. There is a vision. Intelligence becomes a product, but also a border. By the way, there is a 50% discount for the first three months. And the previous plan is now called 'Google AI Pro'.

On the other side remains Flash. The free (or low-cost) version designed for the majority. Fast, competent, useful. Like a car without a steering wheel. An AI without memory, without tools, without hands. It serves to respond, not to act.
It doesn't create flows, doesn't automate anything, doesn't think beyond a few seconds. Flash is the promise of democratization that is still kept. Ultra, the true pilot.

Google's move does not surprise; it confirms. The phase of mass access and open experimentation is over. What is being built now is an economy of computational performance. Whoever wants more context, more persistence, more power, will have to pay for it. And soon. Because if 2022 was the year of amazement, and 2024 that of copilots, 2025 will be that of digital classes. And this time the divide will not be technical. It will be economic.

What Google has done is not just a commercial move. It is structural. It institutionalizes restricted access. If for years knowledge tended to open up (Wikipedia, Google, YouTube, MOOCs), it now begins to fold into high-end products. With it, productivity, creativity and the capacity to compete are stratified too. The digital elevator keeps going up, but it costs more and more to get on it.

Because you are not paying only for technology. You are paying for an advantage that is not seen, but that decides: the right to think with more help, in more directions, with less friction. The right to automate before others. To have an assistant that does know how to program, that does understand video, that does remember, that does act. As always, intelligence, human or artificial, tends to concentrate where capital accumulates.

And the rest? The average user is on the other shore. With an AI that responds but does not decide. That assists but does not anticipate. That summarizes but does not build. That is the new gap: not between those who use AI and those who do not, but between those who have it working in their favor and those who only see it from behind a pane of glass. Google has made official what OpenAI had already hinted at: access to automated knowledge will have a price, a threshold and an owner.
And if the future is made of intelligent interfaces, reasoning engines and agents that execute for us, then: the future costs $250 a month.

In Xataka | The new large generative AI models keep getting delayed. It is a worrying sign that we may have hit a ceiling

Featured image | Solen Feyissa on Unsplash
