What Claude Cowork is, how it works, and what you can do with this AI assistant on your computer

Let's explain what Claude Cowork is and how it works. It is one of Claude's most advanced artificial intelligence tools: a computer automation assistant, a kind of AI agent that you can ask to perform tasks on your PC without touching anything yourself. We will start by explaining what it is so you understand the concept, then describe how it works, and finish with some examples of what you can do with it.

What is Claude Cowork

Claude Cowork is essentially a personal assistant with artificial intelligence designed to work natively on your computer. You can use Claude on your Windows or Mac PC and ask it to do things automatically. It was designed above all to help with the repetitive tasks you do every day with files, folders and applications. Imagine being able to ask the AI to rename the files in a folder, look for duplicates, or give you summaries of the contents of those files.

It is similar to an AI agent, but not exactly the same. AI agents can carry out complex tasks for you, such as booking a hotel. Claude Cowork, by contrast, is designed specifically to automate tasks with files and applications and to work with your local computer's operating system. It has fewer features, but it does what it was built for better.

This tool is available in the Claude desktop app, although only for paying users, which means you always have it at hand. In addition, you can give it access to your browser so you can ask it to perform tasks there or interact with web content, but for that you need to install the Claude in Chrome extension.

How Claude Cowork works

The way Claude Cowork works is very simple. You open the Claude application, go to the Cowork tab, and ask it what you want it to do using natural language.
When making the request, you have to specify what you want done, the folder where you want it done, and any other relevant details. Think of it as asking a person to do the task: if you want to rename the files in a folder, you have to say you want them renamed, indicate which folder it is, and even the format, for example "Year-Month-Name" or any other pattern.

Cowork has controlled access to your file system, so you can decide and customize which elements it can touch and which it can't. When you make a request you can even choose the folder where you want it to act. The tool first processes your text to understand what you want, and then chains several actions together to carry it out. Claude's own AI figures out how to do it, and if an approach doesn't work, it corrects course and tries another way. In the Claude app, within the Cowork section, you can see step by step what the assistant is doing. The AI asks you for permission for each action, for example to rename files or connect to a tool, and you can always see the progress and stop it whenever you want.

Lastly, you can use connectors and extensions to link web services and applications on your computer so it can work with them. You can add your notes application, Spotify, or your messaging app, among many others, as well as web services such as Gmail, Google Drive, Notion, Trivago, WordPress and many more.

What you can do with Cowork

What this tool is useful for depends on many things, but there is a set of basic actions worth knowing that can save you a lot of time. The first is file management: organizing your downloads, renaming batches of files with specific patterns, moving documents between folders, finding and deleting duplicates, zipping and unzipping files, and more.
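To picture what a request like the rename example above amounts to, here is a minimal Python sketch of a "Year-Month-Name" batch rename. This is not Cowork's actual implementation, just an illustration of the task it automates; the date is taken from each file's modification time, which is an assumption on our part.

```python
from pathlib import Path
from datetime import datetime

def rename_with_date_prefix(folder: str) -> list[tuple[str, str]]:
    """Rename every file in `folder` to 'YYYY-MM-<original name>'."""
    renames = []
    for path in sorted(Path(folder).iterdir()):
        if not path.is_file():
            continue
        # Build the prefix from the file's modification date
        stamp = datetime.fromtimestamp(path.stat().st_mtime)
        prefix = f"{stamp:%Y-%m}-"
        if path.name.startswith(prefix):
            continue  # already renamed on a previous run
        new_name = prefix + path.name
        path.rename(path.with_name(new_name))
        renames.append((path.name, new_name))
    return renames
```

The point of an assistant like Cowork is that you describe this outcome in a sentence instead of writing the script; the AI works out the steps, asks permission, and shows you its progress.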
It can also handle document processing: extracting text from PDFs, converting files from one format to another, combining multiple documents into one, or pulling specific data from several files to create summaries.

Then there is the automation of repetitive tasks: it can help you automate things you do every day or week, such as preparing reports by putting together data from different files, creating folder structures for new projects, or making organized backups of certain files.

Finally, cleaning and maintenance: you can ask it to delete old files you no longer need, clean up temporary folders, organize your photo or music library, or find large files that are taking up space.

These are just the basic features of Cowork; you can get it to do many more things by connecting it to cloud services or other applications, or by installing the Chrome extension. As an example, I asked it to create a text file listing all the songs (more than 600) in a certain playlist on my Spotify account. Claude launched its Chrome extension, I could see it go to my Spotify account, I gave it permission to log in, it then tried several ways to read the songs in the list (first a script, then scrolling with the mouse), and finally it created the plain-text document.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

Let’s say goodbye to Google Assistant a decade later. Google has begun to delete its code to leave only one option: Gemini

It's not official, but it might as well be: the end of the classic Google Assistant is scheduled. An analysis of the latest version of the Google app for Android, carried out by Android Authority, has revealed its almost definitive goodbye. The Mountain View company is eliminating the code that, for now, lets us choose between Gemini and the old assistant. It is the chronicle of a death foretold, closing an era within the company. Where before we saw the Assistant icon and dialog window, we now see Gemini's.

Image by Iván Linares for Xataka Android

A failed promise. Launched in May 2016, Google Assistant was going to be a revolution. On paper, it promised full voice control of your phone, car and home. In practice, as many users have experienced, using it ended up being "despairing", even though "Okay, Google" became popular on smartphones and speakers. Its inability to understand context or natural language, together with the rise of AI models, finished burying it.

The future belongs to Gemini. With the rise of generative AI, Google has bet everything on Gemini, but the rollout has been rather confusing. For months, the American company maintained a curious muddle of duplicate names, apps and services: Bard, Assistant with Bard, Project Astra... In practice, two assistants live on the same phone. In February 2024 its "transmutation" began: that was when Google launched the dedicated Gemini app on Android (Bard was left behind), which, once installed, offered itself as a replacement for Assistant. As we tested at the time, the new AI took over the invocation with the famous "Hey Google" command.

A more mature replacement. The problem with the Gemini assistant is that, at first, it was quite green. It was a powerful chatbot but a not-so-useful assistant: it could not execute the basic tasks the previous one could, such as routines or home automation commands.
However, Google has spent the last year making Gemini absorb the features of its predecessor. The turning point came at the end of last year, when Gemini Live, the conversational voice mode, finally landed in Spain and in Spanish. Approaching 2025, Gemini learned a basic function it was missing: making calls and sending messages without having to unlock the phone. The last big feature inherited from Assistant, "Scheduled actions", arrived in June of this year.

Google's plan. At the same time that Gemini was learning the old Assistant's tricks, Google has been dismantling the latter, removing useful functions. The objective is more than clear: Gemini is the future and will be everywhere. It can now act as an "all-seeing" assistant thanks to Project Astra (integrated into Live mode), it is coming to Google Home speakers, and its landing on Android Auto is imminent.

The last step remains. And that is eliminating the escape route. Google has already consolidated the transition: Gemini is the default assistant on new phones and can be installed on older ones without major impediments. The APK analysis by the specialized Android outlet only confirms that the last step is very simple: eliminate the option to go back. The king is dead, long live the king.

Cover image | Composition with Google images and generated with Nano Banana by Pepu Ricca

In Xataka | How to create Gemini Gems to have your personalized version of artificial intelligence

ChatGPT began as a simple AI assistant. OpenAI wants to turn it into your future operating system

OpenAI wants to change everything with ChatGPT. The AI chatbot no longer wants to be just an AI chatbot we talk to: it wants to do everything for us. And to do so, the idea is to turn ChatGPT into something surprising: an operating system you will talk to when you want things done.

Why it matters. The developer event held yesterday by OpenAI revealed a new application platform with ChatGPT as its central axis. The new philosophy makes all kinds of third-party services work directly inside ChatGPT, which connects them and turns them into part of a promising user experience.

Surprising examples. During the presentation, several use cases were shown in which a user simply planned a trip in ChatGPT and it connected to Booking, or needed a training course and the chatbot served it up with extra commentary by connecting to Coursera. OpenAI already has a preliminary version of the SDK that will let developers create applications that interconnect with ChatGPT, as those first examples already do, among them Spotify, Canva, Zillow, and the aforementioned Booking and Coursera.

It is not a "superapp", it is something more. The search for a new surface has been, for example, a particular obsession of Elon Musk. His objective was to turn X (formerly Twitter) into a surface similar to WeChat, that "do-everything tool" that triumphs in China. That superapp integrates many of its own services, but also mini-apps the user must operate fairly manually. With ChatGPT the intention is different.

Machine, do everything for me. With operating systems such as Windows or macOS, what we normally ask ourselves when doing something is "what app do I need to perform this task?" With this apparent transformation of ChatGPT into an operating system, we can simply tell the chatbot "I want to do this task" and let it complete it.

Second attempt.
OpenAI already tried something like this with the GPT Store it launched in January 2024, which allowed users to create "personalized GPTs". Although the company boasted that more than three million of these GPTs had been created, those "widgets" were little more than slight modifications of the traditional ChatGPT assistant. The idea was promising but it never caught on. This attempt is much more ambitious, above all because now ChatGPT wants to become a kind of orchestra conductor that connects to all kinds of services to do what the user needs at any moment, with simple written or spoken prompts.

A de facto operating system. OpenAI's proposal resembles, at least conceptually, what we usually think of as a modern operating system. Its fundamental function is to serve as an interface between the user and the machine, and here ChatGPT wants to be something similar. The hardware and the application don't matter, because it is ChatGPT that interprets the user's intention and then connects to the most appropriate applications for each task.

Monetization. OpenAI also mentioned that it is preparing the integration of its new agentic commerce protocol to allow payments between services and users. There was no talk of what kind of economic agreement the likes of Booking or Spotify sign when they interact with ChatGPT, but it is evident that for these services the traffic that comes from ChatGPT can be very valuable, and it is reasonable to think that OpenAI takes a commission when transactions are completed.

OpenAI sees ChatGPT as an operating system. Nick Turley, head of product for ChatGPT at OpenAI, explained the company's vision in a later conversation with the press: "What you will see over the next six months is an evolution of ChatGPT, which will go from being a really very useful application to becoming something that will look a little more like an operating system."

Developers, come to me.
For its idea to succeed, OpenAI needs to be available globally. The tool now offers additional features to, for example, connect it to Slack, or to use it as an SDK to integrate into other workflows.

From mouse and keyboard to conversation. ChatGPT proposes the future we have long been talking about: one in which, instead of using mouse and keyboard to handle our computer, we use text and voice prompts. In theory, the interaction will make us think less about how we want to do things (ChatGPT and the services it connects to will take care of that) and more about what things we want to do. It is a radical change that promises to bring us even closer to doing everything with machines... and to depending on them more than ever.

In Xataka | OpenAI and AMD have just signed more than an AI agreement: it is the bartering of despair

This is the assistant that wants to mark a before and after on WhatsApp, Facebook and Instagram

No more depending on tricks to use Meta AI in Spain. The assistant has begun its official rollout in Europe, which means it will be available on WhatsApp, Instagram, Facebook and Messenger at some point in the coming weeks. This is a long-awaited launch on the old continent. Recall that the social media company announced this artificial intelligence (AI) tool at the end of 2023, but it was only available in certain countries.

Meta AI in WhatsApp, Instagram, Facebook and Messenger

When we talk about Meta AI, we mean an artificial intelligence chatbot similar to ChatGPT, Gemini or Claude. Like these, it has a language model under the hood (Llama 3.2) that will try to answer almost any question we ask it. But it is a product with at least two substantial differences from its competition: it has no dedicated application and no paid tier. That is, it is integrated for free into WhatsApp, Instagram, Facebook and Messenger.

You will notice the arrival of Meta AI when you see a new blue circle icon in the applications mentioned. Using it is as easy as tapping it and starting to talk to it in an independent chat window, as you would with any friend, family member or colleague. You can also "summon" Meta AI in group chats, which can be particularly useful when you need to settle a question or you are planning a group activity. It is enough to type @MetaAI for the assistant to appear on the scene.

You may be wondering whether the privacy of group participants is affected if someone invokes Meta AI. The answer is found in a company privacy document, which says the AI only reads the messages that invoke "@Meta AI". In other words, the Meta assistant is not just another member of the group, but an AI feature that can be invoked at any time. So, according to official information, this tool does not access other messages in the chat.
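The privacy behavior described above boils down to a simple filter: only messages that explicitly mention the assistant are visible to it. A minimal Python sketch of that idea (the trigger string and message format are illustrative assumptions, not Meta's actual implementation):

```python
# Hypothetical trigger; Meta's document cites "@Meta AI" / "@MetaAI"
TRIGGER = "@metaai"

def messages_visible_to_ai(chat_log: list[str]) -> list[str]:
    """Return only the group messages that invoke the assistant."""
    # Normalize case and spacing so "@Meta AI" and "@MetaAI" both match
    return [m for m in chat_log
            if TRIGGER in m.lower().replace(" ", "")]
```

Everything the filter drops stays between the human participants; only the invoking messages would reach the model.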
What you can do with Meta AI

OK, so Meta AI is now integrated into some of the most popular applications in the world. The big question is: what can we do with it? The possibilities are broad, but here are some interesting ideas to get started.

Write clearer, more effective messages. If you want a message to sound more professional, more informal or simply better structured, Meta AI can do it for you: "Make this message more professional: 'Remember that we have a meeting tomorrow at 10'." You can also ask it to adapt the text to different registers or countries: "Give me an informal version of this message in Argentine Spanish."

Summarize long texts and highlight what matters. If you don't have time to read a complete article, Meta AI can extract the main ideas: "Tell me the five key points of this report: ..." Of course, it is always advisable to verify the information before taking it as definitive. AI models, as the CEO of Nvidia has acknowledged, still hallucinate and are inaccurate.

Do calculations without complications. From splitting a bill to solving more complex calculations, Meta AI does it in seconds: "We have spent 185 euros at a dinner. Add a 12% tip and divide the total among four people."

Translate and adapt texts in seconds. Meta AI can translate phrases and explain them in context: "Translate '七転び八起き' to Spanish and explain its meaning in detail." If you want a more natural or detailed version, you can ask for different options.

Answer questions on any topic. Like ChatGPT or Gemini, Meta AI can answer questions about history, science, technology and more. Here, once again, it is advisable to verify its answers.

Use it directly in conversations. As we pointed out above, Meta AI is integrated into WhatsApp, so you can invoke it in any chat with @MetaAI.
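Since the advice above is to verify the assistant's arithmetic, the bill-splitting prompt is easy to check by hand or with a couple of lines of Python:

```python
def share_per_person(bill: float, tip_pct: float, people: int) -> float:
    """Total bill plus tip, split evenly, rounded to cents."""
    total = bill * (1 + tip_pct / 100)
    return round(total / people, 2)
```

For the example in the text, 185 euros plus a 12% tip is 207.20 euros, which works out to 51.80 euros per person; if the assistant's answer differs, it has hallucinated.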
"We are organizing a trip to Los Angeles, @MetaAI, can you give us a three-day itinerary?" The assistant also incorporates features such as AI Studio to create generative images and edit photographs, in addition to personalization and memory tools. For now, these are only available in the US, but they are expected to arrive in Europe later.

Why has Meta AI's rollout to the EU taken so long? If we do the math, it has been a long time since Mark Zuckerberg presented Meta AI. That was at the September Connect event, which means a year and six months have elapsed. AI on the old continent seems to advance at two speeds. Meta acknowledges that the process has taken longer than expected due to the challenge of navigating the complex European regulatory system. Even so, as we mentioned, the company expects to continue expanding its services in the future.

Images | Meta | Xataka

In Xataka | Figure creates a system to make humanoid robots at scale. And of course, there will be robots manufacturing robots

What Amazon's new intelligent assistant is, how it works, and its price and availability

Let's explain what Alexa+ is and how it works: the new version of Amazon's virtual assistant with artificial intelligence that the company has announced. It is not just one more competitor in the AI chatbot race; Amazon takes it to its own turf to offer much more. We will start by explaining what Alexa+ (or Alexa Plus) is, what it can do and its main characteristics. Then we will tell you how it works internally, and finish by talking about its price and availability.

What is Alexa+

Alexa+ is a new version of the Alexa virtual assistant. The idea is to add artificial intelligence capabilities to the Alexa assistant, so it is not simply a new alternative to ChatGPT, but rather an evolution of the company's existing assistant. Although Alexa+ will be able to answer any question, like the other artificial intelligence models, its main objective is different. It is agentic, and its goal is to do things for you. In short, it is still an assistant as Alexa has always been, but now much more capable of understanding what you want and performing more complex operations.

This new artificial intelligence heart makes Alexa much more conversational. It will recognize your natural language instead of requiring fixed voice commands, and it will also answer you in a clearer and more natural way. It is multimodal, which means it will understand different types of orders by voice or text. And of course, Alexa can still interact with your home automation.

It is smarter, conversational, personalized, and capable of doing things and completing tasks. You explain what you want, and Alexa will interact with your connected household devices better by understanding your instructions. Alexa+ can use the devices and applications you have connected and activated to perform the tasks you ask for. It can even carry out actions such as booking a table at a restaurant or sending a message to your children's babysitter, adding these events to your calendar.
But above all, the important thing to know is that Alexa+ is not just an alternative to ChatGPT that you can ask anything; it is the way Amazon uses artificial intelligence to make the existing Alexa smarter and more capable. It is still a virtual assistant for performing tasks, but now with more possibilities.

How Alexa+ works

The new Amazon virtual assistant uses Amazon Bedrock technology, which lets it access different large language models, or LLMs. Among the models it draws on are the company's own, such as Amazon Nova, but also third-party models such as Claude from Anthropic, among others.

Amazon describes Alexa+ as agnostic in terms of AI model. This means it will not always use the same one, but rather whichever is most useful at each moment, depending on the task we have asked it to complete. In short, it first analyzes what you have asked for, and then resorts to the model that best suits your request.

In addition, Amazon has partnered with several media organizations in order to access current, reliable information and use it to answer your questions. Among the chosen outlets are names such as The Washington Post, Condé Nast, AP, Business Insider, Reuters, SFGate, PC Gamer and Autoblog, among others.

But as we have said, the objective is to be agentic and to be able to interact with other services to perform tasks. To that end, Amazon has reached agreements with services and ecosystems such as Sonos, Samsung, Grubhub, Uber Eats, OpenTable, Dyson, Vizio, TED, Plex, Xbox and Ticketmaster, for example. This way, if you ask Alexa+ to get you tickets for a specific concert, it will be able to connect to Ticketmaster, look for that concert, find tickets and buy them. And so on with many other things: booking tables, ordering meals, or interacting with other services and devices. It is still Alexa, so it will continue to manage your home automation.
But of course, it is not the same to have to remember specific commands for the assistant to turn on the lights as to be able to ask for it in natural language, or to give it more details about what you want.

Price and availability

Alexa+ will cost $19.99 per month and will be compatible with practically all Alexa devices that Amazon has sold so far. In other words, if you have a device that uses Alexa, you can subscribe to its premium version. In addition, Alexa+ will be free for Amazon Prime subscribers, joining the catalog of services the company offers within that unified subscription.

As for availability, Alexa+ will initially arrive in the United States during March 2025, and it can only be used in English. It is logical to think it will eventually arrive in Europe and Spain, but the company has not yet announced dates. And given the price of the service, it would not be surprising if Prime's price goes up when it does.

In Xataka Basics | 14 secret Alexa commands to find games and fun hidden modes

Perplexity launches an assistant for Android: an all-in-one that marks a before and after

The Perplexity AI Assistant comes to Android with an enormous promise: to become the first truly useful assistant on your phone, or at least one capable of going far beyond what we knew.

Why it matters. This is not just another AI app. Perplexity integrates real-time web search, multitasking control and advanced automation in an interface that seeks to understand the context of your needs.

The context. Current mobile assistants are limited and frustrating compared with the capabilities of an AI chatbot. Perplexity breaks these barriers by being able to maintain coherent conversations while performing tasks in different apps. That is, it combines the capabilities of an AI chatbot with those of the classic assistant. It integrates with apps like Uber, Spotify and YouTube, as well as messaging and clock apps, with more to come. It is multimodal, supporting voice, text and camera, so it can even "see" what you have on screen. It is available in fifteen languages, maintains context between different actions, and accesses information in real time by reading websites.

Between the lines. The most important thing about Perplexity Assistant for Android is its ability to maintain context. For example, if you ask it about Italian restaurants near you, it can automatically reserve the place that best suits your previous preferences and your usual dining time.

Context. More case studies: it can identify a product, even a newly launched one, using the camera; The Verge mentions its success in spotting a promotional toy released just two days earlier. It can compose and send emails in our usual writing style. It can manage our calendar by creating contextualized events, for example, "remind me to take my heart pill two hours before the Valencia game." Something more concrete: you can ask it to "play the song that plays at the end of the movie 'Interstellar'."

The big question.
Will Perplexity be able to unseat Google Assistant, or at least compete better with ChatGPT by having its own scope of action? Although it is not easy at all, it is in a good position: it is already valued at 9 billion dollars, so it has the financial and technological muscle to try.

Go deeper. The power of this assistant lies, above all, in its proprietary search engine. It is what allows it to give updated and accurate answers, and what lets it overcome the limitations of chatbots based on pre-trained systems. The assistant for Android can be activated from the Perplexity app, replacing Google's native one. As for whether it will come to iOS, it seems complicated: the company remains open to the possibility as long as Apple "grants them the appropriate permissions."

Featured image | Perplexity

In Xataka | If you have Movistar, you have Perplexity Pro for free: here is how to use it and get the most out of it

Gemini is now the ubiquitous AI assistant

At the presentation of the three new Samsung phones, Google's assistant was the common denominator of many of the Galaxy Unpacked moments. Gemini is going to power a good part of the artificial intelligence functions of the Korean phones, just as it does on the Pixel 9, with a wide variety of new features that Google has just promoted. These new experiences based on generative artificial intelligence aim to make the Pixel 9 more of an "AI phone" and thus keep fighting OpenAI in a duel that seems to have no end.

The prominence of Gemini in the new Galaxy S25 is another example of the crucial moment Google is going through in the face of the OpenAI steamroller and its ChatGPT. In an announcement on store.google.com you can read "The new Gemini Live and Circle to Search functions on Pixel 9 phones", which will let owners of Google smartphones play in the same league as the upcoming owners of the new Galaxy S25; just the opposite of last year, when the premiere of Circle to Search was exclusive to the Galaxy S24 for a few weeks.

The first new feature is the ability to send content to Gemini to ask any question about it, or to have a chat that gives more context to a YouTube video, any type of document, or images. Best of all, it can be done with the natural conversation offered by Google's AI, so the interaction can be more enriching than doing it through text:

Images: add an image to the conversation and Gemini Live will offer detailed information and suggestions, and can help resolve problems in real time.
Files: share documents and discuss them to extract important information or key points.
Videos: have a conversation with Gemini about a YouTube video, with all kinds of details.
Gemini Live being asked about a YouTube video. Image: Google | El Androide Libre

The Gemini interface has been conveniently redesigned to add a button for including that type of content, such as a photo from the gallery or a newly taken one, similar to what Google Lens offers. There are more important functions along these lines, such as the ability to ask about what is on screen when taking a screenshot, about a PDF from the Google Files app, and the same with the YouTube app. Gemini is going to be that omnipresent assistant whose appearance you can demand at any time in a very simple way. Once the content is uploaded, a button reading "Talk Live about this" appears on the same interface; when you press it, the AI assistant appears with the attached content in the center of the screen and the pause and cancel buttons just below, in the familiar Gemini Live interface.

This big novelty is rolling out today to the Pixel 9 series, as well as the Galaxy S24 and S25, although at EL ESPAÑOL – El Androide Libre the Gemini update has not yet appeared in the Google Play Store. For the rest of the Pixels it will arrive "soon".

Gemini already does several things at the same time

Next up will be Project Astra's new screen-sharing and live-video capabilities in Gemini. The technology giant took a moment to show what the interface looks like today; it will be available in the coming months through the Gemini app on the Pixel and S25. Another novelty for the Pixel is the rollout, underway today, of Deep Research to Gemini Advanced subscribers in the mobile version, after introducing it last month in the web client. All compatible Pixel phones will receive it in the coming days.
Sending a message in Google Messages with Gemini. Image: Google | El Androide Libre

A very productive day for Google: from now on you can ask Gemini to use multiple extensions at once from a single prompt, alongside the new capabilities of Circle to Search. Both are summarized as follows:

You can now ask Gemini to do several things at the same time in different apps: from finding entertainment venues on Google Maps to sending them to a friend via Google Messages.
Circle to Search with AI summaries, or AI Overviews: these will surface useful information and links about places, trending images, unique objects and more.
Act instantly: Circle to Search now identifies phone numbers, email addresses, and URLs.

Multi-extension prompts are available now on Android, iOS and the web, while Circle to Search with AI summaries remains, for the moment, available only in English, in the regions where it has rolled out.

Gemini becomes the definitive AI assistant with its new version

Gemini had an important presence during the launch of the new Samsung Galaxy S25, S25+ and S25 Ultra. Google took the opportunity to announce several very interesting new features that make it the definitive virtual assistant. Although Gemini is now the native assistant of the Galaxy S25, its new features will not be exclusive to those phones. Some will launch first on Samsung smartphones and the most modern Pixels, while others will immediately arrive on other Android phones, the iPhone and the web version of the AI-powered tool.

Gemini's first big addition is the ability to perform several actions at the same time in different applications. This means you can ask the AI to find information about a particular place on Google Maps, a restaurant or hotel, for example, and write a summary with essential details to send through the Messages app. In this way, Gemini can carry out more than one activity from a single prompt. This function can be used from today in the assistant applications for iOS and Android, as well as in the web version. Likewise, Google has announced new extensions that let its artificial intelligence connect and interact with Samsung apps such as the calendar, notes and reminders.

Gemini Live goes multimodal, and Circle to Search adds improvements

Another notable feature announced at the Samsung event is the introduction of multimodality in Gemini Live. From now on, Google's conversational assistant lets users include links to YouTube videos, as well as photos and files, in the chat. According to Mountain View, the goal is for the tool to better understand the context of queries and offer more personalized responses. Likewise, the Californians plan to add to Gemini more functions inherited from Project Astra, the technology that in the future will power mixed reality headsets and augmented reality glasses with Android XR. Among them are the option to share your screen and to transmit live video. Both the Gemini Live improvements and the upcoming Project Astra-based tools will arrive first on the Galaxy S25 and Pixel 9.

In addition to Gemini, Circle to Search will be incorporating new capabilities. Among them are expanded AI summaries, which let you find out more about anything on your screen by drawing a circle around it. This feature will launch only in English at first, and in the countries where the AI Overviews that Google shows on its results page already work. But that's not all: Circle to Search now also detects phone numbers, email addresses and other important data on the screen, letting you interact with that information quickly and with a single touch, displaying quick icons to make calls or write emails, among other options.
