Claude just demonstrated it with Firefox

For years, finding serious vulnerabilities in complex software has been a task reserved for specialized researchers who spend weeks or months examining millions of lines of code. That is beginning to change. AI models are no longer limited to generating code or helping debug it; they are also starting to find security flaws on their own. Anthropic has just shown a striking example with Claude Opus 4.6, its most advanced model, by putting it to the test against Firefox. The experiment is especially notable because Firefox, maintained by Mozilla and used by hundreds of millions of people, is one of the most heavily audited open source projects in the web ecosystem.

Analyzing Firefox's code. Over two weeks of testing, the system identified 22 distinct vulnerabilities, according to information published by both organizations. Mozilla rated 14 of them as high severity, meaning they could have served as the basis for attacks if someone had developed the appropriate exploit code. According to the project's maintainers, most of these issues have already been fixed in Firefox 148, the version released in February, and the rest will be corrected in future versions.

Inside the experiment. Claude's work was not a simple automated bug hunt. According to Anthropic, the team first used the model to try to reproduce historical vulnerabilities recorded in Firefox, a way to test whether it could recognize real failure patterns. Then came the most interesting part of the experiment: asking it to analyze the current version of the browser to find problems that had not yet been reported. The process started with the JavaScript engine and then expanded to other areas of the code. In total, the analysis covered thousands of files from the project, including several thousand C++ files, producing a long list of findings that the researchers then reviewed.

A striking fact.
Claude found more high-severity bugs in two weeks than the browser usually receives in about two months through its usual research channels. During the process, the Anthropic team submitted 112 unique reports to the project's bug tracking system, although not all of them turned out to be confirmed vulnerabilities. Part of Mozilla's job was precisely to review, triage and classify those findings before determining which ones had real security implications. The experience ended up becoming a direct collaboration between the two organizations to review the results and prioritize fixes.

The other half of the problem. The Anthropic team also wanted to see how far the model could go beyond detecting bugs: could it turn those flaws into real attacks? To find out, they asked it to develop exploits capable of taking advantage of the discovered vulnerabilities. The experiment included hundreds of runs with different approaches and cost approximately $4,000 in API credits. Even so, the result showed a clear gap between the two capabilities: Claude only managed to produce two working exploits, and only in a simplified test environment without some of the defenses present in a real browser.

Beyond the specific case of Firefox, the experiment reflects a shift that is beginning to both worry and interest the security community. AI-based tools are rapidly improving at detecting vulnerabilities in complex software, which could help developers fix bugs more quickly.

Images | Anthropic | Rubaitul Azad

In Xataka | iPhones were supposed to be the most secure cell phones in the world. It was supposed
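Anthropic has not published the harness it used, but the workflow described above (walking thousands of C++ files and asking the model to flag suspicious code) implies some batching layer between the repository and the model. Here is a minimal sketch of that idea; every function name is our own invention, and the prompt is only built, never sent to any real API:

```python
from pathlib import Path


def collect_cpp_files(root: str) -> list[Path]:
    """Gather the C++ sources a review pass would iterate over."""
    return sorted(p for p in Path(root).rglob("*.cpp"))


def build_review_prompt(source: str, filename: str) -> str:
    """Wrap one file in a security-review instruction for the model.

    In a real harness this string would be sent to an LLM API;
    here we only construct it.
    """
    return (
        f"Review the following C++ file ({filename}) for memory-safety "
        f"issues such as use-after-free or out-of-bounds access. "
        f"Report each suspected flaw with the line it occurs on.\n\n{source}"
    )


def batch_prompts(root: str, limit: int = 5) -> list[str]:
    """Produce review prompts for the first `limit` files under `root`."""
    prompts = []
    for path in collect_cpp_files(root)[:limit]:
        prompts.append(build_review_prompt(path.read_text(), path.name))
    return prompts
```

The real pipeline would also need to deduplicate and triage the model's answers, which is exactly the part Mozilla's engineers handled by hand in this experiment.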

Summarize everything in your email inbox with Claude, Gemini or ChatGPT

Let's explain how to get one-off summaries of the newsletters in your email using artificial intelligence. If you see them piling up but don't have time to read them, you can ask the AI to summarize them all for you. If your email is Gmail, you can turn to Gemini and also to Claude; if you have an Outlook address, you can do it with ChatGPT. These are the AIs that offer connectors for each mail service. But we will start by telling you how we recommend organizing the newsletters in your inbox so that it is easier for the AI to find them.

First, organize your newsletters. Before you start, I recommend tagging all your newsletters with the label or category system that Gmail and Outlook offer. That way, you can later ask the AI to search directly in those categories instead of having it analyze the entire contents of your inbox. So take your time going through the newsletters and tagging them. At first you will have to label them all, but afterwards each sender address will be linked to the label, meaning the next ones that arrive will already be labeled correctly.

Now link the AI to your email. Claude has a connector system where you must add and activate Gmail. Gemini lets you do the same with its Connected Apps, and in ChatGPT there is an Apps section that allows you to connect Outlook. In this preliminary step, you will have to link your email account to the AI so that it can access and read your messages.

If you are concerned about your privacy. Maybe you should think twice before doing this, because in the end you are linking your account to the AI, so it can read and process all your emails when you ask it to, storing their content on the provider's servers. Those emails will no longer be private; you will be sharing them.

Now, ask the AI for a summary. And now it's time to go to the AI and write a message asking for the summary.
This prompt has to mention Gmail or Outlook depending on the AI you use and the email account you have linked, and if you have followed our recommendation, you should indicate the newsletter label and ask for a summary. You can also specify the structure of the summary so that it better suits your taste. This is the prompt I have used:

I want you to enter my Gmail account, analyze all the emails in the “Newsletters” label, and give me a summary of their content. It has to be a schematic summary, with an H2 for each email telling me the title and sender, and then bullets where you explain the most interesting points of its content.

With this, the AI will start going through the emails in your account and will give you a summary in the format you requested. Keep in mind that you can simply tell it to search for the newsletters without having tagged them, but then there is a chance it will not find them all, or will treat something as a newsletter that really isn't one. Each AI will deliver the results in its own way, although it will keep the structure you requested if you specified one. With the prompt we have used, you will have everything summarized in several bullet points so you can read it in just a few minutes.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence
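If you run this kind of summary regularly, you can keep the prompt as a small template instead of retyping it each time. A tiny sketch (the wording mirrors the prompt above; the function name is our own):

```python
def newsletter_summary_prompt(service: str, label: str) -> str:
    """Build a reusable newsletter-summary prompt for a mail service and label."""
    return (
        f"I want you to enter my {service} account, analyze all the emails "
        f'in the "{label}" label, and give me a summary of their content. '
        "It has to be a schematic summary, with an H2 for each email "
        "telling me the title and sender, and then bullets where you "
        "explain the most interesting points of its content."
    )


# Swap in "Outlook" and your own label name as needed.
print(newsletter_summary_prompt("Gmail", "Newsletters"))
```

You would then paste the printed text into whichever AI you have linked to your inbox.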

The US Government stopped using Claude because it was a "woke AI". Right after, it bombed Iran using Claude, according to the WSJ

This February 28, Israel and the United States bombed Iran. It happened in parallel with a 'war' taking place on American soil: the fight over which AI the country's military should use. Because yes, AI has become an essential tool for intelligence operations, to the point that there are reports suggesting Claude was key to Saturday's massive bombing raids. But there is a problem: hours before the attack, Trump ordered that Claude and all of Anthropic's artificial intelligence tools be excluded from military operations. And the fact that the Pentagon disobeyed comes down to one thing: Claude is too deeply embedded in the United States' military systems.

The Anthropic mess. This story is complex, so let's add some context before getting into it. When the United States was looking for an AI to support its defense systems and integrate with Palantir, Anthropic offered its own for the modest price of one dollar. That earned it a $200 million contract, and both Anthropic and the Pentagon got to work integrating the company's models into all kinds of systems. Claude's support is so important to the Pentagon in massive-scale data analysis that it is believed to have been used in the capture of Nicolás Maduro a few months ago. The 'problem' is that Anthropic programmed its AI not to cross two red lines: it will not be used to spy on American citizens en masse, and it will not be used for the development or control of autonomous weapons and attack systems.

"The woke AI". The War Department and Donald Trump did not agree with this, and last week they issued an ultimatum: either Anthropic handed over its AI 'unleashed', or there would be consequences. What consequences? Invoking the Defense Production Act of 1950 to commandeer Anthropic's creation by force. The company had until 5:01 p.m. last Friday to respond, and boy did it respond.
In a long statement signed by Dario Amodei, CEO of Anthropic, the company declared that it was on the side of the country's defense interests, but not at any price. Its moral standard was very clear, and it was not going to give in to the blackmail of a United States that hours earlier had threatened to "make them a Huawei" by putting Anthropic on a blacklist. Amodei's response infuriated Trump and Pete Hegseth. The Secretary of Defense called Claude a "woke AI", a line that Trump himself echoed. On his social network Truth Social, Trump declared that Anthropic is a "radical left-wing AI company run by people who have no idea how the real world works." Striking, to say the least, and it came with another response: the United States ended its collaboration with Anthropic and banned the use of its AI. The problem is that the ban turned out to be... hollow.

"I am ordering ALL US federal agencies to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again!" - Donald Trump

Claude to attack Iran. As The Wall Street Journal soon reported, the air attack against Iran was carried out with the help of those very 'radical left' tools. The paper noted that commands around the world, including United States Central Command in the Middle East, used Claude's tools to assess the situation, identify targets and simulate battle scenarios.

Dependence. And this paints a scenario in which the Pentagon is going to have a very hard time removing Anthropic's tools from its systems. It happened in Venezuela, and it seems to have happened again in Iran. Claude is too deeply embedded in the Pentagon's systems, maintaining an almost symbiotic relationship with Palantir's software, and untangling that overnight looks complicated.
It is estimated that it will take six months to remove Claude's traces from the Pentagon's software, but despite the ban on its use and its inclusion on Hegseth's blacklist, another decision seems to prevail: since we already have it, we will use it until we find a successor.

OpenAI swoops in for the (multimillion-dollar) crumbs. And it didn't take even half a second to find that new AI provider. OpenAI, maker of ChatGPT, issued a statement noting that "the United States needs AI models to support its mission, especially in the face of growing threats from potential adversaries that are increasingly integrating artificial intelligence technologies into their systems." Interestingly, OpenAI has the same red lines that Anthropic imposed (no use for mass domestic surveillance, no direct autonomous weapons systems, no AI making high-risk decisions automatically). But there is a difference: where Anthropic refused to give the Pentagon full powers, OpenAI says that, despite maintaining the same moral principles, the use of its AI is tied to whatever legal use the Department of Defense wants to make of it. That is ambiguous, because if a given use is deemed legal, it does not conflict with that "morality". We will see whether this is a mere reshuffling born of anger because someone defied a government order, or whether the switch from Anthropic to OpenAI translates into what the US needs for its security.

In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new "Oppenheimer moment"

How to migrate the memory of everything other AIs know about you to Claude

Let's tell you how to migrate memories from ChatGPT or Gemini to Claude, and thus move from one artificial intelligence to another. Claude has just launched a fairly easy-to-use feature that lets you import memories from ChatGPT, Gemini or any other AI you use. Artificial intelligence chatbots have a memory system that stores important facts about you and your tastes based on the things you ask them repeatedly. They will know your musical tastes, your pets, whether you have plants, and they take all of this into account to personalize their answers.

And why might it be useful to import these memories into Claude? Well, because if you have decided to start using this artificial intelligence model, you can give it all the specific data your other AIs were already using to personalize their results and adapt them to you.

Import memories from another AI into Claude. This option is only available to paying Claude users with Pro, Max, Team or Enterprise subscriptions on the web, and to users of Claude Desktop or Claude Mobile. What you have to do is open the settings of the Claude website or app. Once inside the settings, click on the Capabilities section in the left column. On the screen that opens, go to the Memory section and click on the Start import option that appears. This opens the memory import screen. At the top there is a prompt that you must copy and use in another AI to extract the memories, and below it a field where you paste the exported memory that the prompt generates. So, click the Copy button for the text at the top. Now paste the text you copied from Claude into a chat with the AI from which you want to extract the memories. Simply paste it exactly as it is into ChatGPT, Gemini or another AI, and send it. This will make the AI generate a block with all the memories it holds about you.
You will have to copy this block and paste it into the field in the Claude window we opened earlier. With this, Claude will recognize the memories and start saving them internally.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

Anthropic releases a new feature to download all your memory so you can leave ChatGPT and switch to Claude

This weekend Anthropic went from being an AI used by the Pentagon and other US agencies, with partners such as Microsoft and Amazon, to total ostracism: since Friday at 5:01 p.m. it has been classified as a "risk to the supply chain". A total veto, a serious threat to the survival of a company valued at $380 billion, and also a challenge for the entities that will have to transition to another alternative in less than six months. The Pentagon itself already has an agreement with OpenAI to succeed it. Anthropic's standing with its strategic clients and alliances, essential to keep growing in the brutal AI race, is delicate to say the least. The company led by Dario Amodei, which stood firm on its principles when it voiced concern about the use of artificial intelligence for mass civilian surveillance and for weapons capable of firing without human intervention, has already announced that it will contest the decision, but for now things look rough. It is left with only the civilian market... in every sense, because Claude has risen to number 1 in free downloads on the US App Store, as reported by CNBC. Because yes, this tug of war with the US government has boosted the popularity of Claude, less well known than alternatives such as ChatGPT or Gemini. On the other hand, this move in which the US administration has dropped Anthropic in favor of OpenAI also has a reading in which Claude wins: the terms of the agreement and how it affects ChatGPT users.

Anthropic's counterpunch. Anthropic has pulled a new feature out of its sleeve to ease the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while, for example, and it already knows you, starting from scratch is a step backwards in every sense.
The new feature allows you to import all your memory from other models into Claude so that it immediately knows everything about you (everything your previous AI already knew). You no longer start from scratch.

How to download your memory and load it into Claude. To bring your preferences and context from other AI providers into Claude, there are two steps. First, copy and paste the prompt below into the AI you normally use, such as Gemini or ChatGPT:

I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries.

The model will return everything it knows about you in a block of text, which you then copy and paste into Claude. Go to 'Settings' > 'Capabilities', and there, under Import Memory, paste the answer. Then tap 'Add to memory'. From that moment on, Claude knows what your previous AI knew.

There is fine print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you will be able to keep that context within a single conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently, a paid plan is required.
In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing them who really owns it

How to select the model to use in Perplexity: Claude, GPT, Gemini, Kimi, Grok or Sonar

Let's tell you how to choose the artificial intelligence model you are going to use with Perplexity for a given prompt. Perplexity is a chatbot known for giving access to many cutting-edge models from third-party companies, something it handles automatically depending on the request you make. However, if you are going to use Perplexity, it is worth knowing one of its basic functions: picking by hand which model you want to use. And yes, every time Google, Anthropic or OpenAI launch a new artificial intelligence model, Perplexity will add it to its catalog. The results will not be exactly the same as with the paid versions of ChatGPT, Grok, Claude or Gemini, because Perplexity may modify them a little. Even so, you will be able to take advantage of the reasoning power of these models.

Choose the AI model to use in Perplexity. To choose the AI you want to use in Perplexity, look at the box where you write the prompt. In it, click on the AI model option, which appears with an icon that looks like a chip. It sits at the far left of the row of icons at the bottom right of the prompt field. When you click that button, a list appears with all the artificial intelligence models you can use. It includes the best and latest available from Gemini, GPT, Claude, Grok, Kimi and Perplexity's own Sonar. You can do this in the web version or in the mobile and desktop apps. Note that you can choose the model for each prompt within a conversation with Perplexity. That is, you can ask a question with one model and then ask the next question with another. Also, below the list you will see the number of queries you can make with the most modern models.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

Anthropic just accused DeepSeek and other Chinese companies of “distilling” Claude

For months we have talked about the race between the United States and China to dominate artificial intelligence as if it were only a question of who trains the most powerful model or launches the next version first. But the contest is starting to move onto another, more delicate terrain: the rules of the game. When one laboratory accuses another of extracting capabilities from its system to accelerate its own development, the discussion goes beyond the technical. That is exactly what Anthropic has just done by denouncing "distillation" campaigns against its model Claude.

The complaint. In a text published this Monday, the company claims to have detected "industrial-scale campaigns" aimed at extracting Claude's capabilities. According to its account, the activities attributed to DeepSeek, Moonshot and MiniMax involved more than 16 million queries (question-and-answer interactions) and were channeled through approximately 24,000 fraudulent accounts, in violation of its terms of service and regional access restrictions.

The race and the suspicion. The announcement by the firm led by Dario Amodei comes amid growing tension around the progress of Chinese AI. Remember that DeepSeek upended the Silicon Valley landscape a year ago with the launch of R1, a competitive model presented as having been developed at a fraction of the cost of American alternatives. The impact on the markets was immediate, and it revived the political debate in Washington about the technological edge over China.

Distilling is not always cheating. Anthropic itself acknowledges that distillation is a common technique in the industry. In simple terms, it consists of training a less capable model on the responses generated by a more powerful one, something large laboratories use to create smaller, cheaper versions of their own systems.
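As a toy illustration of the benign version of the technique: in distillation, the student model is trained to match the teacher's output distribution, typically by minimizing the KL divergence between the two. A minimal numeric sketch (ours, not a description of any real pipeline, with made-up probabilities):

```python
import math


def kl_divergence(teacher: list[float], student: list[float]) -> float:
    """KL(teacher || student): how far the student's output distribution
    is from the teacher's over the same set of tokens."""
    return sum(t * math.log(t / s) for t, s in zip(teacher, student) if t > 0)


# Teacher's soft probabilities over four candidate tokens for some prompt.
teacher = [0.70, 0.20, 0.05, 0.05]

# An untrained student is far from the teacher...
student_before = [0.25, 0.25, 0.25, 0.25]
# ...while a distilled student that has learned to imitate it is close.
student_after = [0.68, 0.21, 0.06, 0.05]

assert kl_divergence(teacher, student_before) > kl_divergence(teacher, student_after)
```

Training on millions of such teacher responses is what lets a smaller model absorb a larger one's behavior cheaply, which is exactly why doing it against someone else's model is contentious.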
The problem, according to the company, appears when the practice is used to "acquire powerful capabilities from other laboratories in a fraction of the time and at a fraction of the cost" that developing them independently would require. In that case, distillation would cease to be an internal optimization and would become, always according to Anthropic, a way of free-riding on the work of others.

A recognizable pattern. The three laboratories allegedly used fraudulent accounts and proxy services to access Claude at scale while trying to evade detection systems. The company describes infrastructures it calls "hydra clusters": extensive networks of accounts that distribute traffic between its API and third-party cloud platforms, so that when one account was blocked, another took its place. Anthropic maintains that what set these activities apart from normal use was not any isolated query, but the massive, coordinated repetition of requests aimed at extracting very specific capabilities from the model.

Three campaigns. Although Anthropic presents the campaigns as part of the same dynamic, it draws relevant distinctions. DeepSeek allegedly focused its more than 150,000 queries on extracting reasoning capabilities and on generating safe alternatives to politically sensitive questions. Moonshot, with more than 3.4 million queries, was allegedly oriented towards developing agents capable of using tools and manipulating computing environments. MiniMax allegedly accounted for the largest volume, more than 13 million queries, and according to Anthropic it reacted within hours to the launch of a new Anthropic system, redirecting its traffic to try to extract capabilities from that latest model.

A geopolitical issue.
The company argues that illicitly distilled models may lose the safeguards meant to prevent state or non-state actors from using AI for purposes such as developing biological weapons or running disinformation campaigns. It also argues that distillation undermines export controls by allowing foreign laboratories to close the gap by other means, while acknowledging that executing these large-scale extractions requires access to advanced chips, which reinforces the logic of restricting their availability. At the same time, it warns that the risk would grow if these capabilities ended up integrated into military, intelligence or surveillance systems.

Images | Xataka with Nano Banana Pro

In Xataka | Seedance is the most impressive video generation we have ever seen. And it carries an uncomfortable message: it has surpassed Sora and Veo without NVIDIA chips

The great revolution of GPT-5.3 Codex and Claude Opus 4.6 is not that they are smarter. It’s that they can improve themselves

Last week, OpenAI and Anthropic simultaneously launched their new AI models specialized in programming: GPT-5.3 Codex and Claude Opus 4.6. Beyond the improvements in performance and speed, which are truly remarkable, both companies also stated something that completely changes the rules of the game: AI models are actively participating in their own development. Put another way: AI is improving itself.

Why this change matters. Generative artificial intelligence tools are reaching a high level of efficiency and precision, going in a few years from being helpers for simple, specific tasks to being able to take on a good part of a development project. According to OpenAI's technical documentation, GPT-5.3 Codex "was instrumental in its own creation", being used to debug its training, manage its deployment and diagnose evaluation results. Meanwhile, it is worth highlighting the words of Dario Amodei, CEO of Anthropic, who states on his personal blog that AI writes "much of the code" at his company and that the feedback loop between the current generation and the next "gains momentum month by month."

In detail. What this means in practice is that each new generation of AI helps build the next, more capable one, which in turn will build an even better version. Researchers call it the "intelligence explosion", and those developing these systems believe the process has already begun. Amodei has publicly declared that we could be "just 1 or 2 years away from a point where the current generation of AI autonomously builds the next." Most people use free language models that are available to everyone and are moderately capable at certain tasks. But those models are also very limited, and they are not a good reflection of what a cutting-edge AI model can do today.
In a brief session with GPT-5.3 Codex I reached the same conclusion: the AI tools that big technology companies use in their own development are nothing like the commercial ones freely available to us in terms of capability.

The code-first approach. The initial specialization in programming makes more sense than it seems. The insistence of companies like OpenAI, Anthropic and Google that their systems be exceptional at writing code before anything else is tied to the fact that developing AI requires enormous amounts of code. And if AI can write that code, it can help build its own evolution. "Making AI great at programming was the strategy that unlocked everything else. That's why they did it first," said Matt Shumer, CEO of OthersideAI, in a post that has been widely discussed on social networks these days.

Between the lines. The new models don't just write code: they make decisions, iterate on their own work, test applications the way a human developer would, and refine the result until they are satisfied. "I tell the AI what I want to build. It writes tens of thousands of lines of code. Then it opens the app, clicks the buttons, tests the features. If it doesn't like something, it goes back and changes it on its own. Only when it decides it meets its own standards does it come back to me," Shumer recounted, describing his experience with GPT-5.3 Codex.

What changes with self-improvement. Until now, each improvement depended on human teams spending months training models, adjusting parameters and correcting errors. Now some of that work is performed by the AI itself, accelerating development cycles. As Shumer notes, citing data from METR, an organization that measures the ability of these systems to complete complex tasks autonomously, the length of time an AI can work without human intervention doubles approximately every seven months, and there are recent indications that the period could be shrinking to four.
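The doubling claim is easy to turn into numbers. Assuming, purely for illustration (the starting figure is our assumption, not METR's), a model that today handles tasks of about two hours autonomously:

```python
def horizon_after(months: float, start_hours: float, doubling_months: float) -> float:
    """Autonomous-task horizon after `months`, doubling every `doubling_months`."""
    return start_hours * 2 ** (months / doubling_months)


start = 2.0  # hypothetical starting horizon, in hours

# At METR's observed rate (doubling every ~7 months), after two years:
print(round(horizon_after(24, start, 7), 1))   # → 21.5 hours

# At the faster rate hinted at (~4 months), two years compounds much further:
print(round(horizon_after(24, start, 4), 1))   # → 128.0 hours
```

At the seven-month rate, a two-hour horizon becomes about a day of work in two years; at the four-month rate it reaches 128 hours, several full-time work weeks, which is the kind of arithmetic behind the projections for 2027.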
And now what. If the trend continues, by 2027 we could see systems capable of working autonomously for weeks on entire projects. Amodei has spoken of models "substantially smarter than almost all humans in almost all tasks" by 2026 or 2027. These are not distant predictions: the technical infrastructure for AI to contribute to its own improvement is already operational. And these capabilities are what is really turning the technology industry upside down.

Cover image | OpenAI and Anthropic

In Xataka | We have a problem with AI. Those who were most enthusiastic at the beginning are starting to get tired of it.

What is Claude Cowork, how it works, and what things you can do with this AI assistant on your computer

Let's explain what Claude Cowork is and how it works, one of the advanced tools of Claude's artificial intelligence. It is an automation assistant for the computer, a kind of AI agent that you can ask to do tasks on your PC without touching anything yourself. We will start by explaining what it is so you understand the concept, then tell you how it works, and finish with some examples of the things you can do with it.

What is Claude Cowork. Claude Cowork is basically a personal assistant with artificial intelligence designed to work natively on your computer. You can use Claude on your Windows or Mac PC and ask it to do things automatically. It has been designed above all to help you with the repetitive tasks of your daily life involving files, folders and applications. Imagine being able to ask the AI to rename the files in a folder, look for duplicates, or even summarize the contents of those files. It is similar to an AI agent, but not exactly one. AI agents can carry out complex tasks for you, like booking a hotel. Claude Cowork, on the other hand, is designed specifically to automate tasks with files and applications and to manage your local computer's operating system. It doesn't have as many features, but it does what it is trained for better. This tool is available in the Claude desktop app, although only for paying users, which means you always have it at hand. In addition, you can also give it access to your browser so you can ask it to perform tasks there or interact with web content, but for that you need to install the Claude extension for Chrome.

How Claude Cowork works. The way Claude Cowork works is very simple. You open the Claude application, go to the Cowork tab, and there you ask it for what you want done using natural language.
When making the request, you will have to specify what you want, the folder where you want it done, and any other details. Think of it as if you were asking a person to do the task. If you want to change the names of the files in a folder, you will have to say you want to rename them, indicate which folder it is, and even the format, in case you want something like "Year-Month-Name" or any other pattern. Cowork has controlled access to your file system, so you can decide and customize which items it can touch and which it can't. When you make a request you can even choose the folder where you want it to act. The tool will first process your text to understand what you want, and will then chain several actions together to carry it out. Claude's own AI will figure out how to do it and, if an approach doesn't work, will correct course and try another way. In the Claude app, within the Cowork section, you can follow step by step what the assistant is doing. The AI will ask you for permission at each sensitive step, for example to rename files or connect to a tool, and you can always see the progress and stop it whenever you want. Lastly, you can use connectors and extensions to link web services and applications on your computer so it can act on them. You can add your notes app, Spotify or your messaging app, among many others, but also web services such as Gmail, Google Drive, Notion, Trivago and WordPress.

What you can do with Cowork. The uses of this tool depend on many things, but there is a set of basic actions worth knowing that will save you a lot of time:

File management: Manage files in any folder, organizing downloads, renaming batches of files with specific patterns, moving documents between folders, finding and deleting duplicates, zipping and unzipping files, and more.
Document processing: Process various document types by extracting text from PDFs, converting files from one format to another, combining multiple documents into one, or extracting specific data from multiple files to create summaries.

Automation of repetitive tasks: Automate tasks you do every day or week, such as preparing reports by putting together data from different files, creating folder structures for new projects, or making organized backups of certain files.

Cleaning and maintenance: Delete old files you no longer need, clean up temporary folders, organize your photo or music library, or find large files that are taking up space.

These are just the basic features of Cowork; you can get it to do many more things by connecting it to cloud services or other applications, or by installing the extension to use Chrome. As an example, I asked it to create a text file with the list of all the songs (more than 600) in a certain playlist on my Spotify account. Claude launched its Chrome extension, I watched it go to my Spotify account, I gave it permission to log in, it then tried several ways of reading the songs in the list (first a script, then scrolling with the mouse), and finally it created the plain text document.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence
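To make the “Year-Month-Name” renaming task described above more concrete, here is a minimal Python sketch of what that chore looks like when done by hand. This is only an illustration of the kind of repetitive job Cowork automates for you, not Cowork’s own code; the choice of using each file’s modification date for the prefix is our assumption.

```python
# Illustrative sketch (not Cowork itself): rename every file in a folder
# to a "Year-Month-OriginalName" pattern, the kind of task you would
# otherwise describe to Cowork in natural language.
from datetime import datetime
from pathlib import Path


def renamed(name: str, modified: datetime) -> str:
    """Build the new 'Year-Month-OriginalName' file name."""
    return f"{modified.year}-{modified.month:02d}-{name}"


def rename_folder(folder: str) -> list[tuple[str, str]]:
    """Rename each file in `folder`; return (old name, new name) pairs."""
    changes = []
    for path in sorted(Path(folder).iterdir()):
        if path.is_file():
            # Use the file's last-modified time as the date source.
            stamp = datetime.fromtimestamp(path.stat().st_mtime)
            new_name = renamed(path.name, stamp)
            path.rename(path.with_name(new_name))
            changes.append((path.name, new_name))
    return changes
```

The point of the comparison is that with Cowork you would simply say “rename the files in my Downloads folder to Year-Month-Name”, and the assistant would work out and execute these steps itself, asking for permission before touching anything.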

What is Claude Code and what this tool can do to let you program with artificial intelligence from your computer’s terminal

Let’s explain what Claude Code is, Anthropic’s tool to create code with artificial intelligence directly in your computer’s terminal. This means you will not need to install anything else or keep asking Claude questions back and forth. We will start by explaining in a simple way what this tool is and the basics of how it works, and then what things you can do with it and what this program for developers is for.

What is Claude Code

Claude Code is a command-line application developed by Anthropic, the same creators of Claude’s AI. It is a program that lets you perform programming tasks from your computer’s terminal without having to use any other program. The terminal is that command screen which in Windows is called PowerShell, and in macOS and GNU/Linux is simply the terminal. Instead of installing a regular program that you open, this one is installed directly in the terminal, and that is where you use it.

With this program you can use Claude to generate code inside the terminal. And it not only generates code snippets: it can also act and reason directly on your projects by linking it to GitHub. Claude Code can read, analyze and edit the content of your codebase, and it can also run tests, correct any errors it generates, and manage workflows.

The classic way to generate code with Claude is to open its app or website, explain what you want, and have the AI create the code for you. Then you copy the code, paste it into the code editor you have installed, and run your tests, so that if something fails you go back to Claude, explain the problem, have it generate corrected code, and repeat the process. With Claude Code the process changes and is radically simplified: you simply open your terminal, run Claude Code, write a prompt or command saying what you want, and that’s it.
Then the AI will access your files, write code, run it, detect errors, fix them, and try again. It does all this autonomously, although you can supervise the process and intervene whenever you want.

What Claude Code can do

Claude Code has direct access to your file system and can execute real commands on your computer. With all this, what the tool can do is the following:

Read your files to see the code you have already created in a folder, and thus understand the context of your project.

Create new files, complete with code, but also with configuration and documentation.

Modify existing files, editing the code in them to make any type of change.

Work iteratively, reading the error messages that appear if something in the code fails, and correcting those errors automatically.

All this will save you a lot of time in your programming work, since you will not need to manually create folder structures, configure development tools, set up databases, create interfaces, or write code. Claude will do all this automatically once you explain the type of application you want to create in a prompt. You can also ask it to add features to existing projects with a command that mentions the project, to debug errors, to review code, or whatever else you need.

We are therefore looking at a tool for developers that will help you save a lot of time. Although, as always happens with artificial intelligence, it can make mistakes and hallucinate, within the world of AI programming Claude is one of the best.
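The “run the code, read the errors, fix them, try again” behavior described above can be sketched as a simple control loop. This is a toy illustration of the idea, not Anthropic’s implementation: `run_check` and `apply_fix` are hypothetical stand-ins for running your tests and editing the code.

```python
# Toy sketch of the autonomous loop Claude Code runs: execute a check,
# read its error output, apply a fix, and retry until the check passes
# or a maximum number of attempts is reached.
from typing import Callable


def fix_until_green(run_check: Callable[[], tuple[bool, str]],
                    apply_fix: Callable[[str], None],
                    max_attempts: int = 5) -> bool:
    """Retry the check, feeding each failure message to the fixer.

    run_check returns (passed, error_message); apply_fix receives the
    error message and is expected to change something before the retry.
    """
    for _ in range(max_attempts):
        passed, error_message = run_check()
        if passed:
            return True
        apply_fix(error_message)
    return False
```

In the real tool these two steps are far richer (the model reads files, edits code and reruns commands), and you stay in the loop too: Claude Code shows you each action and lets you intervene at any point.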
