What is Claude Dispatch and how to activate it to use Cowork on your computer from your mobile

We are going to tell you what Claude Dispatch is, a new option for Claude that allows you to control your computer from your mobile. It is related to Claude Cowork, and in fact it works like a remote control for this artificial intelligence. We will start by explaining what Claude Dispatch is, so that you understand what you can do with it and the risks involved in using it. Then, we will tell you the few steps you have to take to activate it.

What is Claude Dispatch

Claude Cowork is a personal assistant that controls your computer, a feature of the paid version of this AI. It is something close to an artificial intelligence agent: it takes control of a folder on your computer, or even your browser, to perform the tasks you ask of it. Claude Cowork is designed specifically to automate tasks with files and applications and to manage your local computer's operating system. It is available in the Claude desktop app, and also for users of the Claude in Chrome extension.

Claude Dispatch is like a remote control for Cowork. Claude's agent only runs on the computer, so on its own you can't use it away from home. This feature, however, allows you to ask it to do things on your PC remotely. In short, if you activate this function you can control Cowork on your computer from the Claude app on your mobile, and get tasks done even when you are not in front of your computer.

However, remember that using it can be dangerous if you are not careful. Cowork is trained to be cautious and ask permission at every step, but you are always exposed to a malfunction that deletes files it shouldn't or performs online actions you don't want. And when you are not in front of your computer to supervise it, you have less chance of stopping something if the tool goes off the rails.

How to activate Claude Dispatch

To use Claude Dispatch you only need to activate the tool and have Claude on your mobile.
To activate it, open the Claude application on your computer, go to the Cowork section, and click on the Dispatch option that appears in the left column. On this screen you will see a list of things you can do with Dispatch activated. Press the Begin button to start the activation process.

First you will be asked to download Claude for your mobile, and then you will reach a screen where you can grant the permissions Cowork needs to operate remotely. It will ask you to install the browser extension, give it access to your files, and keep your computer awake while the app is running, so that it does not go to sleep and interrupt what it is doing. When you have everything, click on Finish Settings.

And that's it. When you finish granting access, you just have to open the Dispatch section in the Claude app on your mobile. This will take you to the Cowork section, where you can ask it to perform tasks on your computer.

In Xataka Basics | The best AI agents that are faster and easier to use to do tasks for you without complications or long installations

How to create a Claude AI chatbot that responds solely based on your own documents

We are going to explain how to create a chatbot with Claude that responds solely based on your own documents. This way, if you have reliable sources of information and want to work with them or make requests around them, everything you ask will be answered based only on those sources. This is something we have already shown you how to do with NotebookLM, a service much more specialized in this. But if Claude is your main artificial intelligence, you should know that you have the option too: you just need a project and the proper instructions.

First you have to prepare the sources

The first step is also the most tedious: choosing the sources you want the AI to rely on when responding to you. These sources must be text files or PDFs, and they can include images. The documents can be in any language. The ideal approach is to save the sources you find in a folder and then upload them all at once. You will also have an option to write texts by hand to use as a source.

You can also ask Claude to help you by searching the Internet for PDF files on specific topics, although it is best to personally choose the sources you consider most reliable. Keep in mind that we are going to ask the AI not to look for information beyond the sources we give it, although there will always be some chance that it does. In any case, it is best to build as complete a knowledge base as possible. You can also upload new files whenever you want.

Create your bot with custom sources

The first thing you have to do is open Claude and go to the Projects section. Projects allow you to create a separate workspace where you can organize related conversations, upload reference documents, and give personalized instructions. Here, you have to create a new project. This is the project we will use to upload documents and make the AI respond only with the data in them.
To create it, give it a name and a description. These will not affect how the project works; they only help you distinguish it from the others you have created. Now, upload the documents you want to use as sources in the Files section. You can upload documents in PDF, DOC, and many more formats, bring them in from Google Drive or other services you have linked, or even write the text manually. Take some time to upload all the documents you need.

Next, edit the Instructions section to explain to Claude that in this project you only want it to use the documents for reference. We have used this prompt: "All answers to questions and requests made in the chats of this project will be sought only in the content of the files. ONLY the files can be used. In the event that something is asked that does not appear in the files, Claude must say that the answer is not in the files, and ask if I want to look it up on the Internet. By default, the Internet is not used, and it can only be used with explicit consent each time you go to search."

As you can see, in this prompt we tell it several times that we only want it to use the information in the documents when answering. We have also specified that if it does not find the answer in the documents, it should say so, and that before searching the Internet for information outside what we have given it, it must ask us for permission.

Now all that is left is to ask Claude any questions and make any requests you want, and it will compose its answers using the information you have given it as a source. You can ask it specific questions, request travel itineraries, and even have it quiz you with trivia games. For the latter, remember that we already told you how to create a quiz game from PDF files with Claude. And as we have said, if we ask it something that does not appear in the uploaded files, Claude will tell us.
After all, we have written its instructions precisely for that. In addition, it will ask us for permission before searching for the information on the Internet. It will not do so proactively, so we always know whether the information is within the documents.

In Xataka Basics | What is Claude Cowork, how it works, and what things you can do with this AI assistant on your computer
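For readers who want the same behaviour outside the Claude web interface, here is a minimal sketch of the idea using the Anthropic API. The instruction wording, the `<document>` wrapping, and the commented model id are our own assumptions, not an official recipe; the documents are plain strings rather than uploaded project files.

```python
# Sketch: replicate the "answer only from my documents" project behaviour
# by embedding the source documents in a restrictive system prompt.

def build_system_prompt(documents: dict[str, str]) -> str:
    """Embed the source documents in a system prompt that forbids outside knowledge."""
    sources = "\n\n".join(
        f'<document name="{name}">\n{text}\n</document>'
        for name, text in documents.items()
    )
    return (
        "Answer ONLY from the documents below. If the answer is not in them, "
        "say so and ask whether to search the web. Never use outside knowledge "
        "without explicit permission.\n\n" + sources
    )

if __name__ == "__main__":
    docs = {"notes.txt": "The meeting is on Tuesday at 10:00."}
    system = build_system_prompt(docs)
    print(system)
    # To actually query Claude (requires the `anthropic` package and an API key):
    # import anthropic
    # client = anthropic.Anthropic()
    # reply = client.messages.create(
    #     model="claude-sonnet-4-5",  # assumed model id, check current docs
    #     max_tokens=512,
    #     system=system,
    #     messages=[{"role": "user", "content": "When is the meeting?"}],
    # )
```

The system prompt plays the role of the project's Instructions field, and the embedded documents stand in for the Files section.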

AI models can now find security flaws on their own. Claude just demonstrated it with Firefox

For years, finding serious vulnerabilities in complex software has been a task reserved for specialized researchers who spend weeks or months examining millions of lines of code. That scenario is beginning to change. Artificial intelligence models are no longer limited to generating code or helping to debug it; they are also beginning to detect security flaws on their own. A recent example comes from Anthropic with Claude Opus 4.6, its most advanced model, when put to the test with Firefox. The experiment is especially striking because Firefox, managed by Mozilla and used by hundreds of millions of people, is one of the most audited open source projects in the web ecosystem.

Analyzing Firefox's code. Over two weeks of testing, the system identified 22 different vulnerabilities, according to information published by both organizations. Mozilla assessed 14 of them as high-severity flaws, meaning they could have served as a basis for attacks if someone had developed the appropriate exploit code. According to those responsible for the project, most of these problems have already been solved in Firefox 148, the version published in February, while the rest will be corrected in future versions.

Inside the experiment. Claude's work was not a simple automatic search for errors. According to Anthropic, the team first used the model to try to reproduce historical vulnerabilities recorded in Firefox, a way to test whether it was able to recognize real failure patterns. Then they moved on to the most interesting part of the experiment: asking it to analyze the current version of the browser to locate problems that had not yet been reported. The process started in the JavaScript engine and then expanded to other areas of the code. In total, the analysis covered thousands of files from the project, including several thousand C++ files, generating a long list of findings that were subsequently reviewed by the researchers.

A striking fact.
Claude found more high-severity bugs in two weeks than the browser usually receives in about two months through its usual research channels. During the process, the Anthropic team submitted 112 unique reports to the project's bug tracking system, although not all were confirmed vulnerabilities. Part of Mozilla's job was precisely to review, debug, and classify those findings before determining which ones had real security implications. The experience ended up becoming a direct collaboration between both organizations to review the results and prioritize fixes.

The other half of the problem. The Anthropic team also wanted to see how far the model could go beyond detecting errors: whether it could turn those flaws into real attacks. To do this, they asked it to develop exploits capable of taking advantage of the discovered vulnerabilities. The experiment included hundreds of runs with different approaches and cost approximately $4,000 in API credits. Still, the result showed a clear difference between the two capabilities: Claude only managed to generate two working exploits, and only in a simplified test environment without some of the defenses present in a real browser.

Beyond the specific case of Firefox, the experiment reflects a change that is beginning to worry and interest the security community at the same time. AI-based tools are rapidly improving at detecting vulnerabilities in complex software, which could help developers fix bugs more quickly.

In Xataka | iPhones were supposed to be the most secure cell phones in the world. It was supposed

How to summarize everything in your email inbox with Claude, Gemini or ChatGPT

Let's explain how to make summaries of the newsletters in your email, whenever you need them, with artificial intelligence. So, if you see that they have been piling up but you don't have time to read them, you can ask the AI to summarize them all for you. If your email is Gmail you can turn to Gemini and also Claude, and if you have an Outlook address you can do it with ChatGPT. These are the AIs that have connectors for each mail service. But we will start by telling you how we recommend organizing the newsletters in your email so that it is easier for the AI to find them.

First, organize your newsletters

Before you start, we recommend tagging all newsletters with the label or category system that Gmail and Outlook have. This way, you can later ask the AI to search directly in these categories instead of having it analyze the entire content of your inbox. So take your time going through the newsletters and tagging them. At first you will have to label them all, but afterwards each sender's address will be linked to the label, meaning that new issues that arrive will already be labeled correctly.

Now link the AI to your email

Claude has a connector system where you must add and activate Gmail. Gemini lets you do the same with its Connected Apps, and in ChatGPT there is an Applications section that allows you to connect Outlook. With this step, you link your email account to the AI so that it can access and read your emails.

If you are concerned about your privacy, maybe you should reconsider doing this: in the end you are linking your account to the AI, so it can read and process all your emails when you ask it to, storing their content on the AI company's servers. The emails will no longer be private; you will be sharing them.

Now, ask the AI for a summary

And now it's time to go to the AI and write a message asking for the summary.
This prompt has to mention Gmail or Outlook, depending on the AI you use and the email you have linked, and if you have followed our recommendation, it should indicate the newsletter label and ask for a summary. You can also specify the structure of the summary so that it is more to your liking. This is the prompt we have used:

I want you to enter my Gmail account, analyze all the emails in the "Newsletters" label, and give me a summary of their content. It has to be a schematic summary, with an H2 for each email telling me the title and sender, and then bullets where you explain the most interesting points of its content.

With this, the AI will read the emails in your account and give you a summary as you requested. Keep in mind that you can simply tell it to search for the newsletters without having tagged them, but then it may not find them all, or it may treat something as a newsletter that really is not. Each AI will give you the results in its own way, although it will maintain the structure you requested if you specified one. With the prompt we used, you will have everything summarized in several points so that you can read it in just a few minutes.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence
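If you prefer to keep the AI out of your mailbox entirely, another route is to fetch the newsletters yourself and send only their text to the model. The sketch below is a hypothetical helper, not any official connector: it builds the summary request from emails you have already downloaded (for instance over IMAP), and the field names are our own.

```python
# Sketch: build the newsletter-summary prompt from locally fetched emails,
# so the AI never gets direct access to the mail account.

def build_summary_request(emails: list[dict]) -> str:
    """Build a prompt asking for an H2-per-email summary with bullet points,
    embedding the already-fetched newsletter texts directly in the message."""
    blocks = [
        f"From: {e['sender']}\nSubject: {e['subject']}\n\n{e['body']}"
        for e in emails
    ]
    return (
        "Summarize each email below. Use an H2 heading per email with its "
        "subject and sender, then bullet points with its key ideas.\n\n"
        + "\n\n---\n\n".join(blocks)
    )

if __name__ == "__main__":
    newsletters = [
        {"sender": "weekly@example.com", "subject": "AI Weekly #12",
         "body": "Claude adds memory import. Perplexity adds new models."},
    ]
    prompt = build_summary_request(newsletters)
    print(prompt)
    # The resulting prompt can then be sent to any chat API instead of
    # granting the AI a mail connector that reads the whole inbox.
```

The trade-off is manual work in exchange for sharing only the messages you choose, rather than the whole account.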

The US Government stopped using Claude because it was a "woke AI". Right after, it bombed Iran using Claude, according to the WSJ

This February 28, Israel and the United States bombed Iran. It happened in parallel to a 'war' taking place on American soil: the one over which AI the country's military arm should use. Because yes, AI has become an essential tool for intelligence operations, to the point that there are reports suggesting that Claude was key in Saturday's massive bombings. But there is a problem. Hours before the attack, Trump ordered that Claude and any Anthropic artificial intelligence tools not be used in military operations. And the fact that the Pentagon disobeyed responds to only one thing: Claude is too deep inside the United States' military systems.

The Anthropic mess. This topic is complex, so let's set some context before getting into it. When the United States was looking for an AI to support its defense systems and integrate with Palantir, Anthropic offered its own for the modest price of one dollar. That earned it a 200-million-dollar contract, and both Anthropic and the Pentagon got to work integrating the company's models into all kinds of systems. Claude's support is so important to the Pentagon for massive-scale data analysis that it is estimated it was used in the capture of Nicolás Maduro a few months ago. The "problem" is that Anthropic programmed its AI not to cross two red lines: it will not be used to massively spy on American citizens, and it will not be used for the development or control of autonomous weapons and attack systems.

"The woke AI". The War Department and Donald Trump did not agree with this, and last week they issued an ultimatum: either Anthropic handed over an 'unleashed' version of its AI, or there would be consequences. What consequences? Invoking the Defense Production Act of 1950 to seize control of Anthropic's creation by force. The company had until 5:01 p.m. last Friday to respond, and boy did it respond.
In a long statement signed by Dario Amodei, CEO of Anthropic, the company stated that it was on the side of the country's defense interests, but not at any price. Its moral standard was very clear, and it was not going to give in to the blackmail of a United States that hours earlier had threatened to "make them a Huawei" by putting Anthropic on a blacklist.

Amodei's response infuriated Trump and Pete Hegseth. The Secretary of Defense called Claude a "woke AI," a line that Trump himself followed. On his social network Truth Social, Trump said that Anthropic is a "radical left-wing AI company run by people who have no idea how the real world works." Striking, to say the least, and with a further response: the United States ended its collaboration with Anthropic and prohibited the use of its AI. The problem is that that is... false.

"I am ordering ALL US federal agencies to IMMEDIATELY CEASE all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again!" – Donald Trump

Claude to attack Iran. As The Wall Street Journal reported, the air attack against Iran was carried out with the help of those same radical left-wing tools. The paper noted that commands around the world, including the United States Central Command in the Middle East, used Claude's tools to assess the situation, identify targets, and simulate battle scenarios.

Dependence. And this paints a scenario in which the Pentagon is going to have a very hard time removing those Anthropic tools from its systems. It happened in Venezuela and it seems to have happened again in Iran. Claude is too deep inside the Pentagon's systems, maintaining an almost symbiotic relationship with Palantir's software, and breaking that from one day to the next seems complicated.
It is estimated that it will take six months to remove Claude's traces from the Pentagon's software, but despite the ban on its use and its inclusion on the blacklist by Hegseth, another decision seems to prevail: since we already have it, we will use it until we find a successor.

OpenAI swoops in for the (multimillion-dollar) crumbs. And it took them barely half a second to find that new AI provider. OpenAI, maker of ChatGPT, issued a statement noting that "the United States needs AI models to support its mission, especially in the face of growing threats from potential adversaries that are increasingly integrating artificial intelligence technologies into their systems." Interestingly, they have the same red lines that Anthropic imposed (no use for mass domestic surveillance, no direct autonomous weapons systems, no AI making high-risk decisions automatically). But there is a difference: where Anthropic refused to give full powers to the Pentagon, OpenAI points out that, despite maintaining the same moral principles, the use of its AI is tied to whatever legal use the Department of Defense wants to make of it. This is ambiguous, because if a given use is deemed legal, it does not conflict with that "morality." We will see whether this is a mere swap of pieces born of anger at someone opposing a government order, or whether the change from Anthropic to OpenAI translates into what the US needs for its security.

In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new "Oppenheimer Moment"

How to migrate the memory of everything other AIs know about you to Claude

Let's tell you how to migrate memories from ChatGPT or Gemini to Claude, and thus move from one artificial intelligence to another. Claude has just launched a fairly easy-to-use function that allows you to import memories from ChatGPT, Gemini, or any other AI you use. Artificial intelligence chats have a memory system with which they store important data about you and your tastes, based on the things you ask them repeatedly. They will know your musical tastes, your pets, whether you have plants, and they take all this into account to personalize their answers.

And why can it be useful to import these memories into Claude? Well, because if you have decided to start using this artificial intelligence model, you can let it know all the specific data your other AIs use to personalize their results and adapt them to you.

Import memories from another AI to Claude

This option is only available for paying users of Claude with Pro, Max, Team, or Enterprise subscriptions on the web, and for users of Claude Desktop or Claude Mobile. What you have to do is enter the settings of the AI's website or application. Once inside the settings, click on the Capabilities section in the left column. On the screen you reach, go to the Memory section and click on the Start import option that appears.

This will open the memory import screen. At the top there is a prompt that you must copy and use in another AI to extract the memories, and below there is a field where you will paste the exported memory that the prompt generates. So, click the Copy button for the text at the top.

Now, paste the text you copied from Claude into a chat with the AI from which you want to extract the memories. Simply paste it exactly as it is into ChatGPT, Gemini, or another AI, and send it. This will make the AI generate a block of text with all the memories it has about you.
You will have to copy this block and paste it into Claude's field in the window we opened before. With this, Claude will recognize the memories and start saving them internally.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

Anthropic releases a new feature to download all your memory to leave ChatGPT and switch to Claude

This weekend Anthropic has gone from being an AI used by the Pentagon and other US agencies, with partners such as Microsoft and Amazon, to total ostracism: since Friday at 5:01 p.m. it has been classified as a "risk to the supply chain". A total veto, a serious threat to the survival of a company valued at $380 billion, and also a challenge for the entities that in less than six months will have to transition to another alternative. The Pentagon itself already has an agreement with OpenAI to succeed it.

Anthropic's situation is, to say the least, delicate when it comes to serving its strategic clients and alliances, something essential to keep growing in the tough AI battle. The company led by Dario Amodei, which stood firm on its principles when expressing its concern about the use of artificial intelligence for mass civil surveillance and the development of weapons capable of firing without human intervention, has already announced that it will contest the decision, but for now things look rough.

It only has the civilian market left... in every sense, because Claude has risen to number 1 in free downloads in the US App Store, as reported by CNBC. Because yes, this tug of war with the US government has brought an increase in the popularity of Claude, less known than alternatives such as ChatGPT or Gemini. On the other hand, this move in which the US Administration has said goodbye to Anthropic in favor of OpenAI also has a reading in which Claude wins: the terms of the agreement and how it affects ChatGPT users.

Anthropic's masterstroke. Anthropic has pulled a new feature out of its sleeve to ease the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while, for example, and it already knows you, starting from scratch is a step backwards in every sense.
The new feature allows you to import all your memory from other models into Claude so that it immediately knows everything about you (everything your previous AI already knew). You no longer start from scratch.

How to download your memory and load it into Claude. To bring your preferences and context from other AI providers into Claude, you have to take two steps. First, copy and paste the prompt below into the AI you normally use, like Gemini or ChatGPT:

I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries.

The model will return everything it knows about you in a block of text, which you then have to copy and paste into Claude. Go to 'Settings' > 'Capabilities', and there, in Import Memory, paste the answer. Then tap 'Add to memory'. From that moment on, Claude knows what your previous AI knew.

There is fine print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you will only be able to keep that context in a single conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently, a paid plan is required.
In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing you who really owns it

How to select in Perplexity the model to use: Claude, GPT, Gemini, Kimi, Grok or Sonar

Let's tell you how to choose the artificial intelligence model you are going to use with Perplexity in each prompt. Perplexity is a chatbot known for giving access to many cutting-edge models from third-party companies, something it does automatically depending on the request you make. However, if you are going to use Perplexity, it is advisable to know one of its basic functions: being able to choose by hand which model you want to use. And yes, every time Google, Anthropic, or OpenAI launches a new artificial intelligence model, Perplexity will add it to its catalog. The results will not be exactly the same as if you used the paid versions of ChatGPT, Grok, Claude, or Gemini, because Perplexity may modify them a little. However, you will be able to take advantage of the reasoning power of these models.

Choose the AI model to use in Perplexity

To choose the AI you want to use in Perplexity, look at the box where you write the prompt. In it, click on the AI model option, which appears with what looks like a chip icon. It is at the far left of the row of icons at the bottom right of the prompt-writing field. When you click that button, a list will appear with all the artificial intelligence models you can use. Both the best and the latest available from Gemini, GPT, Claude, Grok, Kimi, and Perplexity's own Sonar will appear. You can do this in the web version or in the mobile and desktop applications.

Here, you should know that you can choose the model for each prompt within a conversation with Perplexity. That is, you can ask a question with one model, and then ask the next question with another. Also, below the list you will see the number of queries you can make with the most modern models.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
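The same per-prompt model choice exists if you use Perplexity's API, which follows the OpenAI chat-completions format: you set the model on every request. The sketch below only builds the request body; the base URL and the "sonar" model name reflect Perplexity's public API at the time of writing and may change, so treat them as assumptions.

```python
# Sketch: per-request model selection, mirroring the model picker in the app.

def build_chat_payload(model: str, prompt: str) -> dict:
    """Build a chat-completion request body; a different `model` value can be
    chosen for every call, just like picking a model per prompt in the UI."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__":
    payload = build_chat_payload("sonar", "Summarize today's AI news.")
    print(payload)
    # Sending it (requires the `openai` package and a Perplexity API key):
    # from openai import OpenAI
    # client = OpenAI(base_url="https://api.perplexity.ai", api_key="pplx-...")
    # reply = client.chat.completions.create(**payload)
```

Swapping the `model` string between calls reproduces the "one question with one model, the next with another" behaviour described above.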

Anthropic just accused DeepSeek and other Chinese companies of “distilling” Claude

For months we have talked about the race between the United States and China to dominate artificial intelligence as if it were only a question of who trains the most powerful model or launches the next version first. But the contest is beginning to move to another, more delicate area: the rules of the game. When one laboratory accuses another of extracting capabilities from its system to accelerate its own development, the discussion goes beyond the technical. That is exactly what Anthropic just did by denouncing "distillation" campaigns against its model Claude.

The complaint. In a text published this Monday, the company claims to have detected "industrial-scale campaigns" aimed at extracting Claude's capabilities. According to its version, the activities attributed to DeepSeek, Moonshot, and MiniMax involved more than 16 million queries (question-and-answer interactions) channeled through approximately 24,000 fraudulent accounts, in violation of its terms of service and regional access restrictions.

The race and the suspicion. The announcement by the firm led by Dario Amodei comes in a context of growing tension around the progress of Chinese AI. Remember that DeepSeek shook the Silicon Valley landscape a year ago with the launch of R1, a competitive model presented as having been developed at a fraction of the cost of American alternatives. The impact on the markets was immediate, and it revived the political debate in Washington about the technological advantage over China.

Distilling is not always cheating. Anthropic itself recognizes that distillation is a common technique in the sector. It consists, in simple terms, of training a less capable model on the responses generated by a more powerful one, something that large laboratories use to create smaller, cheaper versions of their own systems.
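As a rough illustration of what in-house distillation looks like in practice, the sketch below turns prompts and teacher-model answers into a chat-format JSONL training file for a smaller student model. The record layout is a common fine-tuning convention assumed here for illustration, not taken from any lab's actual pipeline.

```python
import json

def to_distillation_jsonl(pairs: list[tuple[str, str]]) -> str:
    """Convert (prompt, teacher_answer) pairs into chat-format JSONL records,
    usable as supervised fine-tuning data for a smaller student model."""
    lines = []
    for prompt, answer in pairs:
        record = {
            "messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(record, ensure_ascii=False))
    return "\n".join(lines)

if __name__ == "__main__":
    pairs = [("What is distillation?",
              "Training a smaller model on a larger model's outputs.")]
    print(to_distillation_jsonl(pairs))
```

The student is then fine-tuned on this file, learning to imitate the teacher's answers; the dispute described in the article is about who owns the teacher whose answers fill that file.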
The problem, according to the company, appears when this practice is used to "acquire powerful capabilities from other laboratories in a fraction of the time and at a fraction of the cost" that developing them independently would require. In that case, distillation would cease to be an internal optimization and would become, always according to Anthropic, a way of taking advantage of the work of others.

A recognizable pattern. The three laboratories would have used fraudulent accounts and proxy services to access Claude at scale while trying to evade detection systems. The company details infrastructures it calls "hydra clusters": extensive networks of accounts that distribute traffic between its API and third-party cloud platforms, so that when one account was blocked, another took its place. Anthropic maintains that what set these activities apart from normal use was not any isolated query, but the massive and coordinated repetition of requests aimed at extracting very specific capabilities from the model.

Three campaigns. Although Anthropic presents the campaigns as part of the same dynamic, it distinguishes relevant nuances. DeepSeek would have focused its more than 150,000 queries on extracting reasoning capabilities and generating safe alternatives to politically sensitive questions. Moonshot, with more than 3.4 million queries, would have been oriented towards developing agents capable of using tools and manipulating computing environments. MiniMax would account for the largest volume, more than 13 million queries, and according to Anthropic it reacted within hours to the launch of a new system, redirecting its traffic to try to extract capabilities from that most recent system.

A geopolitical issue.
The company states that illicitly distilled models may lack the safeguards designed to prevent state and non-state actors from using AI for purposes such as developing biological weapons or running disinformation campaigns. It also argues that distillation undermines export controls by allowing foreign laboratories to close the gap by other means, while acknowledging that executing these large-scale extractions requires access to advanced chips, which reinforces the logic of restricting their availability. And it warns that the risk would grow if these capabilities ended up integrated into military, intelligence or surveillance systems.

Images | Xataka with Nano Banana Pro

In Xataka | Seedance is the greatest brutality we have seen generating video. And it has an uncomfortable message: it has surpassed Sora and Veo without NVIDIA chips

The great revolution of GPT-5.3 Codex and Claude Opus 4.6 is not that they are smarter. It’s that they can improve themselves

Last week, OpenAI and Anthropic simultaneously launched their new AI models specialized in programming: GPT-5.3 Codex and Claude Opus 4.6. Beyond the improvements they bring in performance and speed, which are truly remarkable, both companies also stated something that completely changes the rules of the game: AI models are now actively participating in their own development. Put another way: AI is improving itself.

Why does this change matter? Generative artificial intelligence tools are reaching a high level of efficiency and precision, going in just a few years from co-workers for simple, specific tasks to being involved in a good part of a development project. According to OpenAI's technical documentation, GPT-5.3 Codex "was instrumental in its own creation," having been used to debug its training, manage its deployment, and diagnose evaluation results. It is also worth highlighting the words of Dario Amodei, CEO of Anthropic, who states on his personal blog that AI writes "much of the code" at his company and that the feedback loop between the current generation and the next "gains momentum month by month."

In detail. What this means in practice is that each new generation of AI helps build the next, more capable one, which in turn will build an even better version. Researchers call it the "intelligence explosion," and those developing these systems believe the process has already begun. Amodei has declared publicly that we could be "just 1 or 2 years away from a point where the current generation of AI autonomously builds the next."

Most people use free language models that are available to everyone and are moderately capable at certain tasks. But they are also very limited, and they are not a good reflection of what a cutting-edge AI model is capable of today.
In a brief session with GPT-5.3 Codex I reached the same conclusion: the AI tools that big technology companies use in their own development are nothing like the commercial ones freely available to us in terms of capabilities.

The code-first approach. The initial specialization in programming makes more sense than it seems. The determination of companies like OpenAI, Anthropic and Google that their systems be exceptional at writing code before anything else is linked to the fact that developing AI requires enormous amounts of code. And if AI can write that code, it can help build its own evolution. "Making AI great at programming was the strategy that unlocked everything else. That's why they did it first," said Matt Shumer, CEO of OthersideAI, in a post that has given us plenty to talk about on social networks these days.

Between the lines. The new models don't just write code: they make decisions, iterate on their own work, test applications as a human developer would, and refine the result until they are satisfied. "I tell the AI what I want to build. It writes tens of thousands of lines of code. Then it opens the app, clicks the buttons, tests the features. If it doesn't like something, it goes back and changes it on its own. Only when it decides it meets its own standards does it come back to me," recounted Shumer, describing his experience with GPT-5.3 Codex.

What changes with self-improvement. Until now, each improvement depended on human teams spending months training models, adjusting parameters and correcting errors. Now some of that work is performed by the AI itself, accelerating development cycles. As Shumer shared, referring to data from METR, an organization that measures the ability of these systems to complete complex tasks autonomously, the time an AI can work without human intervention doubles approximately every seven months, and there are already recent indications that the period could shrink to four.
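The doubling trend above implies simple exponential arithmetic. The sketch below projects how long it would take to go from a short autonomous-work horizon to a one-week one under each doubling period; the 2-hour starting point is an illustrative assumption, not a figure from METR or the article.

```python
# Back-of-the-envelope projection of a doubling trend: if the task horizon
# an AI can handle autonomously doubles every `doubling_months` months,
# how many months until it grows from `current_hours` to `target_hours`?
import math

def months_to_reach(current_hours, target_hours, doubling_months):
    """Months needed for the horizon to grow from current to target."""
    doublings = math.log2(target_hours / current_hours)
    return doublings * doubling_months

current = 2.0        # assumed current horizon: ~2 hours of autonomous work
one_week = 7 * 24.0  # target horizon: one week, expressed in hours

for doubling in (7, 4):  # the doubling periods mentioned in the article
    m = months_to_reach(current, one_week, doubling)
    print(f"doubling every {doubling} months -> ~{m:.0f} months to a one-week horizon")
```

Under these assumptions, a seven-month doubling time puts a one-week horizon roughly three and a half to four years out, while a four-month doubling time compresses that to a little over two years, which is why the shift from seven to four months matters so much.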
And now what. If this trend continues, by 2027 we could see systems capable of working autonomously for weeks on entire projects. Amodei has spoken of models "substantially smarter than almost all humans at almost all tasks" by 2026 or 2027. These are not distant predictions: the technical infrastructure for AI to contribute to its own improvement is already operational. And these capabilities are what are really turning the technology industry on its head.

Cover image | OpenAI and Anthropic

In Xataka | We have a problem with AI. Those who were most enthusiastic at the beginning are starting to get tired of it.
