Anthropic has not raised the price of Claude. It has invented something better: token inflation

"Don't worry, it costs the same." That was Anthropic's message when announcing the launch of its new AI model, Claude Opus 4.7. In that statement it made clear that "the price remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens." There was fine print, however: the model is better, but to achieve that it reasons more, and that means one thing: more tokens. And the more tokens you consume, the higher the AI bill.

Anthropic already warned. It should be noted that Anthropic did not hide the facts in its official announcement. One paragraph clearly explained how Opus 4.7 "thinks more" and how that directly affects token consumption: "Opus 4.7 is a direct update to Opus 4.6, but there are two changes worth keeping in mind as they affect the use of tokens. First, Opus 4.7 uses an updated tokenizer that improves the model's processing of text. This means that the same input can generate more tokens (approximately 1.0 to 1.35 times more, depending on the type of content). Second, Opus 4.7 performs deeper analysis at higher effort levels, especially in the later phases of agent scenarios. This improves its reliability on complex problems, but also means generating more output tokens."

In other words: when it responds, Opus 4.7 uses significantly more tokens than its predecessor, and that matters because output tokens are much more expensive than input tokens. In the specific case of Opus 4.7, five times more expensive ($25 versus $5).

What is a tokenizer and why does it matter? Large language models (LLMs) do not process text directly; they convert it into units called tokens, which are fragments of words, symbols or characters. The tokenizer is the mechanism that performs that conversion. Anthropic has decided to update the tokenizer in Opus 4.7, arguing that its new system improves how text is processed.
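The arithmetic behind "same price, bigger bill" is easy to sketch. The prices are the official ones ($5 and $25 per million tokens); for illustration we apply the upper bound of Anthropic's 1.0–1.35x inflation range to both input and output of a hypothetical request:

```python
# Effect of "token inflation" on the effective bill, at the published
# Opus pricing ($5 / $25 per million input / output tokens).
PRICE_IN = 5.00 / 1_000_000    # dollars per input token
PRICE_OUT = 25.00 / 1_000_000  # dollars per output token

def bill(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of one request at the published per-token prices."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# Same request, same per-token price: before, and with the 1.35x
# upper bound of the inflation range Anthropic itself quotes.
old = bill(1_000, 1_000)
new = bill(int(1_000 * 1.35), int(1_000 * 1.35))
print(f"before: ${old:.4f}  after: ${new:.4f}  increase: {new / old:.2f}x")
```

The per-token price never moves, yet the bill for the same job grows by the full inflation factor, which is exactly the effect users are seeing.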
The direct consequence: a prompt that previously generated 1,000 tokens now generates up to 1,350. And since billing is per token, the effective cost rises even though the price per token has not changed.

Confirmed by third parties. Simon Willison, a well-known analyst and popularizer in this field, built a tool to measure the difference in token consumption between the Claude Opus 4.6 and 4.7 APIs. He took the official Opus 4.7 system prompt and ran it through both models: with Opus 4.6 it generated 5,039 output tokens; with Opus 4.7 it generated 7,335 output tokens. That represents a 1.46x growth in tokens between Opus 4.6 and Opus 4.7, even greater than the figure Anthropic indicated (1.35x). For images the difference is even more extreme, with token consumption of up to 3.01x. There is an important clarification here: the model now supports images of up to 3.75 megapixels, and that higher resolution causes consumption to increase significantly. Bill Chambers, another X user, published a tool called Tokenomics that also lets you compare token consumption between both models with any prompt. The aggregate ranking of all users who have tried the tool shows an average increase of 38.6%, very much in line with what Anthropic states.

And it also thinks more. As we said, this new model introduces two changes in how it operates. The first is the aforementioned tokenizer: the same input is converted into more input tokens. The second is that the model now "thinks more" before responding, which means more token consumption. Opus 4.7 arrives with a new effort level called xhigh, located between high and max. Anthropic has decided that xhigh will now be the default effort for all plans, so both mechanisms contribute to the higher token consumption. As Anthropic itself puts it: "Opus 4.7 thinks more at high effort levels, particularly in later turns in agentic settings. This improves its reliability on difficult problems, but it does mean that it produces more output tokens."

Criticism on social networks. The reaction of users has been clear, and there are plenty of examples on networks such as X and Reddit criticizing the changes. On Reddit, a thread titled 'Opus 4.7 is a serious regression, not an improvement' already has 3,200 votes and 800 comments, which sum up the complaints: the new model ignores instructions, hallucinates and lies, is "dumber," has become too complacent or even lazy, and "talks too much," which also adds to the cost of each query. Many complain that their Pro and Max paid limits run out faster than before because of these changes. Some users claim Opus 4.7 is the first sign that Anthropic may, for the first time, have moved too fast in launching a new model.

Anthropic reacts. Criticism about the cost and behavior of the model has led Anthropic's leadership to try to clarify things. Borys Cherny retweeted a company message explaining how the /usage command in Claude Code shows what your API or usage plan is being spent on. The same engineer, who leads the development of Claude Code, also indicated that since the new model uses more tokens, Anthropic had increased the usage quotas of the models, although without giving specific details.

The pattern that repeats. For weeks the user community had been complaining about a perceived "regression" in the behavior of Opus 4.6. Although it is impossible to verify, many users complained on social networks that the model's performance had worsened in their tests. Now Anthropic has launched a model that promises to be better than the previous one, but that ends up costing more to use if you are not careful.
Both events draw a pattern: Anthropic is increasing its revenue without announcing price increases as such. What users … Read more

Anthropic says Claude Mythos is too powerful to go public. The question is whether this is just a case of crying wolf

Claude Mythos Preview is the best AI model ever created. We don't say it, Anthropic says it, but almost no one else can say it, because only a select group of companies has access to the model. The cybersecurity capabilities of the model appear to be astonishing, but more and more experts say that although Mythos is better than its predecessors, it is not the revolutionary leap Anthropic seems to suggest. Is this way of launching the model just an effective way of creating hype?

Beware the Anthropic pitch. The well-known entrepreneur and analyst Gary Marcus recently gave three reasons why, according to him, the launch of Mythos is not as revolutionary as Anthropic wants us to believe. He cited tweets from software engineers and cybersecurity experts who cast doubt on Anthropic's claims. The company published a study on the capabilities of Claude Mythos Preview that seemed to make it an extraordinary tool for the field of cybersecurity, but at the same time so powerful that it could be very dangerous in the wrong hands.

Not such a big deal? Among Claude Mythos' achievements, Anthropic highlighted how it had found vulnerabilities in Firefox 147. But in reality many of the flaws were basically variations of the same two bugs. If you removed those from the equation, Mythos' success rate at finding new exploits dropped sharply, even below Opus 4.6. Anthropic did not hide that fact, of course, but it makes this capability seem less striking. An X user also criticized the use of Cybench as a cybersecurity benchmark when Opus 4.6 had already nearly saturated it. For him, the choice of some of Anthropic's tests was debatable because they were no challenge for current models.

Other models can do the same. The co-founder and CEO of Hugging Face, Clement Delangue, stated that Mythos was no big deal.
Their argument: they had used small, cheap open models, isolated the relevant code from some examples of the vulnerabilities found by Mythos, and found the same problems the Anthropic model had already detected. According to the Epoch Capabilities Index, which measures the capacity of AI models by combining several benchmarks, the leap Mythos has made is striking and "departs" from the progressive line of its predecessors. Source: Anthropic.

Observer bias. But it should be noted that in those analyses they knew where to look, because Mythos had already found the problems. This is observer bias, and in fact the Hugging Face document makes clear that they even gave the model specific hints (such as "consider integer overflow") to find those bugs. And one more observation on top of that: Hugging Face does not claim that a small model can replace Mythos on its own, but that it can be very good when given the appropriate code fragment. Mythos seems more capable of blindly finding complex security flaws, but it is a huge model, and that is why it has greater capacity. In other words: Mythos is better because it has the size, design and resources to be better.

Fear, uncertainty, doubt? The language Anthropic used in this announcement could be considered, to some extent, a clear use of FUD ("Fear, Uncertainty, Doubt") as a marketing technique. It is a tactic we have seen before: OpenAI already said in 2019, years before the launch of ChatGPT, that GPT-2 was too dangerous for a public release. Obviously it wasn't, but it certainly served to create expectations about the true capacity of the model.

It's better, but it may not be revolutionary. The benchmark results Anthropic published already made clear that although there are very notable jumps in some tests, in others the evolution is much less striking.
Claude Mythos was not the best at everything, and now analysts are contrasting that data with other metrics. For example, with the Epoch Capabilities Index (ECI) from Epoch AI, the startup behind one of the most reputable benchmarks in the industry. And according to this index, Claude Mythos is above its rivals, but not by much.

The wolf is coming. The truth is that the launch of Claude Mythos Preview has been really striking, and the documents that accompanied it describe a truly capable AI model. The problem is that it is impossible to verify, because only a few companies have access to it and can test it. Without public availability, the only thing we can do is trust (or not) what Anthropic tells us, and that is the point: it is not clear that we should. The company is obviously interested in us buying this narrative, but without independent analysis it is impossible to verify these claims.

In Xataka | Anthropic has become the darling of AI and has sought a partner to guarantee its future. It's not the one we thought

Claude Mythos is an AI model so powerful it’s scary. So Anthropic has decided that you won’t be able to use it

Claude Mythos Preview is already here, and it's so good it's scary. Literally. Anthropic has just introduced it to the public, but it has done so so cautiously that we won't even be able to test it: it will only be available to certain technology partners. That's frustrating and disturbing at the same time, but also reasonable.

So powerful that it scares. On February 24, 2026, Anthropic engineers were able to test their new artificial intelligence model, which they called Claude Mythos Preview, for the first time. As soon as they did, they realized one thing: it "demonstrated a dramatic leap in its cyber capabilities over previous models, including the ability to autonomously discover and exploit zero-day vulnerabilities in the main operating systems and web browsers on the market."

Threat to global cybersecurity. This finding made it clear to Anthropic officials that although this capability makes the model very valuable for defensive purposes, it also poses clear risks if it were offered globally. A cybercriminal could take advantage of it to find vulnerabilities in all kinds of systems and exploit them. A few hours ago the company developed this analysis of Mythos as a threat to cybersecurity in a post on its blog, highlighting for example how Mythos found a vulnerability (now corrected) that had been present in OpenBSD for 27 years, an operating system recognized precisely for its very strong security. There were more examples, and all of them made the conclusion clear: Mythos is too powerful for ordinary mortals to use.

Superior in all benchmarks, and in some cases, such as USAMO (mathematics), the jump is simply incredible. Source: Anthropic.

The best in history according to benchmarks. Anthropic has published a very in-depth report about this model with its "system card".
Among the data presented is, for example, its performance in benchmarks, where it has swept past GPT 5.4, Gemini 3.1 Pro and also Claude Opus 4.6, which until now was the best model in the world in almost all performance tests. Although in some cases the jump is not spectacular, in others, such as USAMO (mathematical problem solving), Mythos practically achieves perfection.

It barely hallucinates… The system card also discusses in detail how Claude Mythos Preview has a drastically lower hallucination rate than Claude Opus 4.6 and earlier models. It is also capable of saying "I don't know" when it does not have enough information to answer, something that reduces hallucinations caused by overconfidence.

…but when it does, be careful. The paper warns of a new phenomenon: when the model fails on some complex tasks, the "hallucinations" are not obvious errors, but extremely subtle and well-argued technical failures. This is dangerous because the answer can seem totally correct even to experts, requiring very deep verification.

Glasswing Project. That power and capacity mean the model will only be available through a "defensive" program called the Glasswing Project, which will be exclusive to some of Anthropic's technology partners: specifically AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA and Palo Alto Networks. All of them will have the privilege (and responsibility) of access to Claude Mythos Preview to identify vulnerabilities and exploits and correct them before bad actors can do so.

Mythos Preview "is just the beginning".
Although this model is the most capable seen so far, at least according to the benchmarks and data presented by Anthropic, the company assures that "we see no reason to think that Mythos Preview is the point at which the cybersecurity capabilities of language models reach their peak." They say they expect the models to keep improving in the coming months and years, although this new model is certainly on another level.

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive

In the midst of Claude Code's meteoric rise, its code has been leaked. It is a sweet treat for its competitors

One of the stories of the day is the major code leak suffered by Claude Code. The entire architecture of Claude's programming tool has been leaked due to an internal error acknowledged by Anthropic. Its competitors are in luck.

What has happened. The leak was not the result of an external attack or a hack; it was an internal failure: when publishing one of Claude Code's updates, a 59.8 MB JavaScript source map (.map) file intended for internal debugging was exposed. According to sources, it was included by mistake in version 2.1.88 of the @anthropic-ai/claude-code package published this morning. Minutes later, the party started. In Anthropic's words: "Earlier today, a release of Claude Code included some internal source code. No sensitive customer data or credentials were involved or exposed. This was a release packaging issue caused by human error, not a security breach. We are implementing measures to prevent this from happening again."

The consequences. For the next few hours, the more than 500,000 lines of leaked code were accessible and downloadable from a public GitHub repository. Since its publication, there are already more than 50,000 forks of the code. The leak reveals the system of internal tools the AI uses to operate and, in addition, shows signs of functions that have not yet been released. This has given in-depth access to the current anatomy of Claude Code, the internal plans for subsequent iterations and the main limitations it currently has.

Why it matters. Although it is not Claude's model itself that has been leaked, but rather the source code of its Code tool, the leak is a double blow for Anthropic. First, it is a severe setback for the company's intellectual property, handing its roadmap not only to competitors, but also to actors eager to break Claude Code's security barriers.
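Why is a .map file so revealing? Source maps exist to link minified JavaScript back to the code it was built from, and the standard format can embed the original, unminified sources verbatim in a "sourcesContent" field. A minimal sketch of the mechanism (the file contents here are invented for illustration; only the field names follow the real source-map format):

```python
import json

# A tiny, hypothetical source map. Real ones are generated by bundlers,
# but "version", "sources", "sourcesContent" and "mappings" are the
# standard fields of the format.
leaked_map = json.dumps({
    "version": 3,
    "file": "cli.min.js",
    "sources": ["src/example-module.ts"],  # hypothetical path
    "sourcesContent": ["// original TypeScript source, embedded verbatim"],
    "mappings": "AAAA",
})

# Anyone holding the .map file can dump the original sources directly:
data = json.loads(leaked_map)
for path, src in zip(data["sources"], data["sourcesContent"]):
    print(f"--- {path} ---")
    print(src)
```

That is why shipping a debug source map is equivalent to shipping the source itself: no reverse engineering of the minified bundle is needed.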
More importantly, it is a blow to a company that since its inception has positioned itself as even safer than its competitors, and that now has to publicly admit that a file slipped through that should never have seen the light of day.

What Anthropic has done about it. Anthropic's reaction was quick, removing the affected package to prevent new downloads and correcting the subsequent version. Despite this, the damage was done and the situation is irreversible.

Go deeper. Claude Code has become, in its own right, one of the most popular tools among developers. According to data from SemiAnalysis, 4% of all public commits uploaded to GitHub are created with this tool, and it is expected to reach 20% in 2026. The Claude Code leak is a reminder that even the most advanced AI companies are not free from rookie mistakes.

How to convert GPTs or Gems into Claude Skills in case you want to migrate your ChatGPT or Gemini customizations

Let's tell you how to convert GPTs or Gems into Skills, so that if you want to move from ChatGPT or Gemini to Claude, you can take your automated versions of the artificial intelligence with you. And if you are going to switch, remember that you can also migrate the memory of everything other AIs know about you.

Claude's Skills are a series of instructions that you can upload to a chat so you don't have to repeat them every time you want to do something specific. They can be very complex, although here we will show you how to migrate simple GPTs or Gems, those that consist only of instructions.

Convert GPTs or Gems into Skills

The first thing you have to do is open ChatGPT or Gemini and go to the GPTs or Gems section. Once inside, click the edit button of the GPT or Gem you want to convert into a Skill. This will take you to a screen where you can see the name, description and instructions of the Gem or GPT. These are the data we are going to use later, so keep the window open.

Now we are going to create a Claude Skill with that data. To do so, open Claude and, on its website, go to the Personalize section in the left column. Inside, click on the Skills section, where you will see all the pregenerated skills the AI already includes. You will enter the Skills page, where by default you will see several examples created within Claude itself. Here, click the + button at the top, and in the menu that opens choose the option to write the instructions for the skill, to keep it simple. This opens the fields where you have to enter the skill's name, description and instructions. Copy and paste the description and instructions from the GPT or Gem so the skill is similar, and then give it whatever name you want, which can also be the same.
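As an illustration, suppose the Gem or GPT you are migrating had these three fields; pasted into Claude's form, the resulting skill would look something like this (the name, description and instruction text are invented for the example):

```markdown
Name: meeting-minutes
Description: Turns raw meeting notes into structured minutes.
Instructions:
  You receive raw, unordered meeting notes. Rewrite them as minutes with
  three sections: Attendees, Decisions, and Action items (with owners).
  Keep the original language of the notes and do not invent information.
```

The content is exactly what lived in the GPT or Gem; only the container changes.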
One of the peculiarities of Skills is that Claude reviews them every time you ask for something, so it can use them automatically, without you having to attach them, whenever your request matches what a skill can do. That is why you can add to the instructions a request that it not do this, so it only uses the skill if you explicitly ask for it or attach it.

And that's it: what was once a simple GPT or Gem is now a simple Claude Skill. Now you just have to choose it from the menu in a new Claude chat. Press the + button, go to Skills, and choose yours. Once you have selected it, just add the text you want, and Claude will process it according to the instructions of the skill you have loaded.

In Xataka Basics | Claude's Free Courses Created by Anthropic: 15 Official Certification Courses to Learn and Squeeze Your AI

How to create simple Claude Skills to have your personalized version of artificial intelligence

Let's explain how to create your own Claude Skill, and thus have a personalized version of Anthropic's artificial intelligence. Claude's Skills are a series of instructions that you can upload to a chat so you don't have to repeat them every time you want to make a specific request. We will start the article by reviewing what exactly Claude's skills are, and then we will tell you, step by step, how to create a skill in the simplest way possible.

What exactly are Claude's Skills?

Skills or abilities are a system with which you can create Claude customizations. With them, you can add a command layer to the generic version, and thus use a version of the AI adapted to what you want. A skill is an encapsulated procedure: a way of telling Claude how to perform specific tasks, under what conditions and with what rules you want them done, without having to repeat those instructions in every conversation.

Imagine that you use Claude to repeatedly perform specific tasks. Every time you want to do one of these tasks in a new chat, you have to describe what you want, which can be tedious, especially when the instructions are long and complex. Skills let you encapsulate those instructions, so that when you activate one, the instructions are implicit in what you ask. When you write the prompt, Claude will first read the skill and take into account what you ask of it there. For example, imagine that you like to analyze the SEO of texts you write. Normally, every time you ask for this analysis you have to give Claude the instructions, but if you load a Skill you don't need to: you can simply activate it and paste the text you want analyzed.

How to create your own Skills

To create your own Claude skill, open the AI and click on the Personalize section in the left column. Remember that Skills are a paid feature.
When you click on Personalize, you will see two different options. On this screen, click on the Skills option that appears next to the connectors. You will enter the Skills page, where by default you will see several examples created within Claude itself. Here, click the + button at the top, and in the menu that opens choose the option to write the instructions for the skill, to create one in the simplest way. This takes you to the screen where you fill out the three fields needed to create your Skill: a name, instructions, and a knowledge base in case the latter is necessary.

Three fields to define your new Skill

The first thing you have to do is give your skill a name. This is the distinctive name it will have and under which it will appear in your list of created skills, so it should be clear and distinctive. The name will appear as if it were a file, with hyphens instead of spaces and in lower case. Then you have to write the description: a short text that appears overlaid when you hover over the skill in the list, so it should be a brief summary of what it does. Claude checks your skills on each prompt you write, to use one if it detects that your request refers to it. That is why it is important to have useful, direct information in the name and description, so that both the AI and you can tell what each one is for. Then comes the most complex part: writing the detailed instructions of the skill. This is the most important step, because it defines the role, tone and rules this automation must follow. Be as specific as you can when writing these instructions.
These are some of the most important aspects to include in the instructions when creating your Skill:

Personality: say what its role is, the character it should play and the tone it should use. For example, you can ask it to act like a rigorous but friendly college professor, or a light-hearted, Reels-focused content writer. Whatever you want, but this personality is important to define its behavior.

Objective and rules: it is equally important to specify the main task of the skill and the rules it must follow. For example, you can say, "Your job is to review texts to find spelling and grammar errors and improve their structure." You can also say things like "Never talk about topics other than…" to keep it focused on that specific task.

Format: this is optional, but you can also define the structure of the answers. For example, you can ask it to always start with a summary and then list several points, or whatever you think is necessary. In the end, the point is to define a response structure that is useful and in line with the tasks you want this Claude customization to perform.

If you can't manage writing the instructions yourself, you have the option of using Claude itself to help you write them. To do this, open the AI in another browser tab and, in a chat, ask it to generate instructions for creating a Skill, adding a couple of starting points. And that's it. Once you have created a Skill, all you have to do is choose it from the menu in a new Claude chat. Press the + button, go to Skills, and choose yours. Once … Read more

Claude's Skills: what they are, what they are for, how they are used and who can use them to create their own Claude

Let's explain what Claude's Skills are. This is a method with which you can create custom versions of Claude, an alternative to ChatGPT's GPTs or Gemini's Gems. Skills are a very powerful feature, and also very open to being shared on the Internet. Their uses are as many as you can imagine. They are a series of instructions that you can add or generate, and that the artificial intelligence will read before answering your questions.

What are Claude's Skills?

Skills or abilities are a system with which you can create Claude customizations, or your own versions of the artificial intelligence. Skills are similar to ChatGPT's GPTs, but much more complete. They can be simple files like a GPT, but they can also be folders of instructions, scripts, or other resources. What you achieve with this is that instead of using the generic version of Claude, when you load a skill you interact with a version that is much more personalized and adapted to what you need, or that takes a specific context into account. Skills teach Claude how to complete specific tasks in a repeatable manner.

In short, a skill is an encapsulated procedure: a way of telling Claude how to perform specific tasks, under what conditions and with what rules, without having to repeat those instructions in every conversation. For example, you can create a specific skill to teach you the rules of a board game you want to learn to play. Or you can create one to help you improve your writing, to review your texts, or to teach mathematics to your children. They are like abilities that you set up and load whenever you want.

When you activate a Skill and make a request to Claude, the AI will first read the file for that skill, incorporating its instructions into the context window. If those instructions refer to other files, Claude will also read them, and if they mention executable scripts, the AI will execute them too.
What are Skills for?

To understand what Skills are for, you must first take into account how Claude's memory works. The AI keeps some key data about you and how you like things done, but it does not remember contexts from one conversation to another. In a chat you can give detailed instructions to do something, but when you open a new chat you start from scratch. This is where Skills come into play: if you do, or want to do, recurring tasks with Claude, you don't need to give detailed instructions every time in a new chat. You can create a skill containing all the precise instructions you need, and then load it whenever you are going to use it.

This allows you to do several things. The most practical case is creating a kind of specific application. For example, imagine you like to analyze the SEO of texts you write. Normally, every time you ask for this analysis you have to give Claude the instructions, but if you load a Skill you don't need to: you can simply activate it and paste the text you want analyzed. You can also use it to maintain your brand's tone of voice when writing content, such as emails or social media posts. You can use it to keep a specific formula active when analyzing data, or to generate code that follows your team's internal standards. A Skill can be a simple set of instructions, but it can also be much more complex and include dozens of reference files, so there are many ways to set one up.

How to use Claude's Skills

In Claude you will find two types of Skills. One is the pre-designed ones, created by Anthropic itself to give its AI the capability to do specific things, which are activated automatically when you ask for something for which one of these skills exists. And secondly, there are custom skills, which are those created by any user by writing instructions in Markdown.
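As an illustration of that Markdown format, a minimal custom skill file might look like this (the name, description and instruction text are invented for the example; the layout, a short YAML header followed by Markdown instructions, follows the SKILL.md convention described below):

```markdown
---
name: seo-reviewer
description: Reviews a text for on-page SEO and suggests concrete improvements.
---

# SEO reviewer

When the user provides a text, act as a rigorous but friendly SEO editor:

1. Check that the title and headings contain a clear primary keyword.
2. Flag paragraphs longer than four sentences.
3. Propose a meta description of under 160 characters.

Only apply this skill when the user explicitly asks for an SEO review.
```

Everything after the header is plain instructions, which is why no programming knowledge is needed to write one.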
Creating custom skills doesn't require any programming knowledge, because you can create them by typing the instructions, upload one you created outside of Claude, or even ask Claude to create one from your request. There are two ways to use Skills. The first is to mention one, adding it explicitly so that what you write next takes its instructions into account. But they will also activate automatically when you ask for something for which a Skill exists, such as creating web artifacts.

How Claude's Skills are created

To create a Skill in Claude, go to the Personalize section in the left column. Inside, click on the Skills section, where you will see all the pregenerated skills the AI already includes. Once you are in the Skills section, click the + button and a dialog will open with the three available options. These let you create one with Claude, manually write the instructions, or load a Skill that you have created or downloaded from somewhere. If you are going to create a Skill by hand, you only have to add a name, a short description and then write all the instructions you want it to include; you can expand as much as you want. If you decide to upload a skill, you will need to upload the .md file containing that name, description and instructions, with the metadata in YAML format. You can also upload .zip or .skill files, where the SKILL.md file with the instructions is accompanied by other files that the skill takes into account and that are mentioned within the instructions.

Who can use Claude's Skills

Anthropic's pre-designed Skills can be used by all users, including free users. These are used … Read more

What is Claude Dispatch and how to activate it to use Cowork on your computer from your mobile

We are going to tell you what Claude Dispatch is, a new Claude option that allows you to control your computer from your mobile. It is related to Claude Cowork, and in fact it works like a remote control for using this artificial intelligence remotely. We will start by explaining what Claude Dispatch is, so you understand what you can do with it and the risks involved in using it. Then, we will cover the few steps you have to take to activate it.

What is Claude Dispatch

Claude Cowork is a personal assistant for controlling your computer, a feature of the paid version of this AI. It is close to an artificial intelligence agent: it takes control of a folder on your computer, or even your browser, to perform the tasks you ask of it. Claude Cowork is designed specifically to automate tasks with files and applications and to manage your local computer's operating system. It is available in the Claude desktop app, and also for users of the Claude in Chrome extension.

Claude Dispatch is like a remote control for Cowork. Claude's agent only runs on your computer, so on its own you can't use it away from home. This feature, however, lets you send it tasks remotely to carry out on your PC. In short, if you activate it you can control Cowork on your computer from the Claude app on your mobile, and get tasks done even when you are not in front of your computer.

However, remember that using it can be dangerous if you're not careful. Cowork is trained to be cautious and ask permission at every step it takes, but you are always exposed to a malfunction that deletes files it shouldn't or performs online actions you don't want. And when you are not in front of your computer, you have less chance of stopping something if the tool goes off the rails.

How to activate Claude Dispatch

To use Claude Dispatch you only need to activate the tool and have Claude on your mobile.
To activate it, open the Claude application on your computer, go to the Cowork section, and click on the Dispatch option that appears in the left column. On this screen you will see a list of things you can do with Dispatch activated. Press the Begin button to start the activation process. First you will be asked to download Claude for your mobile, and then you will reach a screen where you can grant the permissions Cowork needs to operate remotely. It will ask you to install the browser extension, give it access to your files, and keep your computer awake while the app is running, so it does not go to sleep and stop what it is doing. When you have everything ready, click on Finish Settings.

And that's it. When you finish granting access, you just have to open the Dispatch section in the Claude app on your mobile. This will take you to the Cowork section, where you can ask it to perform tasks on the computer.

In Xataka Basics | The best AI agents that are faster and easier to use to do tasks for you without complications or long installations

How to create a Claude AI chatbot that responds solely based on your own documents

We are going to explain how to create a chatbot with Claude AI that responds solely based on your own documents. If you have reliable sources of information and want to work with them or make requests around them, you can have everything you ask answered based only on those sources. This is something we have already shown you how to do with NotebookLM, a service far more specialized in this. But if Claude is your main artificial intelligence, you should know that you also have this option: you just need a Project and the proper instructions.

First you have to prepare the sources

The first step is also the most tedious: choosing the sources you want the AI to rely on when responding to you. These sources must be text files or PDFs (including ones with images), and they can be in any language. The best approach is to save the sources you find in a folder, and then upload them all at once. You will also have an option to write texts by hand to use as a source. You can also ask Claude to help you search for PDF files on specific topics on the Internet, although it is best to personally choose the sources you consider most reliable. Keep in mind that we are going to ask the AI not to look for information beyond the sources we give it, although there will always be some chance that it does. In any case, it is best to build as complete a knowledge base as possible. You can also upload new files whenever you want.

Create your bot with custom sources

The first thing you have to do is open Claude and go to the Projects section. Projects let you create a separate workspace where you can organize related conversations, upload reference documents, or give personalized instructions. Here, create a new project. This is the project we will use to upload documents and make the AI respond only with the data in them.
To create it, give it a name and a description. This will not affect how the project works; it only helps you distinguish it from others you have created. Next, upload the documents you want to use as sources in the Files section. You can upload documents in PDF, DOC and many other formats, pull them from Google Drive or other services you have linked, or even write the text manually. Take some time to upload all the documents you need.

Now, edit the Instructions section to explain to Claude that in this project it should only use the documents for reference. For that, we have used this prompt: "All answers to questions and requests made in the chats of this project will be sought only in the content of the files. ONLY the files can be used. In the event that something is asked that does not appear in the files, Claude must say that the answer is not in the files, and ask if I want to look it up on the Internet. By default, the Internet is not used, and it can only be used with explicit consent each time a search is about to happen."

As you can see, in this prompt we tell Claude several times that it should only use the information in the documents when answering a question. We have also specified that if it does not find the answer in the documents it should say so, and that before searching for information on the Internet beyond what we have given it, it must ask us for permission.

Now all that's left is to ask Claude any questions and requests you want, and it will answer them using the information you have given it as a source. You can ask specific questions, request travel itineraries, and even have it quiz you with trivia games. Although for the latter, remember that we already told you how to create a test game from PDF files with Claude. And as we said, if we ask something that does not appear in the uploaded files, Claude will tell us.
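For readers who use Claude through the API rather than the web UI, the same restriction can be sketched as a system prompt. This is only an illustration: the `build_request` helper, the system-prompt wording and the model id are our own placeholders, not part of the article's Projects-based method, which needs no code at all.

```python
# Sketch: carrying the "answer only from the files" restriction into an
# Anthropic Messages API payload. Names here are illustrative placeholders.

SYSTEM_PROMPT = (
    "Answer only from the attached project files. "
    "If the answer is not in the files, say that it is not there and ask "
    "whether to search the Internet; never search without explicit consent."
)

def build_request(question: str) -> dict:
    """Build a Messages API payload that carries the restriction."""
    return {
        "model": "claude-opus-4-6",  # placeholder model id
        "max_tokens": 1024,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": question}],
    }

payload = build_request("What does chapter 2 say about pricing?")
```

The payload would then be sent with the official Anthropic SDK; the key idea is simply that the project's Instructions field plays the same role as the `system` parameter.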
After all, that is exactly what we programmed its instructions for. In addition, it will ask us for permission before searching for information on the Internet; it will not do so on its own, so we can always be sure the answers come from within the documents.

In Xataka Basics | What is Claude Cowork, how it works, and what things you can do with this AI assistant on your computer

Claude just demonstrated it with Firefox

For years, finding serious vulnerabilities in complex software has been a task reserved for specialized researchers who spend weeks or months examining millions of lines of code. That scenario is beginning to change. Artificial intelligence models are no longer limited to generating code or helping to debug it; they are also beginning to detect security flaws on their own. A recent example comes from Anthropic with Claude Opus 4.6, its most advanced model, when put to the test with Firefox. The experiment is especially striking because Firefox, managed by Mozilla and used by hundreds of millions of people, is one of the most audited open source projects in the web ecosystem.

Analyzing the Firefox browser code. During two weeks of testing, the system identified 22 distinct vulnerabilities, according to information published by both organizations. Mozilla assessed 14 of them as high-severity flaws, meaning they could have served as the basis for attacks if someone had developed the appropriate exploit code. According to those responsible for the project, most of these problems have already been fixed in Firefox 148, the version published in February, while the rest will be corrected in future versions.

Inside the experiment. Claude's work was not a simple automatic search for errors. According to Anthropic, the team first used the model to try to reproduce historical vulnerabilities recorded in Firefox, a way to test whether it could recognize real failure patterns. Then they moved on to the most interesting part of the experiment: asking it to analyze the current version of the browser to locate problems that had not yet been reported. The process started in the JavaScript engine and then expanded to other areas of the code. In total, the analysis covered thousands of files from the project, including several thousand C++ files, generating a long list of findings that were subsequently reviewed by the researchers.

A striking fact.
Claude found more high-severity bugs in two weeks than the browser usually receives in about two months through its usual research channels. During the process, the Anthropic team submitted 112 unique reports to the project's bug tracking system, although not all were confirmed vulnerabilities. Part of Mozilla's job was precisely to review, debug and classify those findings before determining which ones had real security implications. The experience ended up becoming a direct collaboration between both organizations to review the results and prioritize fixes.

The other half of the problem. The Anthropic team also wanted to see how far the model could go beyond detecting errors, by turning those flaws into real attacks. To do this, they asked it to develop exploits capable of taking advantage of the discovered vulnerabilities. The experiment included hundreds of runs with different approaches and cost approximately $4,000 in API credits. Even so, the result showed a clear gap between the two capabilities: Claude only managed to generate two working exploits, and only in a simplified test environment lacking some of the defenses present in a real browser.

Beyond the specific case of Firefox, the experiment reflects a change that is beginning to both worry and interest the security community. AI-based tools are rapidly improving at detecting vulnerabilities in complex software, which could help developers fix bugs more quickly.

Images | Anthropic | Rubaitul Azad

In Xataka | iPhones were supposed to be the most secure cell phones in the world. It was supposed
