Claude just demonstrated it with Firefox

For years, finding serious vulnerabilities in complex software has been a task reserved for specialized researchers who spend weeks or months examining millions of lines of code. That is beginning to change. AI models are no longer limited to generating code or helping to debug it; they are also starting to detect security flaws on their own. A recent example comes from Anthropic, which put Claude Opus 4.6, its most advanced model, to the test with Firefox. The experiment is especially striking because Firefox, managed by Mozilla and used by hundreds of millions of people, is one of the most audited open source projects in the web ecosystem.

Analyzing the Firefox browser code. During two weeks of testing, the system identified 22 distinct vulnerabilities, according to information published by both organizations. Mozilla assessed 14 of them as high-severity flaws, meaning they could have served as the basis for attacks if someone had developed the appropriate exploit code. According to those responsible for the project, most of these problems have already been fixed in Firefox 148, the version published in February, while the rest will be corrected in future releases.

Inside the experiment. Claude's work was not a simple automated bug hunt. According to Anthropic, the team first used the model to try to reproduce historical vulnerabilities recorded in Firefox, a way to test whether it could recognize real failure patterns. Then they moved on to the most interesting part of the experiment: asking it to analyze the current version of the browser to locate problems that had not yet been reported. The process started in the JavaScript engine and then expanded to other areas of the code. In total, the analysis covered thousands of files from the project, including several thousand C++ files, generating a long list of findings that were subsequently reviewed by the researchers.

A striking fact. Claude found more high-severity bugs in two weeks than the browser usually receives in about two months through its usual research channels. During the process, the Anthropic team submitted 112 unique reports to the project's bug tracking system, although not all of them turned out to be confirmed vulnerabilities. Part of Mozilla's job was precisely to review, deduplicate and classify those findings before determining which ones had real security implications. The experience ended up becoming a direct collaboration between both organizations to review the results and prioritize fixes.

The other half of the problem. The Anthropic team also wanted to see how far the model could go beyond detecting errors: turning those flaws into real attacks. To do this, they asked it to develop exploits capable of taking advantage of the discovered vulnerabilities. The experiment included hundreds of runs with different approaches and cost approximately $4,000 in API credits. Still, the result showed a clear gap between the two capabilities: Claude only managed to generate two working exploits, and only in a simplified test environment that lacked some of the defenses present in a real browser.

Beyond the specific case of Firefox, the experiment reflects a change that is beginning to both worry and interest the security community. AI-based tools are rapidly improving at detecting vulnerabilities in complex software, which could help developers fix bugs more quickly.

Images | Anthropic | Rubaitul Azad

In Xataka | iPhones were supposed to be the most secure cell phones in the world. It was supposed
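The triage step described above (reviewing, deduplicating and classifying 112 raw reports before deciding which ones matter) can be illustrated with a minimal sketch. Everything here is invented for illustration: the `Finding` type, its fields and the sample reports are assumptions, not Anthropic's or Mozilla's actual tooling or data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    component: str   # area of the codebase, e.g. the JS engine
    signature: str   # a crash or bug signature used for dedup
    severity: str    # "high", "moderate", or "low"

def triage(findings):
    """Drop duplicate reports (same component + signature),
    then order the survivors by severity for human review."""
    seen, unique = set(), []
    for f in findings:
        key = (f.component, f.signature)
        if key not in seen:
            seen.add(key)
            unique.append(f)
    rank = {"high": 0, "moderate": 1, "low": 2}
    return sorted(unique, key=lambda f: rank.get(f.severity, 3))

# Hypothetical reports, loosely modeled on the kind of findings described above
reports = [
    Finding("js", "use-after-free in GC", "high"),
    Finding("js", "use-after-free in GC", "high"),   # duplicate submission
    Finding("dom", "null deref in iterator", "low"),
]
print([(f.component, f.severity) for f in triage(reports)])  # → [('js', 'high'), ('dom', 'low')]
```

In a real pipeline the dedup key would come from crash-signature tooling rather than a plain string, but the shape of the work (collapse duplicates, then sort by severity) is the same.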

Mozilla wanted to turn Firefox into an AI-powered browser. The community has forced a change that was not in their plans

For years, Mozilla and its Firefox browser have represented a rarity: a product shaped by demanding users, protective of their control and unwilling to accept imposed changes. That is why, when the word "AI" began to appear in the company's official messaging, it did not sound like a simple technical update, but like a possible identity change. It was not a discussion about specific features, but about limits: how far can Firefox stretch while still being recognizable to those who choose it precisely because it doesn't look like the others?

Before the controversy broke out, Mozilla had already begun to sketch its AI roadmap in a deliberately cautious tone. In its communications it talked about choice, transparency and preventing artificial intelligence from becoming a permanent layer of the browser. The AI, according to that initial approach, had to coexist with the classic Firefox experience without replacing it, offering specific, deactivatable tools and keeping the promise that the user decides if, when and under what conditions to use them.

AIWindow. The most visible piece of that roadmap is a new window designed specifically for interacting with an AI assistant while browsing. Mozilla describes it as a separate, completely voluntary space that lets you ask for contextual help without altering the rest of the browsing experience. It does not replace the classic or the private window, but is added as an extra option that the user decides whether to activate. The company insists that it can be deactivated at any time and that its development is being done in the open, with a waiting list to test it and send feedback.

Why Mozilla thinks it's important. The organization argues that AI is becoming a new way of accessing the web and that ignoring this change would leave the browser in a passive position.
Its thesis is that, as more interactions go through assistants, it becomes essential to preserve principles such as transparency, accountability and the user's capacity to decide. Firefox, as an independent browser, thus presents itself as an intermediary that uses AI to guide the user toward the open web, rather than retaining them in a closed conversational environment.

That balance began to break down in December, when the message about AI was publicly reinforced from Mozilla's leadership. The reaction was not accidental if you understand who Firefox is aimed at. A good part of its users do not come to the browser out of inertia, but after a deliberate search, moving away from options such as Chrome, Edge or Safari. This more technical and critical profile tends to scrutinize any change it perceives as a transfer of control. In this context, AI is not judged only by what it does, but by the precedent it sets and the risk of normalizing decisions made without the user's explicit consent.

The "AI kill switch" and the calendar. Faced with escalating criticism, Mozilla moved from generalities to explicit commitments. In a response to an open letter posted on Reddit, CEO Anthony Enzor-DeMeo wrote: "Rest assured, Firefox will always remain a browser built around user control," adding: "You'll have a clear way to disable AI features. A true kill switch will arrive in Q1 2026." With that promise, Mozilla made a verifiable commitment: an option to completely disable all artificial intelligence functions by a specific deadline, the first quarter of 2026, as a way to rebuild trust.

The debate is still open. The announcement of the kill switch did not close the debate, but moved it to a more basic question: when does AI come into play? For many users, the very existence of a switch to turn it off implies that AI would be present from the start and that it is the user who must deactivate it.
The alternative they demand is the opposite: that AI be completely turned off when Firefox is installed and only activated after an explicit decision. On Mastodon, the Firefox for Web Developers account admitted that there are "gray areas" about what optional means in the interface, such as whether a new button counts as such, but insisted that the kill switch will disable AI completely.

With the discussion already on the table, Mozilla has been forced to do something that was not in the initial script: specify, clarify and publicly commit more than expected. The discourse around AI in Firefox has moved from general principles to uncomfortable details, and that is where the trust of its community is at stake. The promises have been made, the deadlines set and the words written. Now the difference will be made not by the announcements, but by how those guarantees translate into the final product and whether Firefox manages to integrate AI without diluting what made it different.

Images | Firefox | Denny Muller

In Xataka | AI has allowed developers to program faster than ever. That's turning out to be a problem.
