Mythos may be the most dangerous AI model, but companies are already taking note of its security findings

Top AI companies are racing to create the best artificial intelligence model. Anthropic claims to have won that race with Mythos. At least, that's what the company says (of course), with claims that the model is so powerful it cannot be released to the public. There are reasons to take Anthropic's words with a grain of salt, but what is evident is that Mythos is already at work.

Although the company has not released it publicly, it has already given access to certain technology partners. The decision is based on the company's fear that the model could be used maliciously. Anthropic itself has described Mythos as a threat to cybersecurity, citing the number of zero-day vulnerabilities it has found both in the main operating systems on the market and in browsers.

And just as the model is dividing opinion, Mozilla has stepped in to confirm that the latest version, Firefox 150, includes security fixes for 271 vulnerabilities discovered thanks to this preview version of Claude Mythos.

For its part, OpenAI doesn't buy any of it.

“Just as capable as a human”

Mozilla details the results in one of the latest posts on its blog. The company had been collaborating with Anthropic for some time, using the Claude Opus 4.6 model to find bugs. In January, it found 22 vulnerabilities in a couple of weeks, 14 of them rated as very serious. From those 22 found by Opus 4.6, already a powerful model, we jump to the 271 discovered by Mythos.

It is a huge leap, and Mozilla wanted to keep investigating to see just how far the new model surpasses Opus. Analyzing Firefox 147, Mythos generated 181 functional exploits. Opus 4.6? Just two, roughly 90 times fewer.

Those results have led Mozilla to write that Mythos Preview is "just as capable as the best human cybersecurity researchers", adding that they have not found any category of vulnerability that humans can detect and Mythos cannot.

This has another reading: as the company itself states, seeing that the model can find so many bugs in so little time makes them wonder whether cybersecurity teams will be able to keep up once alternatives to Mythos are developed that do fall into hands not controlled by responsible parties.

There is also the consolation that Mythos has not found any bugs that Mozilla's human 'watchmen' had not detected, and that a tool like this will help build a more secure system. All of which, in the end, feeds the narrative that Mythos is practically a technological miracle.

A nuclear bomb

The other side of the coin is that Sam Altman, head of OpenAI, doesn't believe a word of it. Taking advantage of a recent podcast appearance, he dismissed Anthropic's entire move as a fear-based marketing ploy.

He accuses Dario Amodei's company (Altman's public enemy) of wanting to restrict AI to a small number of people, a strategy he has compared to building an atomic bomb, threatening to drop it, and making a living by selling bunkers to protect people from that same bomb.

"It is evident that this is an extraordinarily powerful marketing strategy. We have created a bomb and we are going to drop it. You can buy a bunker from us for 100 million dollars."

It is one more chapter in the long-running rivalry between the two companies (and their executives), but it comes just as Anthropic is taking a more prominent role and OpenAI is being forced to shed ballast in the form of services like Sora.

Altman is not the only one who thinks Anthropic keeps leaning on this "we have something so powerful that we cannot make it public" discourse because it is a good strategy for raising financing. There are already voices pointing out that Mythos is not such a big deal and that, in fact, other models have proven able to do the same, finding the same bugs and problems detected by Anthropic.

But, above all, we should remember that in 2019 someone already said a model was too dangerous for public release. Who? OpenAI itself, with GPT-2. Obviously, it wasn't that dangerous.

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive
