Sam Altman attacked Anthropic for using fear tactics with their new AI. He then did exactly the same thing.

The big AI companies have set themselves a goal: practically every week they must present a new model or start warming up the conversation about what is coming next. Delays are not tolerated because the pace of it all is overwhelming, but the ones who keep dominating the conversation when it comes to the power of their models are OpenAI and Anthropic. And the inevitable has happened: if Anthropic has a new "dangerous" model, now OpenAI says they have one too.

And it is a textbook case of saying one thing and then doing the exact opposite.

GPT-5.5 Cyber. A few days ago, OpenAI released GPT-5.5 Cyber, a variant of GPT-5.5 specialized in advanced cybersecurity capabilities. It is aimed at tasks such as vulnerability exploitation, penetration testing, malware reverse engineering and other work squarely within the computer security field.

In a landscape where, thanks to AI tools, systems are more vulnerable than ever (and all of this on the threshold of the post-quantum cryptography era), such specialized models look like a very attractive tool for companies. But, of course, also for someone with other intentions.

Access control. Due to concerns over potential dual use, OpenAI has decided to restrict access to GPT-5.5 Cyber to "critical cyber defenders." Who are they? Those who protect essential infrastructure such as power grids or financial networks.

OpenAI runs a vetted access program with robust safeguards and refusal of malicious requests, so that not everyone can get their hands on the tool. In addition, there is a monitoring system to detect suspicious activity carried out with the model.

Firing heavy artillery. It is, in essence, the rhetoric of fear. Once again, an artificial intelligence company claims to have a product so powerful that it cannot be allowed to fall into just anyone's hands. It is not the first time OpenAI has used this line, but this time the timing is very curious.

A few days ago, Anthropic presented Mythos, a tool very similar to OpenAI's and one that is already producing results in companies: Mozilla, for example, points out that, thanks to Mythos, the latest version of Firefox ships with a large number of security patches because AI has dramatically sped up the process of finding vulnerabilities. It is yet another example of the two titans of the AI industry captaining ships with enormous firepower and launching their best product with that same rhetoric of fear.

Precisely, that’s where the problem lies.


The hypocrisy. After the presentation of Cyber, Sam Altman commented on X that they were working with the government to establish trusted access controls for their tool. They have not shared the identities of those who will have initial access, nor, really, many details about the model. It has simply been a case of "careful, this is very powerful and we can't release it to the general public."

And, as we said, the problem is that Sam Altman himself harshly criticized Anthropic's strategy when Mythos was presented. The CEO spoke of a strategy of fear and compared the maneuver of Anthropic and his declared rival, Dario Amodei, to that of someone who builds an atomic bomb and, at the same time, sells you the bunker to protect yourself from it. The media have not let this slide, because he harshly criticized that strategy just before copying it word for word.

On par. Despite everything, neither one is wrong. When AI companies present a model, it is curiously always better than the competition at almost everything. On this occasion, an assessment by the UK AI Security Institute finds that Mythos and GPT-5.5 Cyber are two of the most powerful models it has analyzed in its cybersecurity tests, and that they are, basically, on par.

Compared to previous or non-specialized models, the difference is palpable. In expert-level tasks, GPT-5.5 achieved an average success rate of 71.4%, compared to 52.4% for GPT-5.4. Mythos Preview, for its part, reached 68.6%, compared to 48.6% for Opus 4.7. The Institute concludes that this is evidence that cybersecurity capability is a trend among frontier models, one where these companies may begin to generate the long-sought profits on their path to going public.

Another reading is that countries that want to stop depending on cutting-edge American technology must start getting their act together as soon as possible. And that is, precisely, the message from the CEO of Mistral, the French AI company that recently pointed out that Europe had to stop being a technological vassal of the United States to become a power.

In Xataka | Someone has had a simple idea so that data centers do not collapse in Spain: “unplug them” 18 days a year
