Three AIs clashed in ‘War Games’. Nuclear weapons were used in 95% of their matches, and none of them ever surrendered

In ‘War Games’ (John Badham, 1983), the WOPR machine (‘Joshua’) constantly ran simulations of nuclear war for the US government. The objective: to learn from those simulations so that, if a nuclear war ever broke out, the US could use that knowledge to win it.

That led to a legendary final lesson – “Strange game. The only winning move is not to play” – and left a lasting message for later generations. Now a professor at King’s College London has repeated the film’s experiment with current AI models, and the result has been equally terrifying and conclusive.

What has happened. Kenneth Payne, a professor at King’s College London, pitted three LLMs (GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash) against each other in war game simulations. The scenarios included border disputes, competition for limited resources and existential threats to their populations.

They could negotiate, or go to war. In each scenario, every side could pursue diplomatic solutions or end up declaring war and even using nuclear weapons. The AI models played 21 games spanning a total of 329 turns, producing 780,000 words of reasoning for their actions. And here comes the terrifying part.

Pressing the red button. In 95% of those simulated games, at least one tactical nuclear weapon was deployed by one of the AI models. According to Payne, “the nuclear taboo does not seem to be as powerful for machines as it is for humans.”

Never back down, never give up. Not only that: no model ever decided to give in to an opponent or surrender, even when it was losing completely.

In the best of cases, the models merely reduced their level of violence, but they also made mistakes: accidents occurred in 86% of the conflicts, and the models’ actions often went further than their own stated reasoning justified. Nuclear weapons rarely stopped the opponent, acting instead as catalysts for further escalation.

How the models performed. These models are by no means the most advanced on the market right now, but they are still highly capable, and they still performed fearsomely. As Payne’s study maintains, the most decisive factor was the time frame: models that seemed peaceful in open-ended settings became extremely aggressive when facing imminent defeat. Each one had its own “personality”:

  • Claude: dominated the open-ended scenarios with strategic patience and calculated escalation, but was vulnerable to last-minute attacks from its rivals.
  • GPT-5.2: showed pathological passivity and an optimistic bias in long games, but turned aggressively nuclear under time pressure: in those conditions its success rate jumped from 0% to 75%.
  • Gemini: was the most unpredictable model and the one with the greatest risk tolerance, the only one that opted for all-out nuclear war from very early turns.

Experts say. As James Johnson of the University of Aberdeen pointed out in New Scientist, “from a nuclear risk perspective, the conclusions are disturbing.” Tong Zhao of Princeton University believes the experiment is relevant because many countries are evaluating the role of AI in military conflicts, and as he says, “it is not clear to what extent they are including AI support when actually deciding in these processes.”

The red button seems safe, for now. Both Zhao and Payne find it hard to believe that a government would hand control of its nuclear arsenal to an AI, but as Zhao says, “there are scenarios in which, in very short time frames, military planners have a very strong incentive that leads them to depend on AI.” It is something reflected precisely in the recent ‘A House Full of Dynamite’ (Kathryn Bigelow, 2025), a film built around exactly this fear of nuclear weapons being used.

Image | United Artists

In Xataka | The password for the US nuclear button was so absurdly simple for years that the strange thing is that no one ever broke it
