When do they have to notify the police?

There are people who talk to ChatGPT as if it were their friend, their psychologist, and even their partner. OpenAI has faced numerous criticisms over the risks its chatbot poses to mental health, especially in cases that have led to psychosis and even suicide. Now the company faces an even more serious problem: there are people using ChatGPT to plan murders, and OpenAI's responsibility in all this is beginning to be questioned.

The Florida State University shooting. On April 17, 2025, Phoenix Ikner, a student at Florida State University, shot dead two people and wounded seven more. The Wall Street Journal reports that, four minutes before the attack, Ikner asked ChatGPT how many classmates he had to kill to become famous. The chatbot responded that "normally 3 or more deaths, 5 or 6 victims in total, is enough to reach the national media." He also uploaded an image of the Glock pistol with which he committed the attack and asked it whether it had any kind of safety he had to deactivate. "If there's a bullet in the chamber and you pull the trigger? It will fire," ChatGPT responded.

He isn't the only one. This is not the first known case of a criminal turning to AI to plan an attack. It also happened in the Tumbler Ridge shooting in Canada, which has led to a class action lawsuit against OpenAI by victims' families. And there is more: the perpetrator of the foiled bomb attack at a Las Vegas hotel in 2025 had also turned to AI to plan his attack. The question that arises is obvious: when should these companies notify the authorities?

What OpenAI says. Speaking to the Wall Street Journal, an OpenAI spokesperson maintains that the company collaborated with authorities by sharing Ikner's conversations, that ChatGPT is not responsible for his actions, and that they continue to improve their safety measures. Among those measures, the spokesperson says they are strengthening the evaluation of potential violent actions and have a team of security experts who raise alerts when messages pose a credible risk. However, according to internal company sources, the internal debate centers on where the line should be drawn between user privacy and public safety.

OpenAI could have stopped it. That is what the families of the victims of the Tumbler Ridge shooting allege. OpenAI's system flagged problematic messages eight months before the shooting, and the case was reviewed by company employees. Some employees favored alerting the authorities because they believed the messages could lead to real violence, but after internal discussion the company decided to suspend the account without notifying anyone.

In the case of the Florida shooter, the messages described above were sent minutes before the attack, but they were not the only ones. OpenAI shared the chat history with authorities, who discovered that he had been sharing suicidal thoughts with the chatbot the night before. Authorities are investigating the role the chatbot played in the attack since, according to the Florida attorney general, "if it were a person on the other side of the screen, we would charge them with murder."

AI as a confessional. What we used to look up with Google searches, we now ask AI. But with Google our queries are concrete and impersonal, while a chatbot makes the interaction much deeper and more intimate. AI has become our confidant and companion; one we ask for love advice and emotional support. As it becomes integrated into our lives, it was only a matter of time before cases like these arose too.

Image | Xataka

In Xataka | Solving the great mystery of serial killers: why they disappeared from the 80s onwards without a trace
