On February 10, an 18-year-old girl shot and killed her mother and brother. She then went to her high school and murdered seven more people before taking her own life. The disturbing part is that the shooter had discussed it with ChatGPT, and OpenAI had the opportunity to notify the police but chose not to.
What happened? The Wall Street Journal reports that, in June of last year, OpenAI's automated system flagged several messages a user had sent to ChatGPT describing scenarios of armed violence. Some employees found them worrying enough that they could end in real violence, and there was an internal debate about whether to notify the Canadian authorities. In the end, the company closed the user's account but notified no one. Now Canadian authorities have summoned OpenAI to ask for explanations.
There is more. The Tumbler Ridge shooting is not the only case in which AI has been used to plan a crime. In early 2025, a man parked a Cybertruck full of explosives in front of a Las Vegas hotel intending to detonate it (although in the end the only victim was himself). Days earlier, the perpetrator had asked ChatGPT how to do it. In this case the chatbot did not flag any concerning messages, and we only know about it because OpenAI searched through his messages after the fact.
In Seoul, a woman was jailed for the alleged murder of two people by benzodiazepine poisoning. The investigation revealed that the accused had asked ChatGPT what a dangerous dose was and what would happen if it was mixed with alcohol. The messages in this case are less alarming on their own and could have stemmed from genuine curiosity, but it is another example of ChatGPT being used in the commission of a crime.
Why it matters. Artificial intelligences have become a kind of confessional to which we tell all sorts of secrets, even the darkest ones. Some people treat AI as a friend, a psychologist, or even a lover. In that sense, it is not strange for someone to tell ChatGPT that they are going to kill their family or want to detonate a car full of explosives. What is worrying, and where we should focus, is what companies are doing about it. For now, it seems not enough.
Are they obligated? Confessing to your psychologist or psychiatrist that you want to hurt someone is one of the cases in which they not only can but must break professional confidentiality and alert the authorities. However, no matter how much we use chatbots as psychologists, there is currently no law that forces AI companies to report these kinds of interactions; it is an internal decision. The obligation, therefore, is not legal but ethical.
How to make a homemade bomb. Cases like the Tumbler Ridge shooter's did not begin with the arrival of AI chatbots. Instructions for building homemade bombs have been giving the authorities headaches for decades; manuals of this kind circulated even before the Internet became popular. The same goes for suicide cases: you don't need to ask ChatGPT when you can Google it or post in a forum.
In statements to The New York Times, a former OpenAI employee highlights an important nuance: with a chatbot you don't usually run a simple search; you can have a longer conversation in which intentions become clearer. That may make cases like the Tumbler Ridge shooter's easier to detect, but it may also produce many false positives from users who are writing fiction or role-playing with the AI. Complicated.
In Xataka | Investing in data centers for AI is insane, and it's going to get worse. Much worse