There are people using AI to plan murders. The question is what AI companies are doing about it

On February 10, an 18-year-old girl shot and killed her mother and brother. She then went to her high school and murdered seven more people before taking her own life. The disturbing part is that the shooter had talked about it with ChatGPT, and OpenAI had the opportunity to notify the police but chose not to.

What happened. According to the Wall Street Journal, in June of last year OpenAI's automated systems flagged several messages the user had sent to ChatGPT describing scenarios of armed violence. Some employees found them worrying enough that they could end in real violence, and there was an internal debate about whether to notify the Canadian authorities. In the end, OpenAI closed the account but didn't notify anyone. Canadian authorities have now summoned the company to ask for explanations.

There is more. The Tumbler Ridge shooting is not the only case in which AI has been used to plan a crime. In early 2025, a man parked a Cybertruck full of explosives in front of a Las Vegas hotel with the intention of detonating it (although in the end the only victim was himself). Days before, the perpetrator had asked ChatGPT how to do it. In that case the chatbot did not flag any concerning messages; we know about it because OpenAI searched through his messages after the fact.

In Seoul, a woman was jailed for the alleged murder of two people by benzodiazepine poisoning. The investigation revealed that she had asked ChatGPT what the dangerous dose was and what would happen if it was mixed with alcohol. The messages in this case are less alarming and could stem from genuine curiosity, but it is another example of ChatGPT being used in the commission of a crime.

Why it matters. AI chatbots have become a kind of confessional to which we tell all kinds of secrets, even the darkest. Some people treat AI as a friend, a psychologist or even a lover. In that sense, it is not strange for someone to tell ChatGPT that they are going to kill their family or want to detonate a car full of explosives. What is worrying, and where we should focus, is what companies are doing about it. For now, it doesn't seem like enough.

Are they obligated? Confessing to your psychologist or psychiatrist that you want to hurt someone is one of the cases in which they not only can but must break professional secrecy and alert the authorities. However, no matter how much we use chatbots as psychologists, there is currently no law that forces AI companies to report these kinds of interactions; it is an internal decision. The obligation, therefore, is not legal but ethical.

How to make a homemade bomb. Cases like the Tumbler Ridge shooter's did not begin with the arrival of AI chatbots. Instructions for building homemade bombs have been giving the authorities headaches for decades; manuals of this kind circulated even before the Internet became popular. The same goes for suicide cases: you don't need to ask ChatGPT, you can Google it or post in a forum. In statements to the New York Times, a former OpenAI employee highlights an important nuance: with a chatbot you don't usually run a simple search; you can have a longer conversation in which your intentions become clearer. In that sense, it may be easier to detect cases like the Tumbler Ridge shooter, but there may also be many false positives from users who are writing fiction or role-playing with the AI. Complicated.

In Xataka | Investing in data centers for AI is insane, and it's going to get worse. Much worse

Cover image | Pexels, Unsplash

For years they filtered murders and rapes for Facebook in Barcelona. Now they are claiming 4 million from Meta for psychological damage

Meta had a moderation center in Barcelona responsible for monitoring its content. Had. Telus Digital, the subcontracting company, closed the facility in April. Now, twenty-nine of its former content moderators have filed a criminal complaint against Meta and its Barcelona subcontractor for the mental damage suffered after years filtering extreme material.

Why it matters. It is the first criminal complaint admitted in Europe against the tech giant over the after-effects caused by content moderation. The workers are claiming 150,000 euros in compensation each, almost 4.5 million euros in total.

The facts. For years, these employees had to watch murders, decapitations, rapes, child pornography, live suicides and terrorism in eight-hour shifts. Their rest time was set at five minutes per hour while processing up to 800 videos a day. The consequences they describe include post-traumatic stress disorder, panic attacks, suicidal ideation, nightmares and phobias. One employee has been in psychological treatment for six years and on temporary disability since 2022.

Behind the scenes. The complaint states that Meta subcontracted CCC Barcelona Digital Services, later absorbed by Canada's Telus Digital. The American parent company set schedules, productivity targets and quality demands, requiring a 98% success rate in moderation decisions. The best-performing workers were "promoted" to the high-priority queue, where the most disturbing content arrived.

Yes, but. The companies knew that the work would lead many employees to "serious psychic pathologies" but did not implement protective measures. During the hiring process they deliberately concealed the nature of the job, limiting themselves to verifying language skills. The day-to-day detail came later.

The context. In 2023 it came out that between 20% and 25% of Telus's workforce was on sick leave, many for psychological reasons.

Meta definitively closed the Barcelona center a few weeks ago, dismissing the entire staff after dispensing with moderation services, which are now left in the hands of users themselves, as on X. The complaint joins another filed in 2024 by a worker, already admitted by Barcelona's Court of Instruction 29. In the United States, Meta paid 52 million dollars to more than 11,000 moderators in 2020, an average of $4,700 per person.

In Xataka | Four AI companies are monopolizing the intellectual future of humanity. That is not good news

Featured image | Greg Bulla on Unsplash

Miami homeowner murders tenant after fight over thermostat: police

Carlos Alberto González, 57, died after being shot by his landlord, Adam Louis Anson, 37, in what Miami-Dade Police in Florida describe as a violent altercation over control of the house's thermostat. The events occurred Saturday morning at a home in the 9400 block of SW 17th Terrace, where Anson rented an attached apartment to González.

According to the police report published by Law & Crime, the dispute began when Anson went to the apartment to adjust the thermostat, located in the study. After knocking on the window, González opened the door, and the owner allegedly forced his way in, hit the tenant and dragged him to the backyard, where he shot him several times in the head. After the attack, Anson adjusted the thermostat before returning to his side of the house and calling 911. González was taken to the hospital but did not survive his wounds.

Self-defense claims. Anson told authorities that he acted in self-defense, arguing that González had tried to hit him. However, investigators found no evidence to support that version, according to Law & Crime. The owner was therefore arrested on charges of second-degree murder and armed burglary with assault. He remains detained without bail at the Miami-Dade Correctional and Rehabilitation Center, awaiting a hearing scheduled for February 18.

Neighbors report previous conflicts. The case has caused unease in the community. Neighbors reported that tensions between the owner and his tenants were frequent and that police had intervened several times. Leyani Pérez, a resident of a nearby home, said she heard a fight before the shots. "After a few seconds, I realized that something bad was happening in my backyard and I was very scared," Pérez told local station WTVJ, an NBC affiliate. "I started screaming and called my husband: 'Hey, go to the backyard because I feel there is a stranger there.' Then I heard bum, bum, bum, bum."

Although the exact reason for the thermostat dispute is not clear, the low temperatures recorded in Miami that day, unusual for the region, could have intensified the conflict. According to AccuWeather, temperatures were around 40 degrees Fahrenheit, something unusual in South Florida, where people are accustomed to a warm climate.
