AI is making it increasingly difficult to know what is real and what is not. And child sexual predators are taking note
The era of artificial intelligence is making everything move very, very fast. Companies like Mercadona, Google and Anthropic already claim that a large share of their code is written by AI, Mozilla is showing off Mythos, and Jensen Huang is enthusiastic about the benefits of AI and everything it lets us create. Among those creations there is a bit of everything, including an enormous amount of child sexual abuse material. Investigators are asking for help, and some of what they describe is not easy to take in.

In short. This Wednesday, Bloomberg published a report condensing six months of investigation into the rapid proliferation of AI-generated child sexual abuse material. That is not surprising if we consider the enormous amount of false information we already deal with every day, in text, in increasingly realistic images and even in video, but there is an important nuance. With fake news, it is the press and the public who must apply common sense so as not to swallow the hoax, wasting time in the process. With this AI-generated content, it is the police and investigators who must spend their time ruling out whether it depicts something real. And while they do that, they stop investigating cases of child sexual abuse, other types of abuse or disappearances that are real.

The figures. They are frightening, of course. The report notes that the National Center for Missing and Exploited Children received more than 1.5 million reports during 2025 that had a generative AI component. Among them were more than 7,000 reports of users generating or possessing AI-generated child sexual abuse material and more than 30,000 cases of people generating this type of content. It is staggering.
On the other hand, the IWF (Internet Watch Foundation) evaluated more than 8,000 AI-generated images and videos that very realistically depicted acts of child sexual abuse. There were 3,443 videos, compared to the 13 registered in 2024: an increase of 26,385%. The growth is significant, and the IWF itself noted in its report that 65% of this AI-generated video content shows acts classified as 'Category A', the most serious within this type of material.

Cases. Do not assume these are confined to the dark web: AI chatbots have been discovered on the open web that host these images and encourage users to create them. What type of content are we talking about? Here comes the delicate part. Bloomberg points to some examples among the thousands of more complex cases currently being processed by the authorities, and there is a bit of everything: a man who used the faces of children in his neighborhood to generate content depicting them having sexual relations with their mothers or grandmothers; another who produced sadistic images of small children and babies; a man accused of altering the image of a prepubescent girl to enlarge her breasts and posting the images on OnlyFans; a priest who spent years collecting material and then created even more with AI; and an army soldier who used AI to sexualize images of children he knew.

Where do the images of children come from? Facebook and Instagram. That is the simple answer. According to Joe O'Barr, one of the investigators who spoke to Bloomberg, "people steal images from Facebook and Instagram, things that parents freely post, and they post them on artificial intelligence platforms". One pattern that has emerged is that many create images involving children they know in the real world, thus acting out their fantasies.
For example, one man took photos of his partner's six-year-old daughter, manipulated them with AI and then posted them next to his own nude images on OnlyFans. "The fact that the perpetrator knew the victim made my hair stand on end. It meant that the girl could be in real and imminent danger," says O'Barr.

What platforms do. The platforms hosting these images are not sitting idly by. Investigators count on those companies to raise the alarm when they find something, but there is a problem: Google, xAI, Meta and OnlyFans are delegating the monitoring task to AI. Numerous cases have been reported of human moderators who cannot bear flagging that type of content, which is why companies either hire in countries like Kenya or delegate directly to AI. The problem is the number of false positives the AI produces, which end up overloading investigators' inboxes. A North Carolina investigator notes that he has seen an 11-fold increase in the 'tips' his office receives, and last year alone the volume doubled to 52,000 reports. He points out that any human would say "this is not content to investigate", but since the AI cannot make that call, it forwards everything, from serious material worth investigating to simple insults.

Unbearable. "The more cases we have to investigate, the more difficult it is to treat each case individually," says one of the investigators. Meta itself has acknowledged that its system is not perfect and adds "some noise" to investigators' queues. As we said before, this enormous volume of material prevents investigators from working on real cases of minors who are being abused or who are missing. "We are doing this massive job with the same amount of resources we had ten years ago. We can't take it anymore, and we don't want to miss a real child who is being sexually abused," an agent tells Bloomberg.

Legislators, get your act together.
The work of these investigators depends on the Department of Justice and on funding of about $30 million for 61 state task forces. They point out that it is a very small figure and, to put …