Someone ran 'One Hundred Years of Solitude' through an AI text detector. It said it was written by an AI

Tools for detecting AI-generated text systematically fail when analyzing great literary works. The biblical Genesis, the US Constitution, 'Harry Potter' and 'One Hundred Years of Solitude' are all flagged by these detectors as the work of machines. The reason has a perverse logic: what the algorithms interpret as AI writing is, in fact, simply good writing.

Robot Bible. Tools for detecting AI-generated text have been piling up absurd verdicts for months. Submit 'One Hundred Years of Solitude' by Gabriel García Márquez to one of these systems and you will be told that 100% of the novel is of artificial origin. The biblical Genesis and the US Constitution fare no better: the ZeroGPT tool gives the first an 88.2% chance of being AI writing and scores the second as 96.21% AI-written. Experiments with 'Harry Potter' or the lyrics of 'Bohemian Rhapsody' show similar results. The pattern is so consistent that it goes beyond anecdote: these tools have an underlying problem.

Good is bad. The irony is that AI-text detectors were designed to identify writing produced by machines. Yet they end up flagging exactly the opposite: texts with greater stylistic care, stronger internal coherence and better command of narrative rhythm are judged unlikely to have been written by humans. In technical terms, writing well looks a lot like writing the way a language model does.

How it works. To understand why this happens, you have to understand how these tools work. Most rely on two main indicators. The first is perplexity: how predictable the choice of words in a text is. If each word follows the previous one in an expected way, perplexity is low; if the text jumps unpredictably between registers, vocabulary and syntactic structures, perplexity is high. The second indicator is burstiness: the variation in sentence length. Humans alternate long paragraphs with very short sentences, while language models tend to produce sentences of more uniform length. A well-constructed text (precise vocabulary, clear structure, even rhythm) has low perplexity by design. Like García Márquez, who chooses exactly the right words with almost surgical precision. Genesis has an almost hypnotic narrative cadence, deliberate and free of noise, like a song with a balanced meter. "Writing well" is a complex concept, but it can mean, among other things, being predictable in the most virtuous sense: the reader understands the text effortlessly. And that, for a detector trained to distinguish "what a language model would do", sets off alarm bells.

It's the same thing. What complicates the problem is that generative AI models have been trained precisely on quality human writing. ChatGPT, Claude or Gemini produce fluent, coherent, low-perplexity text because they learned from millions of human texts with those same characteristics. Telling AI writing apart from good human writing is a near-impossible task for these algorithms.
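As a rough illustration of the two signals described above, here is a minimal sketch in Python. It is not the code of any real detector: the sentence splitter is naive, the bigram model stands in for the large language models that commercial tools actually use to estimate perplexity, and the function names are ours.

```python
import math
import re
from collections import Counter

def sentences(text: str) -> list[str]:
    """Naive sentence splitter on ., ! and ? (good enough for a sketch)."""
    return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths (in words).
    Higher values = more 'human-like' alternation of long and short sentences."""
    lengths = [len(s.split()) for s in sentences(text)]
    if len(lengths) < 2:
        return 0.0
    mean = sum(lengths) / len(lengths)
    var = sum((l - mean) ** 2 for l in lengths) / len(lengths)
    return math.sqrt(var) / mean

def bigram_perplexity(text: str, reference: str) -> float:
    """Perplexity of `text` under a bigram model estimated from `reference`,
    with add-one smoothing. Predictable word sequences score low,
    erratic ones score high; real detectors use large language models
    instead of this toy model."""
    def tokens(t): return re.findall(r"\w+", t.lower())
    ref, txt = tokens(reference), tokens(text)
    unigrams = Counter(ref)
    bigrams = Counter(zip(ref, ref[1:]))
    vocab = len(unigrams) + 1  # +1 for unseen words
    log_prob = 0.0
    for prev, word in zip(txt, txt[1:]):
        p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
        log_prob += math.log(p)
    n = max(len(txt) - 1, 1)
    return math.exp(-log_prob / n)
```

In this framing, a highly polished text with an even rhythm scores low on both measures, which is exactly the profile these detectors associate with machine writing.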
Another way to fail. These criteria can misfire in multiple ways. One study measured the performance of seven popular detectors on TOEFL essays (the official English exam for non-native speakers) against essays by American high school students. The results: 61.22% of the essays written by non-native students were flagged as AI-generated, and in 20% of the cases all seven detectors agreed on the erroneous diagnosis. The native students' texts passed without problems. The explanation is the same perplexity mechanic: someone writing in a second language uses a more limited vocabulary, simpler structures and fewer grammatical variations. They do not write badly, but their tools are more limited, and AI detectors systematically penalize writers with less command of the language. The team behind the study recommended avoiding these tools in evaluation contexts, especially when international students are involved. An episode of this kind took place in 2024, when the Australian Catholic University opened disciplinary files on nearly 6,000 students using Turnitin, the most widespread detection platform in universities. Many of them had not used AI at all.

Force the machine. Edward Tian, CEO of GPTZero (one of the reference detectors, with more than eight million users), has openly acknowledged that many tools in the sector adjust their thresholds to intentionally generate more false positives, with the aim of not letting AI-generated text slip through even if that means wrongly flagging human text. Tian talks about how GPTZero fights this proliferation of false positives, but the adulteration of results remains a clear problem.

The latest case. The publisher Hachette has just canceled the publication in the United Kingdom and the United States of 'Shy Girl', a novel that the Pangram tool flagged as 78% AI-generated. The author denies having used AI. Whatever the truth in that specific case, the episode illustrates the de facto power these tools are acquiring: they can destroy publishing contracts and put humans under suspicion before any definitive proof exists.

In Xataka | OpenAI has an AI-written text detector that works almost perfectly. And it doesn't want to put it on the market.

We searched for dark matter with the most sensitive detector in history and found nothing. And that is a success

The search for dark matter increasingly resembles a game of hide-and-seek in which, as our vision improves, the target seems to become more invisible. The latest attempt to find it involved burying a detector 1,500 meters underground; in the end the search came up empty, although it did allow us to find things we were not looking for.

Dark matter. It is, without a doubt, one of the great mysteries of physics. While many researchers argue that this matter surrounds us and is the main component of the universe, others believe we got it wrong and that it does not exist, even as evidence slowly accumulates that it must exist for our own theories to fit. The whole mess stems mainly from the fact that we cannot detect this matter: we know it is there, but we do not 'see' it. That generates considerable confrontation within the physics community, which is why experiments of this type try to shed light on the question and help us understand far better what surrounds us.

New tools. Science has deployed the LUX-ZEPLIN (LZ) experiment, a very sophisticated tool built by humanity to hunt down these ghost particles. In essence it is a sensor that had to be buried 1,500 meters underground, at the Sanford Underground Research Facility (SURF) in South Dakota. The reason? To use the rock as a shield against the cosmic radiation that bombards the surface.

The concept. The scale of the experiment is considerable: its core houses 10 tonnes of ultrapure liquid xenon. The theory is that if a dark matter particle passes through the Earth, it should occasionally collide with a xenon atom and produce a tiny flash of light. In total, LZ analyzed data collected over 471 days, between March 2023 and April 2025, making this the most exhaustive search carried out so far.

The sound of silence. The headline result is that no direct interaction with dark matter particles was detected. Yet this null result is practically worth gold in physics: by finding nothing, scientists have been able to rule out a huge range of possibilities about what dark matter is and what it is not. In short, the search margins have been tightened, and we now have the world's strictest limit on the interaction cross sections of dark matter particles for a very specific mass range. It is precisely because these particles would be of such small mass that they are so hard to detect. (A back-of-the-envelope sketch of how a null result becomes a limit appears below.)

The surprise. The most fascinating thing about these results is not what was missing, but what appeared. Although the detector did not see dark matter, it did validate its extreme sensitivity by recording something incredibly difficult to capture: solar neutrinos. This marks a bittersweet milestone: the experiment has officially entered what physicists call the 'neutrino fog'. We have reached a point of such extreme sensitivity that neutrinos (which pass through everything without flinching) begin to generate background noise that can be confused with dark matter. And that is a real problem, since the technology will now have to find a way to distinguish dark matter from neutrinos.
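To see why "we found nothing" still counts as a result, here is a minimal, hypothetical sketch of the counting-experiment reasoning in Python. It assumes a background-free experiment with perfect efficiency, which the real LZ analysis is not (it carefully models backgrounds such as the neutrino fog); the helper name and the conversion to a per-tonne rate are illustrative, not LZ's actual statistical method.

```python
import math

def poisson_upper_limit(n_observed: int = 0, cl: float = 0.90) -> float:
    """Upper limit on the expected number of signal events for a
    background-free counting experiment that observed zero events:
    the probability of seeing nothing is exp(-mu), so requiring
    exp(-mu) >= 1 - cl gives mu <= -ln(1 - cl)."""
    if n_observed != 0:
        raise ValueError("This simple closed form only covers n_observed = 0")
    return -math.log(1.0 - cl)  # ~2.30 events at 90% CL

# Exposure quoted in the article: ~10 tonnes of liquid xenon for 471 live days.
exposure_tonne_days = 10 * 471

mu_limit = poisson_upper_limit()
rate_limit = mu_limit / exposure_tonne_days  # events per tonne per day

print(f"Upper limit: {mu_limit:.2f} signal events "
      f"-> {rate_limit:.2e} events / tonne / day")
```

The smaller the event rate compatible with seeing nothing, the smaller the dark-matter cross section can be, which is exactly the sense in which a longer exposure with no detections tightens the limits.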
The future. The experiment does not stop here. Although these results cover data up to April 2025, the official plan is to keep taking data until 2028, with the aim of accumulating more than 1,000 days of observations. And many experts keep pointing to the same figure: around 85% of the mass of the universe is dark matter, and although it still escapes us, we are getting closer to knowing what the universe is made of.

Images | Karo K.

In Xataka | The strangest event that humanity has witnessed occurred in 2019 under a mountain in Italy
