Wikipedia has banned the use of AI to write or rewrite English-language articles. Human knowledge is starting to raise barriers

The English-language Wikipedia has just banned articles made with AI. The latest update to its guidelines is clear: content generated with language models violates its content policies. The largest encyclopedia on the internet is positioning itself as a refuge for content created by humans. AI? No, thanks.

The 'AI yes or AI no' debate had been generating tension on Wikipedia for a while, and the community has finally opted to back human content by an overwhelming majority of 40 to 2. The new restriction reads as follows: "Text generated by large language models (…) often violates several of Wikipedia's fundamental content policies." The fundamental policies it refers to are neutrality of the content, verifiability, and the rule that content cannot be original research but must be attributed to reliable sources. With this change, editors are prohibited from using LLMs "to generate or rewrite article content."

Two exceptions. Wikipedia contemplates two scenarios in which the use of AI is allowed: basic style suggestions and corrections, as long as the LLM does not introduce content of its own (the guidelines warn that it must be used with caution, since LLMs tend to "go beyond what is asked of them and alter the meaning of the text"); and translation of articles into other languages, as long as the result is reviewed by a person competent in both languages involved. It is worth noting that Wikipedia has already had dramas in the past because of AI translations.

Why it matters. Wikipedia has positioned itself as a repository of genuinely human content on an internet flooded with artificial material. At a time when distinguishing the authentic from the synthetic is increasingly difficult, the largest encyclopedia in the world is choosing to rely on human authorship as a guarantee of reliability.
There is certainly something ironic here: Wikipedia rejects AI, but AI keeps drawing on Wikipedia to provide answers, costing the site clicks and saturating its servers.

AI-generated vs. human-made. Until recently we thought the solution was to flag artificial content on platforms with the classic 'AI' label, but we are already at a point where it is more valuable and useful to highlight the opposite: that something was made by humans. The advance of image-generation tools and the sheer volume of AI-written text are overwhelming, to the point that an anti-AI current is emerging. Some artists are starting to design "badly" to differentiate themselves from AI homogenization, extensions have been created to bring back the pre-ChatGPT internet, there are browsers that filter out AI results, and a 'Not by AI' badge has even been created. The point is that it is David against Goliath.

The Etsy case. It is perhaps one of the starkest examples of the flood of low-quality AI content. The platform that presented itself as a refuge for the authentic is today an AI marketplace that also tries to pass itself off as artisanal. Ghibli-style portraits for 20 euros, profiles managed entirely by AI that say things like "I can't wait to draw you"… Etsy allows content made with AI, but requires it to be labeled as such. Nobody does it. Proof that the label is no longer useful.

A key detail. The last paragraph of Wikipedia's guidelines is especially striking because it talks about possible sanctions for those who violate the rule. The problem is how they plan to detect who uses AI. Wikipedia admits that "some editors may have writing styles similar to those of large language models" and that "more evidence than mere stylistic or linguistic clues is needed to justify the imposition of sanctions." We have no idea how they are going to do it; what we do know is that AI text detectors miss more often than a fairground shotgun.
Image | Wikipedia, edited
In Xataka | The last barrier against AI is good taste. The problem is that an entire generation is growing up without developing it

There is a word whose use has multiplied dramatically in scientific articles, and for one reason: ChatGPT likes it

That there are academic articles written by AI has been proven before; the question is how serious the problem is. To gauge the magnitude of the practice, a group of researchers reviewed millions of paper abstracts published in PubMed and found something interesting: there is a word that AI loves, and the reason it likes it so much is rather murky.

Delve. Roughly, to dig into something in depth. Its use multiplied by 28 between 2022 and 2024, which, not by chance, coincides with the boom of ChatGPT and language models. Other words such as 'underscore' or 'showcasing' are also cited, with frequency increases of x13.8 and x10.7 respectively. None of them is a noun or a word tied to the content of the papers; they have more to do with writing style and are very characteristic of the flowery language that LLMs tend to use.

Flowery language. Does this mean that if we see one of these words in a paper, it was written with AI? Not necessarily, but the increase is brutal. The researchers compared the rise of 'delve' to other keywords, such as 'pandemic', which had a huge peak in 2020 and began to decline in 2021. The increase in the frequency of 'delve' is far more pronounced than any of the others.

It's no coincidence. There is a stage in the process of creating a chatbot like ChatGPT that requires human intervention to fine-tune the responses: what is known as reinforcement learning from human feedback (RLHF). It turns out that most of the workers dedicated to this refinement work are in African countries, such as Nigeria. Guess where the use of these words is quite common in formal English. Exactly: in Nigeria.

African style. 'Delve' is a fairly common word in business English in Africa, especially in Nigeria, and it is not the only one. There are others, like 'leverage', 'explore' or 'tapestry', that are more common in African English.
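The x28 figure above comes down to comparing a word's per-token frequency before and after the ChatGPT boom. A minimal sketch of that calculation, with a made-up toy corpus standing in for the real PubMed data (the abstracts, years and resulting ratio below are purely illustrative):

```python
from collections import Counter
import re

# Toy stand-in for the study's corpus: {year: list of abstracts}.
# The real analysis covered millions of PubMed abstracts.
abstracts_by_year = {
    2021: [
        "We delve into protein folding dynamics.",
        "Results underscore the role of temperature.",
    ],
    2024: [
        "In this paper we delve into protein folding.",
        "We delve into the mechanisms and underscore key findings.",
        "Here we delve into novel dynamics.",
    ],
}

def word_frequency(texts, word):
    """Occurrences of `word` per 1,000 tokens across `texts`."""
    tokens = [t for text in texts for t in re.findall(r"[a-z]+", text.lower())]
    return 1000 * Counter(tokens)[word] / len(tokens)

before = word_frequency(abstracts_by_year[2021], "delve")
after = word_frequency(abstracts_by_year[2024], "delve")
print(f"'delve' frequency multiplied by {after / before:.2f}")
```

Normalizing per 1,000 tokens matters: the raw count of 'delve' could grow simply because more papers are published each year, so the study-style comparison has to be a rate, not a count.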
According to 311 Institute, although human feedback is tiny compared to the enormous amount of training data, it has a great impact, since it is what defines the tone with which the model responds to us.

Data labeling. It is a key step in training large language models, and it requires humans behind it. The problem is that most of the workers who do it come from impoverished countries such as Nigeria, Kenya or India, among others. As if the endless workdays and ridiculous salaries were not enough, workers must often review violent and very explicit images, all without any kind of psychological support.

In Xataka | Being a porn moderator is no fun at all. He was exposed to "extreme, violent, graphic and sexually explicit content"
Image | National Institute of Allergy and Infectious Diseases on Unsplash

Wikipedia bet on AI to summarize its articles. Its editors have stopped it with a rebellion

The Wikimedia Foundation has paused an experiment that displayed AI-generated summaries at the top of articles, after an avalanche of criticism from its own editors.

Why it matters. Wikipedia remains one of the last great bastions of human content on the internet, in the face of the wave of slop that has degraded other platforms. Its model, built on democratic governance, has just stopped a significant technological rollout.

What happened. The "Simple Summaries" experiment was born with the intention of making complex articles more accessible through automatic summaries marked as "unverified." The summaries were generated by Cohere's Aya model. The editors responded with comments such as "very bad idea", "my strongest rejection" or simply "Yuck".

The background. OpenAI continues to advance its plan to become the next Google, and Google itself has embraced generative AI even in its search engine. In this environment, Wikipedia has maintained the quality of its articles through its commitment to humans. In fact, its editors actively filter out AI-generated content, and that makes the platform a reliable refuge for information: you read it knowing there will be no slop.

Marked in red, an example of Wikipedia's summaries. Image: 404 Media.

Between the lines. These protests speak of something deeper than the mere acceptance of synthetic content: Wikipedia must evolve to attract new generations, but its editors fear that AI will destroy decades of collaborative work. "No other community has mastered collaboration to such a wonderful degree, and this would throw it away," said an editor quoted by 404 Media.

Yes, but. The Foundation has not ruled out AI completely, at least for now. It has promised that any future feature will require "editor participation" and "human moderation workflows." It sounds like a tactical pause.
Moreover, the experiment was born precisely out of discussions at Wikimania 2024, when some editors did see potential in this format.

In short. The question now is whether Wikipedia will be able to maintain its enormous historical relevance, already eroded since ChatGPT came into our lives, without sacrificing the human judgment that distinguishes it. The answer to that question, which will not arrive tomorrow, will determine whether Wikipedia remains a reasonably reliable source of knowledge or becomes just another space in the automated noise of the internet.

Featured image | Oberon Copeland @seeyinformed.com on Unsplash
In Xataka | Wikipedia is being filled with AI-generated content. So much so that it already has a team dedicated to finding it

Pocket was the place where you saved articles you never read. The dopamine hit of social networks has killed it

I was a Pocket user. And like many others, I still didn't read the articles I saved there. That was the doom of a service that bet on deferred, unhurried reading of all kinds of articles, especially long ones, but found an apparently invincible nemesis: social networks.

Mozilla closes Pocket. The Mozilla organization has announced that it is shutting down the Pocket service. On July 8 the platform will stop offering articles and will enter "export-only mode": users can export their saved articles until October 8, 2025, at which point "user data will be permanently deleted."

Reasons. According to those responsible, although Pocket has helped millions save articles and discover stories worth reading, "the way people use the web has evolved, so we're channeling our resources into projects that better match their browsing habits and online needs."

A great service… Pocket was born in 2007 as Read It Later, a service that let you bookmark articles to read calmly whenever it suited you. The idea gained traction by focusing on long pieces of both journalism and creative writing. Mozilla bought it in 2017 and made it one of the star services associated with its Firefox browser.

…that we didn't use that much. Many users will agree with me that Pocket was fantastic, but we didn't take advantage of it. I kept saving articles regularly in the hope of reading them on my Kobo e-book reader (which had this function built in), but I rarely ended up doing so. And on top of that, we used it a bit badly. Pocket was so simple and comfortable to use that many of us ended up using it not only to save promising, though not always wonderful, long texts (longform), but all kinds of links to news or even tweets. And when you opened Pocket, two things used to happen. The first: the feeling of having yet another obligation in front of you, another to-do list in the form of an (endless) list of articles to read.
The second: diving for the short items you knew you could consume quickly, just to be able to "clear" them from the list. Checking "Read it later" had become "get it off the list later." But it was still a fantastic service, and it was not we who killed it. The culprit is another.

Doomscrolling. Social networks have stolen our capacity to concentrate. The dopamine hit they offer us with the famous doomscrolling has proven unstoppable. We love sliding the screen vertically on our phones to see the next piece of content, and that immediacy and instant gratification have ended up wrecking our attention span. Farewell, Pocket 🙁

Slot machine. The algorithms that govern social networks are inspired by slot machines. Their goal is to generate addiction and keep us glued to the platform in question without ever leaving it. A 2021 study revealed precisely the diabolical simplicity of how we deal with this content. The experiment was overwhelming: one group of participants was shown a single video and asked whether they preferred to watch another or perform a certain task; another group was shown five videos and asked the same question. The second group was much more predisposed to keep watching videos. Then the two groups watched the same number of videos, but the first group watched more diverse videos and the other more homogeneous ones. The second group again showed its predisposition to watch more videos instead of moving on to another task.

And echo chambers. These results reflect our current reality. Social networks not only serve up infinite content; they do so while locking us ever more tightly into echo chambers of homogeneous content that reinforces our tastes and opinions. In Pocket we probably also built ourselves a big echo chamber, true, but at least we built it, not an algorithm.
The problem is that setting aside 5, 10 or 15 minutes to read a long article is increasingly difficult in the face of the avalanche of images, texts and, above all, short videos that are always suggestive, flashy and fun.

A Google Reader moment. Pocket's closure is somewhat reminiscent of what we lived through with Google Reader, the RSS feed reader that the search giant killed because, although we loved it, it probably wasn't profitable. As in that case, Pocket was a fantastic product but also a very niche one. And even within that niche, underused.

Digital Diogenes. In fact, Pocket contributed to our digital Diogenes syndrome. It was the place where you saved, saved and saved articles you never read. In that sense it was less functional than Google Reader, which, when you did use it, you used to actually read the headlines coming in from your RSS feeds. But that syndrome of saving everything without ever consuming or enjoying it occurs in many other scenarios, like the photos and videos on our phones, or the video games we download and will never play. In fact, it's not that we no longer play: it's that we prefer to watch others play.

Alternatives. Pocket's death inevitably makes us look for alternatives to keep saving articles we may never read. Instapaper stands out among them, but there are also Readwise, Wallabag, Raindrop and Mymind. For those who have a Kobo there is also an option, though not as direct a one. Damn.

Image | Mozilla
In Xataka | Internet, let me forget
