Wikipedia has banned using AI to write or rewrite its English-language articles. Human knowledge is starting to raise barriers

The English version of Wikipedia has just banned AI-generated articles. The latest update to its guidelines is clear: content generated with language models violates its content policies. The largest encyclopedia on the internet is positioning itself as a refuge for human-created content. AI, no thanks. The 'AI yes or AI no' debate had been generating tension on Wikipedia for a while, and the community has finally opted to back human content by an overwhelming majority of 40 votes to 2.

The new restriction reads as follows: "Text generated by large language models (…) often violates several of Wikipedia's fundamental content policies." Those fundamental policies are the neutrality of the content, verifiability, and the requirement that content cannot be original research but must be attributed to reliable sources. With this change, editors are prohibited from using LLMs "to generate or rewrite article content."

Two exceptions. Wikipedia contemplates two scenarios in which the use of AI is allowed:

- Basic style suggestions and corrections, as long as the LLM does not introduce content of its own. The guidelines warn that it must be used with caution, since LLMs tend to "go beyond what is asked of them and alter the meaning of the text."
- Translation of articles into other languages, as long as the result is reviewed by a person competent in both languages involved. It is worth noting that Wikipedia has already had trouble in the past because of AI translations.

Why it matters. Wikipedia has positioned itself as a repository of genuinely human content on an internet flooded with artificial material. At a time when distinguishing the authentic from the synthetic is increasingly difficult, the largest encyclopedia in the world is choosing to rely on human authorship as a guarantee of reliability.
There is a certain irony here: Wikipedia rejects AI, but AI keeps drawing on Wikipedia to provide answers, costing the encyclopedia clicks and saturating its servers.

AI-generated vs. human-made. Until recently we thought the solution was to flag artificial content on platforms with the classic 'AI' label, but we have reached a point where it is more valuable and useful to highlight the opposite: that something is made by humans. The advance of image-generation tools and the sheer volume of AI-written text are overwhelming, to the point that an anti-AI current is emerging: some artists are starting to design "badly" to set themselves apart from AI homogenization, browser extensions have been created to bring back the internet before ChatGPT, there are browsers that filter out AI results, and even a 'Not by AI' badge has been created. The point is that it is a David-against-Goliath fight.

The Etsy case. It is perhaps one of the starkest examples of the flood of low-quality AI content. The platform that once presented itself as a refuge for the authentic is today an AI marketplace that also tries to pass itself off as artisanal: Ghibli-style portraits for 20 euros, profiles managed entirely by AI that say things like "I can't wait to draw you"… Etsy allows content made with AI, but requires it to be labeled as such. Nobody does it. Proof that the label is no longer useful.

A key detail. The last paragraph of Wikipedia's guidelines is especially striking because it discusses possible sanctions for those who violate the rule. The problem is how to detect who is using AI. Wikipedia itself admits that "some editors may have writing styles similar to those of large language models" and that "more evidence than mere stylistic or linguistic clues is needed to justify the imposition of sanctions." We have no idea how they are going to do it; what we do know is that AI text detectors miss more often than a fairground shotgun.
Image | Wikipedia, edited

In Xataka | The last barrier against AI is good taste. The problem is that an entire generation is growing up without developing it

In 1993 Microsoft created Encarta to revolutionize knowledge. Twenty years later it would be devastated by a tsunami

It became so popular that its logo and the sound of its intros became two brands as identifiable as those of Nokia or Windows. If, like the person writing this, you went to school or high school between the second half of the '90s and the first half of the 2000s, Encarta needs no great introduction. If not, don't worry; it won't take long. Before Wikipedia offered free online knowledge, and even before internet use became widespread, Microsoft launched a digital encyclopedia that revolutionized the sector and became a phenomenon between roughly 1993 and 2009. Its name: Encarta.

Today, in one of history's ironies, "Encarta" is just another entry in the index of other encyclopedias; but there was a time when it transformed the way we accessed knowledge. Instead of straining their eyes and fingertips leafing through pages in search of information, students began to find it with the click of a button. Encarta offered an agile, convenient and above all didactic way to satisfy curiosity. With articles, yes; but also with videos, audio and even virtual tours and games. You could read about Nepalese temples in the Salvat encyclopedia. Or open Encarta and "tour" one.

Its pull was so strong that it put the old paper encyclopedias in trouble. When the Spanish edition was presented in early 1997, those responsible boasted that the Encarta CD-ROM, a format you could store in a drawer or even a folder, contained information equivalent to 29 volumes and 1.2 meters of shelving. Not only that: Encarta cost 24,900 pesetas, four times less than an equivalent printed encyclopedia. To top it off, its landing in Spain was backed by Santillana, a publishing house with considerable weight in Spanish classrooms. How could anyone compete with that? The product was well received and was published in Spanish and other languages.
It did well until, by the same means through which it had become a phenomenon, it ended up succumbing to the competition. In a way, its success was due to its good instincts in the '90s; its decline, to its inability to adapt in the 2000s. This is its story.

Objective: reinvent the old encyclopedias. In the mid-1980s Microsoft began toying with the idea of creating a digital encyclopedia. The idea was ambitious. The Redmond company wanted nothing less than to rethink the concept and workings of a product as apparently mature and closed as the volumes that publishers' salespeople sold door to door. To debut in a big way, the multinational tried to negotiate a license with the creators of what was probably the most respected publication internationally: the Encyclopædia Britannica. It didn't go well. In the 1980s, the paper volumes of the Britannica sold well and left huge profits. As Enrique Dans recalls, its books cost about $250 to produce and sold for between $1,500 and $2,200, depending on the quality. Why would the firm want to digitize its content onto a CD and risk killing the goose that laid the golden eggs?

Microsoft did not give up and looked for other ways to move the idea forward. It even had a name for the initiative: Project Gandalf. Some time later it signed a contract with Funk & Wagnalls to use its 29-volume New Encyclopedia in a database created at the end of that same decade. To round out its contents, two other Macmillan encyclopedias, the Collier's and the New Merit Scholar, would be added years later. They were not the Britannica, but they would have to do. Still, doubts arose in Redmond about whether the project was viable, and it was shelved. It was picked up again at the turn of the decade, in 1991, when Microsoft decided to go all in.
In 1993 the first edition of the Encarta encyclopedia was launched, including the 25,000 Funk & Wagnalls articles plus extra material such as images and some animations. The tool was convenient, far more agile than the kilometric tomes, and even fun, but it started out with a huge mistake: the aim was off. At the beginning of the '90s many homes still had no PC, and the launch price was exclusionary. When it came out, Encarta cost about $400, which greatly limited its reach. The cost deterred customers and was not far from that of another competitor testing the same niche with a recognizable brand: Compton's, which had launched its own multimedia version in 1990, with text and media such as images and sounds.

Redmond knew how to react, and soon a more aggressive strategy was deployed. Microsoft launched promotions that let you get Encarta for $99, bundled the CD with the Windows software package, and negotiated with manufacturers to preinstall it on their computers, a tactic not unlike the one used with Windows and Office. Microsoft's own promotion gave the final push. The new encyclopedia gained fame and began to chain together editions, get translated into different languages and enrich its content with multimedia. In 1995, abridged versions of some articles were offered to subscribers of the Microsoft Network ISP, and starting in '96, standard and deluxe editions began to be released, the latter an enriched version that could be updated month by month. In 1998 its creators went a step further and acquired the rights to several electronic encyclopedias. The product was growing and, above all, it demonstrated that the sector was undergoing a clear paradigm shift. The best example: in 1996 the once-powerful Britannica ended up being sold off cheaply because of its difficulties. "It allows young and old to explore the world through themes and characters," its promoters boasted in the Spanish market.
And so it was. Through articles, photos, illustrations, graphs, maps, timelines, recordings, videos and even virtual tours, Encarta won over an entire generation of students.

How to build your own specialized AI with the data and knowledge you give it, using NotebookLM

We're going to explain how to have an AI trained on your own data, so you can then ask it questions about that data. For this we will use NotebookLM, which is steadily becoming one of the most useful and productive artificial intelligence tools. What we are going to do is explain, in a very simple way, the main function of this tool: creating a notebook and uploading your own knowledge sources, so that every question you ask is answered based on the information you have uploaded. To process the information and generate answers, the Gemini model is used. It keeps capabilities such as understanding what you write and generating text, but its answers will be based solely on the sources you have added.

An AI specialized in what you give it. The first thing to do is go to notebooklm.google.com and create a new notebook. NotebookLM is a tool you can use for free, although with a limit of 50 sources per notebook; if you want your notebook to have more, you will have to pay. When you create your first notebook, you will be able to upload your first source. You can add sources in several ways: pointing to a website whose data you want to use, or uploading files, which can be PDFs, images, documents or audio, as well as YouTube videos, Drive folders or pasted text. When you upload the first source, your notebook is created. At the top you can give it a specific name to tell it apart. You will have a Sources tab where you can upload new elements as sources, and activate or deactivate whichever ones you want at any time.

If you open the Chat tab, you will find a Gemini-style chat. There you can ask questions about whatever you want, and it will answer using the data from the sources you have uploaded. If the answer to your question is not in the sources, Gemini within NotebookLM will say so clearly, with a message telling you that the documents provided contain no mention of what you asked.
There is a configuration button in the Chat section where you can define the AI's role, its goals in conversations and the length of the responses it gives you. This way, you can adapt it to what you need. You also have a Studio tab, with different types of elements you can create based on the sources or documents you have uploaded. So if you are looking for more than textual answers, you can also create other elements such as audio summaries to listen to whenever you want, presentations or flashcards.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
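The behavior described above, answering only from uploaded sources and explicitly refusing when the sources say nothing, can be illustrated with a toy sketch. To be clear: this is not NotebookLM's API (it exposes no public API for this workflow); the `Notebook` class and its keyword-overlap "retrieval" below are entirely made up for illustration, standing in for the real embedding-based retrieval and Gemini generation.

```python
# Toy sketch of "source-grounded" Q&A: answers come only from added sources,
# and unanswerable questions get an explicit refusal. All names are hypothetical.

def tokenize(text):
    """Lowercase the text and split it into a set of bare words."""
    return set(word.strip(".,?!").lower() for word in text.split())

STOPWORDS = {"what", "who", "when", "is", "was", "the", "a", "an", "of"}

class Notebook:
    def __init__(self):
        self.sources = []  # each source is just a string of text

    def add_source(self, text):
        self.sources.append(text)

    def ask(self, question):
        """Return the source sentence that best overlaps the question,
        or an explicit refusal if no source mentions it at all."""
        q_words = tokenize(question) - STOPWORDS
        best, best_score = None, 0
        for source in self.sources:
            for sentence in source.split("."):
                score = len(q_words & tokenize(sentence))
                if score > best_score:
                    best, best_score = sentence.strip(), score
        if best is None:
            return "The documents provided contain no mention of that."
        return best

nb = Notebook()
nb.add_source("Encarta was launched by Microsoft in 1993. "
              "It was discontinued in 2009.")
print(nb.ask("When was Encarta launched?"))   # grounded in the source
print(nb.ask("Who founded Wikipedia?"))       # not in the sources: refusal
```

The design point mirrors the article: a general-purpose model can answer almost anything from its training data, while a grounded system deliberately narrows itself to your documents, which is exactly what makes "the sources don't mention this" a feature rather than a failure.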

Internet access democratized knowledge, whether you were rich or poor. AI is dismantling that conquest step by step

$250 a month for access to the most advanced AI. That is the figure Google has set. Six months ago, OpenAI set its own at $200. A few days ago Anthropic expanded Claude's tight limits with another $200 tier. In any case, this is not just about paying for technology. It is about buying power and, therefore, about marking distances.

Until recently, talking about AI meant talking about universal access. The free versions of ChatGPT or Gemini were far from their older siblings, yes, but they let you try, learn and benefit from their abilities, even if only a little. Today that has changed. The most powerful version is no longer available to everyone: it has been locked behind a three-digit monthly subscription that doesn't even start with a '1'. It is the beginning of a new gap: not between those who use AI and those who do not, but between those who can automate their tasks, think with help and execute complex workflows, and those who cannot.

What Google or OpenAI offers is not just a better chatbot. It is an operating system for intellectual work: an assistant that not only responds, but understands context, remembers, acts, generates and automates. Tools like Deep Research (OpenAI) or Project Mariner (Google) represent the decisive step toward autonomous agents. They execute tasks that previously took days of human work, and in many cases they do them better. With that, productivity is redefined. But so is inequality. The question is not only what this technology can do, but who can afford it. Because we are not talking about a luxury; we are talking about a tool that multiplies the performance of those who use it. An invisible advantage that impacts everything, from the quality of work to the speed with which goals are achieved. Those who access these models live on another learning curve, in another economy of results. Those who cannot pay for them are trapped in a slower, less capable, more limited version of themselves.
This has a clear echo in history: the machinery of the Industrial Revolution also multiplied productivity, but at first only for those who could afford it. The same is happening now. Advanced AI is starting to consolidate as an elite infrastructure, as if, at the dawn of the internet age, connections had only been available to those who paid $3,000 a year. And that changes the map of knowledge, because these tools don't just inform: they shape how you learn, how you decide, how you compete. Their effect is not immediate or visible, but cumulative. Day after day, those with access will work with less friction, make better decisions, delegate more tasks, and generate more and better content. The rest can only watch, at most with access to the good, but not to the best. And they fall behind. The future already has a price tag.

In Xataka | Google has become the most leading company at night. And the main winner is Android

Featured image | Xataka with Mockuuups Studio, OpenAI
