Wikipedia bet on AI to summarize its articles. Its editors have stopped it with a rebellion

The Wikimedia Foundation has paused an experiment that displayed AI-generated summaries at the top of articles, after an avalanche of criticism from its own editors.

Why it matters. Wikipedia remains one of the last great bastions of human content on the internet, standing against the wave of synthetic content that has degraded other platforms. Its model, built on democratic governance, has just halted a major technological push.

What has happened. He “Simple Summaries” experiment He was born with the intention of making complex articles more accessible through automatic summaries marked as “not verified.” These summaries were made by an aya model of COPE.

The editors responded with comments such as “very bad idea,” “my strongest rejection,” or simply “yuck.”

The background. OpenAI continues to advance its plan to become the next Google, and Google itself has embraced generative AI even in its search engine. In this environment, Wikipedia has maintained the quality of its articles through its commitment to human editing.

In fact, its editors actively filter out AI-generated content, and that makes the platform a reliable refuge for information. You go in knowing there will be no slop.

An example of Wikipedia’s AI summaries (“Simple Dopamine Summary”), marked in red. Image: 404 Media.

Between the lines. These protests speak of something deeper than the simple acceptance of synthetic content:

  • Wikipedia must evolve to attract new generations …
  • … but its editors fear that AI destroys decades of collaborative work.

“No other community has mastered collaboration to such a wonderful degree, and this would throw it away,” said an editor quoted by 404 Media.

Yes, but. The Foundation has not ruled out AI entirely, at least for now. It has promised that any future feature will require “editor participation” and “human moderation workflows.” It sounds like a tactical pause.

In addition, the experiment was born precisely from discussions at Wikimania 2024, when some editors did see potential in this format.

In summary. The question now is whether Wikipedia can maintain its enormous historical relevance, already eroded since ChatGPT entered our lives, without sacrificing the human judgment that distinguishes it.

The answer to that question, which will not arrive tomorrow, will determine whether Wikipedia remains a reasonably reliable source of knowledge… or becomes just another space in the automated noise of the internet.

Featured image | Oberon Copeland @seeyinformed.com on Unsplash

In Xataka | Wikipedia is filling up with AI-generated content. So much so that it already has a team dedicated to finding it
