The Internet has never had so many users or so much content, and yet it increasingly looks like a handful of repeated screens. Much of what we read, watch or search for passes through the algorithms of a few large platforms, which compete for our attention and convert many of our clicks into measurable data.
In the midst of this standardized landscape, some projects survive that operate with a different logic, such as Wikipedia, OpenStreetMap or the Internet Archive. They are not financed by ads, do not sell detailed profiles of their users and continue to defend an idea that is at once simple and demanding: that information and knowledge should be a shared good.
The web did not begin as a showcase for large platforms, but as a dispersed and almost artisanal laboratory. In the early nineties, those who published on the web did so from university, institutional or domestic servers, using open standards such as HTML, HTTP and URIs. These were fundamental pieces of a network designed so that information could circulate without depending on technology owners or closed systems.
This technical architecture fueled the idea that the Internet could be an open and accessible space.
The Internet was not born to sell data: the market found a way to do it
That enthusiasm, however, coexisted with obvious limits. As noted, participation was concentrated in universities, research centers and a minority of enthusiasts with technical knowledge and resources. The figures of the time show that only a tiny fraction of the world’s population had access to the Internet, which means that this supposed openness was real in technological terms, but not socially widespread.
Starting in the mid-nineties, and especially at the end of that decade, the Internet began to receive more attention. Companies saw economic potential in a network that connected millions of people and allowed information and services to be distributed on a global scale. Commercial providers, popular browsers and the first portals emerged, and with them came the logic of the market: there was traffic, there were users and, therefore, there were business opportunities. Access to the web stopped being an experiment and became a massive, measurable and profitable activity.
This change promoted a model that would quickly consolidate: targeted advertising. It was not just about showing ads, but about analyzing user behavior and obtaining data about their interests, habits and preferences. It was the moment when human attention began to acquire a concrete economic value. Clicks, dwell time and browsing patterns ceased to be technical traces and became raw material for a new digital market.
In this increasingly commercialized context, some projects maintained another way of understanding the Internet. They were not born to attract traffic or to compete for attention, but to build public information infrastructures. Wikipedia was launched in 2001 with a goal that seemed unrealistic at the time: to create a free, collectively written encyclopedia available to anyone with an Internet connection.
OpenStreetMap began its journey in 2004 with a similar idea, but applied to the territory, collaboratively documenting the streets, roads and places of the world. Since 1996, the Internet Archive had been preserving pages, documents, audio and video so that they would not disappear over time.


Two decades later, these projects are not only still active, but are central pieces of the current web. Millions of people consult Wikipedia every day to check a fact, understand a context or learn something new. OpenStreetMap maps power everything from mobile applications to public services and humanitarian projects. And the Internet Archive has become a long-term digital memory, a place where the web is not deleted but preserved. They are collectively built initiatives that have achieved global impact without adopting the dominant business model.
Wikipedia is supported by millions of small donors, most of them readers who contribute small amounts, usually around ten euros a year. The Wikimedia Foundation manages these resources and maintains the technical infrastructure, including servers, software development and security systems. It also manages the Wikimedia Endowment, an independent fund created in 2016 to ensure that the project can continue operating even if revenue drops one year. Since 2021 there is also Wikimedia Enterprise, a way for organizations that intensively reuse content, such as search engines or artificial intelligence companies, to access structured and stable versions of the data.
Financed on the shoulders of the people
OpenStreetMap has a different and much more decentralized structure. The OpenStreetMap Foundation is responsible for servers and general coordination, but much of the work comes from local communities that organize events, training and collaborative mapping tasks. The financing comes in the form of voluntary dues, technical sponsorships and support from organizations that use the data in logistical, humanitarian or educational projects.
In the case of the Internet Archive, the costs stem from an infrastructure that stores millions of pages, documents and files, financed through individual donations, grants from foundations and public organizations, and archiving and digitization services for institutions.


When we talk about open projects, it is easy to confuse openness with an absence of organization. However, their operation is based on explicit rules and distributed structures. Wikipedia exemplifies this better than anyone. Editorial decisions are not made by a small group, but by thousands of people who apply public standards such as the neutral point of view or verifiable content. What matters is not the profile of the person contributing, but whether their contribution meets those criteria. Administrators can intervene to protect pages or resolve disputes, but their role is primarily technical and related to maintenance, with no hierarchical editorial authority over content.
OpenStreetMap works with a similar logic, but applied to geographic data: the information is built locally and reviewed collectively to ensure consistency. Regional communities coordinate tasks, organize meetings and define practices, but the base remains open. In the case of the Internet Archive, the process is not so much editing as cataloging and preservation, and external collaboration focuses on improving the quality of the records and avoiding the loss of digital documents.
Living with the technological giants means accepting that a good part of the access to these projects comes through them. A large portion of readers reach Wikipedia from a search engine, not by typing the address by hand, and many maps based on OpenStreetMap are presented within commercial applications where the visible brand is someone else's. The Internet Archive, for its part, acts as a reference repository for journalists, researchers and organizations, but the average user is barely aware that an independent infrastructure supports all of this.
This dependence creates new tensions. Artificial intelligence models and large search services reuse content and data generated by volunteer communities on a large scale, sometimes without clear visibility of the original source. This increases the load on the servers, complicates infrastructure planning and can reduce the projects' direct exposure to the general public, exactly the public on which they depend to continue receiving support. The creation of services like Wikimedia Enterprise is part of that adaptation: regulating mass access without giving up the original mission.
The future of these projects is marked by a constant challenge: to remain useful without giving up their founding principles. Artificial intelligence, advanced search engines and systems that automatically reuse information increase dependence on open sources, but they can also hide them from the user, reducing their public visibility. Wikipedia, OpenStreetMap and the Internet Archive face a scenario in which their content is consumed more than ever, but in many cases without those who consult it knowing where it comes from. This invisibility does not put their usefulness at risk, but it can affect their sustainability, especially if direct community support shrinks.
The open projects are still there, but they are not guaranteed either. They require stable infrastructure, mechanisms to maintain quality, and active communities that continue to contribute and review. They are part of the Internet's knowledge architecture, and the question that remains open is whether digital society will be able to keep sustaining them as common goods, or whether they will end up silently integrated into commercial services that use their data, but not their values.
Images | Xataka with Gemini 3 | Screenshot
In Xataka | Have I been Trained: how to know if your data and work has been used to train an artificial intelligence
