This engineer found 1,351 loose photos in his grandmother’s house. He ended up building a personal Wikipedia of his entire life

It all started with a closet full of old loose photos. Last year an engineer named Jeremy visited his grandmother’s house for the first time since the pandemic and unknowingly came across a treasure: 1,351 photos on paper, with no order, no dates and no context. Some were in black and white, from when his grandparents were 20 years old. Others were of his mother as a baby. The most recent were of him in high school, just before smartphones arrived and everything moved to the cloud. What began as a family organization exercise became a fascinating project over the weeks: a personal encyclopedia. A Wikipedia of his own life.

First, the physical photos and the grandmother. The first problem he ran into when starting the project is that physical photos have no EXIF metadata. There is almost never a capture date (although some cameras printed one on the photo), there are no GPS coordinates, and there is nothing that allows them to be sorted easily. Jeremy resorted to a much more direct solution: sitting down with his grandmother and asking her about the photos. In that conversation she rearranged the photos of her wedding and narrated the details while he took notes: names, places, who was sitting where, what each ritual meant. With those notes, he set up a local instance of MediaWiki, the same software Wikipedia uses, and wrote a page about the wedding following the same format Wikipedia used for the 2011 royal wedding of Prince William and Kate Middleton. Within two afternoons he had a complete article with scanned photos, captions, links to empty pages about each person mentioned, and links to the real Wikipedia to give historical context to the events.

Digital photos and Claude Code to get the job done. Jeremy realized the project could go much further, and took the opportunity to run tests with digital photos, which do have EXIF data with date and time and even GPS coordinates.
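As an illustration of the article format described above, here is a minimal sketch of what such a MediaWiki page might look like in wikitext. All names, dates, and file names here are hypothetical, invented for the example; the source does not show Jeremy's actual markup:

```wikitext
{{Infobox event
| title = Wedding of Tara and Suresh
| date  = 14 May 1968
| venue = Family home, Mysore
}}
The '''wedding of Tara and Suresh''' took place on 14 May 1968.

== Ceremony ==
[[File:Wedding_scan_042.jpg|thumb|The couple during the ceremony.]]
The ceremony was attended by [[Uncle Ravi]] and [[Aunt Leela]].
<!-- Red links like the two above become empty pages to fill in later. -->

== Historical context ==
India held its [[wikipedia:1967 Indian general election|general election]] the previous year.

[[Category:Family events]]
```

Links to people who don't yet have pages render as red links, which is exactly how Wikipedia itself tracks missing articles; the `[[Category:…]]` tag at the bottom groups related pages automatically.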
With that information he wanted to see how far he could go without interviews, so he took 625 photos from a family trip to Coorg (India) in 2012, put them in a folder and opened Claude Code in that directory with a simple instruction: compose a Wikipedia page by browsing the images. The model used ImageMagick to create contact sheets that allowed it to process multiple photos at once, and it did the rest. The result was a detailed draft chronicling the trip, organized by time of day. Without location data, with nothing but timestamps and visual content, the model was able to identify the places that appeared in the photos, including some that Jeremy himself had forgotten. It even detected the means of transportation used between destinations just from what it saw in the images.

When AI starts remembering for you. Then came the most ambitious experiment: going further with a trip he took to Mexico City in 2022. He had 291 photos and 343 videos taken with an iPhone 12 Pro, with GPS coordinates in the metadata, but he also exported his Google Maps location history, his Uber trips, his banking transactions and his Shazam history. With all that data as sources, the model was able to cross-reference banking transactions with location data and identify the restaurants where he had eaten. For example, it found images of a soccer match among the photos, but Jeremy did not remember which teams were playing; the model worked it out by cross-referencing those photos with his bank transactions, where it found a Ticketmaster charge carrying the name of the tournament and the teams, and incorporated them into the page. It also used the Shazam history to describe the music playing in each location.

From photos and memories to a personal encyclopedia. A wonderful project that anyone can now replicate thanks to the whoami.wiki website. First the trips, then the friendships: what started as a travel documentation project evolved into something more personal.
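The timestamp-only ordering described above rests on one detail of the EXIF standard: the `DateTimeOriginal` tag stores the capture time as a colon-separated string. A minimal sketch of sorting photos chronologically from that tag, assuming the tag values have already been read out (for instance with Pillow's `Image.getexif()`); the file names and timestamps below are invented for the example:

```python
from datetime import datetime

# EXIF stores DateTimeOriginal as "YYYY:MM:DD HH:MM:SS"
EXIF_FORMAT = "%Y:%m:%d %H:%M:%S"

def parse_exif_datetime(value: str) -> datetime:
    """Parse an EXIF DateTimeOriginal string into a datetime object."""
    return datetime.strptime(value, EXIF_FORMAT)

def sort_photos_by_timestamp(photos: dict[str, str]) -> list[str]:
    """Return file names sorted by their EXIF timestamp, oldest first."""
    return sorted(photos, key=lambda name: parse_exif_datetime(photos[name]))

# Hypothetical sample data from a 2012 trip
photos = {
    "IMG_0312.JPG": "2012:06:16 14:05:09",
    "IMG_0288.JPG": "2012:06:15 09:12:41",
    "IMG_0330.JPG": "2012:06:16 18:47:30",
}
print(sort_photos_by_timestamp(photos))
# ['IMG_0288.JPG', 'IMG_0312.JPG', 'IMG_0330.JPG']
```

Once the photos are in chronological order, grouping them "by time of day" as the draft did is just a matter of bucketing the parsed datetimes by their `hour` attribute.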
The Facebook, Instagram and WhatsApp archives contained some 100,000 messages and several thousand voice notes exchanged with close friends over a decade. The model managed to convert all that information into a unique biography, identifying key episodes in the protagonists’ lives, and turned them into pages that, according to Jeremy, “read as if they were written by someone who knew us both.” When he shared the pages with those friends, they couldn’t stop reading the stories and wanted more.

MediaWiki as a master ingredient. One of the most interesting decisions of the project is the choice of software. MediaWiki, Wikipedia’s engine, turned out to be an extraordinarily suitable tool for this use case. AI models understand it perfectly because they have been trained on millions of Wikipedia pages and know its structure and workings. Discussion pages serve to track the development of each page, categories group pages by topic, and the revision history records how each page evolves. All of this infrastructure already existed, and there was no need to create a new platform to organize the information Jeremy was providing.

Surprises. At the end of his story, Jeremy explains that after the process: “I realized that I was no longer alone working on a family history project. What I had been creating, page by page, was a personal encyclopedia. A structured, navigable, interconnected record of my life compiled thanks to the data I already had around me.” Documenting his grandmother’s life revealed things he didn’t know: her years as a single mother, for example, or the decisions she had to make. Going through the history of his friendships allowed him to recover moments he had almost forgotten and prompted him to call some of those friends to remember them together. “The encyclopedia not only organized the data, it made me pay more attention to the people in my life,” he explained.

You can do it too.
The project has been so rewarding for him that he …

Wikipedia has banned using AI to write or rewrite articles in English. Human knowledge is starting to raise its barriers

The English version of Wikipedia has just banned articles made with AI. The latest update of its guidelines is clear: content generated with language models violates its content policies. The largest encyclopedia on the internet is positioning itself as a refuge for content created by humans.

AI, no thanks. The ‘AI yes or AI no’ debate has been generating tension on Wikipedia for a while, and its editors have finally opted to back human content by an overwhelming majority: 40 votes to 2. The new restriction reads as follows: “Text generated by large language models (…) often violates several of Wikipedia’s fundamental content policies.” The fundamental policies it refers to are neutrality, verifiability, and the ban on original research: content must be attributed to reliable sources. With this change, editors are prohibited from using LLMs “to generate or rewrite article content.”

Two exceptions. Wikipedia contemplates two scenarios in which the use of AI is allowed:

- Basic style suggestions and corrections, as long as the LLM does not introduce content of its own. The guidelines warn that it must be used with caution, since LLMs tend to “go beyond what is asked of them and alter the meaning of the text.”
- Translation of articles into other languages, as long as the result is reviewed by a person competent in both languages involved. It is worth noting here that Wikipedia has already had dramas in the past because of AI translations.

Why it matters. Wikipedia has positioned itself as a repository of genuinely human content on an internet flooded with artificial content. At a time when distinguishing the authentic from the synthetic is increasingly difficult, the largest encyclopedia in the world chooses to rely on human authorship as a guarantee of reliability.
There is certainly something ironic here: Wikipedia rejects AI, but AI continues to draw on Wikipedia to provide answers, costing it clicks and saturating its servers.

AI-generated vs human-made. Until recently we thought the solution was to flag artificial content on platforms with the classic ‘AI’ label, but we are already at a point where it is more valuable and useful to highlight the opposite: that something is made by humans. The advance of image generation tools and the volume of AI-written text are overwhelming, to the point that an anti-AI current is emerging: some artists are starting to design “badly” to differentiate themselves from AI homogenization, extensions have been created to bring back the pre-ChatGPT internet, there are browsers that filter out AI results, and even a ‘Not by AI’ badge has been created. The point is that it is a David-against-Goliath fight.

The Etsy case. It is perhaps one of the starkest examples of the flood of low-quality AI content. The platform that presented itself as a refuge for the authentic is today an AI marketplace that also tries to pass itself off as artisanal: Ghibli-style portraits for 20 euros, profiles managed entirely by AI that say things like “I can’t wait to draw you”… Etsy allows content made with AI, but says it has to be labeled as such. Nobody does it. Proof that the label is no longer useful.

A key detail. The last paragraph of Wikipedia’s guidelines is especially striking because it talks about possible sanctions for those who violate the rule. The problem is how they plan to detect who uses AI. Wikipedia admits that “some editors may have writing styles similar to those of large language models” and that “more evidence than mere stylistic or linguistic clues is needed to justify the imposition of sanctions.” We have no idea how they are going to do it; what we do know is that AI text detectors are wildly unreliable.
Image | Wikipedia, edited

In Xataka | The last barrier against AI is good taste. The problem is that an entire generation is growing up without developing it

Elon Musk is trying to win the AI race by creating the Wikipedia of AI. We have many questions

Grokipedia, the new online encyclopedia created by xAI, is now available. The project Elon Musk has been talking about for some time is just what we expected: a version of Wikipedia in which the content has been generated by Grok, the AI model developed by Musk’s company. And that is precisely the problem.

What is Grokipedia. Basically, a copy of Wikipedia in which, as we say, the writing of the texts is done by Grok. The design is simple, with a home page that is a search engine. The articles follow Wikipedia’s design, with its structure of headings and photos. For the moment there do not seem to be any photos in the articles, and Grokipedia does not currently let users edit the pages either.

If AI makes mistakes, how can we trust AI? That is the essential question that determines the validity of the Grokipedia idea. Considering that AI makes things up and makes mistakes, what can you expect from an online encyclopedia created by an AI model?

Grokipedia on the left, Wikipedia on the right. The PS5 article is an absolute copy of the Wikipedia original.

Content “adapted” or directly copied from Wikipedia. Some Grokipedia pages display a message saying the content has been adapted from Wikipedia under the Creative Commons Attribution-ShareAlike 4.0 license. This happens, for example, with the article dedicated to the MacBook Air. In other articles, such as the PlayStation 5’s, that message falls short, because the article is basically the same as Wikipedia’s.

An encyclopedia with biases. In Grokipedia there are signs that the theoretical neutrality and objectivity that should be fundamental pillars of such a project are faltering.
As Wired has reported, there are worrying examples, such as the entry on the slavery of African Americans in the US, which talks about its “ideological justifications.” An entry on “gay porn” shows false information indicating that the proliferation of this content fueled the AIDS epidemic in the 1980s. In the entry on gender, Grokipedia states that “gender refers to the binary classification of humans as males or females based on biological sex,” while Wikipedia’s entry starts by stating that “Gender is the range of social, psychological, cultural and behavioral aspects of being a man (or boy) or woman (or girl), or a third gender.”

In the image and likeness of Elon Musk. The article about Elon Musk contains 11,000 words and 300 citations/references, compared to the 8,000 words and 523 references of its Wikipedia version. Both encyclopedias hold curiosities in that article: Wikipedia has a section dedicated to Musk’s controversial salute, which is absent from Grokipedia, while Grokipedia does mention the “fart guy” controversy, which is not on Wikipedia.

This is just the beginning. This version “0.1” of Grokipedia contains 885,000 articles, while Wikipedia has more than 8 million entries. In 2017 Elon Musk posted a tweet praising the work of Wikipedia, but over time that perception changed, probably because of the content of the Wikipedia entry about him. This year he tweeted the message “Stop financially supporting Wikipedia until balance is restored!”

The danger. Although Elon Musk assures that Grokipedia is open source and anyone can use it for free, it remains to be seen how much ability users will have to edit articles created by AI. The risk is that this project amounts to a new attempt to control the conversation; as entrepreneur Gary Marcus puts it, “whoever writes the encyclopedia controls the narrative.”

Jimmy Wales warns.
The creator of Wikipedia, Jimmy Wales, said in an interview with The Washington Post a few days ago that he was curious to see what Grokipedia would end up being, but that he did not have high expectations for the result. For him, AI language models “are simply not good enough to write encyclopedia articles. There will be a lot of errors.” Lauren Dickinson, spokesperson for the Wikimedia Foundation, told The Verge that “Wikipedia’s knowledge is and always will be human.”

Problems for the free, human-created encyclopedia. Even so, Wikipedia is threatened by AI, not only because this legendary online encyclopedia has been the great training manual for AI models, but because it is suffering a traffic crisis. The xAI project is the latest attack on that source of knowledge and information: where Wikipedia keeps control and editing entirely in human hands, Grokipedia cedes those editing and writing tasks to xAI’s AI model, Grok.

Image | dvids

In Xataka | There is a reason why Wikipedia resists as the last human bastion against AI: because its editors rebelled

Wikipedia opted for AI to summarize its articles. Its editors have stopped it with a rebellion

The Wikimedia Foundation has paused an experiment that showed AI-generated summaries at the top of articles, after an avalanche of criticism from its own editors.

Why it matters. Wikipedia remains one of the last great bastions of human content on the internet, in the face of the wave of AI slop that has degraded other platforms. Its model, built on democratic governance, has just stopped a major technological rollout.

What has happened. The “Simple Summaries” experiment was born with the intention of making complex articles more accessible through automatic summaries marked as “unverified.” These summaries were produced by Cohere’s Aya model. The editors responded with comments such as “very bad idea”, “my strongest possible rejection” or simply “Yuck”.

The background. OpenAI continues to advance its plan to become the next Google, and Google itself has embraced generative AI even in its search engine. In this environment, Wikipedia has maintained the quality of its articles through its commitment to human authorship. In fact, its editors actively filter out AI-generated content, and that makes the platform a reliable refuge for information: readers browse it knowing there will be no slop.

Marked in red, an example of Wikipedia’s summaries. Image: 404 Media.

Between the lines. These protests speak of something deeper than the simple acceptance or rejection of synthetic content: Wikipedia must evolve to attract new generations, but its editors fear that AI will destroy decades of collaborative work. “No other community has mastered collaboration to such a wonderful extent, and this would throw it away,” said an editor quoted by 404 Media.

Yes, but. The Foundation has not ruled out AI completely, at least for the moment. It has promised that any future feature will require “editor participation” and “human moderation workflows.” It sounds like a tactical pause.
In addition, the experiment was born precisely from discussions at Wikimania 2024, when some editors did see potential in this format.

In summary. The question now is whether Wikipedia will be able to maintain its enormous historical relevance, already eroded since ChatGPT entered our lives, without sacrificing the human judgment that distinguishes it. The answer to that question, which will not arrive tomorrow, will determine whether Wikipedia remains a reasonably reliable source of knowledge … or just another space in the automated noise of the internet.

Featured image | Oberon Copeland @seeyinformed.com on Unsplash

In Xataka | Wikipedia is being filled with AI-generated content. So much so that it already has a team dedicated to finding it

How to make ChatGPT or Gemini pull information from Wikipedia as their only source

Let’s explain how you can make ChatGPT queries using Wikipedia as the only source. Sometimes, when you ask an artificial intelligence about something, it can make mistakes, and even when you ask it to explain something using information from the internet, it may draw on unreliable sources to generate the answer. Wikipedia, meanwhile, has positioned itself for many years as a great source of collective knowledge. Yes, it also has errors, but there are far fewer. So we are going to give you and explain a prompt you can use in ChatGPT as well as in Copilot, DeepSeek or Gemini.

Ask the AI to use Wikipedia. What we want is for the AI chat we use to rely on Wikipedia as its only source of information. That way, it will search there for what you asked, take the written information, and generate an answer based on it. We can run into two problems. The first is that, in addition to Wikipedia, it may also use other sources, so not everything comes from a single site. It may also happen that the answer is too technical. We will solve both things with the prompt.

This is the prompt we recommend: Explain to me in a simple way what XXXX is, taking the information only from Wikipedia. Here, what you have to do is change the XXXX to whatever you want it to explain. You can also ask it to explain who a person is, or adapt it however you need for the request you have in mind. What we have done in this prompt is add “in a simple way” so that the answer it generates is colloquial. Besides, we have added the term “only” to specify that it should use just Wikipedia as a source and not obtain data from any other web page.

In Xataka Basics | How to improve ChatGPT responses: 9 steps to guarantee higher quality and better sources
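If you reuse this prompt often, it is simple to template it so only the topic changes. A trivial sketch (the function name is our own, not from any library):

```python
def build_wikipedia_prompt(topic: str) -> str:
    """Compose the recommended prompt, restricting the answer to Wikipedia."""
    return (
        f"Explain to me in a simple way what {topic} is, "
        "taking the information only from Wikipedia."
    )

print(build_wikipedia_prompt("quantum entanglement"))
```

The resulting string can then be pasted into ChatGPT, Copilot, DeepSeek or Gemini exactly as described above.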
