AI-assisted translations on Wikipedia ended up sneaking in errors and references that didn't add up.

Artificial intelligence has become an everyday tool for millions of people. Many now use it to write emails, summarize documents or translate texts in a matter of seconds. This speed, however, has a less visible side: generative systems can also make mistakes, invent data or alter sources without the user immediately noticing. When these errors appear in one of the largest encyclopedias in the world, the situation changes completely. That is precisely what has happened on Wikipedia with a series of translations carried out with the help of AI.

The opening episode. It all started within the Wikipedia community itself. Some editors began reviewing recent translations and noticed something strange: certain texts included phrases that did not appear in the cited sources, or references that did not seem to fit what the article stated. According to 404 Media, these translations were part of a project promoted by an organization that sought to expand the presence of Wikipedia content in different languages, using language models to speed up the process.

When the translation invents. As editors examined these translations in more detail, the problems became more evident. One of the cases cited by 404 Media is a draft article about the French noble family La Bourdonnaye. The translated text included a reference to a book, and a specific page, to explain the family's origin. However, when editor Ilyas Lebleu, known on Wikipedia as Chaotic Enby, checked that source, they discovered that the cited page was incorrect. Lebleu added that a quick review of several translations also turned up swapped references, unsourced sentences, and cases in which paragraphs were added based on material unrelated to the subject.

Published or still in draft. The case also raised a relevant question: had these errors appeared in already published articles, or were they detected during the review process?
At least one of the problematic examples was identified in a draft translation, which allowed editors to revise it before it went live. From the material available here, however, it cannot be said how many flawed translations were published and how many remained under review.

Who is behind these translations. This is where the Open Knowledge Association (OKA) comes in, a non-profit organization that says it works to improve Wikipedia and other open platforms. As the organization explains on its website, its model consists of offering monthly stipends to collaborators and translators who work full-time expanding the encyclopedia's content, "taking advantage of AI (large language models) to automate most of the work." According to 404 Media, the editors who investigated the project concluded that it relied on contractors.

The editors' response. As more problematic examples surfaced, the Wikipedia community decided to intervene. Editors reviewed how the translation project operated and ended up imposing new restrictions on its participants. OKA-linked translators who accumulate four strikes for unverifiable content within a six-month period may be blocked without further notice if a new case appears. In addition, content added by a translator who ends up blocked may be removed preventively, unless another editor in good standing takes responsibility for reviewing it.

OKA explains itself. The organization at the center of the debate also offered its version of events. Jonathan Zimmermann, founder and president of the Open Knowledge Association, told 404 Media that the project's translators work on an hourly basis and that there is no fixed quota of articles per week. He admitted that "errors happen," but argued that the system includes human verification and review of sources.
Following the discussion on Wikipedia, he added, the organization is introducing a second review with another AI model to detect possible errors before publishing, and is studying the possibility of adding peer-review mechanisms if necessary. Images | Oberon Copeland @veryinformed.com | Luke Chesser

There is a risk with AI agents and accumulated errors: they can become a game of "broken telephone"

In the game of "telephone" (also known as "broken telephone"), a group of people passes a message from one person to the next in secret. What usually happens is that the original message has little to do with what the last recipient hears. The problem we are now seeing is that something similar can happen with the promising world of AI agents.

Accumulated errors. Toby Ord, a researcher at the University of Oxford, recently published a study on AI agents. In it he discussed how these systems suffer from accumulated, or compound, error. An AI agent autonomously chains several stages together to try to solve a problem we set it, for example writing code for a certain task, but if it makes a mistake at one stage, that error accumulates and becomes more worrying at the next stage, and more at the one after that, and even more at the next. The accuracy of the solution is thus compromised, and the result may have little (or nothing) to do with one that would actually solve the problem.

AI can program, but not for long stretches at a time. What this expert proposed was the introduction of a so-called "half-life" of an AI agent, which helps estimate the success rate as a function of the length of the task the agent is asked to solve. For example, an agent with a two-hour half-life would have a 50% success rate on two-hour tasks. The message is sobering: the longer an AI agent works, the more its success rate declines. Benjamin Todd, another AI expert, expressed it differently: an AI can program for an hour with (barely) any errors, but not for 10 hours. These are not exact or definitive figures, but they express the same problem: AI agents cannot, at least for the moment, run indefinitely, because accumulated errors drag down the success rate.

Humans aren't spared either. But be careful, because something very similar happens with human performance on prolonged tasks.
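The two ideas above, compound error and an agent "half-life", can be sketched numerically. This is a minimal illustration of the math, not Ord's exact formulation; the function names and the assumption that steps fail independently are mine.

```python
def success_rate(task_hours: float, half_life_hours: float) -> float:
    # Half-life model: the success rate halves every time the task
    # length grows by one half-life, so an agent with a two-hour
    # half-life completes a two-hour task 50% of the time.
    return 0.5 ** (task_hours / half_life_hours)


def chained_success(per_step_success: float, steps: int) -> float:
    # Compound error: a chain succeeds only if every step does, so a
    # small per-step error rate still sinks long chains.
    return per_step_success ** steps
```

Under these assumptions, `success_rate(2, 2)` gives 0.5, and `chained_success(0.99, 100)` drops to roughly 0.37: the accumulated-error effect in miniature.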
Ord's study points out how sharply the empirical success rate falls: after 15 minutes it is already around 75%, after an hour and a half it is 50%, and after 16 hours it is just 20%. We can all make mistakes when performing chained tasks, and if we slip up in one of them, that error condemns everything that comes after it in the chain even more.

LeCun already warned us. Yann LeCun, who leads Meta's AI research efforts, has long been pointing out the problems with LLMs. In June 2023 he explained how autoregressive LLMs cannot be made factual and avoid toxic responses: there is a non-negligible probability that each token a model generates takes it outside the set of correct answers, and the longer the answer, the harder it is for it to remain correct.

Hence the importance of error correction. To avoid the problem, we need to reduce the error rate of AI models. This is well known in software engineering, where early code review is always recommended as part of a "shift-left" strategy in the software development cycle: the sooner an error is detected, the easier and cheaper it is to correct. The reverse also holds: the cost of correcting an error grows exponentially the later it is detected in the life cycle. Other experts point out that reinforcement learning (RL) could solve the problem, to which LeCun responded that it would, if we had infinite data with which to polish the model's behavior, which we do not.

More than agents: multi-agents. At Anthropic they recently demonstrated a way of mitigating errors (and the accumulated errors that follow) even further: using multi-agent systems.
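LeCun's token-by-token argument can be sketched the same way. This is my simplification of it, assuming every token independently carries the same small risk of derailing the answer:

```python
def p_stays_correct(per_token_error: float, tokens: int) -> float:
    # Each generated token has some probability of leaving the set of
    # acceptable answers, so the chance the whole answer stays correct
    # decays exponentially with its length.
    return (1.0 - per_token_error) ** tokens
```

Even a 1% per-token slip leaves only about a 13% chance that a 200-token answer stays fully on track, which is why longer answers are harder to keep correct.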
That is: multiple AI agents work in parallel, then compare their results and determine the optimal path or solution.

The graph shows the length of the tasks that AI agents can complete on their own over recent years. The study reveals that the length of task an AI agent can complete with a 50% success rate doubles roughly every seven months. In other words: agents are improving steadily (and notably) over time.

But models and agents keep improving (or do they?). Todd himself pointed out something important that invites optimism about this problem. "The error rate of AI models is being roughly halved every five months," he explained. At that rate, AI agents may be able to successfully complete dozens of chained tasks within a year and a half, and hundreds a year and a half after that. At The New York Times they disagreed, recently pointing out that although the models are increasingly powerful, they also "hallucinate" more than previous generations. The "system card" for o3 and o4-mini points precisely to a real problem with the error rate and "hallucinations" in both models.

The article "There is a risk with AI agents and accumulated errors: they can become a game of 'broken telephone'" was originally published in Xataka by Javier Pastor.

The first photonic qubit immune to errors is already here

The development that quantum computers have experienced over the last decade is amazing. This discipline is no longer attractive only to research centers linked to some of the most prestigious universities on the planet; the governments of the United States, China, Germany, France, Australia, the United Kingdom, India, Canada and Russia are among those that have openly acknowledged the strategic value quantum computing holds for them. However, a good part of the greatest advances we are witnessing come from private companies. Google, Intel, Honeywell and IBM are some of the firms competing to make possible the innovations demanded by the challenges this computing paradigm has placed before us. Experts agree that there is still much to do, but the two greatest challenges to overcome on the way to fully functional quantum computers are error correction, which guarantees that the results we read out are correct, and the scaling of the number of qubits. In fact, both challenges go hand in hand.

Xanadu invites us to look at the future with more optimism than ever

Xanadu's trajectory started in 2016, but what placed this young Canadian company at the center of the debate in early June 2022 was the article published in Nature in which Jonathan Lavoie, its scientific lead, and his team explain how they achieved quantum supremacy. Interestingly, in their project they used a programmable photonic quantum processor, christened Borealis, that is able to operate at room temperature. Lavoie's team managed to solve in just 36 microseconds a problem on which a classical supercomputer equipped with the best available algorithm would have spent 9,000 years. However, this is not all.
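To get a feel for the scale of that claim, the back-of-the-envelope arithmetic is simple. This is an illustrative calculation based only on the figures above, using a 365.25-day year:

```python
# 9,000 years of classical compute versus 36 microseconds on Borealis.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
classical_microseconds = 9_000 * SECONDS_PER_YEAR * 1e6
speedup = classical_microseconds / 36
print(f"~{speedup:.1e}x faster")  # on the order of 10**15
```

That works out to a factor of roughly eight quadrillion, which is why the result was read as a demonstration of quantum advantage.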
These researchers also claimed that their technology had allowed them to minimize their hardware's imperfections and achieve a computational advantage, in execution time, 50 million times greater than that obtained by other computers that also rely on photonic quantum processors. Perhaps the most striking thing is that at the time, exactly three years ago, Lavoie and his team said they intended to have ready, before the end of this decade, a quantum computer with one million qubits endowed with the ability to amend its own errors. One of the greatest assets of the Xanadu scientific team is that its photonic quantum processor can be manufactured using the same photolithographic technology used in the production of the chips that reside inside our computers and smartphones, which opens the door to its mass manufacturing. However, the most interesting thing is the strategy the team led by Lavoie has devised to scale its quantum hardware until it brings together one million qubits. What it pursues is, broadly speaking, to interconnect its quantum processors through a fiber-optic network so that they can exchange quantum information and tackle the same problem in a coordinated way. The most obvious advantage this approach puts on the table is that, if everything goes as Xanadu's researchers have planned, nothing will prevent them from continuing to scale their quantum hardware beyond one million qubits. In any case, these scientists have not wasted the past three years. Just a few hours ago they published an article in Nature in which they describe in great detail the characteristics of their new error-resistant silicon photonic qubits, integrated into a chip.
It is the first time a team has demonstrated the proper functioning of a quantum integrated circuit of this kind. Very broadly, their strategy is to superpose many photons to encode information in a way that is resistant to errors. This does not mean that quantum computers endowed with the ability to correct their own mistakes are already here: for this photonic technology to be viable, manufacturing and packaging processes must be optimized in order to mitigate optical losses across the platform. In any case, there is no doubt that this achievement by Xanadu is very important. We will be following its progress closely. Image | Xanadu More information | Nature

The mental contrast-based method that brings you closer to your goals by anticipating errors

Motivation is a crucial element in achieving goals. Without solid reasons to stay consistent, whether you want to acquire a new habit, push a personal project forward or meet professional objectives, that initial push dilutes over time and you end up abandoning your goals. The WOOP method is a positive-thinking technique based on mental contrasting: it puts people's dreams, objectives and desires on the table and confronts them with the difficulties that will appear along the way. That makes it possible to tell passing cravings apart from consolidated objectives and desires, helping you prioritize the goals that really improve your life and are within your reach. In short, it is a strategy that lets those who practice it dream big, while providing the tools needed to overcome the inevitable challenges along the way.

What the WOOP method is. WOOP is an acronym for Wish, Outcome, Obstacle and Plan. It is a planning strategy that combines the visualization of positive results (the desires and objectives to achieve) with the identification of, and planning for, the obstacles that will appear on the way to those goals. Unlike other approaches, which focus solely on optimism ("pursue your dreams and they will come true because you are worth it"), WOOP offers a realistic, strategic view of how to reach them, aware of the steps that must be taken and that it will not be a bed of roses. This methodology is based on "mental contrasting with implementation intentions" (MCII), defined in a study by Gabriele Oettingen, professor of psychology at New York University and the University of Hamburg and author of the book "Rethinking Positive Thinking". Oettingen has spent more than 20 years studying human motivation and has detected great shortcomings in positive thinking.
In her studies, Oettingen has found that positive thinking alone is not enough to achieve objectives. Dreaming is fine, but it is the obstacles along the way that make us grow and improve the skills we need to reach the goals we set.

How to apply the WOOP method. The first step, Wish, involves identifying a specific desire, challenging but realistic. At this point you should not be swayed by the expectations of others, but by what you really want to achieve, whether the desire is personal or professional: getting a promotion, doing more sport, working fewer hours, or eating healthier.

The second step is Outcome. Here you visualize what achieving that desire would mean. Imagine that the objective has been reached and identify the positive feelings and benefits you would gain from achieving it. Allow yourself to dream and to experience that sense of achievement. This second point still sits within the positive-thinking zone, but it already lets you separate a whim from a real objective and, therefore, decide which goals deserve priority.

The third step is Obstacle. Here begins the questioning of dreams and the contact with reality. Carrying it out requires honesty and self-reflection. What internal obstacles (fears, doubts or bad habits) exist and prevent you from achieving your desire? Identify those obstacles as specifically as possible. A 1977 study from the American Psychological Association revealed that simply becoming aware of them already marks out a path to resolving them and moving forward or, failing that, gives you a broader view of your options for getting there. In addition, identifying these obstacles changes your perspective on them.
For example, if your goal is to eat healthier, identifying industrial pastries as an obstacle lets you see them from another perspective: giving up that short-term whim is no longer such a sacrifice, because the long-term objective is a different one and takes priority. Finally, Plan consists of creating an "if-then" plan to overcome those obstacles. For example, if your goal is to reduce your sugar intake, an approach for this point could be: "If I am invited to a party and there is cake, I will go to the party and socialize. I will have coffee and share time with friends, but I will not try the cake. If someone asks, I can explain my goal openly." In short, it is about drawing up a contingency plan for the possible obstacles identified in the previous step, so that they stop being a problem. Image | Unsplash (Kelly Sikkema, Peter Jones)
