If Ukraine popularized the use of drones, Iran has triggered the "Terminator algorithm". And that was already a problem in science fiction

In the 1991 Gulf War, the international coalition needed more than a month, after weeks of planning, to launch some 100,000 airstrikes. Three decades later, the ability to process military information has changed radically: satellites, sensors and drones generate volumes of data that no human team could analyze alone. In this new technological environment, the true battlefield is no longer just the air or the land, but the speed at which information is interpreted.

From the drone to the algorithm. Recent wars had already anticipated a profound transformation of modern combat, but the conflict with Iran seems to have crossed a different technological frontier. If the war in Ukraine popularized the massive use of drones as the dominant tool of the battlefield, the campaign against Iran has introduced an even more radical logic: the integration of artificial intelligence at the very heart of military decisions. In fact, the initial attacks showed an intensity difficult to imagine just a few years ago, with hundreds of targets hit in a matter of hours and thousands in a few days. That speed was not only the result of greater firepower, but also of the use of systems capable of analyzing enormous volumes of data and transforming that information into almost instantaneous attack plans.

Understanding the "kill chain". The Financial Times recalled this morning that in traditional warfare the so-called kill chain (from identifying a target to launching the attack) was a long and bureaucratic process. Intelligence officers analyzed information and wrote reports, commanders evaluated options, and finally the strike was authorized. A process that could take hours or even days. The incorporation of AI is reducing that cycle drastically. We are talking about platforms that integrate data from satellites, drones, sensors and intercepted communications, and that are capable of generating lists of targets, prioritizing them and suggesting the appropriate weapon in a matter of seconds.
The result is an extreme and disturbing compression of the kill chain: what once required prolonged deliberation now becomes an almost instantaneous sequence.

The digital brain of the battlefield. Behind this acceleration are data analysis systems that act as a true operational "brain". These platforms combine geospatial intelligence, machine learning and advanced language models to interpret information and propose military actions. Their most disruptive capability is that they no longer merely summarize data: they can reason step by step, evaluate alternatives and generate tactical recommendations. This allows military commanders to process volumes of information that are impossible to handle manually, and to multiply the number of operational decisions made in the same period of time. In practice, algorithms are making it possible to select and execute targets at a scale and speed that were previously unthinkable.

Bombing faster than thought. The result of this transformation is a war that begins to move at a pace faster than human thought. Artificial intelligence can now analyze information, detect patterns and propose attacks faster than a team of analysts could even formulate the right questions. Some experts describe this phenomenon as a form of "compressed decision-making", in which planning is reduced to such short windows of time that human commanders can barely review what the machine has already processed. In this context, another disturbing idea emerges: that destruction can precede the human reflection process itself; that is, first comes the recommendation generated by the algorithm, and then the formal approval of the person who must execute it. And there, without a doubt, we may have a problem of colossal dimensions.

The human dilemma in algorithmic warfare. This technological acceleration is generating a growing debate about the real role of humans in military decision-making.
Although the armed forces insist that final control remains in human hands, the time available to evaluate system recommendations keeps shrinking. Some analysts fear that this will lead to a form of "cognitive offloading", one in which military leaders end up automatically trusting the decisions generated by algorithms. Countries like China observe this evolution with concern and warn of the risk that automated systems will end up directly influencing life-or-death decisions on the battlefield, likening the scenario to a "Terminator algorithm", given how unmistakably every path converges on James Cameron's fictional premise.

A new accelerated war. If you like, what is emerging is not just a new military technology, but a new tempo of war. AI makes it possible to process information on a massive scale, identify targets more quickly, and execute attacks with unprecedented simultaneity. This means that military campaigns can develop at a pace that overwhelms traditional planning models. From this perspective, war no longer advances solely at the pace of logistics or firepower, but at the pace of algorithms capable of interpreting the battlefield in real time. And in this unprecedented scenario, strategic advantage could increasingly depend on who is able to think (or calculate) faster than the adversary. Even if neither of them is human.

Image | Ministry of Defense of Ukraine

In Xataka | China has just found a hole in the US's quietest weapon: an algorithm has hacked its B-2s in Iran

In Xataka | The great paradox of war: the US ignored Ukraine's pleas over Russia and now needs it in Iran

An algorithm has hacked their B-2s in Iran, and they have the audio

In modern military history there are weapons so sophisticated that for decades they seemed practically impossible to track or anticipate. However, as satellites, sensors and massive data analysis multiply, the battlefield is beginning to change nature: the winner is no longer always whoever has the most advanced plane, but whoever is able to interpret, before anyone else, millions of seemingly unconnected signals. In this new scenario, algorithms begin to play a role that previously belonged only to radars.

The bomber that changed war. The B-2 Spirit is one of the most exclusive and secret pieces of the American arsenal. There are only 20 operational units and each one cost more than 2 billion dollars, making it the most expensive airplane ever built. Its flying-wing design eliminates vertical surfaces and reduces to a minimum the signal that bounces off enemy radars. Added to this are radar-absorbing materials, engines hidden within the fuselage and flight profiles designed to remain undetected. The result is a true "ghost" capable of penetrating dense air defenses, flying deep into enemy territory and attacking strategic targets without being seen. For decades, that combination of stealth and range has made the B-2 the silent weapon par excellence of the United States, a platform designed precisely to operate without the adversary even knowing it is there.

Epic Fury, the invisible attack on Iran. That capability was tested again when the US Air Force launched four B-2As (identified by the callsigns Petro 41, Petro 42, Petro 43 and Petro 44) to attack Iranian facilities hidden in mountainous complexes during Operation Epic Fury. The mission was part of the coordinated military campaign between Washington and Tel Aviv and was designed to hit high-value targets, including centers linked to the Iranian missile program.
The B-2 is designed precisely for this type of operation: fly thousands of kilometers, penetrate advanced air defense systems and launch precision-guided munitions against strategic targets. Its greatest advantage is not speed or firepower, but stealth. The enemy doesn't have to intercept it if he doesn't even know the attack is happening.

The Chinese spy: an algorithm. But as we said at the beginning, modern warfare is beginning to introduce a new type of sensor: software. A Chinese technology company, Jingan Technology, has announced that its artificial intelligence-based military analysis system (called Jingqi) detected signals linked to the American deployment weeks before the attack. The system reportedly combines satellite images, flight paths, ship movements, public records and other open sources to reconstruct patterns of military activity. According to the company, this analysis made it possible to identify, starting in January, a buildup of US forces in the Middle East that even exceeded the one registered before the Iraq war. The AI would have followed transport aircraft routes, reconnaissance missions and movements of aircraft carrier groups until it reconstructed the sequence that led to the military operation.

A hole. The most striking claim came after the attack. Jingan asserted that its system detected radio communications from the bombers during their return flight, despite the fact that operations of this type are usually carried out under strict radio silence. The company maintains that it could reconstruct the route of the bomber group, and even published an audio fragment to support its claim. If this interception is real, it would imply something much more significant: the weak point would not be the enemy radar, but the data ecosystem surrounding the operation.
Put another way, the B-2 may be nearly invisible to traditional sensors, but the accumulation of indirect signals (communications, logistics, support movements) can allow trained algorithms to find patterns that previously went unnoticed.

Algorithm war. If you like, the episode illustrates the extent to which artificial intelligence is transforming the way war is waged. Analysis systems like the Chinese Jingqi compete with American platforms that also use AI to plan military operations. In the campaign against Iran, Washington used tools like Anthropic's Claude model and the Maven Smart System developed by Palantir to analyze large data streams and generate attack recommendations. This type of technology drastically reduces the time needed to identify targets: processes that could previously take three days are now completed in a matter of hours. The ultimate goal is to compress the entire attack chain (detect, evaluate, strike and re-evaluate) into just minutes.

A new front. What's more: artificial intelligence is also altering another front of the conflict, the informational one. The proliferation of AI-generated videos is starting to make it difficult to distinguish between real and manipulated images on social media. Platforms like X have warned that they will penalize users who share AI-generated war content without labeling it, after numerous fake videos began to circulate during the crisis. Thus, in a scenario like the current one, where algorithms analyze military operations, generate propaganda and detect patterns invisible to the human eye, the battlefield is no longer limited to air, sea or land. It is also fought in data centers. And in that terrain, even the quietest bomber on the planet can leave traces that no one knew how to hear before.
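Jingan has not published how Jingqi actually works, but the kind of pattern-finding described above can be illustrated with a minimal sketch: a rolling-baseline anomaly detector over a daily count of observed activity (for example, logistics flights logged from open flight-tracking data). All numbers and thresholds here are invented for illustration.

```python
from statistics import mean, stdev

def flag_buildup(daily_counts, window=14, z_threshold=3.0):
    """Flag days whose activity deviates sharply from the recent baseline.

    daily_counts: observed events per day (e.g. tanker/transport flights
    seen in open-source data). Purely illustrative, not Jingqi's method.
    """
    alerts = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (daily_counts[i] - mu) / sigma
        if z >= z_threshold:
            alerts.append((i, round(z, 1)))
    return alerts

# A quiet baseline followed by a sudden logistics surge.
counts = [4, 5, 3, 4, 6, 5, 4, 5, 4, 3, 5, 4, 5, 4, 18, 21, 25]
print(flag_buildup(counts))
```

No single signal gives the operation away; it is the deviation from a learned baseline, sustained over days, that does.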
Image | Jonathan Cutrer, goretexguy

In Xataka | The arrival of the B-2s to Iran can only mean one thing: the search for the greatest threat to the United States has begun

In Xataka | Iran is planting sea mines in Hormuz. And what threatens to blow up is not ships: it is the world economy

5,000 Stanford students have handed their love lives over to whatever an algorithm decides. And it's consuming the university

It's Tuesday at 9:00 p.m. in Palo Alto and the silence of the Stanford dormitories is broken by a simultaneous notification: it's Date Drop. In seconds, the hallways fill with students who, according to The Wall Street Journal, "huddle" over their screens with a mixture of anxiety and hope. Ben Rosenfeld, a residential assistant, describes the phenomenon as an "all-consuming force": students talk about nothing else while they figure out whether their destiny that night is a free-drink date at the On Call Cafe or an anonymous complaint on the Fizz forum. What began as a simple class project has escalated into a massive sociological phenomenon that has hijacked campus social life. The numbers are compelling: at a university of approximately 7,500 undergraduate students, more than 5,000 have already surrendered their love lives to the decisions of this algorithm.

From class assignment to millionaire startup. The architect of this obsession is Henry Weng, a computer science graduate student who coded the platform in just three weeks. As TechCrunch details, what Weng started as a tool to help his classmates has transformed into The Relationship Company, a startup that has already raised $2.1 million in venture capital. The list of investors includes Silicon Valley heavyweights such as Mark Pincus (founder of Zynga and one of the first investors in Facebook), Elad Gil (one of the first investors in Airbnb, Stripe and Pinterest) and Andy Chen (former partner at Coatue).

Success. The premise has been so successful that it has transcended the walls of Stanford. The service has expanded to ten other elite universities, including Columbia, MIT, Princeton, and the University of Pennsylvania. Weng, who curiously took a course called "introduction to clowning" that taught him to "delight in failure," seems to have found a winning formula far from failure. "Our matches turn into real dates at ten times the rate of Tinder," he assures TechCrunch.
Optimizing love in the age of fatigue. The success of Date Drop is no coincidence; it is symptomatic of an exhausted generation and an environment obsessed with efficiency. As The Wall Street Journal points out, it's a very Stanford solution to a very Stanford problem. On a campus where students are high achievers obsessively focused on academic and professional success, organic social interaction has atrophied. "People have difficulty starting conversations in general, and much more so for romantic interactions," student Alena Zhang explains to the outlet. But the problem goes beyond Stanford. A Forbes analysis reveals a general crisis in the world of digital dating: 78% of users report emotional or mental exhaustion from using traditional apps. Ghosting (suffered by 41% of those surveyed) and the feeling that profiles are a catalog of lies have created chronic fatigue. Added to this is the "Readiness Paradox": Generation Z wants to find love more than any generation before it, but feels paralyzed by the fear of public failure. They have replaced asking for a face-to-face date with asking on Instagram, entering a cycle of infinite "testing." Date Drop seems to break that paralysis by externalizing the decision: you no longer have to choose and risk public rejection; the algorithm chooses for you.

Goodbye to the swipe, hello to the data. The application is radically different from the mechanics of Tinder. There are no photos to compulsively swipe left or right. The process, detailed on the website itself, begins with a 66-question questionnaire designed to capture the essence of the user.
It's not just about superficial tastes, but about deep values and political stances: "Is having children essential for a fulfilling life?", "What are your core values: ambition, curiosity, discipline?" Weng explains that the system uses standard economic "matching theory" combined with an artificial intelligence that is trained on feedback from the dates that actually take place. However, the most innovative (and Machiavellian) feature is the social component. The platform allows friends to play Cupid. Wilson Adkins, a freshman cited by the WSJ, discovered that his friends had "conspired" through the app to match him with a girl from his residence. The algorithm validated the conspiracy with a compatibility score of 99.7%.

Not everything is perfect in data heaven. Despite the enthusiasm and the millions in investment, the road is not without obstacles. Date Drop is not the first attempt to automate love at Stanford. In 2017, The Marriage Pact was born, a similar project that has already generated 350,000 matches. According to the WSJ, the creators of that original project sent a "cease and desist" letter to Weng in November, alleging that Date Drop's marketing seemed too familiar to them. Furthermore, technology has limits compared to logistical reality. Gabriel Berger, another student, says that, although he had a great connection with his match, their schedules were incompatible: he was vice president of his fraternity and she had dance rehearsals. "We are not interacting well," they concluded. For her part, Mila Wagner-Sanchez, a freshman interviewed by Business Insider, adds a note of realism: the novelty fades. After a fun first date (with a friend), and a second match who never wrote to her, the pressure of midterms pushed the app into the background. "I would be open to trying again," she says, but academic life sometimes outweighs algorithmic curiosity.

Optimizing loneliness. Henry Weng has ambitious plans.
He sees his company as a "Public Benefit Corporation" intended to facilitate not only romance, but "all meaningful relationships," including friendships and professional connections. Perhaps the best summary of this phenomenon comes from Madhav Abraham-Prakash, a junior who helped bring the app to campus. Although Date Drop hasn't gotten him a girlfriend, it has given him connections on LinkedIn. His justification to The Wall Street Journal sums up the spirit of a generation that doesn't want to leave anything to chance, not even fate: "I would be sad if my soulmate was here and I couldn't find them. Or if my co-founder was here and I couldn't find them, or if my business partner was …
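The article doesn't reveal Date Drop's actual model, but a questionnaire-based compatibility score like the 99.7% figure mentioned above can be sketched in a few lines. Here answers are encoded on a 1-5 agreement scale and compared with cosine similarity; the questions, scale and numbers are invented for illustration.

```python
import math

def compatibility(a, b):
    """Cosine similarity between two questionnaire answer vectors,
    rescaled to a 0-100 'compatibility' percentage.

    a, b: equal-length lists of answers on a 1-5 agreement scale
    (e.g. "Is having children essential for a fulfilling life?").
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return round(100 * dot / norm, 1)

# Two hypothetical students answering the same six questions.
alice = [5, 4, 2, 5, 3, 4]
bob = [5, 4, 3, 5, 3, 4]
print(compatibility(alice, bob))              # near-identical answers score near 100
print(compatibility(alice, [1, 2, 5, 1, 4, 2]))  # divergent answers score lower
```

A real system would weight questions unevenly and fold in date feedback, but the basic idea (a distance between answer vectors dressed up as a percentage) is the same.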

For the first time, we can see how an algorithm decides who receives aid

The Supreme Court has ordered the Government to hand over the source code of Bosco, the application that automatically decides which vulnerable families receive the electricity social subsidy. The ruling responds to an appeal by the Civio Foundation after seven years of legal battle.

Why it matters. This decision sets a precedent and establishes that citizens have a constitutional right to know how the algorithms that manage social rights work. The ruling elevates algorithmic transparency to the level of a fundamental right, inseparable from the democratic state.

The context. Bosco has operated as a black box that emitted binary verdicts. It was limited to two possible results, without further explanation, leaving thousands of vulnerable families defenseless. Civio detected errors that harmed groups such as widows or large families, but the government refused to reveal the code, citing intellectual property and national security.

The big picture. The "Bosco doctrine" establishes two principles: transparency improves security by allowing experts to detect failures, rejecting the "security through obscurity" argument; and the public interest outweighs intellectual property when it comes to algorithms that manage social rights.

Between the lines. The Supreme Court has reversed the burden of proof: now it is the administration that must demonstrate specific and serious risks, not theoretical possibilities. The ruling cites the precedent of Radar Covid, whose code was published by the government itself.

And now what. Public authorities have the obligation to explain "understandably" how all algorithms that affect citizens work. This doctrine will apply to future cases of AI and automated systems in public administration, opening a new era of transparent "digital democracy".

In Xataka | Access to A1 and A2 civil service positions in Spain is broken. The government wants to solve it… with a master's degree

Featured image | Judicial Branch
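Civio's argument — that publishing the code lets outsiders spot misclassifications like the widows' case — is easy to illustrate. What follows is a hypothetical, simplified eligibility check (the real Bosco criteria are not public; every rule, field and threshold here is invented), where an error of the kind described becomes visible at a glance once the code can be read:

```python
def eligible_for_subsidy(household):
    """Hypothetical, simplified eligibility rule for a social subsidy.
    All thresholds are invented for illustration; this is NOT Bosco's logic.

    household: dict with 'income', 'members', 'is_widow', 'large_family'.
    """
    # Income ceiling scales with household size.
    threshold = 12_000 * (1 + 0.3 * (household["members"] - 1))
    if household["large_family"]:
        return True  # large families qualify regardless of income
    if household["is_widow"]:
        # BUG visible on inspection: widows are checked against the base
        # threshold, ignoring household size -- exactly the kind of error
        # a closed system could silently apply for years.
        return household["income"] < 12_000
    return household["income"] < threshold

widow = {"income": 14_000, "members": 3, "is_widow": True, "large_family": False}
same_but_not_widow = dict(widow, is_widow=False)
print(eligible_for_subsidy(widow))             # False: wrongly denied
print(eligible_for_subsidy(same_but_not_widow))  # True: ceiling is 19,200
```

With the source published, this kind of discrepancy is a one-line code review; as a black box, it can only be inferred statistically from who gets denied.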

Amazon has been fined in Spain. It is the first fine for not allowing workers to access its algorithm

The Generalitat of Catalonia has imposed a fine on the multinational Amazon for the company's lack of transparency about the operation of the algorithm in charge of monitoring the productivity of the more than 2,000 employees working at its logistics center in El Prat de Llobregat, located in the immediate vicinity of Barcelona airport. The fine itself is of little consequence for a multinational of Amazon's size, but it does have symbolic value, since it is the first in Spain to act on the lack of transparency of the algorithms that control labor relations in the digital economy, as Cadena SER reported exclusively.

The productivity algorithm sets the pace. At the end of 2024, the facilities of the Amazon logistics center in El Prat de Llobregat, the largest the company has in Spain, received a labor "macro-inspection" by computer specialists from the Generalitat. The inspection followed a complaint by the union delegates for occupational risk prevention, to whom Amazon had denied access to the workings of the algorithm that measures the workforce's productivity. This algorithm monitors the daily work of each employee and determines whether their productivity is adequate. What the union delegates demanded was to know what parameters the algorithm used to determine that productivity. Raúl Hernández, a warehouse worker at El Prat and CGT member of the works council, told Cadena SER that "they measure, item by item, everything you are placing, and if they believe it is not enough, they demand more." The employee says that, without knowing the parameters that govern the control algorithm, workers cannot know how they are doing until they receive a warning from a supervisor. "Many times they come and tell you: 'You are not reaching the day's average.' But how am I going to know the average?", he says.

A "symbolic" fine for Amazon.
In the labor inspection of late 2024, several minor infractions were found related to the determination of schedules and rest times for staff, as well as the allocation of time to go to the bathroom, parameters that the aforementioned productivity algorithm also controls. CCOO sources consulted by Cadena SER estimate that the set of sanctions could amount to around 100,000 euros. Of that amount, 2,401 euros would correspond to the sanction for non-compliance with workers' right of participation; in other words, for not attending to the requests of the workplace safety committee to know the productivity parameters the algorithm uses. "Even if it is an economically symbolic fine, it is essential to consolidate a new right of workers, which is to know whether algorithms have an impact on the workplace," Dani Cruz, head of Digital Transition at CCOO of Catalonia, told SER.

Amazon does not agree. Asked by Xataka, Amazon sources disagree with the sanction, claiming that the regulations on safety and risk prevention are respected, and announce that they will appeal the sanctions imposed. "We disagree with the position of the Labor Inspectorate in its resolution proposal, which is not final and which, therefore, we have appealed. We will continue to collaborate to answer their questions and requests in this regard. At Amazon, our employees work in a modern and safe environment, under the highest standards of health and safety at work. The safety and well-being of our employees is our top priority. The infrastructure and facilities of our logistics centers are designed to guarantee a safe and comfortable work environment."

The Rider Law and the algorithm.
One of the most important points of the so-called Rider Law, approved in 2021 to regulate the labor market created by the new digital platforms, was the inclusion in the Workers' Statute of the works council's right to be informed about the operation of these platforms' algorithms. Its article 64.4 introduced a new section d), which states: "To be informed by the company of the parameters, rules and instructions on which the algorithms or artificial intelligence systems are based that affect decision-making that can influence working conditions, access to and maintenance of employment, including profiling." According to the Labor Inspectorate, Amazon did not offer due information on the functioning of this algorithm to the Occupational Risk Committee.

The "gray list". The lack of information about the parameters that measure the workforce's productivity casts a shadow of suspicion over the company's labor practices. Employees denounce that this data could be being used to justify dismissals among less productive workers once high-demand campaigns (Black Friday, Christmas, etc.) conclude. According to union sources consulted by Cadena SER, the data provided by this algorithm could place employees on "gray lists" without their knowledge. The employees on these lists could be candidates for dismissal under the justification of breaching safety standards. "They put down things like 'you have not respected safety standards'. They are situations that may have happened, but they are very minor, and with that they justify the dismissal," says Raúl Hernández. According to union sources, when challenged in court, most of these dismissals end up being ruled unfair.

In Xataka | Companies have found a way to fire permanent employees after the labor reform: disciplinary dismissal

Image | Flickr (Álvaro Ibáñez)
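Hernández's complaint — being measured against a daily average he cannot see — can be made concrete with a toy version of such a metric. Nothing here reflects Amazon's actual system; the rate formula, tolerance and numbers are invented for illustration:

```python
def below_average_flags(items_per_hour, tolerance=0.9):
    """Flag workers whose rate falls below 90% of the shift average.

    items_per_hour: mapping of worker id -> items placed per hour.
    Illustrative only: the tolerance and the averaging rule are exactly
    the kind of parameters the inspection found workers could not see.
    """
    avg = sum(items_per_hour.values()) / len(items_per_hour)
    return {w: rate for w, rate in items_per_hour.items()
            if rate < tolerance * avg}

shift = {"w01": 120, "w02": 95, "w03": 130, "w04": 80, "w05": 115}
print(below_average_flags(shift))  # workers below 90% of the 108 items/h average
```

Note that the cutoff is relative: as the fastest workers speed up, the "average" rises, and workers who have not changed anything can drift below it without ever being told why.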

Modern algorithms decide what we see. YouTube is the last redoubt where the algorithm does not choose for you

The internet of 2025 is dominated by algorithms that seem to know us better than we know ourselves. All that was missing were chatbots with memory to add to their ability to read between the lines. In this scenario, YouTube is increasingly a beautiful anomaly. While TikTok, Instagram or X drag us from one subject to another according to the whims of a system that optimizes pure engagement, Google's video platform maintains an almost anachronistic respect for our choices. It is the last redoubt where what we are looking for still matters more than what makes us react.

The difference lies in its algorithmic architecture. YouTube recommends mainly within the thematic ecosystems we have already chosen. TikTok, on the other hand, can launch us from vegan recipes to conspiracy theories in an instant if that keeps our thumbs sliding. This thematic verticality is not altruism; it is part of its business model: it needs to maintain long sessions within specific topics, where segmented ads have greater value. It just happens that the consequence is positive for the user. Or at least more positive than the rest.

The best way to understand what makes YouTube different is to experience it as a user. When I look for videos about Valencia, the algorithm keeps me in that world: post-match interviews, talk shows, montages of the best goals of the season and, usually, memories of a better past. It does not suddenly jump to polarizing politics or drag me toward incendiary content that happens to provoke my outrage. YouTube respects the thematic ecosystem I choose. It amplifies our searches; it does not try to out-manipulate us.

The user experience reinforces this sense of control: a prominent search bar. Channels to subscribe to. Lists we actively build. A history we can manage. They are vestiges of an internet where we navigated with purpose, not one where we were navigated. It is also fair to note that YouTube belongs to Google, one of the great architects of the current algorithmic internet.
It is not immune to problems: clickbait flourishes there, and its own attempts with YouTube Shorts show that it is not above the market. However, it maintains a different balance. And the question is obvious: if this more balanced model works for the world's largest video platform, why does the rest of the industry opt for systems that virtually annul our agency?

YouTube also has serious problems. Its rabbit holes can take us down paths paved with radicalization. Its monetization system favors length and recurrence over quality. We are not facing a hero, but a survivor that has found a niche where it can prosper without completely eliminating our autonomy.

In the end, this is the chronology of the internet's evolution. The web (yesterday glory, today survival) was originally a space where we chose our destinations. Today algorithms decide for us. YouTube retains vestiges of the previous model while adapting to the new one, becoming a kind of "internet inside the internet." This "limited algorithmic autonomy" allows for something not only good but almost sacred: predictability. We can anticipate what we will find, creating a more satisfying experience. It also allows the fragmentation of communities focused on specific interests, without forcing everything to compete in a single homogenized feed, which is the great evil of today's X and the perennial identity of TikTok.

YouTube is not perfect (no one is), but it makes us ask whether we can design platforms that serve users who want to enjoy themselves in a healthy way, without being hooked or dragged where they do not want to go, and not just advertisers. YouTube, with all its contradictions, is a sign that an intermediate path is possible. One where some algorithmic manipulation exists (it's the market, friend), but coexists with the user's agency.
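The contrast the article draws — recommending within a chosen topic versus chasing raw engagement — can be sketched with two toy rankers. The candidate videos, topics and engagement scores below are all invented; neither function reflects any platform's real system:

```python
def engagement_ranker(candidates, k=3):
    """Pure engagement model: pick whatever maximizes predicted pull."""
    return sorted(candidates, key=lambda v: -v["engagement"])[:k]

def topic_constrained_ranker(candidates, user_topics, k=3):
    """YouTube-style behavior as the article describes it: rank by
    engagement, but only inside the thematic ecosystems the user chose."""
    pool = [v for v in candidates if v["topic"] in user_topics]
    return sorted(pool, key=lambda v: -v["engagement"])[:k]

candidates = [
    {"title": "Post-match interview", "topic": "football", "engagement": 0.61},
    {"title": "Best goals of the season", "topic": "football", "engagement": 0.55},
    {"title": "Outrage clip", "topic": "politics", "engagement": 0.97},
    {"title": "Conspiracy video", "topic": "fringe", "engagement": 0.93},
    {"title": "Tactics explained", "topic": "football", "engagement": 0.48},
]

print([v["title"] for v in engagement_ranker(candidates)])
print([v["title"] for v in topic_constrained_ranker(candidates, {"football"})])
```

The unconstrained ranker surfaces the outrage and conspiracy clips because they score highest; the constrained one stays inside football even though every football video scores lower. Same objective, one extra filter, very different feed.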
In Xataka | Podcasts are living their great revolution, but not on Spotify or Apple Podcasts: YouTube is winning the game

Featured image | Xataka with Mockuuuups Studio

Experts begin to think that the problem is not mobile phones, but the algorithm

Hermes Trismegistus wrote in his Kybalion that everything in creation has its rhythm, and that rhythm, if we follow it, always takes us back to the beginning. It takes us to the world as will and representation, to the way Zarathustra spoke, to Poincaré's recurrence theorem… and, of course, to the discussion of whether technology is frying our brains. Or, rather, what it is doing to our cognitive ability, how, and why.

Because nobody disputes that something is changing. And how could they? Of course something is changing. And not only functionally: the changes are structural and at all levels. The mere presence of screens has substantially modified our somatosensory cortex; that is, they have changed the way we touch the world. That is no small thing. But the situation goes further: as Manuel Sebastián, researcher at the Brain Mapping Unit of the Complutense University, explained years ago, "we know that text that includes links (hypertext) seems to be remembered worse in general, which is entirely logical because links constitute distractors, and the role of attention is critical in memory." We knew that, but we didn't know what it meant. "The fact that information is processed differently is not necessarily bad," Sebastián told us. Deep down, our brain has spent its whole life reorganizing itself, and historically that has been good news. The question, therefore, is whether it will continue to be so in the future. And there are many experts who believe it will not.

There is no shortage of data. As John Burn-Murdoch explained in the Financial Times, although in recent years we have paid a lot of attention to the impact of the pandemic on the cognitive and emotional development of young people, more and more experts think we are not seeing it in perspective. Both the scores of most of the world's standardized tests (such as the PISA report) and some specialized surveys point out that the problem began to appear in the mid-2010s.
And that problem has many faces: from difficulty concentrating to problems learning new things. It is a bit complicated, because the truth is that, if we plot things like screen time against cognitive or emotional problems, we find nothing clearly problematic. The traditional explanation has been that the important thing is not that we use new technologies, but what we do with them. And that is where the algorithm comes in. Because, as Burn-Murdoch says, there is a change perhaps more fundamental than phones and networks: "the change in our relationship with information." We have gone from self-contained web pages to infinite, constantly updated feeds, with a constant bombardment of notifications. We no longer spend so long browsing the web or interacting with acquaintances; instead, we face a torrent of content. This represents a transition from self-directed behavior to passive consumption and constant context-switching. And that is where the problems may lie.

Unfortunately, there is still little research on this and, given that these technologies and algorithms have been rolled out everywhere at once, it is a very difficult area to investigate. The good news is that, little by little, we are clearing up the issue. The bad news is that there is still much to learn, including the real impact of all this. And, despite everything, as we tend to confirm every time a large study on our relationship with technology is published, there are no great alarm signals. In any case, it seems that the plasticity of the human brain finds its way "back home", overcoming possible obstacles and catching up. Hopefully we will learn how to remove those obstacles ourselves.

Image | Luis Villasmil | Ben White

In Xataka | Alone and connected: the paradox of loneliness in the age of a thousand "friends" on social networks

Meta has taken the first step toward dismantling its external fact-checking system. And it has done it with an algorithm from X

Shortly before Donald Trump returned to the White House, Mark Zuckerberg announced that Meta would end its third-party fact-checking program and would bet on a community notes system in the purest X style, the social network of Elon Musk, one of the leading allies of the new president. With this decision, Meta sought to leave in users' hands the task of flagging and contextualizing misinformation on Facebook, Instagram and Threads. When Zuckerberg made the announcement, it was clear that, although the company was committing to this new approach, its implementation would begin gradually in the United States. There was no indication of a possible deployment in the European Union, where the Digital Services Act could pose an obstacle. Now, with more details about the initial launch, we can get a better idea of how it will work.

Meta has set a date for its new initiative. The social networking giant said that it will begin testing community notes on March 18. That is, in less than a week, this new approach will take its first steps. As planned, the deployment will be gradual and limited, for now, to the United States. From that day, the program's contributors will begin to submit their collaborations.

How will the system work? The community notes system at Meta allows users to add context to posts that may be misleading or need additional information. However, not every note is published immediately: there is an evaluation process that, as Meta explains, seeks to ensure that contributions are useful and reflect different perspectives. A contributor spots a post on Facebook, Instagram or Threads that needs context. For example, a viral post says: "Bats are completely blind and depend only on echolocation to move." The contributor writes an explanatory note. It must be at most 500 characters long and include a link to a reliable source. Example: "It is a common myth.
Bats are not blind; they have good vision and combine echolocation, hearing and smell to adapt to different environments. More info here: (Link)." Other contributors check the note and vote on whether it is useful. It is not published automatically: it must have the support of people with different perspectives, not just a simple majority. If the note receives enough support, it is published and appears next to the original post to give it context. The post about bats remains visible, but now with the explanatory note attached.

Meta will use the X algorithm. The company led by Mark Zuckerberg has chosen to use, in a first stage, the algorithm behind X's community notes. This system, which powers the similar feature on the social network formerly known as Twitter, is open source and available on GitHub, which allows anyone to audit and reuse it. However, the adoption of the X algorithm by Meta is not a definitive step. The company plans to take it as a basis for developing its own version: "As our own version develops, we may explore different or adjusted algorithms to support how community notes are ranked and displayed."

No reach penalty. Meta says its new system against misinformation will not punish the distribution of posts. Unlike the old fact-checkers program, where flagged content could lose visibility, community notes will only add context without affecting who can see a post or how far it can spread.

But the notes will not appear yet. As we have said, the community notes system will take its first steps on Facebook, Instagram and Threads this month, but users will not see the notes yet. This is an evaluation phase: the idea is that the broad deployment will happen once the company is sure the system works as expected. During this period, Meta plans to make further adjustments and improve the system.
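The evaluation flow described above, where a note is published only when raters with different perspectives find it helpful rather than by a simple majority, can be sketched in simplified form. To be clear, this is a minimal illustration of the idea: the real open-source algorithm that X publishes on GitHub derives rater viewpoints via matrix factorization over the full rating history, whereas here the cluster labels, function names and thresholds are all illustrative assumptions, not Meta's or X's actual API.

```python
# Simplified sketch of "bridging-based" note evaluation.
# Assumption: each rater has already been assigned a viewpoint cluster;
# the real X algorithm infers this from rating patterns.
from collections import defaultdict

MAX_NOTE_LENGTH = 500  # character limit mentioned by Meta


def note_is_valid(text: str, source_link: str) -> bool:
    """A note must fit the length limit and cite a source link."""
    return len(text) <= MAX_NOTE_LENGTH and source_link.startswith("http")


def should_publish(ratings: list[tuple[str, bool]],
                   min_helpful_per_cluster: int = 2) -> bool:
    """ratings: (viewpoint_cluster, found_helpful) pairs.

    Publish only if at least two distinct clusters each contribute the
    minimum number of 'helpful' ratings -- i.e. support must bridge
    perspectives, not just accumulate within one group.
    """
    helpful = defaultdict(int)
    for cluster, found_helpful in ratings:
        if found_helpful:
            helpful[cluster] += 1
    supportive = [c for c, n in helpful.items()
                  if n >= min_helpful_per_cluster]
    return len(supportive) >= 2
```

Under this toy rule, a note backed by several raters from a single cluster stays unpublished, while the same number of helpful votes spread across two clusters gets it shown, which is the behavior the article describes.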
Labels from the old verification system will continue to appear in the United States until the new system begins its general deployment in that country. And what will happen in Spain? Although Meta's external verification program began in 2016, it did not land in Spain until 2019. There, the multinational still works with fact-checkers such as Newtral, Maldita.es and AFP, which continue their usual work. Meta says its intention is to expand community notes globally, but it remains to be seen how it will handle possible challenges related to European legislation.

Images | Meta | Gage Skidmore

In Xataka | Meta has laid off 35,000 workers in five years. And many of them fear having ended up on its "black lists"
