The last time all humans were on Earth

It sounds like the beginning of a work of science fiction, but it is a quiet milestone in the history of our species. Tuesday, October 31, 2000 was the last day on which every human being on the planet was on this side of the atmosphere. Since then, there has not been a single moment in which all of humanity has been confined to our home planet.

A historic launch. That October 31, 2000, a Soyuz spacecraft took off from the Baikonur Cosmodrome in Kazakhstan, carrying Expedition 1 to the International Space Station: American commander Bill Shepherd of NASA and Russian cosmonauts Sergei Krikalev and Yuri Gidzenko of Roscosmos. The crew arrived at a fledgling ISS on November 2, 2000. It only had a couple of modules (the Russian Zarya and the American Unity, assembled in 1998), but since then the orbital laboratory has been occupied continuously. For 24 and a half years there has always been a human floating about 400 kilometers above our heads.

A quarter of a century. The International Space Station is a collaborative project between five space agencies (the American NASA, the Russian Roscosmos, the European ESA, the Japanese JAXA and the Canadian CSA). It is not only a symbol of international cooperation but an unparalleled scientific laboratory, which orbits the Earth every 90 minutes at a speed of almost 28,000 km/h. In this quarter of a century, the orbital station has grown to a habitable volume greater than that of a six-bedroom house, with a span of 109 meters and an average of seven people always on board. It can dock up to eight spacecraft simultaneously and has hosted almost 3,000 investigations from more than 108 countries, taking advantage of microgravity to study everything from particle physics to the effects of space travel on the human body.

The ISS passes the baton. Aged and with age-related ailments, like the air leaks that cause headaches for its operators, the ISS is due to be abandoned by its partners in 2030, after which a vehicle developed by SpaceX will tow it onto a safe trajectory for atmospheric reentry. NASA's strategy is clear: move from being the primary owner and operator to becoming a key customer, thus ensuring a continued human presence in low Earth orbit. This will allow further research in microgravity (which is crucial for future missions to the Moon and Mars), maintain international collaboration and foster a commercial space economy. The United States announced last year its intention to reduce the budget allocated to the ISS, hoping for a quick transition to new commercial space stations. Companies like Axiom Space (with its Axiom Station project), Blue Origin (with its Orbital Reef) and Voyager Space (with Starlab, in collaboration with Airbus) are developing new private orbital platforms.

What if they are not ready on time? If commercial stations do not arrive by 2030, humanity will continue to inhabit low orbit thanks to China. Banned from the ISS, China has expanded its presence in space with its Tiangong space station, continuously inhabited since 2022. Not only does China plan to double its size from three to six modules in the coming years, it is already opening its doors to international cooperation, as demonstrated by the recent agreement to train and send Pakistani astronauts to the Chinese space station. With NASA focusing on a commercial model and on deep space exploration, Beijing is strategically positioned as a central player and potential alternative in low orbit, especially for nations seeking to collaborate outside the American framework.

A changing environment.
But there is another reason why the United States has focused on the Moon and Mars. Low Earth orbit faces the increasingly critical challenge of space debris: millions of objects, from dead satellites and rocket upper stages to small, undetectable fragments generated by collisions or anti-satellite missile tests. This debris travels at enormous speeds and represents a constant and potentially catastrophic collision risk for astronauts. The ISS itself has had to carry out numerous evasive maneuvers in recent years. Managing this problem through better tracking systems (especially for small objects), active removal of the most dangerous debris and, above all, prevention and mitigation in the generation of new space debris (such as rapid deorbiting of spent rocket stages) will be essential to ensure the safety of future crews in the long term.

For now, and for just over 25 years, we continue to inhabit space. October 31, 2000 was the last day of an era in which humanity was anchored exclusively to the Earth. Since then we have been, without interruption, a species with an extraterrestrial presence. Human permanence off Earth seems assured, but its sustainability will require even more effort and global cooperation.

Image | ESA

In Xataka | Elon Musk has made public his latest recommendation for Trump: deorbit the International Space Station in two years

In Xataka | Boeing has lost: NASA will cancel the SLS rocket and look for a cheaper alternative to colonize the Moon and Mars

A version of this article was published in May 2025
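The orbital figures quoted above (one revolution roughly every 90 minutes at almost 28,000 km/h for an altitude of about 400 km) follow from basic circular-orbit mechanics. The minimal Python sketch below recomputes them, assuming a circular orbit and standard textbook values for Earth's gravitational parameter and mean radius.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6           # mean Earth radius, m

altitude = 400e3            # ISS altitude, ~400 km
r = R_EARTH + altitude      # orbital radius measured from Earth's center

speed = math.sqrt(MU_EARTH / r)        # circular orbital speed, m/s
period = 2 * math.pi * r / speed       # orbital period, s

print(f"speed  ≈ {speed * 3.6:,.0f} km/h")      # ≈ 27,600 km/h, "almost 28,000"
print(f"period ≈ {period / 60:.0f} minutes")    # ≈ 92 minutes, "every 90 minutes"
```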

“Left click, right click”: this is how AI decides an attack in war. China, Russia and the US need fewer and fewer humans

A group of Google engineers signed an internal letter to protest against a project in which their own software was being used by the Pentagon, sparking an unprecedented debate within the company about how far the technology they had created should go. Since then almost 10 years have passed, an “eternity” in terms of the implementation of AI.

The war accelerates… without humans. The New York Times reported last week that modern warfare is entering a phase in which human intervention is no longer the center of decision-making, but rather an almost symbolic step within processes dominated by algorithms, where artificial intelligence systems identify targets, recommend attacks and generate complete plans in a matter of seconds. Programs like Project Maven, today developed by Palantir and integrated with models like Anthropic's, show the extent to which the decision chain has been compressed: satellite images, drone data and intercepted signals are automatically processed to generate target lists and attack solutions, reducing human intervention to something as simple as selecting options on a screen. In the words of Pentagon officials, it is as simple as “left click, right click.”

Powers in the same race. At the center of this transformation are the United States, China and Russia, competing to lead a new arms race based on autonomous systems capable of operating without direct intervention. In China, for example, the development of drone swarms coordinated by artificial intelligence and of platforms capable of operating alongside manned fighters reflects a commitment to scale and automation. Meanwhile, Russia is betting on systems like the Lancet drones, which are evolving towards autonomous target selection. For its part, the United States is trying to close the gap by encouraging companies like Anduril to speed up production of autonomous drones, in a race where the speed of development is almost as important as the technology itself.

The Chinese WZ-8 drone

Ukraine as a turning point. As we have been reporting, the war in Ukraine has been the turning point that has turned these technologies into real combat tools, demonstrating that relatively simple systems can evolve rapidly towards semi-autonomous capabilities and change the balance on the battlefield. Adapted commercial drones, unmanned vessels and data analysis systems have made it possible to resist a superior adversary, while Russia has responded by progressively incorporating automation into its own systems. As analyst Michael Horowitz has pointed out, “the battlefield in Ukraine has served as a laboratory for the world,” accelerating a transition that is no longer experimental, but operational.

Silicon Valley at war. Unlike previous arms races, the Times recalled, the role does not fall solely on states, but also on the technology companies and start-ups that are redefining military development. Here are companies like Google, which initially participated in projects like Maven before withdrawing due to internal pressure, while others like Palantir or Anduril have occupied that space with a vision more aligned with defense. In China, the “civil-military fusion” model directly integrates private companies into the development of military systems, while in the West attempts are made to replicate that dynamism with million-dollar investments and growing collaboration between Silicon Valley and the Pentagon.

Algorithms against algorithms.
The result is a form of war in which the confrontation is no longer only between armies, but between automated systems that operate at speeds impossible for humans, a scenario we have already covered in which drones launch drones to take on other drones and sensor networks connect globally to execute attacks in real time. Projects like the Chinese attempt to replicate networks similar to the American Joint Fires Network reflect this trend toward an interconnected war, one where a sensor at one point on the planet can trigger an attack on another without direct intervention. At this point, superiority no longer depends solely on the quality of weapons, but on the ability to integrate data, process it and act faster than the adversary.

Uncontrolled speed. There is no doubt that this acceleration carries risks that worry even those who pushed these systems, as automation can trigger military responses before humans can intervene or fully understand the situation. Studies such as those of the RAND Corporation have shown scenarios in which autonomous systems inadvertently escalate conflicts, while experts warn of a possible “escalation spiral” driven by the decision speed of machines. As General Jack Shanahan, a promoter of Maven, has acknowledged, the reality is that there is a danger of deploying “untested, insecure and poorly understood” systems in a context of competition where each actor fears being left behind.

Fewer humans, more automation. The panorama taking shape is that of an increasingly automated war, where human intervention is progressively reduced and critical decisions are delegated to artificial intelligence systems capable of analyzing, deciding and acting in seconds, which is something very different from doing it “well”. From autonomous drones to target analysis platforms, through global combat networks, the trend seems clear: a war of the immediate future that will be decided less in offices and more in algorithms, in an unstable and certainly chilling balance, because technological speed is on track to surpass the human capacity to control it in the middle of a war.

Image | StockVault, Infinity 0

In Xataka | Russia is no longer surrendering to Ukrainian soldiers, but to machines: the rules of war are being redefined

In Xataka | China was the power that launched drones. Now it has realized their danger with one decision: closing the sky to them

Clone humans using digital avatars

Big tech companies are clear that, to promote their AI services, they need content creators. At the same time, a good part of content creation now goes through influencers created with AI. And in case we thought the loop could not close any further, there are companies obsessed with another goal: that content creators can create their own content creators… with AI.

Heygen. If you're not into the world of digital avatars, Heygen may not sound familiar, but the Los Angeles-based company is living its own ChatGPT moment, only with avatars. Founded in 2020 as MovioLab, with Joshua Xu as CEO and a valuation of more than $500 million, Heygen competes head-to-head with giants like Runway and ElevenLabs in the generative video space.

Avatar V. Heygen has been obsessed with creating the best avatar model for years. And this week it published what, according to the company, is the most advanced model in the world, Avatar V. They have data to support it: as the company shows in a 30-page paper, it has solved micro-expressions, gestures and lip synchronization (especially in terms of rhythm) better than the rest.

There is a real war between American and Chinese companies over digital avatars

A real war. As the paper shows, Heygen is not alone. Kling AI, Veo 3, OmniHuman, Seedance… Some of the most relevant companies worldwide are betting on avatar generation. And it is not a random whim. Heygen has more than 40,000 companies paying for corporate video generation using avatars. The barrier to entry keeps falling, the savings compared to productions with influencers are quantifiable, and production times are compressed from weeks to minutes. The key is to offer the most competitive model and an interface that just works.

What's coming. Currently, avatars work with a handicap: their latency. The direction set by the paper is clear: solve this problem to achieve an avatar connected to LLMs, able to hold conversations in real time (meetings, interviews, conferences…). The avatar industry is still emerging, but winning it is essential for a goal AI Big Tech wants for the future: for AI to stop being a chatbot and become as close to a person as possible.

In Xataka | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence

Humans born there will cease to be Homo sapiens

With the Artemis II mission preparing to fly around the Moon, humanity has Mars among its colonizing ambitions. Past and present missions, such as NASA's Curiosity rover, aim to analyze its surface for clues to past habitability. And although we have found some, they leave a lot of unknowns. We haven't set foot on Mars yet and we already have in mind how we will build houses there (spoiler: with bricks and urine), and also that if one day a human being is born in a possible human colony on Mars, it will not be Homo sapiens in anthropological terms. Because, in short, if we get to Mars and start being born there, we will no longer be the same species: Scott Solomon, an evolutionary biologist at Rice University, has been studying this question for years and has reached that conclusion, which he recently published in his work “Becoming Martian”.

If you are born on Mars, you are not Homo sapiens. Solomon distinguishes those who arrive from Earth and survive on Mars: colonists who reach the red planet with a body molded by millions of years of evolution here. But their children, and their children's children, will not have the same luck. In short, it will be the beginning of the end for Homo sapiens. Mars has 38% of Earth's gravity and radiation two to three times higher; there is no protective magnetic field, nor the microbial biosphere alongside which our immune system evolved. All of the above constitutes an engine of biological change and evolution that has shaped our anatomy, and its absence will do so too.

Why it matters. Evolutionary biology has a name for what will happen: allopatric speciation. When a population is isolated and develops in a new environment, natural selection and genetic drift run their course as it adapts to that environment, diverging from the original population (in this case, those who remain on Earth). With enough time, the two groups can become so different that they constitute another species, a new human species. And something paradoxical would happen: by looking to planets other than Earth as a way to preserve the species, we would stop being the same one.

Context. You don't have to wait for future generations to see the consequences of life in space. There is evidence of astronauts on the ISS who have suffered accelerated loss of bone mass, muscle atrophy, cardiovascular problems, vision problems and stress. Even their blood is mutating. Children born on Mars will develop their skeleton and nervous system directly under these conditions. Solomon offers concrete changes: denser and shorter bones, greater production of eumelanin (a type of melanin responsible for dark coloration) as protection against radiation, an immune system calibrated for the closed environment of the colony and potentially vulnerable to diseases common on Earth. However, the most sensitive point is reproduction: we do not know for sure whether humans will be able to conceive, gestate and give birth successfully on Mars. Experiments with mammals in microgravity are worrying. The biologist also anticipates that childbirth on Mars would inevitably be surgical: lower bone density and muscle atrophy would make it an even riskier activity.

What will happen next. For Solomon there are two possibilities. The first is to let natural selection take its course and shape future generations. The second is to resort to genetic engineering and get ahead of the problem before sending anyone there.
In any case, the macro result is the same: two branches of humanity evolving along separate paths, in different conditions and on different worlds.

A dystopian future of genetics and ethics. It should be noted that thousands of generations are needed for speciation to occur, which gives humanity enough time to take measures, such as frequent travel or assisted reproduction with transferred genetic material. Or genetic engineering could step on the accelerator so hard that natural selection takes a back seat. Ethics also comes in here: if a boy or girl is born on Mars and cannot return to Earth because their body could not withstand it, humanity will have made an irreversible decision without their consent. Solomon also warns of the gap this would open within humanity in terms of identity and rights. These are questions that we cannot answer now, but that should be settled before the existence of a colony on Mars is seriously considered.

In Xataka | Europe has thought of throwing three robots into a volcanic lava tube and now colonizing the Moon or Mars is closer

In Xataka | If the question is “how are we going to build houses on Mars”, the answer today is “with bricks made of urine”

Cover | Photo by Dmitry Grachyov on Unsplash

AI chatbots are more flattering than humans when giving personal advice. And that's a problem

Before, to build your echo chamber you could only follow like-minded people on social networks; now you can build your own personalized echo chamber with an AI. A Stanford study has thoroughly analyzed the excessive adulation of LLMs and the result is clear: if you want to be told what you want to hear, it is better to talk to an AI than to a person.

The study. The researchers analyzed eleven language models, among them the most popular ones like ChatGPT, Gemini, Claude or DeepSeek, and fed them data sets about personal dilemmas. In addition, they included 2,000 prompts taken from the Reddit community. Approximately one-third of all scenarios involved harmful or outright illegal behavior. They then compared the LLM responses with human responses to see who tends to agree with the user more. In a second part of the study, they recruited 2,400 participants and had them chat with flattering and non-flattering language models.

We like to be proven right. Chatbots tend to be much more flattering than a human when giving personal advice, but not only that: people generally prefer these kinds of responses. The models endorsed the user's position 49% more often than humans in general dilemmas and endorsed harmful behavior 47% more often. In the second experiment, people who chatted with the different models considered the sycophantic model more trustworthy and preferable. Furthermore, they came away more convinced that they were right and less willing to apologize or repair the conflict.

Why it is a problem. According to the authors, LLMs can reinforce egocentrism and make people more morally dogmatic. According to Myra Cheng, co-author of the study, “By default, AI advice does not tell people that they are wrong or give them a reality check (…) I worry that people will lose the ability to deal with difficult social situations.” There is another worrying finding: users perceived the models as equally objective, which suggests a lack of critical ability to distinguish a flattering AI from a non-flattering one.

AI is not a person. It is obvious, but the reality is that every day we address AI chatbots as if they were one. Thanking them and saying please is a harmless symptom of our mania for anthropomorphizing everything. However, when we use AI as a substitute for a psychologist, or when we establish intimate relationships with a chatbot, we start to step onto swampy terrain. The authors of the study consider it urgent that companies introduce safeguards to reduce the excessive complacency of LLMs, and advise against using them as a substitute for a person when dealing with personal conflicts.

The counterpoint. There are voices that argue that AI is not generating these echo chambers, at least not as intensely as we have seen with social networks. According to John Burn-Murdoch in the Financial Times, language models tend to converge on expert consensus and generate more moderate opinions than social networks do. His argument is that the economic architecture of social networks rewards inflammatory and polarizing content, while chatbots compete to offer reliable answers to users who use them to make important decisions. It is not just an opinion: he has also run an experiment simulating thousands of political conversations between users with extreme positions and several of the main chatbots on the market.
Based on electoral surveys and data on the use of these tools, he measures how positions would move if a part of the citizenry used AI to inform itself. The author concludes that, on average, the models tend to nudge the most radical users towards more temperate positions closer to the expert consensus, while also validating far fewer conspiracy theories than those that routinely circulate on social networks.

In Xataka | AIs have become companionship tools against loneliness. For some researchers it is “junk food”

Image | Zulfugar Karimov on Unsplash
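The comparison at the heart of the Stanford study can be pictured with a small, self-contained sketch: given responses to the same personal dilemmas from chatbots and from humans, estimate how often each side endorses the advice-seeker's position and compare the two rates. The keyword "classifier" and the example replies below are invented, deliberately crude stand-ins for the study's actual annotation procedure; this illustrates the shape of the metric, not the researchers' pipeline.

```python
# Toy endorsement-rate comparison. The phrase list and sample replies are
# hypothetical stand-ins; the real study annotated ~2,000 dilemmas and 11 LLMs.
ENDORSING_PHRASES = ("you're right", "you are right", "you did nothing wrong", "totally justified")

def endorses(response: str) -> bool:
    """Very rough proxy: does the reply side with the advice-seeker?"""
    text = response.lower()
    return any(phrase in text for phrase in ENDORSING_PHRASES)

def endorsement_rate(responses: list[str]) -> float:
    return sum(endorses(r) for r in responses) / len(responses)

chatbot_replies = [
    "You're right, your friend overreacted and you did nothing wrong.",
    "You are right to feel hurt; your reaction was totally justified.",
    "It might help to hear their side before deciding anything.",
]
human_replies = [
    "Honestly, you were a bit harsh; maybe apologize first.",
    "You're right that they were late, but cancelling the trip was extreme.",
    "It might help to hear their side before deciding anything.",
]

ai_rate = endorsement_rate(chatbot_replies)
human_rate = endorsement_rate(human_replies)
print(f"chatbots endorse the user {ai_rate:.0%} of the time, humans {human_rate:.0%}")
print(f"relative gap: {(ai_rate - human_rate) / human_rate:+.0%}")
```

The study's "49% more" figure is exactly this kind of relative gap, computed over thousands of annotated responses rather than a handful of toy examples.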

Robot vs robot battles where humans only watch

In 2024, a significant milestone was reached in the context of the war in Ukraine: the number of drones produced for military use far surpassed that of traditional armored vehicles, with tens of thousands of units deployed on the front. That change reflected not only a question of cost, but a profound transformation in how a modern war is conceived and fought, one in which humans have less and less say.

Forbidden to humans. In Ukraine, a new type of battlefield has emerged that breaks with everything known: the so-called “kill zones”, strips several kilometers deep where any movement is detected and destroyed almost instantly by swarms of drones. In these spaces, human presence has become extremely limited and dangerous, almost impossible, forcing soldiers to remain dug in for weeks or months and to move only under exceptional conditions, while the terrain between the lines becomes a kind of permanent “no man's land”, one saturated with sensors, mines and constant surveillance. If in the 19th century battles and quarrels were settled with paces and pistols in duels in the sun, two centuries later the duels have mutated into contests between machines.

Wars without troops. The Financial Times recalled a few weeks ago that, in this new environment, direct combat between people has ceased to be the central element, replaced by confrontations in which machines take center stage. Aerial drones patrol, detect and attack targets continuously, while unmanned ground vehicles advance, hold positions or execute ambushes in places where an infantryman could not survive. Situations have even been documented in which systems from both sides confront each other without any direct human presence, evidencing a qualitative change in the nature of combat.

Robots against robots. The most striking result is the appearance of authentic “duels” between unmanned systems, in which UAVs and UGVs search for, hunt and destroy each other. Drones waiting on the ground like smart mines, vehicles that ambush supply routes or systems specifically designed to locate and neutralize other robots reflect an autonomous combat dynamic in constant evolution. Each advance generates an immediate response from the adversary, creating an accelerated cycle of innovation that is more reminiscent of a technological ecosystem or a futuristic war video game than of a conventional war.

Fully automated logistics. Even tasks that historically defined the rear, such as supply, evacuation or mine-laying, have been absorbed and taken over by machines. Drones now transport food, water and ammunition, while ground vehicles extract the wounded or deploy explosives in inaccessible areas. This change, moreover, is not only tactical but structural, because the battlefield no longer seems to admit a continued human presence, forcing a kind of outsourcing of essential functions to systems that can take risks no soldier could accept.

The leap to autonomy. Forbes explained that, although many of these systems still depend on human operators, the trend points towards increasing autonomy, with robots ever more capable of detecting, deciding and acting with less intervention. Add to that the integration of artificial intelligence, advanced sensors and swarm coordination, and the result is a scenario in which hundreds of systems operate simultaneously in the air, on land and at sea, further expanding these inaccessible areas and reducing the room for human maneuver.

The future in real time.
In summary, what is happening in Ukraine is not only an adaptation to the current conflict; it could be said to be a preview of what the wars of the future will be like. The unprecedented combination of total surveillance, combat automation and the progressive replacement of the soldier in the most dangerous areas is transforming war into an unprecedented confrontation between systems, one in which humans are relegated to supervision and strategic decision-making. From that perspective, rather than a gradual evolution, the conflict in Eastern Europe has abruptly accelerated a transition that seemed very distant a few years ago, turning science fiction into something close to operational reality.

Image | Telegram

In Xataka | Ukraine has become the world's leading specialist against Iranian drones. And it won't share its antidote

In Xataka | If Ukraine promoted the use of drones, Iran has triggered the Terminator algorithm. And that was already a problem in science fiction

It is literally the largest and heaviest machine ever built by humans and it does one thing: extract coal.

In North Rhine-Westphalia, in western Germany, operates the largest machine humanity has ever set on land. Forget huge ships, aircraft carriers or oil platforms: it is an excavator. It is called Bagger 293, and its very existence is a living reminder of what industrial engineering is capable of when it is pushed without limits.

What is it, exactly? The Bagger 293, also known as the MAN TAKRAF RB293, is a bucket-wheel excavator (the kind with a giant toothed disc at one end) designed for open-pit mining. It was built by the German company TAKRAF, a subsidiary of the MAN group, between 1990 and 1995 in Leipzig. Its goal from day one has been a single one: to extract lignite, the so-called brown coal, from the Hambach mine, one of the largest mining operations in Europe. Today it remains operational, owned by RWE Power AG, Germany's second largest energy producer.

Numbers. It is 96 meters high, equivalent to a building of more than 30 floors, and 225 meters long, more than two football fields placed in a row. It weighs 14,200 tons. The Guinness Book of Records officially recognizes it as the largest and heaviest land vehicle in the world. It shares the title with its predecessor, the Bagger 288, although the 293 surpasses it in size and capacity. It cannot be transported either: moving it about 120 kilometers requires more than three weeks of continuous work, advancing just 5 or 6 kilometers a day.

How this monster works. The heart of the machine is a rotating wheel 21.3 meters in diameter fitted with 18 large steel buckets, each capable of scooping up to 15 cubic meters of material per cycle. That wheel spins non-stop, tearing off layers of earth and rock to reveal the seams of lignite, which are then carried by giant conveyor belts to the electricity generation plants. Under normal conditions, the Bagger 293 can move up to 240,000 tons of material in a single day. It is estimated that what it does in one day is equivalent to the manual work of about 40,000 miners. All this with only five operators on board, controlling the system from a central cabin.

Electric appetite. To set such a structure in motion, a direct external power supply of 16.56 megawatts is needed (more than 22,500 metric horsepower if we do the conversion). This is roughly the electricity needed to supply a city of about 20,000 inhabitants. The Bagger 293 has no conventional engine of its own; it is permanently connected to the industrial electricity grid. Its 12 steel tracks, each 3.8 meters wide, spread its immense weight over the ground in a controlled manner so that the terrain does not give way beneath it.

Where it works. The excavator operates in the Hambach mine, the largest open-pit mine in Germany, with an approved area of up to 8,500 hectares and a depth that reaches 500 meters below ground level. According to Bloomberg, the mine produces around 40 million tonnes of lignite per year, enough to power around 8 million homes. But the mine is not without controversy. Brown coal is the most polluting fossil fuel per unit of energy produced, and the exploitation of Hambach has wiped out 90% of the historic Hambach Forest, an ecosystem more than 12,000 years old. Starting in 2012, environmental activists occupied the remaining trees for years in a protest that ended up becoming a symbol of the climate debate in Germany. In 2018, tens of thousands of people demonstrated against the mine's expansion.
Greta Thunberg herself visited the site in 2019, saying she found it “devastating” to see places like the Hambach mine. In January 2020, the German government agreed to preserve the remaining forest, and in August of that same year Germany committed to its definitive exit from coal by 2038. According to Global Energy Monitor, mining at Hambach will cease in 2029, and the plan is to transform the territory into a reclaimed landscape that will include a large artificial lake.

Images | Andreas Lippold (Wikimedia Commons), Stefan Fussan (Wikimedia Commons), Steve Rowell

In Xataka | The key hidden infrastructure for AI is not data centers: it is undersea cables and the Middle East leads the way
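As a quick sanity check on the figures above, the sketch below redoes the unit conversion from 16.56 MW to horsepower and works out what the quoted 240,000 tons per day implies in bucket loads. The material density used in the second part is my own illustrative assumption, not a figure from the article.

```python
# Unit checks on the Bagger 293 figures quoted above.
POWER_MW = 16.56                 # external electrical supply (from the article)
HP_METRIC_KW = 0.73549875        # one metric horsepower, in kW
HP_MECH_KW = 0.7456999           # one mechanical (imperial) horsepower, in kW

power_kw = POWER_MW * 1_000
print(f"{power_kw / HP_METRIC_KW:,.0f} metric hp")      # ≈ 22,500 hp, as quoted
print(f"{power_kw / HP_MECH_KW:,.0f} mechanical hp")    # ≈ 22,200 hp

# What 240,000 tons/day implies for the 15 m^3 buckets, assuming loose
# overburden at ~1.6 t/m^3 (an assumption for illustration only).
DAILY_TONS = 240_000
BUCKET_M3 = 15
ASSUMED_DENSITY = 1.6            # tons per cubic meter

daily_m3 = DAILY_TONS / ASSUMED_DENSITY
dumps_per_minute = daily_m3 / BUCKET_M3 / (24 * 60)
print(f"≈ {daily_m3:,.0f} m³/day, i.e. ≈ {dumps_per_minute:.0f} bucket loads per minute")
```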

Science had always believed that only humans understand geometry. Until we noticed the crows again

The perception of geometric regularity in shapes, a variant of elementary geometry, has long been considered an ability that only human beings possess. And no wonder: from quite early stages of development and across multiple cultures, our species has demonstrated a natural understanding of spatial rules. But this has now changed, thanks to the crows.

A radical change. Although this quality was fairly well established as innately human, science has now shown that crows, too, have geometric understanding. It is a cognitive milestone that forces us to rethink what we thought we knew about animal intelligence and the evolution of pure mathematics.

A myth. The scientific record showed a notable gap between human abilities and those of the rest of the animal kingdom when it comes to Euclidean geometry. Previous research had already found that primates lack the ability to recognize geometric regularity in tests of visual shape perception, something significant, since they may be the first animals that come to mind when thinking about this property. And this was crucial in establishing that humans have an innate ability to process geometric regularity, since the recurring failure of species like baboons, even after intensive training, laid those foundations. However, researchers decided to explore these abilities in birds known for their impressive cognitive and arithmetic skills.

Touch screens. To test the birds' spatial intuition, scientists from the University of Tübingen designed an experiment based on detecting visual anomalies. Two 10- and 11-year-old male crows were trained using touch screens located inside conditioning chambers. The birds were shown an array of six simultaneous shapes on the screen, and the task was to detect an “intruder”, that is, to peck at the shape whose visual parameters differed from the other five base stimuli.

The tests. For the final test, five reference quadrilaterals were used, ordered by their level of regularity: the square, the isosceles trapezoid, the rhombus, the right hinge, and a completely irregular shape. From there, the “intruder” figures were generated artificially by moving the lower right vertex of the original figure a fixed distance equivalent to 75% of the average distance between the vertices.

Results. The most impressive thing was how immediately the crows grasped the problem: they were able to apply the concept of detecting the intruder as soon as they were exposed to the new sets of quadrilaterals. Both subjects dramatically exceeded the 16.7% chance level during their first trials, demonstrating that they understood the task rather than hesitating or pecking mindlessly. During the first 60 trials, the first crow achieved 48.3% success and the second 56.7%.

The most impressive thing. The most revealing finding from these tests was that the birds performed significantly better with shapes that presented properties of pure Euclidean geometry, such as right angles, parallel lines or symmetry. It is crucial to highlight that this performance advantage did not require extensive prior training: the regularity effect was present from the very beginning of the testing phase.

Why? Faced with the logical question of why crows achieved what primates failed to do, the authors of the study acknowledge certain important methodological differences compared to the classic experiments with baboons.
In this case, they point out that the crows were subjected to a strict progression criterion during training, needing to maintain 75% accuracy over five consecutive sessions. In contrast, baboons only needed to reach a criterion of 80% correct responses once, without consecutive sessions. And although this difference may make a direct and exact comparison between the species difficult, the main finding is incontestable: crows recognize geometric regularity.

Images | Tyler Quiring

In Xataka | Punch, the monkey clinging to a stuffed animal and a victim of bullying, has achieved the impossible: uniting the Internet under the same cause
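To put those first-trial numbers in context, the short sketch below runs a one-sided binomial test of the reported 60-trial performances (48.3% and 56.7% correct) against the 1-in-6 chance level. It is a back-of-the-envelope check of how far above chance those figures sit, not the statistical analysis used in the paper.

```python
from scipy.stats import binomtest

CHANCE = 1 / 6      # one intruder among six shapes ≈ 16.7%
N_TRIALS = 60       # first 60 trials reported for each crow

for crow, accuracy in (("crow 1", 0.483), ("crow 2", 0.567)):
    successes = round(accuracy * N_TRIALS)
    test = binomtest(successes, N_TRIALS, CHANCE, alternative="greater")
    print(f"{crow}: {successes}/{N_TRIALS} correct vs. chance, p = {test.pvalue:.1e}")
```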

International law was written with human decision-makers in mind. AI just broke that chain and no one knows who answers for it now

Pete Hegseth's threat to Dario Amodei has a subtext that goes far beyond the $200 million contract that the Pentagon can cancel: if the US military deploys AI-controlled autonomous weapons without the safeguards that Anthropic demands, it will have removed the only firewall that has historically prevented an illegal order from being executed.

Why it matters. The entire legal and ethical system of the US military rests on a principle that seems obvious but has important consequences: a soldier can and should disobey a manifestly illegal order. It is the mechanism that, in theory, prevents war crimes. An AI-controlled autonomous drone does not have that mechanism. It cannot refuse. It cannot hesitate. It cannot be tried by court-martial.

Between the lines. Amodei speaks of “autonomous weapons that fire without human intervention” to point out a legal vacuum. If an AI makes the decision to kill, who is criminally responsible? The programmer? The general who activated the system? The president who signed the order? International humanitarian law (including the Geneva Conventions) was written with human beings making decisions in mind. And now AI dissolves that chain of responsibility.

The backdrop. The mass surveillance argument is also a bitter pill to swallow. The Fourth Amendment of the US Constitution protects citizens from warrantless searches and wiretaps. It works, among other reasons, because the State has never had the physical capacity to process everything that happens in public spaces. With AI, that operational limit disappears: we move to millions of conversations recorded in real time, transcribed, classified and connected in a matter of seconds. What was previously impossible due to a lack of human resources becomes routine with an LLM. Constitutional protection has until now depended, in part, on the inefficiency of the State, on its limitations.

Yes, but. The Pentagon has an argument that cannot be dismissed: other powers are also developing these capabilities, and China or Russia are not going to wait for the United States to resolve its ethical dilemmas. The practical question is whether having those unrestricted capabilities makes you safer or simply more dangerous to your own citizens.

The big question. OpenAI and Google have accepted the Pentagon's conditions, “all legal uses” without specific exceptions, and xAI has just been cleared to operate on classified systems. Anthropic has been left alone in its position. And what is at stake now is not whether Claude survives as a military supplier; it is whether the AI industry is going to set some limit on what it sells to the State, or whether that debate will be settled directly by Congress, the courts or, in the worst case, the first serious incident that no one could have foreseen. It seems like a matter of time.

In Xataka | AI is already a battlefield: Anthropic has just accused DeepSeek and other Chinese companies of “distilling” Claude

Featured image | Xataka

AI consumes obscene amounts of energy. Sam Altman compares it to the cost of “training” humans

OpenAI CEO Sam Altman participated in an event organized by The Indian Express. During the interview he made some striking statements, but the most notable was the one he dedicated to what it costs to train an AI model. In fact, he complained that many of the discussions about ChatGPT's energy consumption are unfair.

Training humans also consumes a lot. The interviewer asked Altman about ChatGPT's energy consumption; Altman took a few seconds to answer the question, and then made a peculiar comparison (emphasis mine):

One of the things that is always unfair in this comparison is that it talks about how much energy it takes to train an AI model compared to what it costs a human to perform an inference query. But it also takes a lot of energy to train a human. It takes about 20 years of life and all the food you eat during that time before you become intelligent. And not only that, it took the widespread evolution of the hundred billion people who have lived and learned not to be eaten by predators and to understand science and so on to create you. The fair comparison is: if you ask ChatGPT something, how much energy does it take, once the model is trained, to answer that question compared to a human? And AI has probably already caught up in terms of energy efficiency if we measure it that way.

A previous Epoch AI study corroborates that energy consumption during inference (when we actually use ChatGPT, for example) is low. Source: Epoch AI.

Training is one thing, inference another. The answer may be controversial, but to a certain extent it is logical: learning, both for humans and for AI, takes time and consumes many resources, but that cost is one thing and the cost of inference, of “applying that training”, is another. Once we have learned, it is not too difficult to answer things. This is what Altman is trying to point out: he recognizes that AI does indeed consume a lot of energy in training, but argues that it has become very efficient in the inference phase, when we actually use ChatGPT. The problem is that although Altman claims that inference consumption is minimal, he does not provide evidence for it.

The water problem is no longer a problem. He also spoke about the controversial water consumption attributed to large AI data centers. He acknowledged that this was a problem when “we used to use evaporative cooling in data centers.” Now, however, “we don't do that,” he recalled, and made it clear that accusations that “ChatGPT uses 17 gallons per query, or whatever” are false, “totally crazy, it has no connection with reality.” But again, there is still no official data from AI companies on this point.

How much does AI really consume? The truth is that at this point we still do not have really clear data on how much AI consumes, either in the training phase or in the inference phase. Some who have investigated energy and water consumption have gotten it wrong, wildly exaggerating the data, but, for example, in the US, where a large number of data centers are concentrated, there is no legislation that forces transparency about those figures.

Increasingly efficient models and data centers. One of the most interesting studies was the one by Epoch AI in February 2025, and at that time it also concluded that AI did not actually consume as much as it was said to consume. In fact, it consumed relatively little, and the models have only improved in efficiency.
Chips and cooling systems have also improved, and although data centers certainly require enormous amounts of energy, we are still flying blind on this front.

In Xataka | Spain has a plan to capture more data centers than anyone else: “shield” them from energy costs
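Altman's argument is essentially one of amortization: a one-off training cost spread over an enormous number of queries versus a small per-query inference cost. The sketch below makes that arithmetic explicit; the training energy, query volume and per-query figure are round, illustrative assumptions rather than reported values, which, as noted above, the companies do not disclose.

```python
# Illustrative amortization of training energy over inference queries.
# All three inputs are assumptions chosen for round numbers, not reported figures.
TRAINING_ENERGY_MWH = 10_000        # hypothetical one-off training cost (10 GWh)
QUERIES_PER_DAY = 1_000_000_000     # hypothetical daily query volume
INFERENCE_WH_PER_QUERY = 0.3        # hypothetical per-query inference energy

DAYS = 365
total_queries = QUERIES_PER_DAY * DAYS

training_wh_per_query = TRAINING_ENERGY_MWH * 1_000_000 / total_queries
print(f"amortized training energy: {training_wh_per_query:.3f} Wh per query")
print(f"assumed inference energy:  {INFERENCE_WH_PER_QUERY:.3f} Wh per query")
print(f"after one year at this volume, training adds about "
      f"{training_wh_per_query / INFERENCE_WH_PER_QUERY:.0%} on top of each query's inference cost")
```

With these made-up numbers the amortized training share quickly shrinks to a small fraction of the per-query cost, which is the shape of Altman's argument; whether the real figures behave the same way is exactly what the lack of public data leaves open.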
