A Russian startup has hacked pigeons' brains to turn them into drones with wings

Nothing seems more innocent, a priori, than a pigeon flying over a city's buildings or perched in a square. Or maybe not: besides being just another city dweller (sometimes excessively so, which becomes a problem), pigeons have been used as discreet express messengers since the ancient Sumerian and Egyptian civilizations. And also in war: in World War I, the United States Army created a carrier pigeon unit, the United States Army Pigeon Service, for tactical messaging when everything else failed or was destroyed. Now the Russian startup Neiry claims to have given them one more twist: it has turned pigeons into biological drones.

An electrode in the brain. What the Russian company proposes is not to biomimic a drone so that it resembles a pigeon, but to turn the animal itself into a transport vector by fitting it with implanted neural interfaces. Specifically, electrodes are implanted in the brain and connected to a stimulator attached to the head: a kind of GPS that talks to the bird's brain. Neiry explains that the interface applies mild stimulation to certain brain regions, causing the bird to (artificially) prefer a certain direction; otherwise, the bird behaves naturally. The system does not replace the bird's will, but biases its sense of orientation so that it follows pre-established routes.

Why birds? According to the Russian startup, the objective is to use biological carriers in situations where drones are limited by range, weight or other constraints such as restricted airspace. Alexander Panov, the company's CEO, explains that birds can maneuver in complex environments, fly for long periods and operate in places where drones are restricted, as Bloomberg reports. And anyone who has flown a drone knows there is one critical element: the battery.
Unlike unmanned aerial vehicles, a pigeon needs no battery swaps or frequent landings: its nature gives it everything necessary for long-distance flight. Millions of years of evolution let a bird beat any commercial drone, with its 20-minute battery life, in flight stabilization and energy efficiency. In fact, a pigeon can cover up to 400 kilometers a day without stopping.

Pigeons with backpacks. In the test flights Neiry has carried out with these pigeon drones, the birds were fitted with the neural interface plus a small backpack holding the controller, solar panels mounted on the back and a camera. Of course, while they make far less noise than a drone, they did not go entirely unnoticed, as can be seen in the video provided by the company.

Pigeons are just the beginning. Panov has explained that although they currently focus on pigeons, "different species can be used depending on the environment or payload." Bloomberg reports other similar implants, such as in the brains of cows for "NeuroFarming," so that they produce more milk. And a rather spooky ultimate goal: "to create the next human species after Homo sapiens: Homo superior."

Possible applications. After the tests, the company claims the system is ready for practical deployment. According to Neiry, there are no plans to use these birds for military purposes, even though their use in a war or surveillance scenario would be disruptive: radars are programmed to filter out winged fauna as 'noise' or false positives. In short: they would go unnoticed. Among the use cases where the company sees an opportunity are infrastructure inspection, support for search and rescue, coastal and environmental observation, and monitoring of remote areas in places like Brazil or India.

Where is the ethics? Mechanical drones are easier to control, can carry larger loads and, obviously, neither need to feed nor will they defecate on you.
And that's not to mention the ethical implications of altering an animal's behavior. Gizmodo details that after the surgery to implant the chip, the pigeons are ready to fly almost immediately, so the risk "is low for the survival of the birds." Of course, the startup has not provided independent third-party reviews, which leads specialists to question the ethical implications of its technology. Nita Farahany, bioethicist and law professor at Duke University, puts it bluntly: "Every time we use neural implants to try to control and manipulate any species, it is disgusting."

In Xataka | The war in Ukraine has become something absurd: there are drones shooting at Russian soldiers dressed as "penguins"

In Xataka | We had seen everything in Ukraine, but this is unprecedented: Russia is not launching drones, it is launching "Frankensteins"

Cover image | Sanjiv Nayak and Andreas Schantl

OpenAI co-founder says AI does not imitate brains

Andrej Karpathy, co-founder of OpenAI and former head of AI at Tesla, has offered a radically different view of the current state of AI in an extensive interview with Dwarkesh Patel. Against the prevailing optimism, he maintains that current systems are "digital ghosts" that imitate human patterns, not brains that evolved like animals'. His prediction: functional AGI will arrive in 2035, not 2026.

Why it matters. Comparisons between AI and biological brains dominate technical discourse and guide many investment decisions. Karpathy argues that the analogy is "misleading" and raises unrealistic expectations. Five years leading autonomous driving at Tesla gave him a unique perspective on the gap between killer demos and truly functional products.

The difference. Animals evolve over millions of years, developing instincts encoded in their DNA. A zebra runs minutes after being born thanks to that "pre-installed hardware." Language models learn by imitating text from the Internet without anchoring that knowledge in a body or physical experience. "We're not building animals," he says. "We are building ghosts": ethereal entities that simulate human behavior without really understanding it.

The problem with reinforcement learning. Karpathy says current reinforcement learning (RL) is "terrible" because it rewards entire trajectories instead of individual steps. If a model solves a problem after a hundred failed attempts, the system reinforces the whole path, errors included. Humans, by contrast, reflect on each step and adjust.

The collapse. Models suffer from "entropy collapse": when they generate synthetic data to train themselves, they produce responses that occupy a very small space of possibilities. Ask ChatGPT for a joke and you'll get three repeated variants. Poor human memory is an advantage: it forces us to abstract.
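Karpathy's complaint about trajectory-level rewards can be made concrete with a toy sketch. This is an illustration of the general idea, not code from any real RL library, and the function names are invented for the example: outcome-based credit smears one terminal reward over every step, so the failed detours inside an eventually-successful attempt get reinforced along with the step that actually worked, while per-step ("process") credit does not.

```python
# Toy contrast between outcome-based and process-based credit assignment.
# Names and data are illustrative, not from any real RL framework.

def outcome_credit(steps, solved):
    """Outcome supervision: one terminal reward smeared over all steps."""
    reward = 1.0 if solved else 0.0
    return [reward for _ in steps]

def process_credit(steps, step_is_correct):
    """Process supervision: each step is judged on its own merits."""
    return [1.0 if ok else 0.0 for ok in step_is_correct]

# A trajectory that reaches the right answer after two bad steps:
steps = ["try A (wrong)", "try B (wrong)", "try C (correct)"]
correct = [False, False, True]

print(outcome_credit(steps, solved=True))   # [1.0, 1.0, 1.0] — the errors get reinforced too
print(process_credit(steps, correct))       # [0.0, 0.0, 1.0] — only the good step is rewarded
```

The gap between the two outputs is exactly the failure mode Karpathy describes: under outcome-only credit, a policy is pushed toward repeating its wrong turns as long as it stumbled onto the answer in the end.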
LLMs remember perfectly, which lets them recite Wikipedia but prevents them from reasoning beyond the memorized data.

Between the lines. While developing nanochat, Karpathy found Claude Code and OpenAI's agents useless for complex code. They work with the repetitive code that abounds on the Internet, but fail when faced with new architectures. "Companies generate slop," he said. "Perhaps to raise financing."

The core. His proposal: build models with a billion parameters (dwarfs compared with those most used today) trained on impeccable data that contains thinking algorithms, but not factual knowledge. The model would look up information when it needs it, just as we do. "The Internet is full of garbage," he explains. Giant models compensate for that dirt with raw size. With clean data, a small model could feel "very smart."

The unexpected turn. Karpathy expects no intelligence explosion, only continuity. Computers, mobile phones, the Internet: none altered the GDP curve; everything dissolves into the same ~2% annual growth. "We are living through an explosion," he said, "but we see it in slow motion." His prediction: AI will follow that pattern, spreading slowly through the economy, without the abrupt jump to 20% growth that some have anticipated.

In Xataka | Privacy has been dying since ChatGPT arrived. Now our obsession is for AI to know us as well as possible

Featured image | Dwarkesh Patel

China's plan to keep its military ruthless if electronic warfare shuts down its technology: use its soldiers' brains

In the training camps of the People's Liberation Army, the sound of drones and electronic simulators coexists with something unexpected: the echo of an ancient tradition. Among radars, missiles and touchscreens, some soldiers trace invisible operations with their fingers in the air, moving imaginary beads on an abacus that no longer exists. It is not a ritual or an eccentricity, but a new military experiment: training for the day the machines suffer a blackout.

Calculate with your mind. China has rescued an ancient tradition to apply it to modern warfare: mental calculation with the abacus. In a context of growing dependence on artificial intelligence, the People's Liberation Army has applied a simple logic: train soldiers capable of becoming a kind of "human abacus," ready to operate when digital systems fail. In fact, in a recent exercise, Captain Xu Meiduo predicted the trajectories of three targets in seconds after a simulated radar failure, guiding artillery fire with precision. State television has turned the feat into an emblem of self-sufficiency, a reminder that the human mind remains a decisive weapon even in the age of algorithms.

From the classroom to the battlefield. The program is inspired by an educational practice still common in Asia: the mental abacus, or AMC, an ancient technique that allows complex calculations by visualizing an imaginary abacus. Used in China for more than eight centuries, the discipline has shown measurable cognitive benefits: it improves concentration, memory and reasoning speed. What's more, studies from Harvard and Stanford confirmed a few years ago that children trained with the mental abacus outperform those who learn traditional mathematics in calculation and comprehension. Now the Chinese army is transferring that advantage to the military field, convinced that mental precision and endurance under pressure can make the difference in combat.

Millennial and current.
The abacus, created in China more than 800 years ago and used for centuries in trade and imperial administration, never completely disappeared. Although calculators and computers relegated it to a cultural symbol, schools in China, Japan and Singapore still teach it as a method of cognitive development. Its mental version, based on the imaginary manipulation of beads, has been the subject of neurological studies demonstrating structural changes in the brain. Hence the Chinese army sees this plasticity as perfect training for modern warfare, where mental quickness and calm under stress are as valuable as marksmanship.

Tradition and vulnerability. The goal of the program, it seems, is twofold: to reinforce the cognitive readiness of soldiers and to reduce vulnerability to electronic warfare. In a confrontation where radars, GPS and networks can be knocked out, human calculation capacity becomes a strategic backup. In a way, Beijing also seeks to demonstrate that its military strength does not depend solely on drones or hypersonic missiles, but also on soldiers capable of thinking and deciding for themselves. Against total automation, China aims for balance: an army that is technologically advanced but underpinned by brains trained to calculate without machines, in the conviction that, even in the digital age, war remains a human act.

Between humans and algorithms. In that sense, the contrast with the United States is revealing. While Washington promotes highly trained soldiers and trusts in the superiority of its command systems, the Pentagon warns that excessive technological dependence can be an Achilles heel. US officials have pointed out that when communications are cut and artificial intelligence degrades, what decides a battle is human initiative. From that perspective, China seems to have taken note. Its bet on reviving the mind as a tool of war is not intended to replace technology, far from it, but to complement it.
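The representation that mental-abacus practitioners manipulate in imagination is concrete enough to sketch in code. A minimal model, assuming the standard soroban layout (one "heaven" bead worth 5 and four "earth" beads worth 1 per decimal column); the function names are invented for this illustration:

```python
# Minimal model of a soroban column: each decimal digit is encoded as
# (heaven_beads, earth_beads), where a heaven bead is worth 5 and each
# earth bead is worth 1. Names are illustrative, not from any library.

def soroban_digit(d):
    """Return (heaven_beads, earth_beads) for a single decimal digit 0-9."""
    if not 0 <= d <= 9:
        raise ValueError("a soroban column holds one decimal digit")
    return d // 5, d % 5

def soroban_number(n):
    """Encode a non-negative integer as columns, most significant first."""
    return [soroban_digit(int(c)) for c in str(n)]

print(soroban_digit(7))      # (1, 2): one 5-bead plus two 1-beads
print(soroban_number(408))   # [(0, 4), (0, 0), (1, 3)]
```

Bead-by-bead moves on this fixed, visual representation, rather than abstract digit arithmetic, are what the AMC training drills until the physical abacus can be discarded.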
In a world where machines can fail, true superiority, according to Beijing, may once again lie in the most basic element of all: the human brain.

Image | Picryl

In Xataka | China has asked Russia for an airborne battalion and training. That can only mean one thing: they are preparing a landing

In Xataka | The US studied what would happen if it went to war with China: now it has begun a desperate race to duplicate missiles

This is the Meta plan that has stolen brains from OpenAI and Google

Mark Zuckerberg is handing out checks as if there were no tomorrow. The Meta CEO announced yesterday a major restructuring of its artificial intelligence division, which now aims for "superintelligence." And to reach it, Zuckerberg is poaching engineers from OpenAI, Anthropic and Google, checkbook in hand.

A restructuring with two star signings. The company's new division is called Meta Superintelligence Labs (MSL), and it will be led by Alexandr Wang, former CEO of Scale AI. Wang was Zuckerberg's first star signing: Meta decided to invest $14.3 billion in Scale AI to buy an important stake (49%), but above all to secure Wang's services. The second great signing is Nat Friedman, former CEO of GitHub, who will co-direct the new division with Wang and, according to the announcement, will focus on AI products and on R&D. The internal memo, to which Bloomberg has had access and which CNBC has also shared, reveals more interesting details.

Bonuses of up to $100 million. To build this new team, Meta has offered bonuses of up to $100 million to the most important engineers at other companies. The salaries and compensation offered are not known in detail, but the figures are simply astronomical. Bloomberg notes that the company has offered some researchers shares worth tens of millions of dollars.

Poaching employees from the giants of AI. The engineers Meta has managed to sign come from OpenAI, Google DeepMind and Anthropic. These are experts who were largely responsible for developing and training products such as GPT-4o, Gemini 2.5 or Claude, and precisely that experience is what Meta wants to leverage to make a definitive qualitative leap in dominating the market. Outlets like Wired have published a list of 11 of those star engineers.

And what about LeCun? In that internal memo, however, there is one name that goes unmentioned and is especially relevant.
We are talking about Yann LeCun, who until now had been the visible head of Meta's AI efforts. LeCun has long made it clear that, in his view, generative AI is not the valid path to artificial superintelligence or AGI. He has been very critical of the capabilities of current models, although he has always supported the release of open source AI models, of which Llama is the greatest exponent.

Better to spend a fortune than regret it later. According to Bloomberg, Zuckerberg stated that Meta will spend "hundreds of billions of dollars" on AI projects in the coming years. Last summer he already noted that "it is very likely that many companies are oversizing (their investments in AI). But, on the other hand, I think that all the companies that are investing are making a rational decision, because the downside of being left behind is that you could be left out of the most important technology of the next 10 to 15 years."

And then there are the data centers. Although there are no specific figures for how much money has gone to salaries and compensation across the new division, the Scale AI investment alone ($14.3 billion) and those bonuses of tens of millions of dollars already make the scale of the bet clear. But that's not all: Mark Zuckerberg himself indicated months ago that in 2025 Meta plans to invest $65 billion in data centers. That is a "shy" figure next to the $75 billion Google plans to invest, Microsoft's $80 billion, or the $100 billion Amazon estimates it will pour into these facilities for AI training and inference.

Llama on one hand, new models on the other. Zuckerberg also explained in the memo that he has great confidence in the progress of Llama 4.1 and Llama 4.2, which are the base of Meta AI. However, he says that "in parallel, we will begin to research our next generation of models." But.
The truth is that Llama is today an absolute reference for researchers and startups building their own projects on top of it, but its performance lags behind the most capable models available. Meta has not managed to turn the integration of Meta AI into WhatsApp or Instagram into a preferred option for AI users, and ChatGPT remains the clear leader in this area.

Money is one thing, mission is another. This "galactic" team may have a problem: as Chon Tang of the Berkeley SkyDeck investment fund explained, billionaire salaries can be counterproductive to Meta's mission.

In Xataka | The dark horse of AI is Meta: it has been betting on it for more than a decade and has much more than Llama
