If you downloaded the wrong game on Steam a year ago, the FBI may be looking for you. And yes, it’s the real FBI

By now, we have all learned to distrust some of what we see on the internet: alarmist messages, supposedly official warnings, stories that sound too serious to be true. So if someone tells us that the FBI could be looking for people who downloaded a game on Steam, the natural reaction is to assume it is another hoax and move on. In this case, however, it is worth stopping for a second, because what is in front of us does not fit that usual pattern.

The announcement. As Mein-MMO explains, what we know comes from a clear warning from the FBI itself, which has launched an investigation to identify users who may have been affected after installing certain games on Steam. Specifically, the agency’s Seattle division notes that these titles included malware, something that would have gone unnoticed by those who downloaded them. The time frame is broad, from May 2024 to January 2026, which is where the agency believes the activity was concentrated.

When Valve has to confirm it. The curious thing about this case is that the communication with users has had to overcome an obvious barrier: mistrust. Several users on Reddit point out that Valve sent messages to those who may have been affected to inform them of the investigation, but added a clarification that is unusual in this type of notice: “We can confirm that the message and the linked website are, in fact, from the FBI.” It is not a minor detail, because it reflects the extent to which the context can seem suspicious even when it is legitimate.

Which games are affected? The FBI has narrowed the case down to a handful of games. Bitdefender describes them as indie titles with little visibility on the platform, something that could have made it easier for them to go unnoticed for longer. The games mentioned so far are: BlockBlasters, Chemia, Dashverse/DashFPS, Lampy, Lunara, PirateFi and Tokenova.

What were they really looking for?
At this point, it is important to understand what type of threat the cybersecurity sources that have analyzed the case describe. According to the aforementioned firm, we would be facing what is known as an “information stealer”, a type of program designed to collect sensitive data from the device without the user noticing. Among the information it could extract are credentials stored in the browser, authentication cookies that keep sessions open in different services, and even data linked to cryptocurrency wallets.

The steps to follow. The agency is asking those who believe they may have been affected to fill out a specific form to provide information for the investigation. As the agency itself details, responses are voluntary, but they can serve to identify victims of a federal crime and, in some cases, provide access to services, restitution and rights provided for by law. The FBI also adds that the identity of the victims will be kept confidential.

Images | FBI | Compagnons

In Xataka | There are people earning up to $600 a week talking to strangers. The goal: teach AI to sound human
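As a practical complement to those steps: before filling out the form, a user may simply want to know whether any of the listed titles is still installed. A minimal sketch of such a check in Python follows; the default library path and the match-by-name approach against Steam’s appmanifest files are our assumptions, not part of the FBI notice.

```python
# Sketch: scan a Steam library folder for the titles named in the notice.
# The default install path and the matching-by-name approach are our
# assumptions; adjust STEAM_APPS for other library folders.
import re
from pathlib import Path

STEAM_APPS = Path(r"C:\Program Files (x86)\Steam\steamapps")

# Titles listed in the article, normalized to lowercase without spaces.
AFFECTED = {
    "blockblasters", "chemia", "dashverse", "dashfps",
    "lampy", "lunara", "piratefi", "tokenova",
}

def installed_affected_titles(steamapps: Path) -> list[str]:
    """Return installed game names (from appmanifest_*.acf files) that
    match the affected-title list, case-insensitively."""
    hits = []
    for manifest in sorted(steamapps.glob("appmanifest_*.acf")):
        text = manifest.read_text(encoding="utf-8", errors="ignore")
        match = re.search(r'"name"\s+"([^"]+)"', text)
        if match and match.group(1).lower().replace(" ", "") in AFFECTED:
            hits.append(match.group(1))
    return hits

if __name__ == "__main__":
    if STEAM_APPS.is_dir():
        found = installed_affected_titles(STEAM_APPS)
        print(found or "None of the listed titles found.")
    else:
        print(f"No Steam library at {STEAM_APPS}")
```

Note that an uninstalled game leaves no manifest behind, so a clean result here does not rule out past exposure; the FBI form remains the relevant channel.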

There are people earning up to $600 a week talking to strangers. The goal: teach AI to sound human

In recent months, many of us have spoken to an artificial intelligence without thinking much about it. We have asked it questions, asked it for advice, or simply tested how far its ability to hold a natural conversation goes. Tools like the voice modes of ChatGPT or Gemini have brought that experience closer to something that, not so long ago, seemed reserved for science fiction, with inevitable echoes of ‘Her’. But there is one question we rarely ask ourselves while talking to them: how have these machines learned to sound less and less like a system and more like a person?

To understand it, it helps to separate what we see from what we do not. On one side are the applications we use daily, those assistants that respond with an increasingly natural voice. On the other, the systems that support them: models trained on large volumes of data, which need to learn not only what to say, but also how to say it. We do not know which specific products end up using this type of recording, but we do know that the recordings are part of the ecosystem with which increasingly fluid and credible voice systems are trained.

The human hand behind an artificial voice. When we get into the details, what these workers do does not look much like the classic idea of “training an AI.” In many cases, it involves having conversations with strangers about seemingly trivial topics, from everyday tastes to open questions that require developing an answer. In others, the assignment is more demanding: playing a role, following a script without seeming to, or entering emotional terrain. Bloomberg recounts, for example, the case of a worker who related painful memories of her life while speaking with a man who introduced himself as a pastor and who, within the exercise, played the role of therapist. All that recorded material serves a very specific purpose: capturing nuance.
We are not just talking about words, but about pauses, breaths, changes in tone, hesitations or emotional reactions that make a conversation sound human. There are also labeling tasks, in which workers have to distinguish whether an audio clip contains a sob, a laugh, or someone talking through laughter. The underlying logic is simple: if a machine is to stop sounding robotic, it first needs to be exposed to how we really speak.

From there, the question is inevitable: how do you get this type of job, and how much do you really earn? Platforms like Babel Audio act as intermediaries that connect these workers with specific projects. After passing an initial voice test, they can opt for tasks that start at around $17 per recorded hour, although the final income depends on the evaluation received and the volume of work available. Income also varies greatly: a worker cited by the aforementioned outlet claims to earn about $600 a week.

This is what the Babel Audio website looks like

As we dig deeper, the work begins to show a less visible side. Beyond the rates and the promise of flexibility, the testimonies point to an environment marked by uncertainty and constant control. Platforms can limit access to tasks, interrupt projects or suspend accounts without detailed explanations, leaving many workers in a fragile position. In addition, each conversation is subject to real-time metrics that assess whether someone speaks too much or too little, their expressiveness, language proficiency, depth of exchange and even the length of their pauses.

When we broaden the focus, the debate stops being solely about labor and also becomes personal. Part of the value of these recordings lies precisely in the fact that they capture how we speak and how we relate to each other, which means workers are providing more than just a mechanical task.
The terms generally allow those recordings to be used in voice assistants, speech synthesis, and “other audio-related products and services.” When we connect all the pieces, what we see is an industry that runs on a complex production chain. The Pulitzer Center describes this ecosystem as a fragmented labor network in which workers are usually subject to confidentiality agreements, operate with very little transparency and, in many cases, do not even know what system they are training or what company their work ends up going to. In this context, the conversations that feed voice systems are only one part of a larger machine, where each task contributes to building increasingly sophisticated technologies.

Images | Xataka with Nano Banana 2 | Screenshot

In Xataka | Congratulations, you already program without knowing how to program. Now prepare to wait six weeks for Apple to listen to you

In 1994, a programmer created a “temporary” interface for Windows. Three decades later it is still with us

Windows is one step away from turning 40. The first version of the operating system appeared in November 1985, and it has not stopped evolving since. However, Microsoft tends to take a long time to update some components of its products. With Windows 10, for example, it released a renewed user interface, but it was not until years after launch that it began to get rid of some icons from the Windows 95 era. Now, in Windows 11, it is renewing programs like Paint and Notepad.

Regardless of how modern Windows 11 may feel, and all the new features that come with its updates, the system still retains some elements we could classify as historical. Among them is the utility for formatting disks. Today, if you format a storage drive from Windows 11, you will find a pop-up window practically identical to the one you could find decades ago. In fact, we know exactly who created it.

The Format drives dialog in Windows 10

A former Microsoft programmer named Dave Plummer recently shared some interesting facts about this part of the operating system. The now-entrepreneur says he created the Format dialog box on a rainy morning in late 1994. They were migrating millions of lines of user interface code from Windows 95 to Windows NT, and the formatting section was very different between the two systems, so a new user interface had to be created. Plummer took on the task.

The programmer was not aiming for a definitive job, but for a temporary solution, armed with a sheet of paper, a pen, Visual C++ 2.0 and the Resource Editor. “It wasn’t elegant, but it would do until the elegant user interface arrived,” he says in his message. Plummer also set the 32 GB limit for formatting FAT volumes that same morning. It is a curious detail, because FAT32 is capable of handling much larger volumes; creating them simply requires stepping outside this dialog.
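To put that 32 GB figure in context, the dialog’s cap sits far below what the file system itself allows. A back-of-the-envelope calculation of FAT32’s actual ceiling, assuming the common case of 512-byte sectors (the limit follows from the 32-bit total-sector-count field in the boot sector):

```python
# FAT32 stores the volume's total sector count in a 32-bit field,
# so with 512-byte sectors the addressable volume tops out at 2 TiB,
# far beyond the dialog's 32 GB cap.
SECTOR_BYTES = 512
MAX_SECTORS = 2**32

max_fat32_bytes = SECTOR_BYTES * MAX_SECTORS
dialog_cap_bytes = 32 * 2**30  # the 32 GB limit Plummer baked into the dialog

print(f"FAT32 ceiling: {max_fat32_bytes / 2**40:.0f} TiB")
print(f"The dialog cap is {max_fat32_bytes // dialog_cap_bytes}x smaller")
```

In other words, the dialog refuses volumes 64 times smaller than what FAT32 can address; that gap is a UI decision from 1994, not a file-system constraint.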
The disk formatting utility appeared in Windows NT-based operating systems, such as Windows 2000 and Windows XP, and it has been with us ever since. All this time it has remained, essentially, the temporary solution created in 1994.

Images | Windows | Genbeta

In Xataka | Intel is out hunting for new customers. Its next goal: convince Elon Musk and make chips for Tesla

Three months ago Australia banned social media for those under 16 years of age. It is already investigating possible breaches

Just three months ago, Australia launched one of the most ambitious regulations proposed so far on social networks and minors. The measure came into force on December 10, 2025 with a clear message: force platforms to prevent under-16s from having accounts and give families back part of the control over the digital lives of the youngest. From the first moment it was presented as a pioneering initiative, but something important was also assumed from the beginning: applying it was not going to be easy.

The first doubts. The rule has already entered its most delicate phase: checking whether it is really being applied as planned. The eSafety regulator has opened the first formal review and has put platforms such as Facebook, Instagram, Snapchat, TikTok and YouTube under scrutiny. The agency speaks of “significant concerns” and points to failures in control mechanisms. It also notes that current systems are not effectively preventing those below the age threshold from continuing to open new accounts.

How minors are sneaking in. The report goes beyond a general warning and focuses on very specific failures in the control systems. It has detected that there are not enough safeguards to prevent users under the permitted age from creating new accounts, but also something more striking: some platforms allow verification processes to be repeated until the user manages to pass them. In certain cases, these profiles are even invited to demonstrate that they meet the age requirement after having indicated that they do not, which reveals inconsistencies in how the controls are applied.

A problem that was already anticipated. The difficulties in applying the rule have not arisen now; they were on the table from day one. When the law came into force, the Australian Government itself admitted that its implementation would not be perfect, and the first signs pointed in that direction.
According to ABC, some minors managed to bypass the verification systems with basic tricks, such as altering their appearance in facial checks. The outlet also warned that parents and older siblings could help some children get around the restrictions, an early sign that the challenge was not just passing the law, but making it really work.

What is at stake for the platforms. The investigation opened by eSafety is not just a diagnosis; it opens the door to possible sanctions if it is shown that companies have not taken reasonable measures to prevent minors covered by the rule from having an account. Reuters points out that fines can reach 49.5 million Australian dollars and affect the aforementioned services and platforms. The regulator has already begun collecting evidence and hopes to close at least part of its investigations by mid-year, which places technology companies in a scenario in which non-compliance is no longer just a reputational risk.

The Spanish mirror. What is happening in Australia helps to put into context a debate that has also gained weight in Spain, although here it is at a different point. Pedro Sánchez announced in February that the Government wants to prohibit access to social networks for under-16s, within a broader package of measures on age verification, traceability of hate speech and the responsibility of technology executives. The key difference is that that ban has not come into force and is not being enforced. Still, the Australian case offers a useful reference for anticipating the kind of challenges that may appear when such a measure moves from political announcement to actual implementation.

Images | cottonbro studio

In Xataka | “What the hell is happening with Lidl Spain?”: Germans are speechless at the chain’s comic surrealism

An 18-year-old has designed a filter that eliminates 96% of microplastics

Little by little, microplastics are surrounding us: they are present in drinking water, in food and even inside our bodies. This poses a great environmental and public health challenge, and until now the proposed solutions tended to be quite expensive, relying on special filters to trap these particles. A teenager has changed that idea.

A new filter. Mia Heller, an 18-year-old student, has developed a low-cost prototype water filter that removes almost all microplastics. The most striking thing is that the ‘core’ of the invention is not a microscopic mesh, but a material known as ferrofluid.

How it works. We are talking about a neat piece of applied physics and chemistry: the ferrofluid, when introduced into a volume of contaminated water, causes the microplastics present to adhere to it naturally. Then, once the plastic is ‘impregnated’ with this magnetic liquid, the water passes through a magnetic separation system. All that is needed here are powerful magnets that attract the ferrofluid, dragging the microplastics with it, while the clean water passes through and continues on its course.

The results. This is a simple small-scale prototype, but the metrics show that the invention achieves a microplastic removal rate of 95.52%. And the innovation does not stop there: the system is capable of recovering and recycling approximately 87% of the ferrofluid used, which greatly reduces operating costs and makes the system sustainable.

Its progress. The development of this filter has not remained a simple school experiment.
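The two figures quoted, 95.52% removal and 87% ferrofluid recovery, combine nicely. A quick sketch of what they would imply if each pass through the filter behaved the same way (multi-pass operation is our extrapolation, not something the article reports testing):

```python
# Quick check of the figures above: 95.52 % microplastic removal per pass
# and 87 % ferrofluid recovery per pass. Multi-pass behaviour is an
# extrapolation for illustration, not a tested result.
REMOVAL_PER_PASS = 0.9552
FERROFLUID_RECOVERY = 0.87

def microplastics_left(passes: int) -> float:
    """Fraction of the original microplastics remaining after n passes."""
    return (1 - REMOVAL_PER_PASS) ** passes

def ferrofluid_left(passes: int) -> float:
    """Fraction of the original ferrofluid still in circulation after n passes."""
    return FERROFLUID_RECOVERY ** passes

for n in (1, 2, 3):
    print(f"after pass {n}: {microplastics_left(n):.4%} microplastics left, "
          f"{ferrofluid_left(n):.0%} of the ferrofluid remaining")
```

Under that assumption, a second pass would already push removal above 99.7%, at the cost of topping up the roughly 13% of ferrofluid lost per cycle.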
Mia Heller presented her creation at the Regeneron International Science and Engineering Fair 2025, one of the most prestigious pre-university science competitions in the world, where she competed against some 1,700 students from 62 countries and 49 US states. The jury and the scientific community have praised the young woman’s approach: by avoiding traditional membrane filters, Heller’s prototype sidesteps the problem of clogging from accumulated waste, guaranteeing a constant water flow and requiring minimal maintenance.

A revolution. The most promising thing about this development is its economic viability. Being a low-cost filter made from accessible materials, it has enormous potential for deployment in vulnerable communities or developing areas with considerable difficulty in accessing advanced water purification systems. The next step is to scale the technology for integration into municipal water treatment plants or even home filter systems. What is clear is that science knows no age, and anyone with a good idea can spark a great revolution.

Images | Naja Bertolt Jensen

In Xataka | Someone has declared war on microplastics: their plan is to “wash” semen and rejuvenate from the testicles

Moving ‘Guernica’ is a complex operation that endangers the painting. The Basque Government now wants to do it

‘Guernica’ is an unusual painting in many respects. Its history is unusual. So is the tour that took it across several continents during its first decades. And so is its size, far larger than the vast majority of paintings that hang in museums. This sum of factors explains why it is now at the center of a bitter controversy: the Basque Country wants to take it temporarily from Madrid to Bilbao to mark the 90th anniversary of the bombing that inspired Picasso, but its current custodian, the Reina Sofía, believes it is a bad idea. The debate is served.

What has happened? The Basque Government wants ‘Guernica’, probably Pablo Picasso’s most famous work, finally exhibited in Euskadi. A few days ago, during a meeting with the Minister of Culture, the vice lehendakari Ibone Bengoetxea asked the Government to temporarily transfer the painting to the Guggenheim in Bilbao. She was not the only one: Lehendakari Imanol Pradales has conveyed the same request to the President of the Government. The idea is that ‘Guernica’ would spend nine months on Basque soil, from October 2026 to June 2027. After that period, it would return to what has been its home since the early 1990s, the Reina Sofía Museum in Madrid, where it acts as the main attraction, drawing tens of thousands of visitors.

Why is it important? Because of its symbolic weight. ‘Guernica’ is not just any painting. Picasso painted it between May and June 1937 in his studio on Rue des Grands-Augustins, Paris, commissioned by the Government of the Republic. The work is inspired by one of the most disastrous episodes of the Civil War: the bombing of the town of Guernica (Vizcaya) at the end of April 1937 by the Condor Legion and the Italian Legionary Aviation.
Although during its first decades it was the protagonist of an intense journey through much of Europe, North America and South America, the work did not land in Spain until September 1981. Some historians, like The Barroquista, have interpreted its arrival as “the symbolic return of the last exile.”

And why is it news? That Euskadi wants it exhibited in Bilbao precisely between October 2026 and June 2027 is no coincidence: the dates would coincide with the 90th anniversary of the constitution of the first regional Executive and of the bombing of Guernica. Hence Bengoetxea has insisted on the “deep historical, symbolic and emotional meaning” that the transfer would have for the Basque people.

Will it be possible? It certainly will not be easy. Just one day after the meeting between Bengoetxea and the Minister of Culture, the Reina Sofía Museum published a 16-page report in which it “strongly advises against” the transfer of the painting from Madrid to the Basque Country. The reason: the process could damage it. “The work is kept in stable conditions thanks to rigorous control of the environmental conditions. However, in view of a possible transfer, its format, the nature of the elements that compose it and its state of conservation, together with the numerous damages suffered over time, make it especially sensitive to all types of vibrations that are inevitable in transporting works of art.”

Does it say anything else? Yes. In case there were any doubts, it underlines: “Such vibrations could generate new cracks, lifting and loss of the pictorial layer, as well as tears in the support.” The Reina Sofía’s opinion has not, of course, pleased the Basque Government, dissatisfied with both the substance and the form. “It would be serious for a formal request from a government to be answered without a serious and in-depth analysis. What is needed is an analysis of the requirements for the painting to be in Euskadi temporarily,” claims Bengoetxea.
The regional Executive emphasizes that this is not a simple technical issue. In the background, they insist, there are much deeper readings that touch on “memory” and “reparation.” The vice lehendakari also complains that, for the moment, it has received no official response from Moncloa.

Is it that surprising? Yes and no. Everything that revolves around ‘Guernica’ arouses expectation, understandable if one considers that the work’s artistic value is compounded by its historical and symbolic relevance. However, the Reina Sofía itself has been at pains to point out that its position is not new. In fact, it has been closing the door on institutions requesting the work on loan for several decades. In 1997 it already said ‘no’ to a request for the painting to be included in the inauguration of the Guggenheim in Bilbao, one that arrived backed by a report detailing “the technical conditions” of the transfer.

Have there been more cases? In 2000 it denied a request from MoMA, in 2006 it did the same with the Royal Ontario Museum, and in 2007 it rejected another request from the Basque Government. Two years later it again said ‘no’ to the Fuji Group, interested in including the piece in the “50th Anniversary Fuji TV” exhibition held in Tokyo, and in 2012 it also rejected a request from a Korean museum. The painting’s last trips date back decades: in 1981 it was packed up at MoMA for transfer to Spain, where it was first exhibited at the Casón del Buen Retiro and later (from 1992) at the Reina Sofía. There, the exhibition “Piety and Terror in Picasso”, organized for the work’s 80th anniversary, alone attracted more than 625,000 visitors, in less than half a year.

Is it so problematic to move it? The report published by the Reina Sofía Museum does not merely advise against the transfer of ‘Guernica’.
Before reaching that conclusion, it offers a detailed analysis of the current state of the painting, in which it notes “alterations such as cracks …

The biggest find in twelve years of GTA archeology came from an Edinburgh flea market and a used Xbox 360

It is fascinating when, years (even decades) after a game’s release, details come to light that nobody had found before: secret levels in classics everyone had examined from cover to cover, unrevealed meanings, unsolved puzzles… and sometimes versions of games that should never have seen the light of day, offering clues about the ideas considered during development. The latest case in point: ‘GTA IV’.

What has happened? Last weekend, a GTAForums user known as janmatant paid £5 at a flea market in Edinburgh for an Xbox 360 in rough condition. At home he discovered that the console was running Xshell, the operating system for Microsoft development kits. The 120 GB hard drive contained a single game: a beta version of ‘Grand Theft Auto IV’ dated November 2007, several months before its commercial release. The treasures he found were poured into the GTA IV Beta Hunt thread, which has been tracking unreleased content from the game since 2014 (and which has gained 14 new pages of comments since janmatant’s post).

On the trail of GTA IV. That the discovery occurred in Edinburgh is no coincidence. Rockstar North has been based in the Scottish capital since its days as DMA Design, in 1987, and that is presumably how the console ended up in the hands of a scrap dealer, a process that clearly should not have happened. Development kits are proprietary hardware that Microsoft distributes exclusively to studios (and, in those days, also to the press) to run games in conditions close to the final hardware. In theory, at the end of a project cycle those units are returned or destroyed, but that was not the case here.

118 gigabytes of Liberty City. After confirming by the serial number that the devkit was authentic, janmatant uploaded its contents to the Internet Archive under the title “Great Stealing of Vehicles four XDK”. The 118 GB image is executable on a real Xbox 360 with debugging tools, although a fully playable version is not yet ready.
The most immediate find was the Liberty City ferries. The boats appear in the game’s first trailer and in some cutscenes, but in the final game they are just set dressing. The realistic ‘GTA IV’ opted for a world focused on cars and taxis, and back in the day Obbe Vermeij, former technical director of Rockstar North, recounted that the ferries were removed late in development, with their models already finished.

Zombie mode. There had always been rumors of a zombie mode, for which solid evidence had never surfaced. In this build we find hospital beds with direct references to zombies, early models of infected characters and several animations associated with this variant. The Cutting Room Floor, the wiki dedicated to documenting cut content in video games, had already listed the project as “Z: Resurrection” based on code fragments found in the final version, but without visual material to support it. A former Rockstar developer has played down the matter somewhat: according to him, zombie mode was simply an “experiment” that artists and programmers developed in parallel, not a formal production line. That does not make the discovery minor; rather, it shows that the creative leeway within Rockstar North in 2007 allowed a team to test survival-horror mechanics during development.

Other divergences. The build includes other substantial differences from the final game. The silenced pistol is in this version’s arsenal, along with other unfinished weapons and a notable number of incomplete animations and unreplaced audio markers, as you would expect from any half-developed game. The models of some NPCs differ from the final ones, and Michelle, the FIB informant who appears as Niko Bellic’s early romantic interest, has a look here that forum users describe as strangely disturbing. What may be most surprising to any fan of the game is that about half of the radio stations sound completely different.
‘GTA IV’ has one of the most elaborate soundtracks in the saga, with dozens of licensed real-world tracks distributed across themed stations. That half of that content changed between November 2007 and the April 2008 release says a lot about the licensing negotiations in the final phases of development.

What will Rockstar do? After everything that has happened, Rockstar Games and Take-Two have not issued public statements. Although both companies have a reputation for relentlessly pursuing leaks, the author of this one purchased the console legally. In any case, he has put the devkit up for sale on eBay for £800. That is not much for material of this magnitude, but the truth is that, once on the internet, access to these secrets is universal.

In Xataka | The best video games of 2026 and the most interesting ones to come

The tech sector has more job offers than ever, but it is harder than ever to find work

The technology sector has never had so many open vacancies, and yet finding a job in it has become harder than ever. This apparent contradiction is not just a feeling: the data confirms it, and it has everything to do with how AI is redrawing the map of who has a place, and who does not, in technology companies. A detailed analysis by Lenny Rachitsky, an expert on the tech labor market and host of the popular Lenny’s Podcast, offers a picture that invites reflection. The figures are the most optimistic recorded in the four editions of his report on the state of employment in the tech product sector, but the reality of many professionals looking for a new job contradicts that on-paper optimism.

The numbers are deceiving (or at least, they don’t tell the whole story). According to the data collected by Rachitsky through TrueUp, a platform that tracks job offers at more than 9,000 technology companies worldwide, there are more than 7,300 open vacancies for Product Manager profiles globally, 75% above the level recorded at the beginning of 2023 and almost 20% more than at the beginning of this year. In engineering, the figure is even more striking, with more than 67,000 active offers worldwide and 26,000 in the US alone. However, more vacancies do not automatically make it easier to find a job. Rachitsky himself acknowledges in his report that many people are having a hard time searching, and that this does not change just because the overall numbers are good. The labor market is growing, yes, but not at the same rate for everyone, nor for all profiles.

The boom in roles linked to AI. The great catalyst for this growth is AI. Jobs related to its development and implementation are skyrocketing compared to other technology roles, something Rachitsky describes as a hockey-stick-shaped growth curve.
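For reference, the percentages quoted for Product Manager vacancies pin down the earlier levels they are measured against. A trivial back-calculation (the baselines are inferred from the article’s figures, not reported by TrueUp directly):

```python
# The article quotes ~7,300 open Product Manager vacancies, described as
# 75 % above early 2023 and almost 20 % above the start of this year.
# The earlier levels are inferred here from those percentages.
CURRENT_PM_OPENINGS = 7_300

baseline_early_2023 = CURRENT_PM_OPENINGS / 1.75  # implied early-2023 level
baseline_year_start = CURRENT_PM_OPENINGS / 1.20  # implied start-of-year level

print(f"implied early-2023 PM openings:    ~{baseline_early_2023:,.0f}")
print(f"implied start-of-year PM openings: ~{baseline_year_start:,.0f}")
```

That puts the early-2023 baseline at roughly 4,200 openings, which is what makes the current 7,300 a 75% jump rather than a marginal recovery.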
Demand for these software engineering profiles comes both from AI-native companies (such as OpenAI, Anthropic or Cursor) and from non-technology companies looking for product managers who specialize in integrating these technologies into their processes. A report from the London School of Economics confirms that more than 76% of product managers expect to expand their investment in AI in 2026, which has driven demand for managers capable of translating the capabilities of AI models into concrete products. The profile companies are looking for, however, is very specific: not just any candidate with AI on their résumé will do, but professionals experienced in implementation and able to make decisions in environments where AI is already part of the development process.

Side B: junior profiles are left out. This is where the other side of the paradox comes in. Anthropic’s report ‘Labor market impacts of AI: A new measure and early evidence’ reveals that overall unemployment among the workers most exposed to AI has not increased significantly since the arrival of ChatGPT, but there is a worrying signal in the hiring data for the youngest. Specifically, the study finds that, since 2024, workers between 22 and 25 years old have become increasingly less likely to be hired into the jobs most exposed to automation. The hiring rate for these positions has fallen by roughly half a percentage point, reducing by up to 14% the probability that a young worker finds a job in those occupations, relative to levels prior to the launch of ChatGPT. For workers over 25, however, the same drop is not observed.

Design, the great forgotten of the recovery. There is another profile that the recovery of the tech labor market seems to have left aside: design.
While product and engineering roles have been growing for two years, vacancies for designers have practically stagnated since the beginning of 2023, with around 5,700 global offers compared to more than 7,300 for product. The analysis firm Humbl Design confirms in its January 2026 report that design roles oriented toward routine execution will barely grow between 2% and 3% through 2034, while profiles specialized in strategy and problem solving project an increase of 16% over the same period. AI has a lot to do with this stagnation. Its ability to accelerate the work of engineers has reduced dependence on traditional design processes, especially in the prototyping and visual-variant generation phases. In other words, AI has assumed that role, which is now executed from the development departments, so companies no longer need as many designers.

In Xataka | "The world is in danger": Anthropic's security manager leaves the company to write poetry

Image | Unsplash (Mimi Thian)

We will not have to resort to our feces to grow plants

In The Martian, the character played by Matt Damon was forced to use his own feces to grow potatoes in the inhospitable Martian soil: the nutrient-poor dust prevents any plant from growing in it, so he had to obtain nutrients desperately. In the future the story could repeat itself, but a team of German scientists has found a somewhat more elegant way to grow crops in the soil of Mars: using cyanobacteria as fertilizer.

A lifeless soil. The dust that covers the Martian surface, known as regolith, is rich in minerals but lacks the organic nutrients plants need to grow. Without them, any future attempt to grow plants directly in Martian soil would be doomed.

The rest of the planet doesn't help either. Soil is not the only limiting factor for growing plants on Mars. The extreme temperatures, which can fall to around -60 ºC, and an atmosphere composed mainly of carbon dioxide are not great incentives either. On top of that, there is the lack of liquid water, and cosmic radiation seriously endangers any known form of life.

The win-win of agriculture. Growing food on Mars would be very advantageous for obvious reasons, such as feeding astronauts, but also because plants generate oxygen through photosynthesis; considering how unbreathable the Martian atmosphere is, that would be very valuable. The problem is that, to do so, it is not enough to use feces in the purest style of The Martian.

The enemy is in the ground. Martian regolith is known to be covered in perchlorates, toxic salts that hinder plant growth at many levels: for example, they prevent germination and alter plant metabolism. Fortunately, specific points have been detected on the planet where the wind has caused gypsum to accumulate, displacing the perchlorates. Since there are plants that benefit from gypsum as a substrate, one could try growing them in those spots.
The problem is that this solution greatly reduces both the growing locations and the plants that can be chosen.

A peculiar pantry. Many scientists have spent years researching ways to improve the diet of future space colonizers. If food cannot be grown directly in the ground, it will have to be grown inside the habitats themselves. For example, lettuce was grown on the International Space Station as early as 2015. Much more recent is the cultivation of tomatoes on the Chinese space station Tiangong; in this case, it was achieved in the air, thanks to a technology that sprays water and nutrients as a mist to feed the plants' roots directly. And if it gets complicated with plants, there is always the option of crickets, one of the European Space Agency's bets for the future.

Cyanobacteria to the rescue. Cyanobacteria are capable of using the carbon dioxide so abundant in the Martian atmosphere to generate oxygen, in a process that can also extract nutrients from the mineral-rich dust of the Martian soil. For this reason, a team of scientists from the University of Bremen has tried using them as fertilizer. To do this, once the cyanobacteria have been cultured, the researchers resorted to anaerobic fermentation, carried out by inoculating bacteria that metabolize the cyanobacterial biomass in the absence of oxygen. These bacteria are capable of growing in high concentrations of perchlorates, and during fermentation they release nutrients that are very beneficial for plants, such as ammonium. In laboratory tests, in which this fertilizer was used to grow lentils, 27 grams of lentils were obtained from a single gram of cyanobacteria processed through fermentation.

By the way, a little fuel. The fermentation process also releases methane, which can be used as fuel. These are all advantages for a future colonization of Mars.

It's not over yet. With this type of fertilizer, some of the barriers that prevent farming on Mars would fall.
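The reported 1:27 yield can be put into rough numbers. A quick back-of-the-envelope sketch in Python (the daily lentil intake below is an assumed illustrative figure, not something from the study):

```python
# Back-of-the-envelope scale check for the Bremen team's result:
# 1 g of fermented cyanobacteria biomass -> 27 g of lentils (figure from the text).
LENTILS_PER_GRAM_CYANO = 27.0  # g of lentils per g of processed cyanobacteria

# Hypothetical assumption (NOT from the study): an astronaut eating
# 100 g of dry lentils per day as part of a varied diet.
daily_lentils_g = 100.0

cyano_needed_per_day_g = daily_lentils_g / LENTILS_PER_GRAM_CYANO
cyano_needed_per_year_kg = cyano_needed_per_day_g * 365 / 1000

print(f"Cyanobacteria needed per day:  {cyano_needed_per_day_g:.1f} g")
print(f"Cyanobacteria needed per year: {cyano_needed_per_year_kg:.2f} kg")
```

Under that assumed intake, a little over a kilogram of processed cyanobacteria per year would cover one crew member's lentils, which illustrates why such a yield ratio is attractive for a mass-constrained mission.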
However, it should be noted that the study was carried out with a Martian regolith simulant, without reproducing the external conditions of the red planet: neither the extreme temperatures, nor the low gravity, nor the cosmic radiation were taken into account. The team hopes to test these cyanobacterial fertilizers again in a much better simulated environment, a necessary step toward the day when food can truly be grown on Mars.

Image | The Martian

In Xataka | Interview with the author of The Martian: a story more of science than fiction

The US has invested 16 years and 8 billion dollars in renewing the software of its GPS network. Result: a failure of epic proportions

The Next-Generation Operational Control System (OCX) project was going to modernize the United States' constellation of more than 30 GPS satellites. RTX Corporation (previously known as Raytheon) won the project in 2010 with a budget of 3.7 billion dollars. The project was supposed to be completed in 2016, but in reality the US has spent 8 billion dollars, and 16 years later it has an absolute disaster on its hands.

16 years of broken promises. In 2010 the iPad had just appeared on the scene and cloud computing was still a somewhat diffuse concept. The US Government's plan was reasonable: OCX would be operational by the time Lockheed Martin's new GPS III satellites debuted. Instead, the development became a chaos of bugs and requirement changes, and to this day it is unclear when, if ever, it will be completed.

A fortune invested. The financial management of the project is the first big disaster. The initial budget was estimated at 1.5 billion dollars, but between the award and today that figure has risen to almost 7.7 billion current dollars, to which another 400 million are added to support an improved version of the satellites, the GPS IIIF. For the most part, this increase is not because the project suddenly became much more ambitious or capable, but because of the cost of fixing everything that has gone wrong since work began.

Software costs more than satellites. Every time the software fails an integration test, the bill runs into tens or hundreds of millions of dollars. That has made OCX one of the most expensive and least efficient software projects in recent US military history. In fact, it far exceeds the cost of the very satellites it was meant to control: the 22 GPS III satellites of the contract signed in 2018 have a budget of 7.2 billion dollars.
Satellites of the future controlled by a fairground shotgun. The United States currently has a fleet of GPS III satellites in orbit capable of emitting much more powerful, interference-resistant "M-code" signals, something that, among other things, makes them especially suited to military applications. The problem is that, since the OCX software does not work, they are being managed with control systems inherited from the 90s. It is as if we had a VHS player hooked up to watch movies on an 8K Smart TV: the potential is there, but one of the components is an absolute bottleneck.

The cybersecurity nightmare. One of the big problems of this project has been its cybersecurity requirements. OCX was designed to resist cyberattacks from powers such as Russia or China, but that requirement has become a spectacular technical burden. Pentagon standards have evolved so quickly that they could not be adapted to an architecture that is starting to become obsolete, and the succession of patches is locking the system into a vicious circle: the software is never finished because more and more vulnerabilities keep appearing.

Failed tests. The latest report from the Government Accountability Office (GAO) has been the final straw. During testing the system once again showed instability, which has forced the final delivery to be delayed to the end of 2026 or even 2027. Frank Calvelli, of the Air Force, has expressed his dissatisfaction with what he considers unacceptable management by private industry: the strategic advantage this project should offer at a time like this remains inaccessible due to its disastrous progress.

It's not that difficult.
For a long time, the excuse used to justify the delays was that OCX was "the most complex software ever created for space," but other players in the sector have shown that these kinds of technical milestones are achievable. SpaceX has demonstrated it with technical "miracles" like its reusable Falcon 9 or the development of Starship, so those arguments are now falling on deaf ears.

Waiting for a better GPS. These problems also affect us end users, who will not be able to enjoy the L5 signals for now. This much more robust frequency will significantly improve accuracy in urban centers with many tall buildings. The irony is tragic: we cannot use extraordinary space infrastructure because the ground stations cannot keep up with it. While we wait for the problems to be resolved, the lesson is clear: software cannot be a monster that takes 16 years to build.

And meanwhile, as always, China. While the US crashes against its project to renew the GPS constellation, China has once again managed to "become independent" from Western technology. Its satellite navigation system, Beidou, does not replace GPS, true, but it already complements it in 140 countries. Once again, China's long-term view has its obvious result: it has taken 20 years to deploy its constellation, but it already surpasses the GPS system in metrics such as signal availability and integrated messaging services. Europe, by the way, also has its own alternative.
In Xataka | GPS "dead zones" are spreading around the world: jammers are to blame for confusing drones

The news "The US has invested 16 years and 8 billion dollars in renewing the software of its GPS network. Result: a failure of epic proportions" was originally published in Xataka by Javier Pastor.
