Scientists covered the outside of the ISS in moss with a single objective. Now the possibility of “terraforming” Mars is closer

Last year, scientists published the results of a study in which they described how they had covered the outside of the International Space Station (ISS) with moss. Although the study was published in December, it was not a Christmas decoration strategy. They wanted to check whether this primitive plant is capable of surviving the inhospitable conditions of space. The results were so positive that they could take humanity one step closer to terraforming Mars.

A primitive plant to start a new life. The first plants that appeared on Earth were bryophytes, more specifically mosses. They are very resistant plants, capable of growing directly on rocks. From there, they can photosynthesize if they have the right water and nutrients. It is a process in which they capture carbon dioxide from the atmosphere and generate oxygen. In addition, they generate organic matter that, upon dying, becomes the perfect substrate for other, more complex plants to grow. That is why the study was carried out: to see if moss can survive in space. It was proven that it can, so it could be an interesting candidate for terraforming Mars or the Moon.

The study. Basically, what the study showed is that the mosses exposed on the outside of the ISS were able to survive for 283 days in extremely cold temperatures and under very intense ultraviolet radiation. When they were returned to Earth after that period, more than 80% had survived. In fact, when planted, they germinated.

Carl Sagan already predicted it (more or less). The dream of terraforming other planets is not new, although it is true that for a long time it was almost a fantasy. In 1961, for example, Carl Sagan made an interesting proposal to terraform Venus. This planet neighboring Earth is covered by a dense layer of clouds. Since clouds here on Earth are usually made of water, the famous astrophysicist proposed seeding cyanobacteria inside them.
These microorganisms have the ability to carry out photosynthesis, like plants. Therefore, they could consume carbon dioxide and generate oxygen. The problem is that it was later discovered that the clouds of Venus are actually made of sulfuric acid, so his proposal became unattainable.

Proposals to terraform Mars. No further proposals have been made to terraform Venus, but there have been proposals to do the same with Mars. It is also quite inhospitable, but it has far more potential. In fact, last year a study was published in Nature that discussed the possibility of turning the red planet into something similar to Earth in only four steps. The first would be to melt the ice so that it becomes an immense ocean of liquid water. For this, the temperature would have to be increased by at least 30 ºC, and that requires heat. The second step, therefore, is to obtain that heat. It was proposed to use solar sails that direct most of the solar radiation onto these ice reserves. Aerosols could also be dispersed in the atmosphere to create a kind of greenhouse effect, further retaining solar radiation inside the planet.

A vaulted habitat. Although Mars has its own atmosphere, it would have to be reinforced with something that would allow a biosphere to be created. Therefore, it would be interesting to build vaults into which to introduce the first Martian inhabitants.

Life that brings more life to Mars. Finally, it would be necessary to use genetically modified extremophilic microorganisms. Extremophiles are microorganisms capable of surviving in extreme conditions: for example, those that thrive in media with high salt concentrations or at very high or very low temperatures. Even so, it would be necessary to genetically modify them to make them even more resistant to extremely low temperatures and pressures. These microorganisms would be photosynthetic, so that they generate oxygen and organic matter.

Moss comes into play.
Following the results of the International Space Station experiment, it is clear that moss could be a good complement to these extremophile microorganisms for terraforming Mars. Unfortunately, it is estimated that we will have to wait at least 100 years for the technologies necessary to meet all the requirements. It is a long time, but compared with everything humanity has already waited for, it would only be a little longer. For now, as the road safety advertisements say, the important thing is to arrive. There are already space agencies trying to take that first step. Let’s start there.

Image | Julius A OBARO (Wikimedia Commons) and Freepik

In Xataka | Chernobyl was filled with fungi after the nuclear accident. Thanks to them we discovered a “new form of photosynthesis”

‘GTA Online’ has been making more revenue than many new games for 13 years

A game released in 2013 for previous-generation consoles is earning more than a million dollars a day in 2026. It is the online mode of ‘Grand Theft Auto V’, which has just a few months left before its own creators try to replace it with the most anticipated sequel in video game history, ‘GTA VI’. The dilemma this poses for Rockstar is unprecedented (and the figures that prove it surfaced in a not exactly official way).

The hacking. On April 11, the ShinyHunters group (responsible for previous security breaches at Ticketmaster and Santander bank, among others) accessed the Rockstar Games servers through an exploit in cloud management software. The company confirmed the attack, although it described it as having limited impact on its operations. What the hackers posted after Rockstar refused to pay a ransom for the information was not code from the long-awaited ‘GTA VI’, but business metrics extracted from the internal analytics platform Anodot. And what they leaked was not bad news, quite the opposite: Take-Two’s shares rose as soon as it went public.

A million a day. According to the leaked information (which Rockstar has neither confirmed nor denied), ‘GTA Online’ earned an average of $9.59 million per week between September 2025 and April 2026, with a weekly maximum of almost $28 million and a minimum of $4.7 million. The annual figure is around $500 million. More than a million a day: not bad for a game that debuted in 2013.

How they do it. The backbone of the model is Shark Cards, packs of virtual currency that players purchase to acquire cars, properties or weapons within the game. Shark Cards generated more than $5 billion between 2014 and 2024, but only 4% of the active player base has ever spent real money on the game. They are the so-called “whales”, users who concentrate practically all of the spending and generate these exorbitant revenues.
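The leaked weekly figure squares with the “around 500 million a year” and “more than a million a day” claims; a quick sanity check (figures as reported above):

```python
# Sanity-check the leaked 'GTA Online' revenue figures.
weekly_avg = 9.59e6  # average weekly revenue, Sep 2025 - Apr 2026 (leaked)

annual = weekly_avg * 52  # extrapolated to a full year
daily = weekly_avg / 7    # average per day

print(f"annual: ${annual / 1e6:.0f}M")  # annual: $499M
print(f"daily:  ${daily / 1e6:.2f}M")   # daily:  $1.37M
```

Both outputs match the article’s rounded claims: roughly half a billion a year and comfortably over a million a day.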
‘GTA Online’ is, in that sense, a business model like the most aggressive free-to-play games.

The cherry on top: GTA+. Added to all this is GTA+, a paid monthly subscription that Rockstar launched in 2022 and which, according to the same leaked data, reached its peak of 1.3 million subscribers in December 2025, coinciding with the launch of the ‘A Safehouse In The Hills’ update. It added luxury mansions to the game and resumed the narrative of ‘GTA V’ with the reappearance of one of its protagonists, Michael.

The death of ‘Red Dead Online’. These figures also explain why Rockstar stopped updating ‘Red Dead Online’ regularly. The online mode of ‘Red Dead Redemption 2’ generated an average of $507,000 per week between June 2024 and April 2026, compared to $9.59 million weekly for ‘GTA Online’. Take-Two was immediately clear about where resources needed to be concentrated.

Coexistence. Although Rockstar has not officially detailed what its online component will look like, leaked court documents suggest that ‘GTA VI’ will include a multiplayer mode (something that, now that the colossal income of ‘GTA Online’ is known, no one doubts). But at the same time, that is the problem: in a message to shareholders in February, Take-Two CEO Strauss Zelnick said that “I have every reason to believe that we will continue to support ‘GTA Online’. There is a large community that enjoys it and remains engaged.” Does that imply two simultaneous live services, with the consequent investment in resources, updates and player support?

There is a precedent. When ‘GTA Online’ arrived on PS4 and Xbox One in 2014, Rockstar did not shut down the PS3 and Xbox 360 versions. Both generations coexisted for more than a year receiving the same content, and in 2015 the ‘Ill-Gotten Gains Part 2’ update was the last significant one for the old platforms. Even so, those servers were not permanently turned off until December 2021, six years later.

The history problem.
If they were two different services, the online mode of ‘GTA VI’ would arrive without twelve years of updates, without thousands of accumulated missions, without the ecosystem of properties, vehicles and businesses that ‘GTA Online’ players have built for more than a decade. The examples of sequels that failed to attract the players of the previous game are numerous, but ‘Payday 3’ stands out among them all (the player count for ‘Payday 2’ is still five times higher).

Big losses. Players who have spent years accumulating virtual money, garages, businesses and personalized clothing in ‘GTA Online’ will hardly dare to start from scratch. And for the moment no one has dared to talk about a transfer of assets between the two games: the practical and design implications of such an exchange make it practically unfeasible. (And that is without mentioning FiveM, the community role-playing mod based on ‘GTA V’ that Rockstar bought in 2023 and which is still an active source of income.)

Most likely. As Kotaku predicts, we will most likely see a multi-year period of coexistence, with Rockstar gradually trying to move the user base from one game to the other while keeping both running. A controlled closure of ‘GTA Online’, indefinite maintenance with minimal updates, or something in between? It all depends on how fast ‘GTA VI Online’ grows into a game that, let’s not forget, will attract thousands of new players. At the moment, the bar it has to clear is very high.

In Xataka | In a time when almost no one develops their own graphics engine, ‘GTA VI’ arrives to punch the table

An “early summer” arrives in April

In April 2023, an anomalous warm episode broke absolute records across the southwest. Thermometers marked 38.8 degrees in Córdoba, 37.4 in Morón and 36.9 in Seville. It seemed like an isolated event, something related to the El Niño episode that was about to begin that May. But no: summer arrives this weekend in much of the country.

What does AEMET say? According to the agency, “temperatures are expected to be noticeably higher than normal for this time of year, especially the maximum temperatures, which will be more typical of early summer.” And no, it is not an exaggeration: according to ECMWF data, a large part of the southwest of the peninsula and the Ebro valley will be in the 99th percentile compared to the 1991-2020 reference period. That is, those areas are going to experience some of the warmest temperatures ever recorded for April 20.

The first heat wave of the year? No, temperatures will not climb that high. It must be remembered that, from a technical point of view, thermometers would have to read temperatures that would be high even for July and August for three days in a row. Obviously, although the weekend is going to be very hot for April, we are not going to reach those extremes. However, that does not mean it is not dangerous. It is well documented that the first warm events of the year carry the greatest risk because they usually catch us off guard. It is true that 30 degrees will be more uncomfortable than dangerous for most citizens, but the most vulnerable should pay attention.

Spring is not what it used to be. Not even a year as strange as this one is free from this type of phenomenon. And that is driving farmers across the country crazy: if a couple of weeks ago the frosts crushed the crops, this Saturday the heat will do its thing, just when the grain heads, the stone fruit sets and the late flowering of the olive tree occurs in the south.
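The “99th percentile versus the 1991-2020 reference period” idea can be illustrated with a toy computation. The reference temperatures below are synthetic (invented for illustration, not AEMET or ECMWF data); only the method is the point: ranking a forecast value against a 30-year sample for the same calendar date.

```python
# Illustrative only: where does a forecast maximum temperature fall
# within a 30-year reference sample for the same date?
# The reference values are synthetic, not real climatological data.
import random

random.seed(42)
# Fake 1991-2020 reference: 30 April-20 maximum temperatures (deg C).
reference = [round(random.gauss(24.0, 3.0), 1) for _ in range(30)]

forecast = 34.0  # hypothetical forecast maximum

# Empirical percentile: fraction of reference values below the forecast.
percentile = 100 * sum(t < forecast for t in reference) / len(reference)
print(f"forecast sits at the {percentile:.0f}th percentile")
```

A value at or above the 99th percentile means it exceeds essentially every April 20 in the reference period, which is what makes the forecast exceptional rather than merely warm.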
January 2026 has given us many reasons for optimism, but it is very difficult to face the future without seeing all the meteorological setbacks that lie ahead.

Image | BenBaso | Xataka

In Xataka | AEMET has just made it official: Spain faces its first risky heat wave of the year this weekend

What is a Roman bust doing in a pre-Hispanic tomb in Mexico?

It is not strange that from time to time archaeologists surprise us with fascinating finds: a bone that tells us about shows with wild animals in Roman Britannia, a stalactite that reveals to what extent the Mayans suffered from droughts, a 16th-century wreck sunk with part of its crew’s menu… The list is long, but in it it is difficult to find milestones like the one left 90 years ago by an excavation at the site of Tecaxic-Calixtlahuaca, in Mexico. While studying a pre-Columbian tomb, researchers located what appears to be part of a Roman sculpture, a figure that some experts date to the 2nd and 3rd centuries AD. The question is obvious: how on earth did it get there?

First jump back: 1933. To understand the enigma we have to jump back 90 years, to 1933, when a team led by José García Payón was excavating the Tecaxic-Calixtlahuaca site, 65 kilometers northwest of Mexico City. There the experts located a funeral offering which included pieces of gold, copper, turquoise, rock crystal, jet, ceramic… and something much less common in a pre-Columbian funerary trousseau: a terracotta head.

Two big unknowns. The bust in question shows a bearded face, with a style, features and even a hairstyle that fit better in ancient Rome than in pre-Hispanic America. The piece is so curious that in recent decades it has fascinated archaeologists and led to several investigations that try to answer two big questions: where did the figure come from? And how on earth did it end up among the offerings in a tomb from the late 15th century? The scope of the mystery is better understood with a fundamental piece of information from the 1933 excavation: the head did not appear in an open (and manipulable) space, but among offerings buried under three intact floors of a pyramidal structure.
That is to say, everything indicates that no one altered the trousseau since the date of the burial, which experts place between 1476 and 1510. If that small bearded bust that looked like something out of ancient Rome was there, it was, in theory, because someone deposited it before sealing the tomb.

Second jump back: the 2nd century AD. The head of Tecaxic-Calixtlahuaca remained shrouded in unknowns until the early 1960s, when Ernst Boehringer, president of the German Archaeological Institute, suggested that it was probably of Roman origin and had been made between the 2nd and 3rd centuries AD. He is not the only one who thinks this way. Bernard Andreae, another eminent archaeologist, shares the hypothesis and has even gone one step further: “The hairstyle and the shape of the beard present the typical features of the period of the Severan emperors (193-235 AD).” In case there were any doubts, in the mid-90s the University of Heidelberg, in Germany, subjected the figurine to a thermoluminescence dating test. The time frame it provided is much broader, but it clears up part of the mystery: it concluded that the head had to have been manufactured between the 9th century BC and the 13th century AD. Some sources even narrow that window to between the 2nd century BC and the 6th century AD. If we take into account that the rest of the items of the funerary trousseau were from the Aztec-Matlatzinca era (15th-16th centuries AD), the question arises again: how do you explain that an ancient Roman figurine ended up buried there?

And what is the answer? The reality is that experts only handle hypotheses, not certainties. Some are fascinating. Others not so much. Among the latter there is one that has been on the table for a while and explains why for decades the academic world viewed the Calixtlahuaca figurine with a certain suspicion.
We can accept that the bust is Roman, even that it was made at the beginning of our era and ended up in a pre-Columbian tomb that remained sealed until the 1930s; but that does not mean we have to accept that the figure was buried there at the end of the 15th century. How is that possible? Very easy: perhaps someone placed it there 90 years ago, during the García Payón excavation. “It could be a hoax, it could be a Roman figurine planted at the site or in the laboratory,” suggests Michael E. Smith, professor at Arizona State University. It is not a theory just thrown into the air. The same expert recalls that in the academic world it is rumored that the famous head was snuck in by a student as a prank. There is even a suspect. “Many archaeologists in Mexico have heard the story and tend to believe it.”

The great unknown. When Smith tried to delve deeper into this possible explanation, he ran into a seemingly insurmountable wall. He could not confirm it. Nor deny it. It does not help that the protagonists have died and that Payón was not especially exhaustive with his notes. In fact, there are those who claim that the collection of artifacts extracted from Calixtlahuaca and exhibited in the Museum of Anthropology of Toluca includes ceramics that come from other sites. Another plausible possibility is that the head was associated with the trousseau by mistake, not intentionally.

Strange yes, impossible no. It is not the only hypothesis that archaeologists have raised. Smith himself acknowledges that there are others on the table, equally plausible, such as the possibility that the figure was brought by a Spaniard at the beginning of the colonial period and for some reason ended up included in the trousseau along with other pieces whose origin can indeed be dated to the beginning of the 16th century.
After all, the Calixtlahuaca burial occurred before prolonged contact with the indigenous population, but it coincides with the first years of exploration. As Arkeo News notes, that leaves open a remote possibility: what if, by a twist of history, a Roman antiquity traveled in the first colonial expeditions and then …

Snapchat invented the format that dominates the Internet. 15 years later it is still unable to make it profitable

Evan Spiegel this week sent a memo to his employees announcing that Snap is going to lay off about 1,000 people, 16% of the entire workforce, in addition to canceling 300 vacant positions that had yet to be filled. Snap thus hopes to save more than $500 million in annualized costs starting in the second half of this year, although the cut is expensive in the short term, since it will have to pay between $95 and $130 million in severance. Nevertheless, the stock rose 7% in response to the layoffs: the markets had been asking for them for a long time.

Why is it important. Snap’s is not a “normal” failure story. It is much more interesting than that. It is the story of a company that forever changed how we communicate online and yet has failed to build a profitable business on it. In 2025 it lost $460 million, although it is true that in 2024 it lost more, and in 2023 even more. It has spent its 15 years of life in that dynamic, and it still has not closed a single full year in the black.

The context. Its paradox begins in 2013, when it launched Stories: photos and videos that were published and then disappeared after 24 hours. A format that is commonplace today but groundbreaking at the time, one that freed people from the pressure of permanence, of leaving a trail. In August 2016, Instagram launched exactly the same thing, with the same name, and with much bigger muscle behind it. Within two months, Instagram had 100 million Stories users; it had taken Snapchat four years to reach that number. A year later it had already surpassed Snapchat.

Yes, but. The problem was not that they were copied. The problem was that Meta, TikTok and YouTube adopted the format with an advantage that Snap never had: data. Meta and Google know who we are, what we buy, what interests us. Snap knows much less. That is why its advertising converts worse, and advertisers pay less for it. A vicious circle.
The coup de grâce was App Tracking Transparency, Apple’s privacy policy released in 2021, which sank tracking-based advertising models. Meta also suffered, but Meta had the scale and ecosystem to absorb the impact. Snap did not, so its stock went from touching $83 to trading today around $6: a drop of more than 90% from its highs, in less than five years. However, Snap has 946 million monthly active users, grows 12% in year-over-year revenue and has one of the youngest audiences of any platform, the most coveted demographic for fashion and entertainment brands. It has cutting-edge augmented reality technology and also has Snapchat+, its paid subscription, which is growing well. That is the contradiction that a thousand layoffs do not resolve: cutting costs improves margins, but on its own it does not truly monetize a platform with almost a billion users when its audience is young and difficult to convert, and its competitors have ten times more resources. There is also an activist fund in the capital, Irenic Capital Management, with 2.5%, which has been pushing for months in exactly this direction: cuts.

And now what. In the memo, Spiegel talks about concentrating investments where monetization already works. That is, giving up on markets that are difficult to grow and monetize (Spain has every chance of being one of them) and focusing on stronger ones, presumably along the lines of the United States or the United Kingdom. Giving up growth in search of sustainability. For 15 years, Snap has been trying to solve an equation that others have solved at its expense. These layoffs buy time to keep trying.

Featured image | Shutter Speed

In Xataka | Snapchat introduced its own version of ChatGPT in its app. It has not gone at all well

A gasoline engine that uses 3 L per 100 km is a dream come true. And only Spain could manufacture it.

With gasoline and diesel prices sky-high, shaving a few tenths (or whole liters) off consumption per 100 km is the wish of practically every Spaniard. Although the efficiency of current engines keeps improving, and gasoline consumption is not as high as it was two decades ago, giants like Repsol are striving to develop ultra-efficient engines that run on renewable fuel. And they have achieved it.

They are not alone. Repsol has the fuel, but needs a partner to develop the engines. That partner is Horse Powertrain, a joint venture between Renault and the Chinese group Geely dedicated to designing, manufacturing and selling thermal and hybrid propulsion systems, something that allows both Renault and Geely to continue exploring the combustion vehicle of the future without abandoning their electrification plans.

Horse H12 Concept. This is an engine that promises less than 3.3 liters per 100 km in the WLTP cycle, with a reduction in consumption, according to the company, of 40% compared to the average of new gasoline vehicles registered in the last two years. The best part? The engine has been developed in Spain and runs on 100% renewable Repsol gasoline. Horse has its operational headquarters in Madrid, engine factories in Valladolid and gearbox factories in Seville.

Why is it important. The Horse H12 Concept is not a shot in the dark. It is an evolution of an existing engine, the HR12: a 1.2-liter three-cylinder produced in Romania and used in models such as the Dacia Duster. What makes this Concept version special is its exhaust gas recirculation system, a specially optimized ignition system and a hybrid gearbox. This Concept version, in alliance with Repsol, shows how far these engines can go with the help of synthetic fuel. It is not an experiment with an engine designed from scratch; it is the refinement of something that already exists.

The other 50%.
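The two consumption claims above (under 3.3 L/100 km, 40% below the recent fleet average) together imply what that fleet average is; a quick back-of-the-envelope check, where the implied average is derived here and is not an official figure:

```python
# Back-of-the-envelope: what fleet average does Horse's 40% claim imply?
h12_wltp = 3.3    # L/100 km, claimed upper bound for the H12 Concept
reduction = 0.40  # claimed cut vs. recent new gasoline registrations

implied_avg = h12_wltp / (1 - reduction)
print(f"implied fleet average: {implied_avg:.1f} L/100 km")  # 5.5 L/100 km

# Fuel saved on a 100 km trip at the claimed consumption:
saved = implied_avg - h12_wltp
print(f"saved per 100 km: {saved:.1f} L")  # 2.2 L
```

An implied average of about 5.5 L/100 km is at least plausible for recent small gasoline cars, which is a useful smell test for the 40% figure.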
Repsol is now capable of producing gasoline of 100% renewable origin on an industrial scale at its Tarragona plant. According to the company, it is compatible with all current gasoline vehicles, without requiring any kind of modification. It is its Nexa fuel, currently available at 30 Repsol stations. The same goes for its diesel, which promises to reduce net CO₂ emissions by up to 90%. And if you are wondering what all this costs: approximately 10 euro cents more per liter compared to conventional fuels.

Combustion is not dead. Europe’s comings and goings over combustion cars in 2035 make it clear that the future will involve electrification. But the plans of giants like Geely and Repsol to keep more environmentally responsible combustion solutions alive are a clear indication that gasoline and diesel still have life ahead of them.

In Xataka | The question is no longer whether diesel will continue to rise: it is whether it will become an expensive fuel forever.

Brands are eager to turn our cars into a subscription service. Honda has reminded us again

Buying a car today can be a box of surprises. Sometimes for the better and sometimes, as recently happened to a Honda Passport owner, for the worse. As a user shared on Reddit, a function to open your garage that previously came as standard has become an option included in a subscription package offered by the firm. The story has some nuances worth mentioning, but the reality is that this example has become another reflection of something that has been happening for years in the automobile industry: manufacturers are determined to turn their vehicles into recurring revenue platforms, and software is their main tool to achieve it.

From opening the garage with a button in the car to doing it from an app. The Honda Passport in question has removed the rearview mirror with integrated HomeLink, the system that allows the car to be synchronized with the garage receiver via radio. In its place, Honda now offers the function through the MyQ application, integrated into HondaLink. For it to work, the user needs an internet connection in the car, Apple CarPlay or Android Auto, and must also install a MyQ receiver connected to the home Wi-Fi. The result is a system that adds technical complexity to do something that was previously solved with a small radio control clipped to the visor.

Subscription confirmed. According to CarBuzz, customers receive a free 30-day trial period, after which they must contract a three- or five-year subscription. If they do not, the feature is still accessible through the standalone MyQ app, and Honda also sells a rearview mirror with HomeLink as an additional accessory for around $170. That is to say: what used to come as standard now has to be paid for separately. The main advantage of the new system (being able to check whether you have left the garage open from anywhere with a connection) makes some practical sense.
But the price of the subscription, between $129 and $179 for three or five years, plus the possible connectivity costs of the vehicle itself, turns something so simple into a chain of payments that is difficult to justify.

BMW and heated seats: the case that started it all. To understand where we are today with subscription services in vehicles, it is worth remembering the most talked-about episode of recent years. In 2022, BMW began to offer in some markets (South Korea, the United Kingdom and Germany, among others) the possibility of activating seat heating through a monthly subscription of about $18 a month. The problem here is that the hardware was already installed in the car from the factory, and it was the owners who had to pay a monthly subscription to unlock the feature. Both the press and users attacked the idea so hard that BMW had to back down. In September 2023, BMW Chief Sales and Marketing Officer Pieter Nota confirmed to Autocar the end of that practice: “What we no longer do, and it is a well-known example, is to offer seat heating in this way. Either it comes from the factory or it doesn’t.”

But BMW did not abandon the subscription model; rather, it reoriented it. The brand confirmed that it would continue to expand the services and functions it offers through subscriptions, but that it would stop charging for hardware functions already installed in the vehicle. After the move, the firm continued with its plans to add subscription services, but this time only in its software, such as driving or parking assistance systems. Through its ConnectedDrive platform, it offers functions such as adaptive suspension, a high-beam assistant, adaptive cruise control or even welcome animations with the lights, through subscription.

Mercedes: up to 80 horsepower per subscription. BMW’s example ended up spreading to many other firms.
Mercedes-Benz launched its on-demand “Acceleration Increase” function in 2023 for the electric EQE and EQS models: for $60 per month or $600 per year in the case of the EQE, or $90 per month and $900 per year for the EQS, owners could unlock between 60 and 80 additional horsepower and cut the 0-100 km/h acceleration time by up to one second. They also added a one-time lifetime payment option of around €2,000 or €3,000, depending on the model. The logic with which Mercedes tried to distance itself from BMW’s case was that standard hard-wired functions, such as seat heating, would not be offered as “digital extras”, leaving subscriptions for software upgrades. However, the principle is the same: the car has the necessary hardware, but the feature is blocked until the user pays. Mercedes-Benz aimed to reach €2 billion in revenue from software subscriptions in 2025, with plans to reach between €7 billion and €9 billion before 2030. This growth would be driven above all by its own operating system (MB.OS) and its autonomous driving system. If we think about it coldly, electric cars usually have lower maintenance costs than combustion cars, which reduces the income of dealers and the brands themselves; software subscriptions are presented as a way to compensate for that loss.

Tesla, GM and Ford: the model that already works. Tesla has been the benchmark for this model for years, and in its case the discussion has important nuances. Its Full Self-Driving (Supervised) system could be purchased for about $8,000 as a one-time payment or as a monthly subscription. And we say “could” because earlier this year Elon Musk confirmed that Tesla would offer this mode only as a subscription service and no longer as a one-time payment. The good news is that those who had already paid for the lifetime feature will keep it. The subscription option costs about $99 per month.
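The EQE pricing above makes the break-even arithmetic easy to check. Note the assumption: the subscription prices were reported in dollars and the lifetime price in euros, and the sketch below takes both at face value, using the lower bound of the lifetime figure:

```python
# Break-even arithmetic for Mercedes' acceleration upgrade on the EQE.
# Figures as reported in the article; currencies mixed at face value.
monthly = 60     # subscription price per month
yearly = 600     # subscription price per year
lifetime = 2000  # approximate one-time payment (lower bound)

# The yearly plan beats paying monthly beyond this many months per year:
print(f"{yearly / monthly:.0f} months")  # 10 months

# The one-off payment overtakes the yearly plan after roughly:
print(f"{lifetime / yearly:.1f} years")  # 3.3 years
```

In other words, the one-off option only pays for itself if the owner keeps the car (and wants the extra horsepower) for more than about three years.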
Perhaps the main difference here with BMW or Mercedes is that Tesla updates its software continuously with new capabilities, which gives the recurring-fee model more meaning. In the case of General Motors, …

Iran has put the war machine the US destroyed back on its feet

During the Vietnam War, American pilots bombed a network of tunnels near Cu Chi for days, convinced they had rendered it completely useless. When the troops advanced on the ground, they discovered that not only was it still operational, but that the combatants had reappeared from hidden exits a few meters from their positions. The scene left a brutal lesson: destroying from the air does not always mean eliminating what is below.

A start of war that changes everything. The first hours of the conflict in Iran set the tone for everything that would come later: an intensity of fire rarely seen, with hundreds of missiles and almost a thousand drones launched in just two days, forcing defensive systems to operate at the limit from the first moment. That volume not only showed the scale of the Iranian arsenal, but also the type of war being waged, one in which saturation mattered almost as much as precision. From that starting point, the expectation was clear for all the actors: if that rhythm was sustained, the key was not going to be who hit the hardest, because that actor had a name from the beginning, but who lasted longer.

The illusion of total destruction. The United States and Israel responded in the first 48 hours of the war with a massive bombing campaign that sought to disable Iranian military infrastructure, attacking thousands of targets and sealing access to underground bases to leave the launchers trapped. For weeks, the official message was forceful: the missile program had been devastated and the country’s response capacity practically nullified. However, even at the time doubts arose from within the US apparatus itself, which warned that a significant part of those systems had not been destroyed, but simply blocked or temporarily inaccessible.

(Pictured: Iranian efforts underway at a missile base in Tabriz on April 10.)

The mountains as a shield and strategy. We covered it at the time.
The real differentiating element was not the missiles themselves, but where they were stored. Iran has spent decades building a network of underground facilities in mountainous terrain, many of them excavated in granite rock capable of resisting extremely powerful attacks. These "missile cities" not only store weapons, but also integrate complete logistics systems, with tunnels, launch points and escape routes designed to minimize exposure. It is an architecture designed to survive the first blow, absorb damage and keep the operational core intact, in a logic that prioritizes resilience over invulnerability.

Image caption: A loader atop debris blocking an entrance to a missile base near Khomeyn, April 10.

Dig, reactivate and launch again. Satellite images have now confirmed that, as soon as a ceasefire window opened, heavy machinery went into action to remove debris and reopen accesses blocked by the bombings. How? The Telegraph reported, based on satellite surveys, that dozens of excavators, trucks and engineering teams were deployed at key points to clear sealed entrances and regain access to buried launchers. Again, what is relevant here is not just that it is being done, but the speed: in a matter of days (and in some cases in just 48 hours) those facilities have become operational again, suggesting that much of the military capacity was not destroyed, but simply paused.

Designed to resist. All of this, moreover, fits a very specific doctrine: assume that the enemy will have air superiority and design the system to survive it. Unlike a conventional war, where losing control of the air usually implies the progressive destruction of infrastructure, here the logic is different: protect critical assets underground, absorb the first attack and recover combat capacity as soon as possible. This approach turns the conflict into a race of attrition, where each cycle of attack and reconstruction erodes both attacker and defender.

The real problem.
The direct consequence of this dynamic is that Washington's (and Israel's) apparent initial success has lost weight in the face of Iranian recovery capacity. Although the attacks have been massive and technically effective, the speed with which Tehran is restoring its bases raises an uncomfortable scenario for its adversaries: every pause, negotiation or ceasefire becomes an opportunity to rearm or, quite literally, dust off the bunkers. In that context, the question stops being whether an infrastructure can be destroyed and becomes how many times it can function again before the other side is left without resources or without the political margin to continue.

Image | Airbus

In Xataka | If the question is where the US nuclear aircraft carrier is, the answer is uncomfortable: hidden so that it does not sink

In Xataka | We suspected that Iran bombed the US military bases with help: some coordinates have revealed its name, and it is Made in China

Canva's latest update brings it closer to Adobe and even Notion

Canva has built its success on being the non-intimidating tool: easy, cheap and accessible to anyone. With its new update, 'Canva AI 2.0', it points in another direction: it adds connectors with Slack, Gmail, Notion and Google Drive, background automations and persistent brand memory. It no longer competes only with Figma or Adobe; now it even competes with Notion, ClickUp, Microsoft and Google.

Why it matters. 250 million monthly users are proof that the formula has worked. The question is whether adding all this complexity (conversational design, agent orchestration, scheduled tasks…) makes it more powerful or simply more similar to what already exists. Canva seeks to grow, and the risk is breaking the balance that has brought it here.

Yes, but. All of this comes from a press release. The numbers on its own models (up to 30 times cheaper and 7 times faster than the competition, they say) are published by Canva itself. Real access starts today for the first million users. Until there is real-world testing, the headlines deserve some skepticism.

In detail. The main news:

- Conversational design: create from text or voice, without a starting template.
- Smart orchestration: AI coordinates internal tools to generate entire campaigns from a single briefing.
- Active memory: learns the team's style and brand identity and applies it on its own.
- Connectors: Slack, Gmail, Google Drive, Notion, Zoom, HubSpot and Google Calendar.
- Scheduled tasks: automations that run in the background without intervention.
- CanvaCode 2.0: interactive experiences with import of HTML generated by other AIs.
- AI Spreadsheets: structured tables generated from natural language.

Between the lines. The most interesting part is not technical but strategic: Canva has strengthened its collaboration with Anthropic to integrate its design engine into Claude, and it allows importing outputs from Claude or ChatGPT as editable elements within Canva.
They clearly want to be at every point where an idea is born, not just where it is given shape.

The other reading. For years, Canva has been edging into Figma's territory in professional collaborative design. But the connectors and automations in this announcement take it away from that path: this looks more like Notion or ClickUp than a design tool. It is not entirely clear whether that is an evolution or a loss of focus. Time will tell.

What's coming. The experimental version is available today for the first million users who enter from the home page, with a progressive rollout in the coming weeks.

Featured image | Canva

In Xataka | Canva's most ambitious move is not about AI: it's about locking everyone inside

Universal quantum computers promise to change the world. Now they are closer thanks to giant superatoms

The prototypes of quantum computers currently manufactured by IBM, Honeywell or Google, among other companies, are engineering prodigies. However, they have defects that greatly limit the range of applications in which they can currently be used. The most important of all is that they make mistakes and are still unable to correct them effectively. Scientists are working on developing advanced error correction systems, and if they achieve their goal, universal quantum computers capable of dealing with a wide range of problems will arrive.

The Achilles heel of current quantum machines is the extreme fragility of their qubits, which are very sensitive to disturbances from the environment. Their interaction with the space around them can cause quantum information to be lost or altered, preventing them from delivering a correct result. This phenomenon is known as quantum decoherence, and it degrades the quantum states that need to be protected in order to carry out operations with qubits.

Currently, researchers are making an enormous effort to design effective strategies for isolating qubits from the environment. However, efforts are also being made to develop less fragile qubits that are therefore less sensitive to noise. This is the plan that several scientists at Chalmers University of Technology in Sweden are working on: they have developed a completely new quantum system designed to protect quantum information and minimize interference from the environment. Its purpose is nothing less than to pave the way for universal, large-scale quantum computers.

Less decoherence leads to more robust and higher quality quantum computers. Quantum computing experts maintain that quantum computers able to correct their own errors could be used to design exotic materials, and probably also to develop new drugs and to tackle industrial optimization problems, among other tasks.
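Decoherence can be made concrete with a toy calculation. The sketch below (a generic textbook example, not related to any specific hardware mentioned in the article) repeatedly applies amplitude damping, one of the simplest decoherence channels, to a single qubit prepared in a superposition. The off-diagonal term of the qubit's density matrix measures its coherence, and it visibly decays:

```python
import math

# Toy illustration of quantum decoherence: amplitude damping applied
# repeatedly to one qubit. The qubit starts in |+> = (|0> + |1>)/sqrt(2);
# the off-diagonal entry of its 2x2 density matrix measures coherence.

def matmul(a, b):
    """Multiply two 2x2 real matrices stored as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(a):
    return [[a[0][0], a[1][0]], [a[0][1], a[1][1]]]

gamma = 0.1  # per-step probability that the qubit leaks its excitation
k0 = [[1, 0], [0, math.sqrt(1 - gamma)]]  # Kraus operator: "nothing happened"
k1 = [[0, math.sqrt(gamma)], [0, 0]]      # Kraus operator: decay |1> -> |0>

rho = [[0.5, 0.5], [0.5, 0.5]]  # density matrix of the |+> state

for _ in range(20):
    a = matmul(matmul(k0, rho), transpose(k0))
    b = matmul(matmul(k1, rho), transpose(k1))
    rho = [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

coherence = abs(rho[0][1])
print(round(coherence, 3))  # 0.174, down from 0.5: the superposition fades
```

After 20 noisy steps the coherence has dropped from 0.5 to 0.5 · 0.9¹⁰ ≈ 0.174; this steady loss is exactly what isolation strategies and more robust qubits try to slow down.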
These are some of the applications that the qubits implemented with the giant superatoms proposed by the Chalmers University of Technology team, led by applied quantum physics professor Anton Frisk Kockum, could put in our hands.

Giant superatoms combine two ideas long known to quantum physicists: giant atoms and superatoms. Unlike isolated atoms, a giant atom in this context is an artificial qubit designed to interact with its environment using light or sound waves at multiple physically separated points. This peculiarity allows them to protect quantum states more effectively than conventional systems, reduce decoherence and remember past interactions. The problem with using giant atoms in quantum computers is that they have significant limitations when trying to entangle them. Entanglement is essential in quantum computing because it allows multiple qubits to share a single quantum state and act as a coordinated system.

To solve this limitation, the Chalmers researchers have combined giant atoms and superatoms. A superatom is made up of several natural atoms that share the same quantum state and behave collectively as a single larger atom. Lei Du, one of the Chalmers researchers, explains to us what a giant superatom is: "We can see it as multiple giant atoms working together as a single entity, allowing them to exhibit a non-local interaction between light and matter. This allows quantum information from multiple qubits to be stored and controlled as a unit, without the need for increasingly complex surrounding circuits."

For the moment, giant superatoms are a theoretical proposal, but Professor Anton Frisk Kockum and his team are going to try to build a quantum system using them. If they succeed, they will have found a new type of qubit that is much more robust and therefore suitable for use in the development of universal quantum computers.
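The entanglement that giant atoms struggle with can also be shown with a toy calculation (again, generic textbook material, unrelated to the Chalmers hardware): preparing a Bell state of two qubits, the simplest case of two qubits sharing a single quantum state, and reading its defining correlations straight off the amplitudes.

```python
import math

# Toy illustration of entanglement: build the Bell state (|00> + |11>)/sqrt(2)
# with a Hadamard gate followed by a CNOT, then check its correlations.

# State vector over the basis |00>, |01>, |10>, |11>, starting in |00>.
state = [1.0, 0.0, 0.0, 0.0]

# Hadamard on the first qubit: |00> -> (|00> + |10>)/sqrt(2).
h = 1 / math.sqrt(2)
state = [h * (state[0] + state[2]), h * (state[1] + state[3]),
         h * (state[0] - state[2]), h * (state[1] - state[3])]

# CNOT with the first qubit as control: swaps the |10> and |11> amplitudes.
state[2], state[3] = state[3], state[2]

# Probabilities of the four joint measurement outcomes.
probs = [a * a for a in state]
p_equal = probs[0] + probs[3]       # both qubits give the same result
p_first_is_0 = probs[0] + probs[1]  # first qubit measured on its own

print([round(p, 3) for p in probs])  # [0.5, 0.0, 0.0, 0.5]
print(round(p_equal, 3))             # 1.0: outcomes perfectly correlated
print(round(p_first_is_0, 3))        # 0.5: each qubit alone looks random
```

Each qubit by itself is a coin flip, yet the two outcomes always agree: the information lives in the pair as a whole, which is why a qubit design that cannot be entangled is of little use for computation.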
Image | Generated by Xataka with Gemini

More information | ScienceDaily

In Xataka | We already know what the chips arriving through 2039 will look like. The machine that will allow them to be manufactured is close
