Iceland has solved it in the middle of the desert

Trapping carbon dioxide emissions and literally turning them into stone sounds like an invention straight out of Futurama, where everything in the future is recycled. The problem is that this trick of underground alchemy hid some terrifying small print: its exorbitant thirst. To get carbon to mineralize underground, the system needs to swallow absurd amounts of liquid, specifically between 20 and 50 times more water than the mass of CO₂ we are trying to store. However, a new industrial-scale study published in the journal Nature has just rewritten the rules of the game. An international team, with researchers from Iceland, Saudi Arabia and Italy, has shown in the western Saudi desert that it is possible to petrify CO₂ without using a single drop of external fresh water.

Salvation under the sands of Saudi Arabia. As the authors of the research explain, this area is a real challenge: it is full of large facilities that emit a lot of CO₂, such as refineries and desalination plants, but it lacks the underground saline aquifers or sedimentary traps traditionally used to inject carbon. Salvation was under their feet. About 24 kilometers from the Jizan Economic Complex and Refinery, geologists took advantage of an immense bed of highly fractured volcanic rock (basalt) that has been there for between 21 and 30 million years. There they tested an ingenious system for recirculating subsoil fluids.

The giant "soda" trick. To carry out the experiment, the engineers used two main wells, separated by just 130 meters: one functions as a "production" well (it extracts water) and the other as an "injection" well. The process is a closed circuit, isolated from the atmosphere so that no oxygen enters and no gas escapes. They extract the water that already lives at depth, circulate it through pipes and, 150 meters underground, inject pure CO₂ into it as bubbles until it dissolves completely. According to the project scientists, dissolving the gas in water has two powerful chemical and mechanical advantages. It gets heavy: CO₂-laden water is denser than plain water, so it forms a non-buoyant fluid, greatly limiting the risk of the gas migrating back to the surface and into the atmosphere. It becomes acidic: the liquid is acidic and greatly accelerates the dissolution of the silicate minerals present in the basaltic rock. As the rock dissolves, it releases metals that provide the cations needed to form stable minerals, such as calcite.

A question of geopolitical survival. The data from this pilot is a resounding success. The team injected 131 tons of CO₂ into the subsoil. After monitoring the area with tracers, they found that roughly 70% of all that injected carbon had mineralized within ten months. Measurements showed that the concentration of dissolved inorganic carbon in the returning water had dropped by 90% compared with what was initially injected. Reusing water from the reservoir itself offers substantial advantages. Not only do you forget about bringing in external water, but you also reduce the risk of underground fluid pressure rising dangerously. Furthermore, by injecting water with the same composition as the original underground reserve, the risk of compatibility problems, such as loss of permeability in the reservoir, is reduced.

The current dimension. As we recently analyzed in Xataka in the wake of the military escalation in the region, the real Achilles heel of the Arabian Peninsula is not oil, but thirst.
Countries like Saudi Arabia depend on desalination plants for 70% of their water. In a scenario where the supply of fresh water is a strategic vulnerability and a matter of biological survival, allocating massive volumes of water to burying emissions was simply unfeasible. This advance therefore opens the door for the Middle East, where a large part of global oil production is also concentrated, to use its basalt rocks to store carbon without sacrificing a vital resource.

A providential accident. Sometimes setbacks make the best tests. In September 2023, the submersible pump in the extraction well broke down. When the technicians brought it to the surface, they found its interior full of rock grains cemented by up to 14% calcite, as well as other minerals such as siderite and ankerite. Isotope analyses made it clear: these solid cements had formed from the CO₂ injected during the pilot project. The gas had literally petrified in the very bowels of the machine.

An "energy bargain". As if that were not enough, there are energy savings on top. As the research details, injecting CO₂ with this method requires a surface pressure of only 12 to 14 bars, 8 to 16 times less than conventional carbon capture plants require. Essentially, the CO₂-laden water is drawn into the system by gravity. As for its future potential, the engineers calculate that the underground pore space of this particular area (estimated at between 24,000 and 43,000 m³) would have room to house between 22,000 and 40,000 tons of mineralized CO₂.

Geology dictates: the limit of the stone. Every geological technology has its own physical limits. As the experts explain in Nature, as water, CO₂ and basalt interact, the total volume of solid minerals increases. This means the pore space shrinks and can end up blocking water flow paths in the long term. To get around this problem, the researchers suggest it may be necessary to resort to fracturing the rock (fracking), an option still little explored in basaltic systems. What is clear is that this innovation is proposed as a strong complement to conventional capture systems, not as an exclusive alternative, since in the end it is the geological conditions that rule. But thanks to this pioneering experiment, there is something we can take for granted: the lack of rivers or fresh aquifers is no longer an excuse for not returning our emissions to the subsoil and turning them into stone.

Image | Eric Gaba and Nature

In Xataka | Neither oil nor gas: if a total war breaks out between the US …
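A quick footnote on the scale of the water saving: using only figures quoted above (the 20-50x water-to-CO₂ ratio of conventional dissolution schemes and the 131 tons injected in this pilot), a minimal back-of-envelope sketch in Python shows what the recirculating design avoids:

```python
# Back-of-envelope check using only figures quoted in the article above.
co2_injected_t = 131                        # tons of CO2 injected in the Saudi pilot
water_ratio_low, water_ratio_high = 20, 50  # tons of water per ton of CO2 (conventional)
mineralized_fraction = 0.70                 # share turned to stone within ten months

print(f"Fresh water a conventional scheme would need: "
      f"{co2_injected_t * water_ratio_low:,}-{co2_injected_t * water_ratio_high:,} t")
print("External fresh water used by the recirculating scheme: 0 t")
print(f"CO2 mineralized in ten months: ~{co2_injected_t * mineralized_fraction:.0f} t")
```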

We have solved the problem of space junk by burning it. A SpaceX lithium trail just proved what a terrible idea that is

For decades, the aerospace industry has had a consensus solution to the problem of space junk: burn it. The idea is simple: when a satellite ends its useful life it re-enters the atmosphere, where friction makes it disintegrate completely. The reality is that we are facing a huge problem, since physics reminds us that matter is neither created nor destroyed.

We have caught it in the act. Science is realizing that we are not removing space junk; we are just vaporizing it into metallic aerosols that are changing the chemistry of our own sky. The definitive clue came on the night of February 19, 2025, when a team of German researchers pointed a laser into the sky over Kühlungsborn. What they detected at about 100 kilometers of altitude, in the thermosphere, was something that should not have been there: large amounts of lithium. And it was not there by chance, since it coincided with the re-entry, hours earlier, of a SpaceX Falcon 9 rocket that had disintegrated over the Atlantic between Ireland and the United Kingdom.

Something new. The signal was anything but subtle: it was 10 times the usual concentration in that region. The finding was reported in a paper because it marks a milestone: it is the first time the metallic contamination released by a specific piece of space junk has been observed "live" from Earth at the exact moment it burned up.

The metallic iceberg. The incident with this Falcon is not an isolated case but a symptom of a structural change. In 2023, a team of researchers used different instruments to analyze more than 50,000 aerosol particles in the stratosphere, the layer where the ozone layer resides, at about 15-30 km of altitude. What did they see? Historically, the metals found in the stratosphere came from meteorites entering our planet. Today, however, an estimated 210 tons of aluminum per year reach the atmosphere from the disintegration of satellites and rockets, compared with the 20 tons per year that vaporize naturally from meteors. And lithium is not the only metal involved: scientists have detected more than twenty elements, among which aluminum, copper, lead and silver stand out. This does not fit the normal composition of meteorites, but it does match the materials aerospace companies use to build their rockets and satellites.

There is no planning. The pace of launches has skyrocketed in recent years, and while today there are close to 10,000 objects orbiting the Earth, Starlink alone aspires to have more than 40,000 satellites in low Earth orbit. The problem is that the useful life of these devices is short, so their inevitable fate is to end up vaporized over our heads.

Its effects. What science does make clear is that the effects of filling the stratosphere with these metals are, for now, unknown. But the projections suggest we should not be complacent, because elements such as aluminum and copper are effective catalysts that can affect the delicate ozone layer. In addition, metallic particles can act as condensation nuclei, altering the microphysics of polar stratospheric clouds. And as if that were not enough, adding anthropogenic material to sulfuric acid aerosols changes their size and their ability to scatter sunlight.
Ironically, we are altering the reflectivity of the stratosphere, the very layer some scientists want to use for climate geoengineering, without knowing what the consequences will be.

The planetary limit. Models suggest that, if the planned megaconstellations materialize, the fraction of stratospheric particles contaminated with aluminum from satellites will rise from the current 10% to around 50%. In other words, the load of metals in the stratosphere could grow by around 40% compared with natural levels. For years, space agencies assumed that disintegrating satellites was a completely harmless, clean practice. The Falcon 9 episode, which has validated the warnings of the scientific community, shows that Earth's orbit and our atmosphere form a connected ecosystem. Launching tens of thousands of objects into space and then burning them up over our own roof may keep space clean, but we are dirtying the sky in return.

In Xataka | Spain and Portugal have joined forces to launch satellites with a mission: to monitor catastrophes in real time
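To get a feel for the scale of the numbers in the piece above, here is a minimal sketch comparing the two aluminum fluxes it cites (210 t/yr from re-entering hardware versus 20 t/yr from meteors); the figures are the article's, the arithmetic is just a sanity check:

```python
# Rough comparison using only the two aluminum figures quoted above.
natural_al = 20      # t/year of aluminum vaporized naturally from meteors
satellite_al = 210   # t/year of aluminum from disintegrating satellites and rockets

ratio = satellite_al / natural_al
share = satellite_al / (satellite_al + natural_al)
print(f"Satellite-derived aluminum vs. the natural input: {ratio:.1f}x")
print(f"Man-made share of the yearly aluminum influx: {share:.0%}")
```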

In 1986 a man parked on the wrong side of a gas pump. That day he solved an embarrassing problem for all drivers

The history of innovation is full of big names and epic breakthroughs, but also of silent advances born from minimal errors, from everyday mistakes anyone could have made. Sometimes a small mistake reveals a problem so common that no one had thought to formulate it, and looking at it differently is enough to find a solution that ends up benefiting millions of people almost without being noticed. In this case, one man saved millions of drivers from embarrassment.

A universal problem. Maybe his name doesn't ring a bell, but the story of Jim Moylan is more important than it seems. It begins with a scene as trivial as it is recognizable: a Ford engineer (Moylan), soaked by the rain, standing at a gas station, realizing he has parked on the wrong side of the pump. Where anyone else would have felt frustration or perhaps some embarrassment, he saw an everyday problem that could be solved elegantly, cheaply and definitively. In a matter of minutes he wrote a memorandum proposing a small symbol on the instrument panel to indicate which side the tank was on: a simple idea born from personal experience and the conviction that eliminating that doubt would save time, inconvenience and, yes, small humiliations for millions of drivers.

The path to a great idea. Moylan was not a media figure or a senior manager, but an engineer with a long and discreet career inside the all-powerful Ford Motor Company, a man professionally obsessed with instrument panels and with making them as clear and useful as possible. After sending his original proposal in 1986, he did not give it much more thought, but the company did: the symbol he had sketched on a page quickly went into development, was approved without much resistance and ended up integrated into the first models of the late eighties, proving that in large organizations there was still room for a good idea, however small and whoever it came from, to cross the hierarchy and become reality.

From the Thunderbird to the entire world. Months passed until the arrow made its first public appearance, an almost imperceptible moment, hidden in the instrument panel of a 1989 Ford Thunderbird. It didn't matter: its power lay precisely in that simplicity. It was so obvious and useful that the competition didn't take long to copy it, and in a very short time it went from being an internal Ford solution to a de facto standard across the global automobile industry, to the point that today it appears in practically every car in the world, including electric ones, where it points to the side of the charging port with the same unbeatable logic.

The inventor without a patent (or ego). Unlike other innovators, Moylan never patented his idea nor asked for financial compensation or public recognition, content simply to see his arrow work and help people. For decades, millions of drivers benefited from his invention without even knowing his name, while he quietly watched that little "walk of shame" at gas stations disappear, sometimes approaching strangers to explain the usefulness of the symbol, but never mentioning that it had been his doing.

Late recognition.
As The Wall Street Journal recalled a few weeks ago, it was not until many years later, thanks to a chance investigation by a podcast and the rescue of internal files, that Jim Moylan's name came to light and he was publicly recognized as the author of one of the most discreet and universal innovations in the automobile. The man died without ever having sought fame, but he left a legacy that lives on every time someone stops at a pump and, with a simple glance at the instrument panel, knows exactly where to stand, reminding us that sometimes true genius lies in solving the obvious in the simplest way possible.

Image | Josh

In Xataka | An engineer decided one day to put a BMW airplane engine in a car. The result was tremendous

In Xataka | When an engineer wanted to cross Africa by car, he invented a wooden one. It would be the beginning of the end

The Vitruvian Basilica is the “holy grail” of Roman architecture. Also a huge enigma that we have finally solved

If there is one thing that abounds in presentations of archaeological finds (no matter where, when or who makes them), it is superlatives. Every discovery is the most important, the definitive one, the last missing piece of the puzzle. Whether that is actually true is another matter. In the province of Pesaro and Urbino (Italy), the authorities have just announced a find where the opposite happens: yes, there are superlatives, but they fall short. What has been unearthed there is nothing less than the "holy grail" of Roman architecture. In a stroke of luck, archaeologists have found the basilica erected 2,000 years ago by Marcus Vitruvius, bringing to an end a search of more than five centuries.

What has happened? Italy has put an end to a 500-year adventure, the time that archaeologists, architects and historians have spent searching for perhaps the "holy grail" of Roman architecture: the legendary Vitruvian Basilica. Scholars placed it in Fanum Fortunae (the current city of Fano) and for decades probed its soil for vestiges, or at least some indication. In vain. Things changed about three years ago, when, during renovation works on the market square, workers came across (barely half a meter deep) remains that, we now know, belong to the basilica.

"Millimeter correspondence". What has been found under the cobblestones of Fano are Roman columns. So far nothing exceptional, considering we are talking about an ancient coastal city in Italy's Marche region. The curious thing is that these vestiges closely match the description Marcus Vitruvius left of the basilica in his famous treatise 'De Architectura'. The columns, their arrangement, the shape and layout of the nave all coincide. The "definitive confirmation", the Italian Ministry of Culture explains, came after the discovery of a fifth pillar that confirms both the position and orientation of the building. A planimetric reconstruction based on Vitruvius's description finally provided the guide. The match is so precise that the authorities speak of a "millimeter correspondence".

"Imposing structures". "The columns, around five Roman feet in diameter (147-150 cm) and about 15 meters high, rest on pillars and pilasters that supported an upper floor," notes the Italian government, which recalls that in 2022 experts were already on the trail after discovering "imposing masonry structures and marble floors" on Via Vitruvio. Confirming that the remains belong to the old basilica does not complete the work. In fact, the Ministry of Culture has already announced that research will continue with the support of EU funds. "Everything necessary will be done to recover and promote this exceptional find," promises the regional president, Francesco Acquaroli.

"Like Tutankhamun's tomb". During the presentation, neither Francesco Acquaroli, nor the Minister of Culture, Alessandro Giuli, nor, of course, the town's mayor, Luca Serfilippi, spared praise (or superlatives). "The column behind us changes the history of the region. It is a discovery comparable to that of Tutankhamun's tomb," the regional leader celebrated. Giuli was similarly effusive: for him, locating the mythical Roman basilica, erected two millennia ago, marks "a before and after" in the history of archaeology. "History books, and not just journalistic chronicles, will document this day and everything that will be studied about this exceptional discovery in the coming years."
"The scientific value is of absolute caliber," emphasized the Minister of Culture. "The vestiges discovered clearly demonstrate that Fano was and is the heart of the oldest architectural wisdom of Western civilization."

Is it that relevant? Whether the discovery of the Vitruvian basilica is comparable to Tutankhamun's tomb is perhaps debatable; what is undeniable is that it is one of the great archaeological news stories of the year (at the very least). The reason is not only the value of the building but that of its creator, Marcus Vitruvius (1st century BC): architect, engineer, treatise writer and author of 'De Architectura', a fundamental manual for understanding Renaissance architecture. In his treatise Vitruvius addresses the three axes that would mark architecture for centuries: firmitas (firmness), utilitas (functionality) and venustas (beauty). His work influenced, among others, Leon Battista Alberti, Andrea Palladio and Leonardo da Vinci, who drew on its proportions to create one of the most iconic (and recognizable) drawings of all time: the 'Vitruvian Man'. In 'De Architectura' the Roman architect does something more: he describes in detail the basilica now (finally) found in Fano, a project in which he was directly involved. In fact, the Ministry of Culture recalls that it is the "only building attributable with certainty" to the Roman writer. Now we no longer need to imagine it.

Images | Ufficio Stampa e Comunicazione MiC

In Xataka | We have discovered (again) the secret of Roman concrete. It's less impressive than it seems

How a mummified wolf has solved the mystery of the woolly rhino’s extinction

14,400 years ago, a barely nine-week-old wolf cub feasted on the Siberian steppe. Shortly after gobbling down that piece of meat, the puppy died and was buried in the permafrost near the village of Tumat, in northeastern Siberia. Something that at first seems insignificant has produced one of the most important milestones in modern paleogenetics. And it was sitting in this puppy's stomach.

The study. A team of scientists from the Centre for Palaeogenetics at Stockholm University has achieved what seemed impossible: recovering the complete, high-coverage genome of a woolly rhinoceros (Coelodonta antiquitatis) from the undigested remains in that wolf's stomach. The results, published in Genome Biology and Evolution, force us to rewrite the books on how and why this megafauna became extinct, because until now we had a very different idea.

A biological miracle. The discovery of the puppy itself is not recent: it was found in 2011 and nicknamed Tumat-1. Mummified in ice, it was in remarkably good condition, and during the autopsy the researchers found a 3-centimeter piece of tissue with remains of blonde fur. Given the area where it was found, the tissue was initially thought to belong to a cave lion. But genetics said something very different: it was a woolly rhinoceros. This is extraordinary, since it is the first time in history that the complete genome of an Ice Age animal has been sequenced from the stomach contents of another animal.

A great milestone. Recovering the genetic material of a species under these conditions is remarkable because of the doors it opens: in DNA we can find practically everything, including the genetic health of the species before its end.

Genetic decline. For decades, the dominant theory held that woolly rhinos disappeared through slow genetic erosion. It was believed that, as the population shrank, inbreeding accumulated harmful mutations that doomed the species through the many diseases caused by breeding between close relatives. That idea has now been thoroughly debunked. When comparing the genome with samples from 18,000 and 48,500 years ago, the researchers found no decline in diversity. Nor was there any indication that the species was inbred, since there was no sign of crossing between close relatives. In fact, the effective population remained stable at about 1,600 individuals until just a few centuries before its total disappearance.

The culprit. If it was not genetics and inbreeding that condemned the species, and not human hunting either (humans and rhinos coexisted for thousands of years without it happening), what did? Science now points to the Bølling-Allerød interstadial, a period of abrupt climate warming about 14,000 years ago. This phenomenon transformed the dry, cold steppe (the rhino's paradise) into a landscape of shrubby vegetation and, most critically, deep snow.

Unable to survive. The woolly rhino, with its short legs and heavy body, was not built to walk on soft snow or dig for grass beneath thick layers of it. It was an environmental trap: the change happened so quickly that the animals could not adapt to the new habitat forming around them.

Looking to the future. What we learn from the past can also be applied today and tomorrow.
This study does not only speak of the past: in the context of the current climate crisis, the case of the woolly rhinoceros is a warning. It shows that even a species with a stable population and robust genetics can collapse almost instantly if its ecosystem changes abruptly.

Images | Wikipedia

In Xataka | Whale vomit: a rare substance that looks like floating garbage, but can cost up to $71,000 per kilo

Almost all phones with optical zoom have the same problem. This Chinese brand believes it has solved it in a curious way

The greatest illusion trick in mobile photography is continuity between cameras. When we zoom from 1x to 5x on a telephoto smartphone, we are not moving lenses as on a camera; the phone jumps between fixed sensors and fills the gaps with digital cropping and AI. The result is those sudden jumps in color and framing in the viewfinder and a loss of quality in the "intermediate zooms" we get when pinching the screen. Tecno, the star brand of the giant Transsion (the fifth-largest manufacturer in the world, hot on Xiaomi's heels in some markets), has taken advantage of its annual event to present two technologies that attack precisely this problem: a zoom that does not "jump" and a periscope that shrinks.

Continuous optical zoom, from 1x up to 9x. The most ambitious proposal is the "Freeform Continuum Telephoto". On paper, it promises to maintain optical sharpness throughout. It represents an important leap, although it is not the first: Sony tried it with the Xperia 1 IV, although its range was more limited. LG also showed similar concepts a few years ago, but no one had promised to cover everything from the main wide-angle lens to the long telephoto in a single module. To achieve this without turning the phone into a brick, the Chinese firm moves away from the traditional design of lenses that move longitudinally. Instead, it turns to the physical principle of "Alvarez lenses": a system that uses two lenses with free-form surfaces that move perpendicular to the optical axis. By sliding one over the other laterally, they change the optical power of the set and achieve the zoom effect. This technology ties in with recent reports that Samsung was developing continuous-zoom cameras for Chinese manufacturers.

A periscope that folds on itself. The second innovation presented by Tecno attacks volume. We are obsessed with ever larger sensors, but the space inside a phone is finite. Periscope telephoto cameras require a lot of room, but Tecno's "Dual-Mirror Reflect Telephoto" promises to reduce the module's footprint by 50% and its height by 10%. Instead of a single prism that bends light 90 degrees, the system uses coaxial optics that bounce light several times inside the lens using reflective mirrors. That is what allows long focal lengths in a shorter physical distance. This design does have a physical trade-off: because it uses a central obstruction, the bokeh is not circular but takes on a donut shape. Tecno sells it as an artistic feature; the truth is that it is a consequence of mirror optics.

Battle against processing. Tecno's new technology arrives at a time when mobile photography depends heavily on processing that chases the Instagrammable photo over realism. Betting on better optics instead of digital cropping and AI upscaling seems the right direction for achieving naturalness. However, we should keep some caution. The challenge for this zoom is not only that it works, but that it is bright. Maintaining a decent aperture across that whole range is no easy task. If the system is too dark, the ISO will shoot up, generating noise that software will have to remedy: back to processing. For now, we must wait and see whether these concepts end up in a commercial phone.

Images | Tecno

In Xataka | I am an amateur photographer, and I will tell you which are the best phones to take almost professional photos without spending a fortune.
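The Alvarez-lens principle the piece leans on can be shown with a few lines of symbolic algebra. The sketch below is a generic textbook version (the cubic sag profile and the coefficient A are illustrative assumptions, not Tecno's actual design): adding the thickness of two complementary cubic plates shifted sideways by ±d leaves a parabolic term, i.e. a lens whose optical power grows linearly with the lateral displacement.

```python
import sympy as sp

# Generic Alvarez-lens algebra (illustrative; not Tecno's actual optics).
# Each plate has a cubic freeform sag t(x, y) = A*(x*y**2 + x**3/3); one plate
# is shifted by +d along x and the complementary (negative) plate by -d.
x, y, d, A = sp.symbols("x y d A", real=True)
sag = lambda xx: A * (xx * y**2 + xx**3 / 3)

combined = sp.expand(sag(x + d) - sag(x - d))
print(combined)
# -> the terms 2*A*d*x**2 + 2*A*d*y**2 plus a constant piston term 2*A*d**3/3.
# The x**2 + y**2 part is a paraboloid: an ordinary lens whose curvature (and
# hence focal power) is proportional to the shift d, which is why sliding the
# plates sideways gives a smooth, continuous zoom.
```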

There are those who think that the housing crisis can be solved by building more. At the Polytechnic University of Catalonia they believe they are wrong

Spain has a problem with housing. That is an (almost) objective fact. The CIS says so, ranking it as Spaniards' greatest concern, and a quick look through the news archive confirms it. In recent months few topics have generated more political debate or brought so many people onto the streets as the difficulty of accessing housing. What is not so clear is how to solve this residential "crisis", acknowledged by the Government itself. Should we build more houses? Does Spain suffer from a housing deficit? Do we need more land to build on? Usually the answer to those three questions is a resounding "yes". Now a new study, signed by two professors of the Polytechnic University of Catalonia (UPC) and published in a journal linked to the Ministry of Housing, suggests that perhaps we were wrong.

What has happened? Two professors from the Higher Technical School of Architecture of Barcelona (ETSAB), Blanca Arellano-Ramos and Josep Roca-Cladera, have published a study on the housing problems Spain is facing. The report is titled 'Five theses about housing policy in Spain' and is included in a monograph of CyTET, a journal published by the Ministry of Housing. So far nothing exceptional. The curious thing is that the text questions many ideas rooted in the real estate sector, such as that our country suffers from a housing deficit or needs more land to build. While the Bank of Spain (BE) estimates the mismatch between supply and demand at 700,000 homes, the study questions whether there really is a "hole" in the market, or whether prices would go down if we built more.

Is there a housing deficit? As its title indicates, the article is structured around five theses, and the first addresses precisely that point: does Spain suffer from a housing deficit? The question is interesting because it is one of the most deeply rooted ideas in the sector. The Bank of Spain itself has calculated that 700,000 homes would be needed to meet residential demand. For Arellano-Ramos and Roca-Cladera the reality is quite different. In their view, one cannot talk about a deficit without first taking into account the excess housing accumulated between 2011 and 2021 and the stock of vacant properties. The researchers note that between 2011 and 2021 the growth of the housing stock exceeded the growth in the number of households by 959,554 units, generating a considerable pocket. In fact, they state that by 2021 the "accumulated excess" was close to 8.1 million properties, a "'cushion' more than enough to absorb temporary housing deficits such as the one produced during the 2021-2024 period," recalls the UPC in the statement announcing the study.

What does that mean? That for the researchers it is not so obvious that Spain suffers from a shortage of new housing. In their analysis they also note that a good part of the excess of houses and apartments corresponds to second homes and empty homes. The INE itself estimates that, at least in 2021, there were 3.84 million uninhabited properties, 14.4% of the housing stock. That percentage far exceeds what most experts consider "desirable" (5%), but, at least in the statement, the UPC does not address another fundamental aspect: the distribution of those unused properties, whether they are located in stressed markets such as Madrid, Barcelona or Malaga, or in places where demand is minimal or even nonexistent, as in the case of "emptied Spain". What if we build more?
That is the second question the researchers address: what if we build more homes? Would prices fall? Their answer is once again skeptical, to say the least: increasing construction will not lead to greater social equity, nor will it serve to soften prices. "On the contrary," the UPC note suggests. "According to the authors of the study, the solution is not to build more new homes so that the laws of the market balance prices. In addition to having serious environmental effects, what that favors is a real estate bubble like the one that occurred around 2000."

What happens in neighboring countries? Among other arguments, Arellano-Ramos and Roca-Cladera point out that the rise in prices is not a problem exclusive to the Spanish market, but something widespread across the continent. So the question is obvious: if the increase in prices is due to an imbalance between supply and demand, do most EU countries share that same problem? "Is there simultaneously a restriction of supply in relation to demand occurring throughout Europe that explains the increase in residential prices? It does not seem plausible. It is therefore not reasonable, prima facie, to point to the scarce construction of new housing as the main cause of the price of housing," the authors reflect, before recalling that Spain has invested a higher percentage of GDP in construction than the European average.

Do we need more land? The researchers also question whether Spain's housing affordability problem can be explained by a scarcity of land. To test it, they turn to the historical record: between the late 1990s and the early 2000s, buildable land was made widely available in the country, allowing "massive construction" of residential housing. That boom was not accompanied, however, by a reduction in the price per square meter. Quite the opposite: residential prices increased, as in other parts of Europe. If Spain saw housing prices rise between 1996 and 2008, it was not because there was no land on which to build new homes. "Spain became more urbanized than ever and the result was not a reduction in prices, quite the contrary," stresses the UPC in its statement, which recalls that between 2000 and 2012 Spain was the European country with the greatest "consumption" of land: more than 2,400 square kilometers (km²), almost as …
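The scale of the "cushion" the authors describe can be sanity-checked with the figures quoted in the piece: the INE's 3.84 million empty homes, the 14.4% vacancy rate, the 5% rate "most experts consider desirable" and the Bank of Spain's 700,000-home estimate. A minimal back-of-envelope sketch:

```python
# Back-of-envelope using only figures quoted in the article above.
vacant_homes = 3.84e6     # INE estimate of uninhabited properties in 2021
vacancy_rate = 0.144      # those homes were 14.4% of the housing stock
desirable_rate = 0.05     # vacancy level most experts consider "desirable"
boe_deficit = 0.7e6       # Bank of Spain's estimated supply-demand mismatch

total_stock = vacant_homes / vacancy_rate
excess_vacant = vacant_homes - desirable_rate * total_stock

print(f"Implied total housing stock: {total_stock / 1e6:.1f} million homes")
print(f"Vacant homes above the 5% level: {excess_vacant / 1e6:.2f} million")
print(f"Bank of Spain's estimated deficit: {boe_deficit / 1e6:.2f} million")
```

On those figures alone, the vacant homes above the "desirable" threshold outnumber the Bank of Spain's estimated deficit several times over, which is essentially the authors' argument; whether those homes sit where demand actually is, as the piece notes, is a separate question.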

Google has solved a problem in two hours that would take three years on a supercomputer. It's the quantum advantage we needed

Google has taken a notable step in the field of quantum computing with a new algorithm called Quantum Echoes. This algorithm has demonstrated for the first time a "practical and verifiable quantum advantage" that leaves today's large supercomputers far behind Google's quantum computer.

13,000 times faster than a supercomputer. The new algorithm, called Quantum Echoes, has made it possible to demonstrate that a quantum computer, based on Google's Willow quantum chip, can successfully execute a verifiable algorithm that exceeds the capacity of today's large supercomputers. That computer managed to run the algorithm 13,000 times faster than the best current classical supercomputer executing equivalent code.

"Quantum verifiability". Google's quantum computer solved the problem in just over two hours, whereas the second most powerful supercomputer in the world, Frontier, would have taken 3.2 years. It also did so in a verifiable way: the result can be reproduced on the quantum computer itself or on any other machine of similar caliber.

Quantum echoes. The algorithm resembles an advanced echo: you send a signal into the quantum system, perturb one qubit, and then precisely reverse the evolution of the signal to "listen" to the resulting echo. This echo is special because it is amplified by constructive interference, a quantum phenomenon in which waves add up and become stronger, which allows the effect to be measured precisely. The algorithm makes it possible to model the structure of systems in nature, from molecules to black holes.

An achievement with a lot of Nobel Prize behind it. The milestone builds on decades of research in this area, including that carried out by the recent Nobel laureate Michel H. Devoret, who is part of the Google team. Together with his colleagues John M. Martinis and John Clarke, he laid the foundations for this advance at the University of California, Berkeley in the mid-1980s.

Hello, qubit. Their discovery: the properties of quantum mechanics could also be observed in electrical circuits large enough to be seen with the naked eye. That gave rise to superconducting qubits, the basic building blocks with which Google (like other companies) has created its quantum computers.
Devoret joined Google in 2023, strengthening the company's trajectory in its pursuit of the now famous "quantum supremacy".

Promising practical applications. The advance is aimed squarely at solving important problems in fields such as medicine and materials science. Quantum computing remains an experimental technology and faces a key challenge in error correction, but Quantum Echoes demonstrates that "quantum software" is advancing at a pace parallel to the hardware. Google applied Quantum Echoes to a proof-of-concept experiment in nuclear magnetic resonance. This technique acts as a "molecular microscope", a powerful tool that will help design drugs or, for example, establish the molecular structure of new polymers.

A marathon. This new milestone demonstrates the progress this technology has made in recent years, but Google is not alone here. Microsoft and IBM have also made notable advances in recent years, and of course there are numerous startups, both in the US and in China, working in this area.

In Xataka | Decoherence is the biggest problem with quantum computers. This superconductor wants to end it
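The "echo" idea described in the piece (evolve forward, nudge a single qubit, run the evolution in reverse and listen for what comes back) can be illustrated with a toy simulation. The sketch below is a minimal, generic Loschmidt-echo-style example on four simulated qubits with a random unitary; it illustrates the concept only and is not Google's Quantum Echoes algorithm or anything that runs on the Willow chip.

```python
import numpy as np

# Toy "quantum echo": evolve forward with a unitary U, perturb one qubit,
# evolve backward with U^dagger, and measure how much of the initial state
# returns. Illustrative only -- NOT Google's actual Quantum Echoes protocol.
n = 4                               # simulated qubits
dim = 2 ** n
rng = np.random.default_rng(0)

# A random unitary standing in for the forward evolution of the system
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(M)

# Perturbation: a Pauli-X flip on the first qubit, identity on the rest
X = np.array([[0, 1], [1, 0]], dtype=complex)
V = X
for _ in range(n - 1):
    V = np.kron(V, np.eye(2))

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0                       # start in |0000>

unperturbed = U.conj().T @ (U @ psi0)        # forward then straight back
perturbed = U.conj().T @ (V @ (U @ psi0))    # forward, kick one qubit, back

print("Echo with no perturbation :", round(abs(np.vdot(psi0, unperturbed)) ** 2, 3))
print("Echo after the perturbation:", round(abs(np.vdot(psi0, perturbed)) ** 2, 3))
# The drop in the second number is the "echo" signal: it encodes how far the
# perturbation spread through the system during the forward evolution.
```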

Science has solved the mystery of the Sun's plasma rain

Although it may seem incredible, it rains on the Sun. But it is not a rain of water like the one we know on Earth: it is a rain of incandescent plasma, a phenomenon that baffled scientists for decades. Now, a team from the University of Hawaii has solved the mystery, and the answer is changing the way we understand the atmosphere of our 'reference' star.

The discovery. The study, published in the prestigious journal The Astrophysical Journal, not only explains why these spectacular plasma condensations form, but also gives us new tools to predict the space weather that affects our technology here on Earth.

The mystery. "Solar rain", or more technically coronal rain, occurs in the corona, the outermost and hottest layer of the Sun. There, masses of denser and relatively "cold" plasma condense and fall back towards the solar surface, creating bright arcs and loops. And although we say "cold", we are still talking about tens of thousands of degrees, compared with the millions of degrees of the surrounding plasma; cold only by the Sun's standards. The big enigma was speed. Solar models predicted that this cooling and condensation process should take hours, or even days. However, observations showed that rain formed within minutes during solar flares. Something didn't add up. The problem has now been traced to the models that were being used: they assumed that the chemical composition of the corona was static and uniform, a simplification that meant we were miscalculating what happens in our star.

The key. The breakthrough came when the researchers, led by graduate student Luke Fushimi Benavitz, decided to abandon that old assumption. They introduced into their simulations a factor that had been overlooked until now: the abundance of chemical elements varies in space and time rather than being static. And this is where the physics gets very interesting.

The mechanism. First comes a solar flare that heats the chromosphere (the layer below the corona). This impulsive heating causes a large amount of chromospheric plasma to "evaporate" and rise at high speed into the coronal loops. This "new" plasma has a composition similar to that of the photosphere, the visible surface of the Sun. Once in the coronal loop, the plasma, rich in elements such as iron and silicon, is pushed and concentrated at the highest point of the arc, creating a "peak" of these elements. A property of these elements is that they can radiate a lot of energy quickly, which cools the plasma, and the sudden concentration at the apex of the loop acts as an ultra-powerful radiator, producing localized and very rapid cooling. Finally, this sudden cooling causes a pressure drop. As a result, more plasma from the surrounding area is sucked into that region, increasing the density. And the higher the density, the more efficient the cooling becomes, until a "thermal runaway" sets in: the temperature plummets and the plasma condenses, forming rain.

The importance. For the first time, this model has achieved something not managed before: simulating the formation of rain on the Sun. Understanding it goes far beyond solving an old riddle. Most importantly for us, it improves our ability to predict space weather.
Solar flares can launch enormous amounts of energy and particles into space that, upon reaching Earth, can damage satellites, disrupt communications and overload power grids. More precise models of the Sun's behavior allow us to better anticipate events that, until now, gave us very little warning.

Rewriting. This discovery forces us to rewrite a fundamental part of solar physics. The idea that the composition of the solar atmosphere is dynamic rather than static opens up a large field of research into exactly how energy moves through the star.

Images | Javier Miranda

In Xataka | As if nothing were going on, the Sun has just caused a radio blackout with its most powerful eruption of 2025
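The "thermal runaway" step in the mechanism above can be made concrete with rough numbers. In the sketch below, the constant cooling function (about 1e-35 W·m³) and the loop pressure (0.01 Pa) are order-of-magnitude assumptions of mine, not values from the study; the point is only that, at constant pressure, the radiative cooling time shrinks rapidly as the plasma cools and densifies, which is what turns a slow process into a runaway.

```python
# Toy, order-of-magnitude illustration of runaway radiative cooling at
# constant pressure. Lambda (cooling function) and p (loop pressure) are
# illustrative assumptions, NOT values taken from the study discussed above.
k_B = 1.38e-23        # Boltzmann constant, J/K
Lam = 1e-35           # optically thin cooling function, W*m^3 (assumed constant)
p = 1e-2              # coronal-loop gas pressure, Pa (illustrative)

def cooling_time(T):
    """tau ~ thermal energy / radiative losses = 3*k_B*T / (n * Lambda)."""
    n = p / (k_B * T)                  # isobaric: density rises as T falls
    return 3 * k_B * T / (n * Lam)     # seconds

for T in (2e6, 1e6, 5e5, 1e5):
    print(f"T = {T:9.0f} K  ->  cooling time ~ {cooling_time(T):8.0f} s")
# At constant pressure tau scales as T^2, so once condensation begins it
# accelerates: several hours at coronal temperatures, roughly a minute once
# the gas has cooled toward "rain" temperatures.
```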

More than ever, we needed a better way to predict storms and hurricanes. AI has solved the problem

Among the areas Google DeepMind works in, weather prediction is one where it is achieving the most precision, thanks to the refinement of artificial intelligence tools designed for it. The company has just demonstrated that AI can outperform traditional methods in hurricane prediction: its Weather Lab model managed to forecast more accurately the trajectory and intensity of Hurricane Erin, which went from tropical storm to Category 5 in less than 24 hours a few days ago.

The first real exam. Until now, these meteorological models were a promise. Hurricane Erin became the first real-time test of fire for the Google system. During the first critical days of the forecast, the artificial intelligence model beat both the official forecast of the US National Hurricane Center and several traditional physical models, including the most reliable European and American ones.

How it works. Traditional models are based on complex physical equations that recreate current atmospheric conditions: humidity, pressure, temperature. Google's approach is radically different. Its AI has been trained on a massive data set that includes historical weather information for the entire planet and a specialized database with details of almost 5,000 cyclones observed over the last 45 years. "They match these long historical data with details about how hurricanes behave and statistically combine them to see patterns that the human eye could not detect," explains James Franklin, former head of the Hurricane Specialists Unit at the National Hurricane Center.

Why it matters so much. Accurate hurricane prediction is vital for knowing what measures are needed to protect people in an emergency. The three-to-five-day forecasts are crucial for decisions about evacuations and preparation. In Google's internal tests with storms from 2023 and 2024, its model managed to predict the final location of cyclones with about 140 kilometers more precision than the European model (ECMWF), considered the most accurate available.

Exceptional performance. Franklin highlights the performance of the Google system: "It really surpassed the other guidance in terms of intensity. It captured the general shape of the life-cycle change almost exactly, practically without error." The model not only got the trajectory right; it predicted with surprising precision how Erin's intensity would evolve throughout its life cycle.

Still in development. Despite the success, the Google model is not ready for public use. Weather Lab includes a warning recommending that users trust the official forecasts of the National Hurricane Center. Franklin, however, is optimistic about the future: "For next year, it will receive a very serious look and will really play a role in the forecasts that come out of the Hurricane Center."

Cover image | Brian McGowan

In Xataka | It is no longer necessary to pay to transform our photos into what we want. The latest Google offers it for free for everyone
