Two decades ago, Spain's "canis" flooded the country with souped-up mopeds. Today they sell them for a fortune

If you know what a Yamaha Jog, an Aerox or a Piaggio Zip is, I'm very sorry: you are already old. Between the 90s and the 2000s, young Spaniards could get a moped license from the age of 14, and the 49cc scooter became an object of worship… and of tuning. With the tightening of European regulations, this type of motorcycle has practically stopped being sold. But some people are making a killing with them on second-hand platforms.

The fall of the 49cc. The moped market has changed completely. At the end of the 2000s, nearly 200,000 units were sold per year. Two decades later, sales have fallen by more than 90%. Today mopeds are a minimal part of the market: Spain barely records more than 20,000 registrations per year, while 125cc motorcycles dominate sales thanks to the fact that they can be ridden with a car license. The fall of the 49cc coincided with key factors such as the tightening of European homologation rules and the rise of the 125cc.

The 49cc fever. The thunderous and (for many) unpleasant hum of these motorcycles was no coincidence. Modifications were the order of the day: exhaust, cylinder, variator… Mopeds with a tiny engine outperformed many of today's 125cc scooters. On paper, the homologation rules prevented these mopeds from exceeding 45 km/h. The reality? Even the slowest could double that figure straight out of the factory. It was enough to remove a few restrictors in a matter of minutes, and anyone who dared to attempt a simple tune-up could easily push them to (or beyond) 100 km/h.

The money. A classic like the Yamaha Jog cost just over 2,000 euros in 2005. Twenty years later, it is easy to find units in good condition on Wallapop from 1,200 euros to more than 2,500. Tuned to the hilt, of course. In fact, it is practically impossible to find a moped of this style that is not souped up.

A less safe era. Between the 90s and the 2000s, it was common to see minors riding these motorcycles. The accident rate per kilometer was very high, and the risk multiplied compared to adults on larger-displacement motorcycles. Today the panorama is very different. The 50cc has been relegated to a niche, the 125cc dominates the urban market and electric scooters are beginning to gain ground. But for an entire generation, the metallic sound of a Jog or an Aerox remains the soundtrack of adolescence.

In Xataka | I was about to buy the best-selling Chinese motorcycle in Spain. Until I read the fine print

We have spent decades wondering whether being vegetarian prevents cancer. We now have a very clear answer

There are endless diets in different parts of the world, shaped largely by local society and culture; in Spain, for example, there is the varied Mediterranean diet. But the focus of the debate is on which diet best maintains good health in the long term. And here the vegetarian diet has a lot to say.

Giving answers. For years we have known that reducing our consumption of processed meat is good for our health, but a new macro-study led by the University of Oxford has put compelling data on the table about how dietary choice directly impacts the risk of developing different types of cancer. The work, published in the journal British Journal of Cancer, stands as the largest analysis performed to date on this topic. And no wonder: the researchers were able to analyze the histories of 1.8 million women and men who participated in nine prospective studies across three continents.

A shield. Previous studies already pointed to vegetarians having a lower oncological risk, but there was not enough statistical power to refine the data and make a categorical statement. This study changes that: the researchers report that vegetarians have a significantly lower risk of suffering from five types of cancer compared to people who eat meat regularly.

Results. Obviously, many other factors influence this, such as weight or lifestyle, but even after adjusting the data, a clear result emerged, summarized in the following risk reductions:

- 31% lower risk of multiple myeloma.
- 28% lower risk of kidney cancer.
- 21% lower risk of pancreatic cancer.
- 12% lower risk of pancreatic cancer.
- 9% lower risk of breast cancer.

The curious thing about these data is that for ten other types of cancer studied, such as lung cancer in non-smokers, science has not found a significant difference. And this opens the door to working out why this diet protects against some specific cancers and not others.

The fine print. Not everything about this diet is positive: the study showed that vegetarians have almost double the risk of developing esophageal cancer compared to people who eat meat. Why? According to the researchers, the benefits of a vegetarian diet against cancer are explained by the greater intake of fruits, vegetables and fiber and the absence of processed meats. The higher risk of esophageal cancer, on the other hand, is related to the nutritional deficiencies vegetarians may have: the lack of certain nutrients exclusive to, or more present in, foods of animal origin could be weakening the natural defenses of this tissue.

The rest of the diets. Beyond the war between meat and vegetables, the researchers wanted to go further and look at other diets. Pescetarians, who eat no meat but do eat fish and seafood, had a lower risk of developing breast, kidney and colon cancer. Vegans, however, come with important nuances, since they showed a higher risk of suffering from colorectal cancer. The researchers themselves point out, though, that there are still not enough statistical cases to accurately evaluate the impact of veganism on rarer cancers.

The recommendations. This study leaves standard oncology advice intact: prioritize whole grains, legumes, fruits and vegetables in the diet and limit the consumption of red and processed meat, while always making sure that all nutritional needs are met and following medical advice.
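One caution when reading those percentages: they are relative risks, not absolute ones. A minimal sketch of the difference, with an assumed baseline that is purely illustrative and not taken from the study:

```python
# Relative vs. absolute risk, worked through with assumed numbers.
baseline_lifetime_risk = 0.017   # assumption: ~1.7% lifetime pancreatic cancer risk for meat eaters
relative_reduction = 0.21        # the 21% lower risk reported for vegetarians

vegetarian_risk = baseline_lifetime_risk * (1 - relative_reduction)
print(f"{baseline_lifetime_risk:.1%} -> {vegetarian_risk:.1%}")  # 1.7% -> 1.3%
```

In other words, a headline figure of "21% lower risk" moves a small absolute risk to a slightly smaller one; it does not mean 21 fewer cases per 100 people.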
Images | amin ramezani

In Xataka | Having a beer or a wine at 65 seems like a harmless indulgence. We have more and more evidence to the contrary.

IBM has spent decades living off the fact that no one could kill COBOL. Anthropic has other plans

IBM shares fell about 13.2% yesterday on the New York Stock Exchange for a simple reason: Anthropic announced that its AI model, Claude, can be used to modernize systems built on the legendary COBOL programming language. And that is something that seemed virtually impossible.

The immortal language. As Anthropic itself indicates, it is estimated that COBOL handles 95% of all transactions made at ATMs in the US. A 2022 study revealed that there are 800 billion lines of COBOL code still operating in production systems on a daily basis.

That almost no one uses anymore. Against that reality stands another equally powerful one: almost no one programs in COBOL anymore, because the language has been with us for 65 years and has ended up being replaced by modern programming languages. The question, of course, is who takes care of all those lines of code if there are almost no human programmers left who can. Anthropic itself put it plainly: "the number of people who understand COBOL decreases every year."

AI to the rescue. That's where Claude, Anthropic's family of generative AI models, comes in. According to the company, Claude is now capable of "modernizing" COBOL, despite how difficult and expensive doing so used to be. IBM has been trying for years and in fact applied the same recipe, but its AI (Watson) does not seem to have made much progress.

Claude helps, but there must be a human expert supervising. Anthropic promises that its AI model is capable of reading the entire code base of a COBOL project, identifying entry points and execution paths through subroutines, mapping data flows and documenting dependencies. They stress, however, that it is with the supervision of a human expert that this can help modernize and polish all kinds of COBOL-based systems.

Critical systems. Of course, the question is whether AI will actually deliver on that promise, especially when we're talking about absolutely critical systems used in financial transactions. According to Anthropic, "the modernization of legacy code has been stagnant for years because understanding it cost more than rewriting it. AI reverses that equation."

COBOL is no longer IBM's ace in the hole. It's hard to know how much of IBM's business depends on COBOL systems, but it's certainly a relevant part. In 2025 the company posted revenue of $67.5 billion. About 45% comes from software; the rest is consulting and infrastructure, and this last division is where the IBM Z mainframe business sits, closely linked to COBOL systems. It's reasonable to think that revenues dependent on mainframes and COBOL are around 20% of IBM's total (and probably more in profits).

AI and the SaaSpocalypse. What happened with IBM and COBOL is the latest case of software that seemed to have a long future ahead of it until AI called that into question. Investors now seem to think that AI will replace many of these systems and SaaS platforms. That is precisely what has been dubbed the "SaaSpocalypse", in reference to the stock market falls these companies have suffered in recent months: Salesforce, SAP, Microsoft, Adobe, Intuit and Atlassian have seen notable declines averaging around 30-40%.

But. This investor panic contrasts with the current reality: AI models are proving able to do surprising things in the field of programming, but they are far from perfect.
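To make the idea concrete, this is roughly what the first step of such an audit could look like with Anthropic's Python SDK. This is a minimal sketch, not Anthropic's actual tooling: the model name and the billing.cbl file are illustrative assumptions.

```python
import anthropic

# Assumes the ANTHROPIC_API_KEY environment variable is set.
client = anthropic.Anthropic()

# Hypothetical legacy source file; any COBOL program would do.
with open("billing.cbl") as f:
    cobol_source = f.read()

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Document this COBOL program for a modernization audit: "
            "entry points, execution paths through subroutines, data flows "
            "and external dependencies.\n\n" + cobol_source
        ),
    }],
)

# The generated documentation: a starting point for a human expert, not a verdict.
print(response.content[0].text)
```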
The code must be reviewed, and IBM itself made that clear in a 1979 training manual: "A computer can never be held accountable. Therefore a computer must never make a management decision."

IBM has already survived other crises. The blue giant has taken a hit on the stock market, but it is one of those technology companies that have managed to recover and resist every blow in an industry that is normally merciless. IBM also has its own modernization solutions for its clients, and some analysts are convinced that IBM will in fact make more money than before if COBOL finally goes away.

In Xataka | Old programmers never die, and Silicon Valley is realizing that

For decades we have looked for the end of the Neanderthals in weapons and climate. A study proposes looking for it in the placenta

For decades, we have tried to explain why our species has persisted over time while the Neanderthals did not. We have blamed climate change, competition for resources, a supposed cognitive inferiority and even genetic assimilation. However, a new study suggests that the answer might lie not on the battlefield or in the weather, but in something far more intimate: the placenta.

A new idea. Science here proposes a controversial hypothesis: that Neanderthals could have become extinct, in part, due to an extreme genetic susceptibility to preeclampsia, a disorder much talked about today, which is simply a hypertensive condition of pregnancy that can be lethal for both the mother and the fetus.

A price to pay. To understand the hypothesis, we must first understand the human "obstetric paradox". Our species has an almost unique trait: deep hemochorial placentation. It may sound very bad, but it is actually necessary to feed a fetal brain as demanding as ours, and as that of Neanderthals. The placenta needs to aggressively invade the arteries of the maternal uterus to obtain maximum blood flow; the problem is that this carries great risk.

The possibilities. Faced with this invasion, several outcomes are possible. The first is that it works and the fetus can develop its massive brain. But if it fails, a great immunological and vascular reaction is unleashed in the mother, which is what we know as preeclampsia. It presents with severe hypertension, organ damage and risk of death for both mother and fetus. It is a significant problem in human pregnancies even today, but the new work argues that, while Homo sapiens evolved a physiological "safety mechanism" to mitigate this impact, Neanderthals were not so lucky.

A demographic winter. The study suggests that, as the Neanderthal brain grew, becoming larger than ours, its metabolic needs forced an increasingly aggressive placentation. Penetrating deeper into the uterine wall significantly increases the risk of preeclampsia, and the problem is that Neanderthal women would have lacked the immune mechanism to tolerate this invasion. From there, the researchers construct a scenario in which rates of preeclampsia and eclampsia in Neanderthals could have reached between 10% and 20% of all pregnancies, compared to much lower rates in preindustrial humans.

The meaning. This scenario translates into devastating maternal and fetal mortality, and the direct consequence is that small, dispersed hunter-gatherer populations would have suffered a constant decline in reproductive success. That is a far more effective death sentence than any war: no sudden catastrophe is needed, it is enough for more mothers and babies to die than are born over a few millennia for a species to end up disappearing.
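The arithmetic of that slow-motion extinction is easy to sketch. Here is a toy model in which every number is a hypothetical assumption (the study gives no population figures):

```python
# Toy model: a population whose per-generation growth turns slightly negative
# once extra maternal and fetal mortality is factored in. All numbers assumed.
population = 50_000            # hypothetical starting population
growth_per_generation = -0.01  # hypothetical net decline of 1% per generation
years_per_generation = 25

for _ in range(200):           # 200 generations, i.e. about 5,000 years
    population *= 1 + growth_per_generation

print(round(population))       # ~6,700: a quiet collapse, no catastrophe required
```

A decline of just 1% per generation, invisible to anyone living through it, shrinks the population by almost 90% in 5,000 years.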
There is skepticism. Within the scientific world there are doubts about the study's claims, since physical evidence to support the hypothesis is lacking. The first objection is that no markers exist in the fossils found so far that would allow us to diagnose preeclampsia in a Neanderthal woman from 40,000 years ago. In addition, although we know genetic variants associated with the risk of preeclampsia in modern humans, such as genes linked to FLT1, no systematic screening of Neanderthal DNA has yet been performed to confirm whether they possessed the "high-risk" variants or lacked the protective ones.

Why it appeals. What makes this hypothesis attractive to biologists is that it fits with maternal-fetal conflict theory. As previous reviews point out, pregnancy is not always perfect cooperation, but rather a tense biological negotiation. The fetus "wants" more resources to survive, and the mother "wants" to limit that investment in order to survive and have future children. Preeclampsia is often the result of this conflict getting out of control. So if Neanderthals took the "big brain" strategy to the limit without developing the biological counterpart to protect the mother, their own reproductive biology could have become an evolutionary trap.

Images | Nanne Tiggelman, freestocks

In Xataka | A mixture of 4,000 kilometers: we have the first detailed map of the coexistence between Neanderthals and Sapiens

We have been searching for radioactive "monsters" for decades. What we have found instead is rapid evolution

When we think about animals and radiation, our minds may conjure up the three-eyed fish from The Simpsons or gigantic beasts from science fiction movies. But the areas of the planet that have suffered a radioactive disaster present a much more complex and, from an evolutionary point of view, often more fascinating reality.

The data. Decades after the accidents at Chernobyl in 1986 and Fukushima in 2011, and the historic disaster at Mayak, science has begun to collect enough data to understand what happens when fauna returns to the "exclusion zones" abandoned by humans. The most recent studies tell us that there are no monsters, but there are accelerated genetic changes, forced adaptations and physiological scars.

The Chernobyl case. The Chernobyl Exclusion Zone has become an involuntary nature reserve: without humans, fauna has proliferated, but genetic studies tell a story of invisible stress. One of the most classic and revealing lines of research focuses on the barn swallow, since, far from being immune, these birds have acted as bioindicators of the disaster. Research has documented an unusually high frequency of partial albinism in their plumage, an external sign of genetic instability. An increase in the germline mutation rate of between 2 and 10 times has been recorded compared to control areas in Italy or uncontaminated rural Ukraine. As a consequence, between 1991 and 2006, high frequencies of physical abnormalities were documented in adults, suggesting that radiation continues to exert constant selective pressure.

The case of the dogs. In Chernobyl, perhaps the most surprising discovery of recent years comes from the descendants of pets abandoned during the evacuation. A recent genomic analysis of feral dogs living near the nuclear power plant shows a genetic structure distinct from that of dogs living in the city of Chernobyl, just a few kilometers away. Scientists have identified changes in candidate genes such as XRCC4, essential for DNA repair. This suggests multigenerational selection, in which the dogs with the best mechanisms for repairing radiation-induced cellular damage are the ones that have managed to survive and reproduce. More broadly, a meta-analysis covering 45 studies and 30 species confirms that the effect on mutation rates is large and persistent, and, curiously, stronger in plants than in animals.

The case of Fukushima. Japan is home to one of the most recent nuclear disasters, and it is there that we have been able to observe both the immediate impact and the medium-term adaptation of nature. One of the most notable findings comes from a new study published in January of this year, which tells how thousands of domestic pigs escaped from their abandoned farms and began to mate with wild boars in the forest. The encounter not only produced pig-boar hybrids, but also accelerated the biology of these animals. We are not facing "radioactive mutants" like the three-eyed fish from The Simpsons, but something biologically more interesting: an accelerated breeding machine that has managed to dilute its domestic genes in record time.

How they saw it. The researchers analyzed the mitochondrial DNA, which is inherited only from the mother, and the nuclear DNA of 191 wild boars and 10 pigs in the area between 2015 and 2018. The results suggested that, although the hybrids look like wild boars, many hide a secret in their maternal lineage. The key is the biological difference between the two species: while the wild boar has a strict annual breeding season, domestic pigs have a continuous reproductive cycle and breed all year round. Hybrids descended from a pig mother inherit this rapid reproductive cycle, which has driven a rapid generational turnover: more than five generations of hybrids were detected in just a few years after the disaster. In short, wild boars have seen their reproduction accelerate dramatically.

A genetic paradox. Here comes the most curious part of the study: if these animals reproduce so much, why don't we see pigs everywhere in Fukushima? The answer lies in massive backcrossing. The population of wild boars in the area is immensely larger than that of pigs escaped from farms, so hybrids almost always end up mating with pure wild boars. If hybrid mothers have many offspring thanks to their domestic "engine", and those offspring cross again with wild boars, the pig's nuclear DNA, which defines appearance and most traits, is quickly diluted.
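That dilution follows straight from Mendelian expectation. A minimal sketch of the arithmetic (idealized, not taken from the study):

```python
# Each backcross to a pure wild boar halves the expected share of pig nuclear
# DNA, while the pig mitochondrial DNA persists intact down the maternal line.
pig_nuclear_fraction = 0.5          # first-generation pig x wild boar hybrid
for generation in range(2, 7):      # five further generations of backcrossing
    pig_nuclear_fraction /= 2
    print(f"generation {generation}: nuclear DNA ~{pig_nuclear_fraction:.1%} pig, mtDNA 100% pig")
```

After five backcrosses the expected pig share of the nuclear genome is below 2%, which is why the animals look like pure wild boars while their maternal lineage still betrays the farm.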
An evolutionary improvement. With this dilution, the study indicates that, although the mitochondrial DNA reveals the domestic origin of these new wild boars, their nuclear genome and appearance are almost indistinguishable from those of a wild boar. They are, for all practical purposes, reproductively "improved" wild boars that have erased any visible trace of the domestic pig.

The case of the butterfly. Staying in Fukushima, we find another interesting case in the pale grass blue butterfly, which was monitored between 2011 and 2013. A reduction in wing size and a delay in growth were observed, combined with the appearance of deformities in the eyes and wings. After the initial spike in anomalies, the population appeared to stabilize, which suggests a "purge" process: the most sensitive individuals died quickly, leaving a more resilient surviving population, an example of accelerated evolutionary adaptation.

The Mayak disaster. Although few people know it, before Chernobyl there was a disaster that received very little media attention, whose protagonist was the Techa River in the Urals (Russia). Here, between 1949 and 1952, waste was dumped, creating a historical laboratory for chronic exposure. Technical reports and dose modeling in aquatic organisms such as fish in the Obi-Techa river system remind us …

The world has been fascinated by the collapse of the Mayans for decades. In reality, almost everything we thought we knew was wrong.

They cultivated fields, raised livestock, built some of the most amazing buildings on the planet and developed a rich culture that included advanced astronomical knowledge that still intrigues experts today. The Mayans are one of the most fascinating civilizations on the planet, and rightly so: without them it is impossible to tell the history of Central America. However, little by little, as technology allows us to delve into their secrets, we are beginning to understand something: much of what we thought we knew about the Mayans was wrong. And that includes their collapse.

What happened to the Mayans? The question is very simple. Its answer, not so much. As our knowledge of the Mayan civilization has expanded (thanks to resources such as LiDAR technology), the idea historians had of its decline has also mutated. Marcus Haraldsson recently recalled in The Guardian what we know about Tikal, one of the largest Mayan urban centers, located in what is now Guatemala.

"Sudden and disastrous"? The most recent stele located at the site dates back to the year 869 AD, which leaves open the question of what happened in Tikal from that date on. For a time, historians entertained the possibility of a "sudden and disastrous" collapse that sealed its fate; but today that explanation seems increasingly remote. Experts now lean towards another option: a broad period of decline of around 200 years, during which farmers moved north and south and powerful urban centers were abandoned in favor of settlements such as Chichén Itzá, Uxmal or Mayapán, towards the north of the Yucatán Peninsula. There is even talk of the Terminal Classic period, which runs from 750 to 1050.

Changing perspective. This perspective has been adjusted over the decades and goes beyond the period of decline of the Mayan civilization. "We are no longer really talking about collapse, but about decline, transformation and reorganization of society, as well as a continuity of culture," Kenneth E. Seligson, associate professor of archaeology at California State University (CSU), tells The Guardian. "There have been several similar changes in places like Rome. (But) we rarely talk about the great Roman collapse anymore because they re-emerged in various forms, just like the Mayans."

But… what happened? What exactly happened for many of the main Mayan settlements (not all) to begin to collapse towards the 9th and 10th centuries remains a complex and much-debated topic. Today scholars point to a combination of factors including changes in trade routes, adverse weather, severe and prolonged droughts and wars, among others. The truth is that, well into 2026, researchers continue collecting clues that help clear up the unknowns of that period.

The importance of water. You don't have to go far back to read about new discoveries that speak precisely of the collapse of the Mayan civilization. Last August, a group of scientists published an article that emphasized the "important role" that "prolonged droughts" played in the Mayan decline. For their study, the researchers analyzed a stalagmite located in a cave in the Yucatan, a true geological and archaeological treasure when its oxygen isotopes are analyzed. The examination revealed a series of periods of severe drought between 871 and 1021, during the Terminal Classic, stages marked by water shortages during which the Mayans found it "extremely difficult" to grow their crops.
It may seem exaggerated, but the study revealed eight droughts during the rainy season that lasted at least three years each. Not only that: the longest drought lasted about 13 years. Other previous studies, based on sediments collected in the Chichankanab lagoon or stalactites recovered in Belize, had already suggested the role climate played in the Mayan collapse.

A question of droughts (and something else). Months after that study, in November, Benjamin Gwinneth, from the Université de Montréal (UdeM), published another that helps complete the picture. The Canadian institution recalls that between 750 and 900 AD the population of the Mayan lowlands suffered "a significant demographic and political decline" that coincided with "episodes of intense drought". What Gwinneth's work questions is whether this collapse is explained by the lack of water alone. His research is likewise based on the analysis of sediment samples dating back around 3,300 years.

And what exactly did he do? Gwinneth analyzed samples taken from Laguna Itzán, in present-day Guatemala, near a Mayan archaeological site. To be precise, the team focused on three "geochemical indicators" that reveal the evolution of fires, vegetation and population density in the area (the latter estimated thanks to fecal stanols) over thousands of years. The first conclusion they reached is that the first settlements appeared in the area 3,200 years ago and that for centuries the Mayans cultivated, burned to clear forests and used the ashes as natural fertilizer. The area's population also grew gradually. Over time they even changed their "agricultural strategy", dispensing with fire.

A "stable" climate. The second conclusion (and this is the interesting part) is that, unlike Mayan populations further north that did suffer "devastating droughts", in Itzán the climate was relatively "stable", thanks in part to its geographical location near the Cordillera. Curiously, that did not spare Itzán from the crisis suffered in other areas of the Mayan world. The question is obvious: why? If it kept raining there, what dragged them into the crisis? "Although there was no drought in the area, the population decreased during the Terminal Classic period. Indicators show a drastic drop, traces of agriculture disappear and the site was abandoned," points out Gwinneth, who recalls that some archaeologists place the beginning of the Mayan collapse in the Itzán area.

Why is it important? Because it suggests that drought (no matter how stubborn) is not enough on its own to explain the Mayan decline. "The answer lies in the interconnection of Mayan societies," the expert reflects. "Cities did not exist in isolation. They formed a complex network of commercial ties, …

We have been dreaming of infinite “solar gasoline” for decades. A new material inspired by plants has just proven that it is possible

Nature has been keeping a secret in broad daylight for millions of years: photosynthesis. For decades, science has pursued the dream of replicating this process to create clean, sustainable fuels, but "artificial photosynthesis" has always run into walls of inefficiency and technical complexity. Until now.

In short. A team of Chinese researchers has developed a method that mimics the natural process of transforming carbon dioxide (CO2) and water into the basic components of gasoline. We are no longer talking about abstract theory; it is a system capable of creating "solar fuel" without depending on expensive chemical additives, bringing us closer to the holy grail of renewable energy. The advance, recently published in the journal Nature Communications, comes from a joint team of the Chinese Academy of Sciences and the Hong Kong University of Science and Technology. The researchers have designed a new composite material: tungsten trioxide modified with silver atoms (Ag/WO3).

The end of chemical "tricks". The truly revolutionary thing about this "magic dust" is not only its composition, but what it manages to avoid. To date, most attempts at artificial photosynthesis cheated: they used "sacrificial agents", organic chemical additives (such as triethanolamine) that facilitated the reaction but were irreversibly consumed in the process, making it unsustainable and expensive at scale. This new system breaks that barrier. According to the scientific study, the catalyst achieves the light-driven conversion using only pure water (H2O) as an electron donor. No additives, no tricks. The result of this reaction is the efficient production of carbon monoxide (CO). Although it sounds like a harmful substance on its own, in the chemical industry this molecule is pure gold: it is a key intermediate that, mixed with hydrogen, forms the "synthesis gas" needed to manufacture complex hydrocarbons such as methanol or synthetic gasoline.

Air fuel. We are at the gateway to "solar fuels". The importance of this finding lies in its ability to decarbonize sectors that electric batteries cannot easily cover, such as commercial aviation or heavy shipping. Furthermore, the researchers point out in their paper that they have come up with a "universal strategy". Their material (Ag/WO3) is not an isolated invention, but a versatile "charger" that can be coupled to various types of catalysts (such as cobalt phthalocyanine, C3N4 or Cu2O) to drastically improve their performance. In fact, by combining this material with cobalt phthalocyanine (CoPc), they achieved an efficiency 100 times higher than that of the catalyst acting on its own, equaling the performance of older systems that used polluting additives. It is circular economy in its purest form: capturing the gas that warms the planet (CO2) and turning it into a valuable resource.

The secret is to imitate the leaves. To understand how they achieved this, you have to look at a tree leaf. In natural photosynthesis, the processes of breaking down water and fixing CO2 are separate. Plants use a molecule called plastoquinone (PQ) to temporarily transport and "store" electrons excited by the sun before using them, acting as an energy buffer. Without this buffer, the electrons would be lost before they could be used. The Chinese scientists asked themselves: "Can we build an artificial plastoquinone?" And the answer was tungsten.
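For reference, the underlying chemistry can be written out with the textbook half-reactions (standard electrochemistry, not equations quoted from the paper), with tungsten's valence swing acting as the electron buffer between the two:

$$
\begin{aligned}
\text{oxidation:} \quad & 2\,\mathrm{H_2O} \longrightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{reduction:} \quad & \mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \longrightarrow \mathrm{CO} + \mathrm{H_2O} \\
\text{net:} \quad & 2\,\mathrm{CO_2} \longrightarrow 2\,\mathrm{CO} + \mathrm{O_2} \\
\text{buffer:} \quad & \mathrm{W^{6+}} + e^- \rightleftharpoons \mathrm{W^{5+}}
\end{aligned}
$$

Water gives up the electrons, CO2 receives them, and the W6+/W5+ swing holds them in between; that buffering role is exactly what the next paragraph describes.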
The developed material works as a bioinspired charge reservoir:

- The battery: under sunlight, tungsten changes its chemical structure (a valence swing from W6+ to W5+), temporarily trapping electrons as if it were a micro-battery.
- The bridge: when the system needs energy to convert CO2, the silver (Ag) atoms act as a bridge, releasing those stored electrons at just the right moment to recombine with the "holes" of the catalyst.

This solves the big problem of artificial photosynthesis: timing and charge management. While the water oxidizes, the system "saves" the solar energy so it is ready when the CO2 comes in.

From the laboratory to the real world. The best thing about this research is that it has not remained a theoretical simulation under perfect lamps. The team built an experimental device equipped with a Fresnel lens (to concentrate light) and took it outside to test it under natural sunlight. The data from the outdoor experiment are revealing:

- Solar rhythm: the system began to produce detectable gas from 9:00 a.m., reaching peak production between 1:00 p.m. and 2:00 p.m., faithfully following the intensity of the sun.
- Durability: the system demonstrated enviable robustness, maintaining its effectiveness over 72-hour test cycles without significant degradation.

A bridge to the future. As reported by the South China Morning Post, this advance builds a critical bridge between renewable energy and high-demand industrial applications. The study authors conclude that their work not only eliminates the need for unsustainable sacrificial agents, but provides a versatile design principle for building autonomous photocatalytic systems. Although there is still a way to go before we see solar gas stations, the basic science (the mechanism for storing the sun's energy in a chemical powder) is no longer just theory.

Image | freepik

In Xataka | Germany has had a crazy idea to solve one of the problems of renewables: covering a lake with solar panels

In 1968 a man had the idea to create the first tablet in history. The problem is that he was decades ahead of his time.

If I tell you to think of the oldest tablet you can remember, you may go back to the first iPad, released in 2010 (and which, by the way, turned seven last week). Or, if you have been following the world of technology since before the turn of the century, you might be familiar with the HP Compaq Microsoft Tablet PC announced in 2001. In reality, someone had already tried to create one much earlier, in 1968, before the term "tablet" was even coined.

At that time, Alan Kay, a young researcher who would go on to work at the Xerox Palo Alto Research Center, had been mulling over the concept of a personal computer for some time (in contrast to the military, business and professional use that reigned among manufacturers at the time). After speaking with colleagues who were beginning their research on how the Logo programming language could help younger children advance in math, Kay came up with an idea:

"This encounter finally made me see what the real destiny of personal computing was going to be. Not a personal dynamic 'vehicle', as Engelbart's metaphors had it as opposed to IBM's 'railway tracks', but something much deeper: a dynamic personal 'medium'. With a vehicle, one could wait until high school to take 'driving lessons'. But if it was a medium, it had to extend into the world of childhood."

In 1968, Kay created the Dynabook concept, which he would spend several years refining. The book "Tracing the Dynabook: A Study of Technocultural Transformations" defines it like this: "Kay called it the Dynabook, and the name suggests what it was going to be: a dynamic book. That is, a medium like a book, but one that was interactive and controlled by the reader. It would provide cognitive scaffolding in the same way that books and print media had done in recent centuries but, as Papert's work with children and Logo had begun to demonstrate, it would take the advantages of the new computing medium and provide the means for new kinds of exploration and expression."

"A personal computer for children of all ages"

With the idea of its function clear, Kay began to shape it into cardboard prototypes (as can be seen in the image at the top of the article). In 1972, the researcher presented his paper "A personal computer for children of all ages", in which he offered more details not only about his motivation and his vision of personal computing at the time, but also about the device itself that he had in mind. His idea was a kind of tablet-shaped personal computer aimed at education. It would be thin, with a liquid crystal touch screen and a keyboard. The size of a regular notebook, with a graphical interface (a revolution for the time) that allowed the reproduction of graphics, music and text, and with internal storage for 500 pages. The keyboard would not be the only way to enter information: it could also be done via voice. In the sketch Kay drew, the word "stylus" can also be seen, although he did not comment on it in his paper. Kay's idea was that the Dynabook could connect to other systems to "copy" information to it (among them, the ARPA network), and he even predicted the existence of content "vending machines", which could not be accessed until payment had been made. "The books can be installed instead of being bought or loaned," he said.
Regarding digital "ownership", Kay said the following: "The ability to easily make copies and own the information yourself is not likely to weaken existing markets, as has happened with xerography, which has strengthened publishing; and just as tapes have not hurt the music industry but have provided a way to organize one's own music. Most people are not interested in being a source or a smuggler, but rather like to trade and play with what they have."

According to Kay's calculations, the components to manufacture it would cost $294, so it was not unreasonable to sell it for $500, something expensive for the time. "The average annual amount spent per child on education is only $850," he noted, which is why he even proposed a different financing model: "perhaps the device should be given away as if it were a notebook, and only the content (cassettes, files, etc.) sold. This would be quite similar to the way TV packages or music are now distributed." "Let's do it!" he said to close his paper.

Unfortunately for Kay, the Dynabook never materialized. Despite his enthusiasm, it was never manufactured, for lack of support at Xerox and because of the technological limitations of the time. Do you remember what computers were like back then? Well, imagine trying to build a tablet. Two Xerox PARC engineers, Chuck Thacker and Butler Lampson, asked for permission to try to build a similar machine on their own, and that is how the Alto came to light, also known as the "Interim Dynabook". It was not a tablet, far from it, but it kept some of the ideas Kay had raised in his publication. The Xerox Alto was one of the first personal computers in history, and Steve Jobs and Apple's engineers drew inspiration from some of its innovations and concepts, such as the use of a graphical interface, for their own computers.

(Video: from minute 2:27, the Xerox Alto graphical interface in action.)

Kay is remembered not only for the Dynabook itself, but for the educational vision he gave the project, for his singular take on the personal computing paradigm and for how he anticipated some of the problems (and even technologies) that would come later. Not only that: in 2001, Microsoft presented its Microsoft Tablet PC, a project led by Chuck Thacker and Butler Lampson. Yes, the same ones who once tried to implement …

NASA had been refusing to let its astronauts carry iPhones for decades. For Artemis II it has made a historic decision

Jared Isaacman, NASA administrator, has announced an important change for astronauts: crews will be allowed to carry their personal smartphones. The objective is simple: to allow photographs and videos recorded during space missions to be shared.

What has happened. The announcement was informal and made outside NASA's official press page. Via X, Isaacman revealed that the crews of Crew-12 and Artemis II will be able to fly with "modern smartphones":

"NASA astronauts will soon fly with the latest smartphones, starting with Crew-12 and Artemis II. We are giving our crews the tools to capture special moments for their families and share inspiring images and videos with the world. Equally important, we are challenging legacy processes and enabling modern hardware for spaceflight on an accelerated timeline. This operational urgency will serve NASA well as we strive to achieve the highest value science and research in orbit and on the lunar surface. This is a small step in the right direction."

Without detailing models or limitations, he makes it quite clear that we will soon see more than one iPhone flying aboard a ship far from our planet.

What was happening until now. Historically, NASA has only allowed Nikon cameras (a Japanese company with which it has had an agreement for more than a decade) on board. Initially some of its DSLRs, and recently the Nikon Z9, the latest-generation mirrorless authorized for Artemis.

Why. For decades, NASA has operated under an extremely strict safety framework for any object boarding a crewed spacecraft. Devices must not interfere with critical systems, their batteries have to meet very specific requirements to minimize the risk of fire, they cannot contain materials that could fragment in microgravity, and they must pass certification processes tied to an exact hardware model. For the first time, the agency will allow the use of mobile phones on a crewed mission certified by its own procedures, marking a significant shift in how NASA evaluates and accepts commercial technology on board.

When. The departure of Artemis II, after some delays, is scheduled for March. After several dress rehearsals, NASA is preparing to return to the Moon while wrestling with old ghosts like the complexity of liquid hydrogen. It will not be the first time a modern mobile phone travels to space, but it will be the first time its use is authorized within a crewed mission managed directly by NASA. Until now, mobile phones and tablets had flown on SpaceX missions under more flexible operating frameworks, serving as a precedent for evaluating their behavior during a mission.

In Xataka | When the United States decided to go to the Moon, it did so no matter what the cost. And that included 60% of all its chips

The EU and India finally seal their great trade agreement. Trump has accelerated what had been stuck for two decades

The European Union is beginning to make moves on a board that no longer looks like it did a few years ago. With Donald Trump straining international trade and European dependence on external partners increasingly at the center of the debate, Brussels is seeking room for maneuver. The idea of strategic autonomy, repeated for years in speeches and documents, is starting to translate into concrete decisions. Some point to digital, others to security, and others to trade. It is in this context that the announcement of a major agreement with India, after almost two decades of negotiation, should be understood.

The announcement. The news comes from New Delhi, after a summit attended by Narendra Modi and two of the main European figures, António Costa and Ursula von der Leyen. The agreement, negotiated for almost twenty years, seeks to open a new commercial stage between the European Union and India, with a scope that Brussels has been keen to highlight from the first minute. Von der Leyen defined it on social media as "the mother of all trade agreements."

What goes in and what stays out. The announcement speaks of a broad agreement, but its perimeter is drawn quite carefully. According to Reuters, the pact focuses on trade in goods, services and standards, while especially sensitive issues, such as investment protection, are negotiated separately. There are also explicit exclusions: agriculture and dairy are not part of the package, a decision that seeks to head off resistance from some sectors.

The key is in the cars. The EU statement itself recalls that tariffs on cars imported into India can reach 110%, a barrier that in practice blocks the entry of a good part of European models. For this reason, the pact includes cuts that could bring these tariffs down to a minimum of 10%, applying to a volume of up to 250,000 cars coming from the European Union. For European manufacturers, the attraction is obvious: access to a huge market that until now has been almost closed.
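A back-of-the-envelope example makes the size of that cut tangible. The car price below is hypothetical; the tariff rates are the ones cited above:

```python
# What the headline tariff cut means for a single (hypothetical) 40,000 EUR car.
price = 40_000
old_rate = 1.10   # tariffs of up to 110% today
new_rate = 0.10   # the 10% floor under the agreement, within the 250,000-unit quota

print(f"landed cost today:          {price * (1 + old_rate):,.0f} EUR")  # 84,000 EUR
print(f"landed cost under the deal: {price * (1 + new_rate):,.0f} EUR")  # 44,000 EUR
```

Under the old ceiling, the tariff alone could cost more than the car itself; under the new floor, it is a tenth of the price.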
The exchange of concessions. The potential benefits are distributed, although not symmetrically. India would gain competitiveness in labor-intensive industries such as textiles and garments, which still face tariffs close to 10% in Europe. It also seeks better access to the European market for its professionals and technology services. The EU, for its part, aims at a different objective: better entry into an expanding market where its exports face a weighted average tariff of 9.3%, with especially high charges on cars, chemicals and plastics.

A geopolitical acceleration. The timing of the announcement is no coincidence. In recent months, both India and the European Union have felt more keenly the protectionist turn that accompanies the new Donald Trump era. Reuters recalls that India has not managed to close an agreement with the Trump Administration since the White House announced the so-called "reciprocal tariffs" in April, and that in August it imposed an additional punitive tariff of 25% over purchases of Russian oil, raising the total levy on Indian goods to 50%. For Europe, the message has been similar: tariffs have once again become an instrument of political pressure.

Nothing is in effect yet. The announcement is important, but the institutional path is just beginning. The final text must still pass legal scrutiny in Brussels and New Delhi. Then comes the most delicate stage: ratification. Reuters notes that the pact will have to be approved by the European Parliament, a process that could take at least a year. Take the EU-Mercosur pact as an example: it was signed on January 17, 2026 in Asunción, but days later the European Parliament decided to refer it to the Court of Justice of the EU for review, something that could delay its application for up to two years. The agreement with India does not have to follow that path, but it invites caution.

Images | Olga Nayda | Mitul Gajera | frank mckenna

In Xataka | Something has broken between Europe and the US: France leaving Zoom and Teams behind in its administration points to something bigger
