More than 40 years ago we discovered a mysterious hexagon on Saturn. Today there is only one possible explanation

If there is a planet in the Solar System as enigmatic as it is striking, it is Saturn. And not just because of its rings, probably the result of a collision between its moons. They are not the only thing that baffles the scientific community: look at Saturn's north pole from space and you will find a near-perfect geometric shape, a hexagon some 30,000 kilometers across. To put that in perspective, two Earth-sized planets could fit inside it. We have known the hexagon has been there since at least 1981, when the Voyager 2 probe flew past the planet and left testimony of its existence. It is not that nature cannot produce geometric shapes, but a hexagon is hardly the most common of them. The latest and most solid hypothesis to date about what Saturn's hexagon is was published in the Proceedings of the National Academy of Sciences, and it points to the internal dynamics of the planet's atmosphere.

The hypothesis. What the research team from Harvard's Department of Earth and Planetary Sciences suggests is that the hexagon is not a surface structure, but is generated by rotating deep convection inside Saturn. Turbulence in the deep layers of its atmosphere spawns vortices that push and bend a high-speed jet circling the north pole, deforming it until it takes on its hexagonal shape. The hexagon is not the storm; it is the trace of what happens underneath.

Why it's important. Because we have been carrying the mystery of the hexagon around since 1981, and none of the previous theories fit as well as this one, which can generate the hexagon from basic physics without artifice. It also answers another question: how deep do Saturn's winds reach? According to this model, all the way down. And if this explanation is correct, it changes how we understand the dynamics of giant planets, not just Saturn.

Saturn's hexagon in images from the Cassini probe.
NASA/JPL-Caltech/Space Science Institute

Context. Before this 2020 theory, there were two clear camps. The forced Rossby wave model proposed that the hexagon was an atmospheric wave held in place by an anticyclone visible south of the pole in Voyager 2 data; when the Cassini probe arrived at Saturn in 2004, there was no trace of that anticyclone. The surface jet model suggested that the hexagon was a shallow wind current that, on becoming unstable, undulates and adopts a polygonal shape. The problem was that it needed a pre-existing current to start from. It also places the phenomenon in shallow layers, which contradicts the gravitational data from Cassini's Grand Finale suggesting that Saturn's winds keep their intensity down to pressures of 100,000 bars. Both models reproduced the hexagon if you fed them a base wind, but neither generated one from scratch.

How they did it. The methodology is fairly abstract, but roughly speaking they simulated a slice of Saturn, set it spinning, heated it from below and let physics act. There were no winds or hexagons in the initial setup. Both the code used in the simulation and the data are openly available, so anyone can reproduce and verify the results.

Yes, but. The hypothesis developed by the Harvard team may be the best so far, but the paper itself acknowledges some caveats. The polygon in the simulation moves faster than the real one, something the authors attribute to the computational power available. And the simulation only tests specific conditions over a relatively short time: no one has yet verified whether the result holds under different parameters or on longer time scales.
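As a quick sanity check of the "two planets" comparison at the top of the piece (Earth's mean diameter of roughly 12,742 km is a standard reference value; the 30,000 km width comes from the article):

```python
# Rough check of the claim that two Earth-sized planets
# would fit side by side across Saturn's polar hexagon.
EARTH_DIAMETER_KM = 12_742   # Earth's mean diameter (standard reference value)
HEXAGON_WIDTH_KM = 30_000    # approximate width of Saturn's hexagon (from the piece)

planets_across = HEXAGON_WIDTH_KM / EARTH_DIAMETER_KM
print(f"Earth-sized planets that fit across the hexagon: {planets_across:.2f}")
# → about 2.35, consistent with "two planets could fit inside it"
```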
In Xataka | We have just discovered a true cosmic anomaly: an “invisible” galaxy made up of almost 100% dark matter
In Xataka | A new “solar system” has just been discovered. There’s just one problem: it shouldn’t exist.
Cover | NASA/JPL-Caltech/Space Science Institute

Series used to have more scenes. The platforms are cutting them without warning

Streaming series seasons are getting shorter and shorter, and not always by creative decision. Behind the cuts are skyrocketing production costs, subscriber retention strategies and, sometimes, decisions made in the accounting department rather than at the writers' table. The phenomenon is so broad that it also reaches altered versions of classic series, with scenes removed and soundtracks changed without warning.

This episode is not how I remembered it. You are rewatching a series you already know and something feels off: a scene you remembered clearly does not appear. Or the opening credits song sounds different. Or the final episode ends earlier than it should. It is not always a failure of your memory. Often, the version in front of you is not the one you saw back in the day. The phenomenon has gained visibility in recent months thanks to users who document the differences between versions on social networks, and its scope is greater than it seems. In many cases, streaming platforms offer degraded or cut versions of series and movies without flagging it. The reasons behind these decisions are multiple.

Sustained contraction. Before getting into the cuts made without notice, some context is worth recalling. According to the firm Parrot Analytics, the average number of episodes per season on free-to-air television fell from 16.2 in 2018 to 11.8 in July 2024. Streaming platforms, which already started from shorter seasons, have also contracted: from 10.7 episodes on average in 2018 to 9.3 over the same period. Several analyses suggest that production companies and platforms are increasingly stingy when renewing or ordering series: they simply ask for fewer episodes. One last data point on this reduction: in 2025, the number of original streaming series fell by 11% year-on-year.
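The contraction described above can be put in percentage terms with simple arithmetic (figures from Parrot Analytics, as quoted in the piece):

```python
# Percentage contraction in average episodes per season
# (figures from Parrot Analytics, as quoted in the piece).
figures = {
    "free-to-air TV": (16.2, 11.8),   # 2018 vs July 2024
    "streaming":      (10.7, 9.3),    # same period
}

drops = {}
for label, (before, after) in figures.items():
    drops[label] = (before - after) / before * 100
    print(f"{label}: {drops[label]:.0f}% fewer episodes per season")
```

In other words, free-to-air seasons shrank roughly twice as fast as streaming ones over the same window.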
Beyond series whose episode counts are cut against their creators' will (as happened at the time with 'House of the Dragon'), there are cuts to series already broadcast. For example, the final episode of 'Friday Night Lights', now on Filmin in its shortened version, originally ran more than 60 minutes. When it moved to NBC for free-to-air broadcast, it was trimmed to under 45 minutes to make room for advertising. Entire scenes are missing from this mutilated version. A similar case can be seen with a streaming episode of 'The Bill Cosby Hour'.

Music, another point of friction. Licensing rights for songs do not always extend to all platforms, or they expire after a period of time, which forces original songs to be replaced with others. When 'The Wonder Years' arrived on Netflix in 2011 after years of being blocked by rights issues, numerous songs had been replaced, including Joe Cocker's iconic version of 'With a Little Help from My Friends' that opened each episode. In 'Dawson's Creek', Paula Cole's original theme was replaced in streaming by a Jann Arden song; the fan outcry was so intense that Cole ended up re-recording her own song. It is a common problem, one that has left legendary series like 'Moonlighting' or 'Get a Life' unavailable for years, not even in home formats (in the case of Chris Peterson's legendary series, we are still waiting).

The format wars. We have already talked about this, but it is worth remembering that the image format has been manipulated on countless occasions to adapt it to widescreen televisions. The case of 'The Simpsons' is perhaps the best known, since many jokes built around the original 4:3 format of the first seasons were lost.
The case of 'Buffy the Vampire Slayer' is also well known: its widescreen version, with retouched colors, destroys the illusion of the original series' night scenes and lets the filming crew and sets be seen. A disaster that can only be fixed by going back to the first DVD editions, prior to the "remastering".

What to do about it. Once again: do not completely rule out the physical format. We already knew (video game fans have been suffering it for years) that fluctuations in catalogs, even of material you have purchased and that is theoretically yours, can play tricks on you. And we now know that the fact that a film is in a streaming catalog today does not guarantee it will still be there in a year: reasons as spurious as saving fees or storage space can lead platform owners to delete thousands of movies and series. The only way to preserve a movie or series without anyone modifying it, to make sure no one cuts it because they think you are not prepared to pay enough attention to it, is to have your own copy. In physical or digital form, but in a safe place. Until recently, keeping our DVDs, our game discs and cartridges, our vinyl and our backup copies was something for the nostalgic and the paranoid. Now it is a matter of preserving what the platforms keep mutilating.

In Xataka | Generation Z has found the remedy for streaming subscription fatigue: buying DVDs again

India has wanted to be the new China for years. The Iran war is handing it the opportunity on a plate

The Iran war is demonstrating, once again, the fragility of globalization. Just look at this graph:

Graphic: Xataka

The price of a barrel of crude oil has skyrocketed because Iran is attacking refineries, the Strait of Hormuz, through which 20% of the world's crude oil passes, is on edge, and there is instability in the 'oil well of the world'. Refineries are being targeted, but so is the new mine of the world economy: data centers. Iran has attacked Amazon data centers in Saudi Arabia, and Big Tech is setting its eyes on nearby countries to which it can move. And watching very attentively is an India that has pursued an ambitious goal for years: becoming the new China. It has been courting big technology companies for years with the narrative of being a safe and reliable country in which to manufacture. The war in Iran is now giving it another argument: kamikaze drones do not fall on its data centers.

In short. Data centers have become critical infrastructure. They have been ever since more money started going into them than we invested in going to the Moon; the fortunes of some companies and countries are tied to their success and, above all, the AI fever has backed the hardware world into a corner. In war and love anything goes (or so some believe), and this time we are seeing schools, hotels and data centers bombed. On March 1 and 2, Iran attacked with its drones two Amazon Web Services (AWS) facilities in the United Arab Emirates and another center in Bahrain. This has forced the company to pause activity at those facilities and ask customers with services running on those servers to migrate to other countries.

Solutions. Latency plays a fundamental role in certain operations, so the replacement servers must be relatively close to those that were attacked. And that is where the two centers Amazon has in India come into play: one in Mumbai and one in Hyderabad.
These are Amazon's data centers, yes, but the country has big plans to create an industrial fabric based on this type of infrastructure. At the beginning of last year we reported on a mega data center that is hard to believe. While most of the world's large facilities remain below 1 GW of power capacity, an Indian company wants to build a single data center with a capacity of 3 GW. For comparison, Amazon's centers in northern Virginia, in the United States, comprise about 300 installations adding up to 2.5 GW in total. India wants a single one with 3 GW. And it wants to have it by 2027, a date as ambitious as the project's dimensions.

A rain of millions. It is estimated that such a facility would cost between 20 and 30 billion dollars, but that matters little to today's India: it is burning money to attract industry and steal what it can from China. The country has been offering hundreds of millions of dollars to every technology company willing to settle in its territory. And it is not just money. India is developing, its market is growing, and one more important thing: young Chinese workers are increasingly qualified and their labor is getting more expensive. A cheaper workforce in India, added to government incentives, are two powerful arguments for some giants of the technology sector to move to the country. Little by little, it is working. Xiaomi, Motorola and even Huawei manufacture complete models of some of their lines in India. Asus, HTC, Samsung, Microsoft and LG have plants for some components, and Apple has moved production of older iPhone models to the country. Another is Micron, one of the main players in the memory segment.

Tempting everyone. The country wants more, and it has been sitting down with representatives of heavyweights such as the aforementioned Apple… and Samsung.
They want the South Koreans not just to manufacture a few components, but to invest in artificial intelligence, in hardware and in something India is eagerly seeking: semiconductor research and development. Samsung is one of the world's leading foundries and is investing millions outside South Korea; India wants to be part of that equation. To that end it has something called PLI (Production Linked Incentive), a government initiative that rewards the production of a complete portfolio of products. That is to say: the more complete products a given brand manufactures in the country, the more incentives and economic advantages it receives. India also promises less economic friction with the West, although judging by the tariff situation and its ups and downs, that is something that can change from one day to the next. And it is not all about money, pure and simple: India is the most populous country on the planet, and average income levels are expected to keep rising over the next five years, which also "promises" a good domestic market for the products companies manufacture on its soil.

The Bangladesh Hi-Tech Park project, and the result: Hyundai the only significant presence amid open fields and buildings under construction… broken dreams.

According to estimates, electronics manufacturing in India was a 115-billion-dollar market, and it is expected to triple by 2027. My colleague Laura already detailed how the country is playing steamroller by throwing money around, although two things must be said. The first is that one of those objectives, becoming the foundry of the world, is going to be complicated: TSMC is leading the conversation and is expanding both at home in Taiwan and in Europe and, above all, in the United States.
And what is truly worrying for the country is that, in this search for talent at all costs, it has invested a lot of money in the construction of technological cities that … Read more
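Returning to the data-center figures quoted earlier, a quick division shows the scale of the proposed Indian facility (all numbers as quoted in the piece):

```python
# How the proposed single 3 GW Indian data center compares with
# Amazon's northern Virginia footprint (figures as quoted in the piece).
VIRGINIA_FACILITIES = 300    # AWS installations in northern Virginia
VIRGINIA_TOTAL_GW = 2.5      # their combined power capacity
INDIA_SINGLE_GW = 3.0        # capacity of the single proposed Indian center

avg_mw = VIRGINIA_TOTAL_GW * 1000 / VIRGINIA_FACILITIES
ratio = INDIA_SINGLE_GW * 1000 / avg_mw
print(f"Average Virginia facility: {avg_mw:.1f} MW")
print(f"The proposed site would equal ~{ratio:.0f} such facilities")
```

That is, one building with the power draw of hundreds of typical facilities, which is why the 2027 deadline reads as so ambitious.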

In 1993 Microsoft created Encarta to revolutionize knowledge. Twenty years later it would be devastated by a tsunami

It became so popular that its logo and the sound of its intro became as identifiable as Nokia's or Windows'. If, like the person writing this, you went to school or high school between the second half of the 90s and the first half of the 2000s, Encarta needs no introduction. If not, don't worry; it won't take us long. Before Wikipedia offered free online knowledge, and even before Internet use became widespread, Microsoft launched a digital encyclopedia that revolutionized the sector and became a phenomenon roughly between 1993 and 2009. Its name: Encarta. Today, by one of history's ironies, "Encarta" is just another entry in the index of other encyclopedias; but there was a time when it transformed our way of accessing knowledge. From straining their eyes and fingertips leafing through pages in search of information, students went to finding it at the click of a button. Encarta offered an agile, comfortable and above all didactic way to satisfy curiosity. With articles, yes; but also with videos, audio and even virtual tours and games. You could read about Nepalese temples in the Salvat. Or open Encarta and "tour" one. Its pull was so great that it put the old paper encyclopedias in trouble. When the Spanish edition was presented in early 1997, its backers boasted that the Encarta CD-ROM, a format you could keep in a drawer or even a folder, held the equivalent of 29 volumes and 1.2 meters of shelving. And not only that: Encarta cost 24,900 pesetas, a quarter of the price of an equivalent printed encyclopedia. As if that were not enough, its landing in Spain was backed by Santillana, a publishing house with considerable weight in school classrooms. How could anyone compete with that? The product caught on and was published in Spanish and other languages.
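As an aside, the 1997 launch figures above are easier to read in euros. The 166.386 pesetas-per-euro rate is the official fixed conversion rate; the other numbers are from the piece, and this is just an illustrative conversion that ignores inflation:

```python
# Putting the 1997 Spanish launch price in perspective
# (official fixed ESP/EUR rate; prices as quoted in the piece).
ESP_PER_EUR = 166.386          # official fixed peseta-to-euro conversion rate
encarta_esp = 24_900           # Encarta's launch price in pesetas
printed_factor = 4             # a printed encyclopedia cost four times as much

encarta_eur = encarta_esp / ESP_PER_EUR
printed_eur = encarta_eur * printed_factor
print(f"Encarta: ~{encarta_eur:.0f} EUR; printed equivalent: ~{printed_eur:.0f} EUR")
```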
It did well until it ended up succumbing to the competition through the very strengths that had made it a phenomenon. In a way, its success owes to its keen instincts in the 90s; its decline, to an inability to adapt in the 2000s. This is its story.

Objective: reinvent the old encyclopedias. In the mid-1980s Microsoft began toying with the idea of creating a digital encyclopedia. The idea was ambitious. Redmond wanted, no less, to rethink the concept and workings of a product as apparently mature and closed as the volumes that publishers' salesmen sold door to door. To debut in a big way, the multinational tried to negotiate a license with the creators of what was probably the most internationally respected publication: the Encyclopædia Britannica. It did not go well. In the 1980s, Britannica's paper volumes sold well and left huge profits. As Enrique Dans recalls, each set cost about $250 to produce and sold for between $1,500 and $2,200, depending on the quality of the edition. Why would the firm digitize its content onto a CD and risk killing the goose that laid the golden eggs? Microsoft did not give up and looked for other ways to move the idea forward. It even had a name for the initiative: Project Gandalf. Some time later it signed a contract with Funk & Wagnalls to use its 29-volume New Encyclopedia in a database created at the end of that same decade. To round out its contents, two Macmillan encyclopedias would be added years later, the Collier's and New Merit Scholar. They were not the Britannica, but they would have to do. Still, doubts arose in Redmond about whether the project was viable, and it was shelved. It was resumed at the turn of the decade, in 1991, when Microsoft decided to go all in.
In 1993 the first edition of the Encarta encyclopedia launched, featuring the 25,000 Funk & Wagnalls articles plus extra material such as images and some animations. The tool was convenient, far more agile than the kilometric tomes, and even fun, but it started with a huge mistake: it missed its target. At the beginning of the 90s many homes still had no PC, and the launch price was prohibitive. When it came out, Encarta cost about $400, which greatly limited its reach. The cost deterred customers and was not far from that of a competitor testing the same niche with a recognizable brand, Compton's, which had launched its own multimedia version in 1990 with text and media such as images and sounds. Redmond knew how to react and soon deployed a more aggressive strategy: promotions that let you get Encarta for 99 dollars, bundling the CD with Windows software packages, and negotiating with manufacturers to preinstall it on their computers, a tactic not unlike the one used with Windows and Office. Microsoft's own promotional muscle gave the final push. The new encyclopedia gained fame and began chaining together editions, translations into different languages and ever richer multimedia content. In 1995, abridged versions of some articles were offered to Microsoft Network subscribers, and starting in '96 standard and deluxe editions were released, the latter an enriched version that could be updated month by month. In 1998 its creators went a step further and acquired the rights to several electronic encyclopedias. The product was growing and, above all, it demonstrated that the sector was undergoing a clear paradigm shift. The best example: in 1996 the once-powerful Britannica, mired in difficulties, ended up being sold off cheap. "It allows young and old to explore the world through themes and characters," its promoters boasted in the Spanish market.
And so it was, indeed. Through articles, photos, illustrations, graphs, maps, timelines, recordings, videos and even virtual tours, Encarta won over an entire generation of students. … Read more

Humanity has been wondering for years how to adapt to climate change. The Mayans already achieved it centuries ago

Beyond its architecture, urban planning and art, there is an aspect of the Mayan civilization that fascinates archaeologists: its decline. Over time, historians have come to understand that the decline was neither sudden nor down to a single factor; rather, it was a sum of causes that included shifting trade routes, wars and, above all, adverse climate, with severe and prolonged droughts. Now we know something more. Even during the Terminal Classic (800-1000 AD) and Postclassic (1000-1500 AD), while large urban centers succumbed, there were settlements that adapted to the changing climate.

What has happened? A group of archaeologists has just published an article capturing their years of research at a Mayan settlement in 'Birds of Paradise', a wetland area in northern Belize. The site itself is not new: scientists identified it a few years ago with the help of lidar, a tool that is revolutionizing archaeology. What is new are the conclusions its analysis has yielded. The study is published in the journal PNAS (Proceedings of the National Academy of Sciences) and, among other things, concludes that the wetland offers valuable information about how the Mayans responded to the social and environmental changes they faced during two crucial stages of their history: the Terminal Classic and Postclassic, a period running from the 9th to the 16th century.

What have they found out? As New York University (NYU), home to the study's lead author, explains, one of the most interesting lessons of the site is the extent to which the Mayans adapted to the vagaries of the climate. Essentially, the researchers have shown that at a time when large urban centers were being abandoned, pressured in part by intense droughts, there were Mayan settlements that managed to survive in the wetlands. How? Thanks to their ability to adapt to the environment. And how did they do it? By taking advantage of the means they had at hand.
“Wetlands provided ancient populations with resources for hunting and fishing, in addition to serving as a refuge in periods of drought and social upheaval,” NYU explains. The environment supplied something else, equally or even more valuable for their settlements: construction materials. The Belize site includes eight earthen mounds that could have served as bases for buildings, plus a large elevated limestone platform. The experts also recovered wooden posts, animal remains and ceramic artifacts, clues that tell us how life went on while other nearby urban centers declined.

What do the experts say? “Together these findings reveal a highly adaptable community with diverse tools, food and construction materials. It shows us that Mayan communities could change habitats and survive extreme climates,” explains Timothy Beach, professor at the University of Texas at Austin, who nevertheless acknowledges that “we still do not know the size of this wetland population or how it functioned.” Now the archaeologists aim to go a step further: “Our next moves include expanding the excavations to understand how the Mayans built with unconventional wood, how they ate, and how this settlement fit into a region that was suffering widespread abandonment.”

Why is it important? Because of the historical era we are talking about. In their article, the researchers argue that the Belize site demonstrates the ancient Mayans' ability to adapt to “the profound challenges” they lived through from the 9th century AD onward. For reference, a team led by the University of Cambridge discovered not long ago that between 871 and 1021 eight persistent droughts, each lasting at least three years, struck the Yucatán Peninsula. The worst of them actually lasted more than a decade. The scientists reached that conclusion after analyzing a stalagmite from a Yucatán cave.
And, beyond how spectacular it may be, the data is interesting because it speaks to the challenges the Mayans faced during the Terminal Classic (800-1000 AD), when the limestone cities of the south were abandoned, the dynasties declined and civilization moved north, losing part of its political and economic power in the area.

Are there more conclusions? “As the large urban centers of the Mayan regions succumbed to interconnected socio-environmental factors, the communities of the Birds of Paradise complex persisted through that transition by constructing a series of elevated structures of earth, stone and wood with direct access to the abundant resources and connectivity offered by the riparian wetland system,” reads the article published in PNAS. “It provides evidence for persistent populations between the Elevated Interior Region and coastal regions during the Terminal Classic to Postclassic. While nearby highland urban centers were abandoned, this population continued to emphasize wetland agriculture and provides our best evidence for other subsistence strategies, such as fishing and gathering other proteins, reflected in the faunal assemblage,” the researchers add.

What did they unearth? That is another of the study's surprises. Archaeologists discovered what NYU describes as “the largest collection of architectural wood” found inland, as well as artifacts that help historians understand everyday life in the wetlands. It may seem a minor point, but it is not common to find wooden remains at Mayan sites; quite the opposite. Their very nature makes them degrade in tropical environments. In Belize, the experts have found “a unique opportunity” to better understand how the ancient Mayans built, what types of wood they used and what each was used for. Is it so uncommon?
The majority of preserved Mayan wooden remains are figurines, spears and boxes that were recovered mainly in caves in Belize at the beginning of the 20th century. Remains have also been found in mountainous and saline areas in the south of the country. The new findings go further. “It challenges long-held beliefs that sites like this could not survive in the American tropics and suggests we might be overlooking similar sites,” admits Lara Sánchez-Morales, professor of anthropology … Read more

The first telecommunications network in history arose in ancient Syria, 3,800 years before the internet

Nowadays it is hard to imagine anything but being able to communicate instantly with anyone, no matter how far away they are. As a millennial, I lived through the era when sending messages continuously was not the norm: SMS was not free and forced you to economize on language. And before that, of course, there were telephone calls, whose arrival today strikes fear into the young. We can go back in time to the telegraph, to imperial postal networks, even to the humble carrier pigeons that have helped humanity communicate since the ancient Sumerian and Egyptian civilizations. A recent piece by Tiffany Earley-Spadoni, historian and professor of history at the University of Central Florida, published in a volume on global perspectives on warscapes, brings to the fore the earliest telecommunications network documented both textually and archaeologically, 3,800 years old: a system of beacons for sending an SOS.

The discovery. A cuneiform tablet excavated at Mari, in eastern Syria, dating to 1800 BC, is the oldest known historical evidence of signaling with fire beacons. We also know what it said: an official named Bannum, traveling through the north of the region, writes to the king in alarm after observing the successive lighting of bonfires near Terqa, and requests reinforcements. That lighting was no accident: it was a signal of imminent danger on the border, an early-warning system against attacks on their cities. Earley-Spadoni refers to this system as a “fortified regional network,” or FRN for short.

A little context. This documentation belongs to the Syrian Middle Bronze Age, a territory of city-states in constant conflict. Taking a city meant dealing a blow to a rival and keeping its wealth, hence the siege was the favored form of attack. But conquering a territory was much easier than administering it. These states had great ambitions but lacked the infrastructure to govern at a distance.
So, to better defend themselves and control their territories, they used two systems: great walls surrounding the cities, and a network of forts, towers and guarded roads in rural areas. This second structure is the seed from which empires developed.

Why it's important. Bannum's letter is the oldest known historical testimony of an intentionally designed telecommunications network with shared infrastructure, nodes and a protocol. Not to be confused with communication methods as such: smoke signals or drums are prehistoric and impossible to date. It is also key for civilizations in that it allowed the leap from "presumptive states" (which conquer territories they cannot govern) to real and lasting territorial empires: without this infrastructure of communication and control, empires of that size would simply have been ungovernable.

How it worked. With a physical structure made up of fortresses, forts, watchtowers and wall segments, and with an operating protocol. It served, essentially, to control routes, resupply troops, transmit information and track movements across the territory. The infrastructure was distributed hierarchically along roads and river crossings, spaced at regular intervals of about 20 kilometers to ensure visibility between nodes. The large fortresses were the main nodes, with smaller forts between them, watchtowers to reinforce signaling at hard-to-see points, and wall segments in strategic areas. The system operated around the clock: smoke by day, fire by night, with permanent reserves of wood. Every signal was known to all the nodes, so that when a beacon was lit, the alert traveled from node to node until it reached the center in a relatively short time. Speed was its great asset; its handicap was how limited it was: it could only transmit simple messages.

The early "internet".
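The relay arithmetic described above can be sketched in a few lines. The ~20 km node spacing comes from the piece; the per-node delay and the courier speed are purely illustrative assumptions, chosen only to show the order of magnitude:

```python
# Toy model of the beacon relay: nodes ~20 km apart (from the piece);
# the per-node delay and courier speed are illustrative assumptions.
NODE_SPACING_KM = 20       # spacing between nodes, as described in the piece
RELAY_DELAY_MIN = 10       # assumed minutes per node to sight a fire and light one

def relay_time_hours(distance_km: float) -> float:
    """Hours for an alarm to cross `distance_km` of beacon chain."""
    hops = distance_km / NODE_SPACING_KM
    return hops * RELAY_DELAY_MIN / 60

print(f"{relay_time_hours(400):.1f} h for a 400 km alert by beacon")
print(f"{400 / 10:.0f} h for the same distance by courier at an assumed 10 km/h")
```

Even with generous assumptions, the chain beats any rider by an order of magnitude, which is the whole point of a fixed signaling network.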
Comparing it with today's Internet is not just a rhetorical flourish: FRNs share several of its principles, such as distributed nodes, redundancy to avoid failures, protocols agreed in advance and a topology that maximizes connectivity between distant points.

A before and after in empire-building. This system did not disappear with Mari. For more than a thousand years, each new empire that emerged in the Near East encountered these networks, recognized them as a valuable structure and adapted them to its needs. The Neo-Assyrians integrated them into their walled cities and, in parallel, developed a horse relay system for more complex and confidential messages impossible to transmit with the original infrastructure. The Urartians made them the organizing principle of an entire empire. And the Persian Empire took the model to its fullest expression with the royal road Herodotus describes in his Histories: forts at regular intervals, relayed messages and archaeologically confirmed fire beacons in Anatolia. Earley-Spadoni's conclusion is that without these infrastructures, the largest empires of the ancient world could not have managed themselves.

In Xataka | From when a monstrous telecommunications tower and its more than 4,000 cables blocked the sun from the inhabitants of Stockholm
In Xataka | In 1901, a Spanish man had one of the ideas of the century: inventing the remote control before television
Cover | حسن and Ezra Jeffrey-Comeau

30 years later it leaves without making a sound, and that is the worst of all

The alarms went off when the model could no longer be selected in the configurator on its website, and it now seems there is no turning back: Audi has closed orders for the A8 in Germany. That means the manufacturer is going to stop selling it, at least for now. It is the end of an era for its flagship. More than 30 years at the top. Since 1994 it has been Audi’s reference in the large luxury sedan segment, competing directly with the Mercedes S-Class and the BMW 7 Series. Its disappearance from the market has its reasons, but above all it responds to a trend we have been seeing for years: people in the premium segment are moving away from sedans: they want SUVs. What exactly happened. On February 18, Audi stopped accepting configurations of the A8 in its German domestic market. The brand’s website now redirects interested buyers to second-hand alternatives. Motor1, the outlet that confirmed the news, received a vague response from the manufacturer about its availability: “it depends on inventory levels and other factors,” said a spokesperson for the brand. The A8 is currently still active in the UK and US configurators, where it starts at $95,100, but everything indicates that once stock runs out in each market, the Audi A8 will say goodbye. The D5 generation was approaching 10 years. The current model arrived in 2017 and received a facelift in 2021, falling short of standing up to its rivals. While Mercedes has renewed the S-Class with around 50% new or redesigned components, according to Autoblog, and already points to the next big leap planned for 2029, BMW is doing much the same, preparing new features for the 7 Series. In this context, the A8 has aged without being able to catch up.
On the other hand, as the German outlet Automobilwoche reported in October, the Euro 7 regulations would make a new restyling of the D5 unfeasible without prohibitive costs. A segment in retreat. The A8 is not falling alone. Lexus said goodbye to the LS after 37 years, and brands such as Jaguar, Maserati, Cadillac, Lincoln and Infiniti have already abandoned or drastically reduced their presence in this segment. And the A8’s own numbers throughout its history say it all: the first three generations sold between 150,000 and 200,000 units each worldwide. The current D5 generation, presented in 2017, has not reached 50,000 units in its entire life cycle, according to Motor News. The door is not completely closed. Audi spokesman Marcel Bestle hinted to Motor1 that the brand “will communicate more details about a possible successor at a later date.” This may not be a definitive goodbye, but it seems Audi will have to rethink things thoroughly before betting on the A8 again. Perhaps by making the jump to pure electric? They can still give it some time, but being left without a representative sedan for longer than necessary could end up damaging the brand’s image. Meanwhile, Mercedes and BMW seem to have free rein to keep accumulating market share in this segment. Cover image | Audi In Xataka | If you have a 1991 car and you are registered in Madrid, the City Council has a message for you: do not throw it away

Europe has found a hole that has been sending sensitive material to Russia for years: a German “Mercadona”

More than 400 billion packages circulate around the world every year, and the international postal system is designed to move them as quickly as possible. To achieve this, many shipments cross borders with simplified controls and risk-based reviews, not full inspections. That logistical efficiency, designed to speed up commerce and everyday correspondence, sometimes opens unexpected cracks in much larger systems. An unexpected hole. Since the invasion of Ukraine in 2022, the European Union has put in place one of the broadest sanctions regimes in its history, with the aim of economically isolating Russia and hindering its access to technology that could feed its military machine. Advanced electronics, sensitive components and certain industrial equipment are theoretically blocked to prevent them from reinforcing the Kremlin’s war economy. However, the practical application of these restrictions faces a constant problem: the more complex the sanctions system, the more ingenious the routes to evade it become. And in this case the weak point has appeared in a place so everyday that it is hard to believe. A clandestine channel in the supermarket. The story was told in a report by Politico. Apparently, in several Russian supermarkets across Germany, among shelves of sweets and freezers, advertisements have appeared promoting a logistics service specialized in sending packages from Germany directly to Russia. What at first glance looks like a postal service for the Russian diaspora has become an unexpected crack in the European sanctions system. Customers drop off boxes that supposedly contain clothing, books or small personal items. No one inspects the contents and, for a few euros per kilo, the package begins a journey that ends in Moscow or St. Petersburg. Into this apparently innocent flow slip even sensitive electronic components whose export is prohibited. The inherited logistics network.
The outlet reported that behind this circuit is LS Logistics Solution GmbH, a German company created by former employees of RusPost, the subsidiary that the Russian state postal service had established in Germany before sanctions forced it to close. After the invasion of Ukraine, that structure did not completely disappear. It was reorganized under a new name, kept part of its staff and continued to operate from Germany with a similar system. The result is a kind of parallel postal network that collects packages throughout Europe and concentrates them in a warehouse near the Berlin airport, from where shipments to Russia are organized. The seal trick. The key to the system is an apparently bureaucratic detail. The packages do not carry labels from the Russian Post, but from the state postal service of Uzbekistan. Since that country is not subject to European sanctions, the shipments can take advantage of special rules that protect international postal traffic. In practice, this means the packages move with lighter controls than traditional commercial shipments. This administrative difference, designed to facilitate mail between citizens, becomes a back door for sensitive goods to cross borders without raising too many suspicions. A 2,000-kilometer journey through Europe. The route of the packages illustrates how the system works. After being picked up from supermarkets or delivery points, they spend a day or two in Germany before moving to a large logistics warehouse near the Berlin airport. From there they are loaded onto trucks that cross Poland on the A2 highway and continue to Belarus. Even though this country is also sanctioned for its support of Moscow, the packages keep moving thanks to their status as international postal mail. After traveling more than 2,000 km, they end up arriving at addresses in Moscow or Saint Petersburg. The problem with sanctions. Beyond that, the episode reflects a challenge that those who design economic sanctions know well.
Officially blocking trade is relatively simple, but preventing alternative routes from appearing is much more complicated, as we have already seen in the drone war in Ukraine. Each new restriction forces the creation of more complex control systems, while those who try to circumvent them constantly search for new legal or logistical cracks. The result is an endless game of adaptation in which the authorities try to close holes just as new ones begin to appear. Always one step behind. The report concludes by explaining that European authorities are already reviewing the case and have strengthened the rules for pursuing sanctions violations. Be that as it may, the discovery of the network itself shows to what extent the system can be circumvented. As governments design increasingly strict legal frameworks, makeshift logistics networks keep finding ways to move sensitive goods along unexpected routes. And in this case, the blind spot that kept this channel to Russia open was not an industrial port or a large cargo terminal, but something as everyday as the drop-off counter of a supermarket. Image | flowcomm, RawPixel In Xataka | In 2022, the war in Ukraine sent supermarket prices soaring. Iran threatens to make it child’s play In Xataka | The EU has a perfect plan to suffocate Russia. The problem is that now it needs its oil to survive

Ticketmaster executives privately admit what their customers have suspected for years:

Slack messages exchanged in 2022 between two regional directors of Live Nation (unsealed this past March 12 in the middle of the antitrust trial) describe their own customers as “idiots” from whom they are “robbing hand over fist.” These are not mere private outbursts: they are involuntary testimony to how a company that controls 80% of the primary ticket sales market in the US operates. It is no surprise to those who have spent years paying $250 parking fees for a Kid Rock concert. But seeing it in writing carries special weight. What they said. Ben Baker, then regional director of ticketing for Live Nation venues in Florida, and Jeff Weinhold, senior director in the Virginia area, had been exchanging views on their work for months. In one conversation, Baker boasted about what he was doing with the add-ons that raised the base price of a Kid Rock concert in Tampa Bay. Baker wrote that the customers were “stupid” and that he almost felt sorry for taking advantage of them. Weinhold responded that he had VIP parking at $250. Baker’s retort: they were “robbing them hand over fist, baby, that’s how we do it.” And there are more details: Baker speaks of $124,790 in upsell revenue (upgraded tickets, VIP tickets, or better seats) for a Dead & Co. concert, followed by Weinhold’s suggestion to dynamically raise prices before sending the marketing email. “LOL. I’m evil,” Weinhold wrote. Baker used the internal term “dyn up” to refer to raising prices through dynamic pricing. There are also conversations about designing the purchasing interface so that artist names appear next to the upsells, a technique that Baker himself admitted to having “stolen” from the competition. Beyond the anecdote. Live Nation tried to keep the messages from reaching the jury.
Their lawyers downplayed them, and when they became public, the company issued a statement attributing them to “a junior employee talking to a friend.” It is not clear which of the two regional directors, both with pricing responsibility, qualifies as “junior.” Lawyers for the plaintiff states argued precisely that these are not irrelevant messages: artists have no interest in milking their fans, but Live Nation can do it because artists have nowhere else to go. The giant controls approximately 80% of ticketing in large US venues and 60% of concert promotion, according to data cited during the trial. The construction of the empire. This vertical concentration was not built overnight. The merger between Live Nation and Ticketmaster was approved in 2010 and created a model in which the same company promotes the tour, manages the venue and sells the tickets. Later, Ticketmaster also began to charge commissions on resale between fans, which was especially noticeable during the pre-sale of Taylor Swift’s ‘Eras Tour’ in 2022, when the collapse of the system led to a Justice Department investigation and hearings in Congress. And the dynamic pricing model has already been exported, with financial success, all over the world. The agreement. On March 9, the DOJ and Live Nation agreed to a surprise settlement that ended federal involvement in the trial without the judge being informed until the last minute. The terms require the company to limit its service fees to 15%, cut exclusive contracts with venues to four years, divest from 13 amphitheaters and open its marketplace to competitors like SeatGeek. The agreed payment amounts to between 280 and 300 million dollars for the states that accept the agreement. What the pact does not contemplate is the separation of Live Nation and Ticketmaster. And now.
More than 27 states, including New York, California and Illinois, rejected the federal settlement and decided to pursue the lawsuit on their own, since the crucial monopoly issue had not been addressed. Furthermore, the case is not exclusively American. In September 2024, the European Commission launched an investigation into Ticketmaster following the Oasis pricing scandal in the UK, where tickets went from £135 to £350 in a matter of minutes during the sale. The Live Nation model is neither an accident nor a deviation. Baker and Weinhold’s chats reveal, and this is the truly uncomfortable part, that company policy has been exactly what it seemed to be for years. In Xataka | Spotify killed the record and the industry pivoted to concerts. Netflix killed cinema and the industry was left with a “space crisis”

Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

Meta has not thrown in the towel with its MTIA (Meta Training and Inference Accelerators) chips. And although not everything has gone their way, ceasing to depend on NVIDIA is too tempting a prize to give up on. For that very reason, the company has presented a roadmap of four new chips with which it intends to accelerate both its content recommendation systems and its generative AI capabilities. The first chip is now operational; the other three will arrive before the end of 2027. Below are all the details. Dependence. For years, Meta has relied almost entirely on NVIDIA and AMD to power its data centers. Developing its own silicon is complicated, but if achieved, it can be a very successful financial and strategic bet in these times. According to statements by its vice president of engineering, Yee Jiun Song, designing its own chips allows the company to “eliminate what we don’t need,” which translates directly into cost reduction. Added to this is greater independence from possible price variations or supply restrictions. What exactly has been announced. The four new chips are the MTIA 300, 400, 450 and 500. Each has a different use: the MTIA 300 is already in production and is intended to train the algorithms that decide what content Facebook and Instagram users see. The MTIA 400 (known internally as Iris) has completed laboratory testing and is on its way to data centers. Meta claims it offers performance “competitive with leading commercial products,” according to its official statement. The MTIA 450 (Arke) will double the high-bandwidth memory compared to the 400 and is scheduled for early 2027. The MTIA 500 (Astrid), the most advanced, will arrive in mid-2027 and will incorporate, according to the company, improvements in low-precision data processing.
The chips are manufactured by TSMC, the world’s largest semiconductor producer, and have been developed in collaboration with Broadcom on the RISC-V open architecture. The pace is the most striking thing. What is unusual is not just that Meta makes its own chips, but the speed at which it plans to do so. The usual cycle in the industry is one or two years between generations. Meta aims to release new versions every six months. “The pace of AI evolution is so fast that we always want to have the most advanced chip available when we need it,” said Song. This accelerated cadence is possible, according to the company, thanks to a modular design that allows components to be reused between generations. And this does not replace NVIDIA. It is important not to lose sight of the context. Meta remains one of the largest buyers of GPUs on the market. Just a few weeks ago it signed multi-million dollar agreements with NVIDIA and AMD to supply chips for the next few years, and has also reached an agreement to rent computing capacity on Google chips, as Wired reported. MTIA chips are designed for specific internal tasks (inference and recommendation systems), not for training large language models, so this strategy is complementary to its chip agreements with NVIDIA and AMD. Nor should we forget that Meta recently had to abandon its most ambitious training chip, known internally as Olympus, after the project ran into trouble in the design phase, according to The Information. Susan Li, CFO of Meta, confirmed at a Morgan Stanley event that the company still aims to develop processors capable of training models, but gave no further details. And now what. The real test of this bet will come when the chips are deployed at scale. The challenge at the moment is to guarantee HBM memory supply amid a RAM crisis that is affecting the entire technology sector.
Song himself acknowledged to CNBC that the company “is absolutely concerned” about it, although he stated that they have secured supply for their current plans. In the long term, we will see whether Meta can achieve something similar to what Google did with its TPUs. Cover image | Mariia Shalabaieva and Meta In Xataka | OpenClaw has caused a real media earthquake in China. The Government has prevented its officials from using it
