Today the sequel that took 24 years to film and ended up failing at the box office after spending a huge budget arrives on Netflix

It took Ridley Scott 24 years to return to the Coliseum. When he did it with ‘Gladiator II‘, a cast that was breathtaking was brought in, with Paul Mescal, Denzel Washington, Pedro Pascal and a budget that, depending on who you ask, exceeded 310 million dollars with the expectation of repeating the magic of its predecessor, which had won five Oscars in 2000. It didn’t quite succeed, but in streaming it has a second chance: you have it starting today Tuesday, April 28 on Netflix. The first announcement of a sequel to ‘Gladiator’ It dates back to June 2001, just a year after the release of the original. And Russell Crowe was on board even though his Maximus had died on screen. For years, Scott toyed with crazy ideas that included the resurrection of the character or a plot about the afterlife. The project stalled when DreamWorks sold the rights to the franchise to Paramount Pictures in 2006. What got the sequel out of limbo was that Scott saw Paul Mescal in the first few episodes of ‘Normal People’ and wanted to work with him. Scott also wanted to resolve the plot of Lucius Verus, then a child, now sixteen years after Maximus’ death. He lives under another identity in North Africa, until the Roman army invades and destroys his home, kills his wife and enslaves him. Brought to Rome as a gladiator, Lucius falls under the control of a former slave turned arms dealer, who uses him in the arena of the Colosseum while he secretly weaves his own plans to seize the throne from the corrupt twin emperors Caracalla and Geta. And so began an eventful filming, interrupted by the screenwriters’ strikes, which sent costs skyrocketing, according to some sources, beyond $300 million. With a final collection of 462 million worldwide, the business was somewhat lame. However, with its passage through platforms (in the United States it is exclusively on Paramount+, and has been on VOD for months), it is very possible that ‘Gladiator II’ can boast more comfortable profits and thus give rise to the already planned ‘Gladiator III’ in which Mescal has already expressed his interest. In Xataka | Today the animated spin-off of the platform’s only powerful franchise premieres on Netflix: ‘Stranger Things’

Long before Real Madrid, the Roman Empire had already invented VIP boxes. And they ended in disaster

In the first century, the emperor Nero ordered that some shows will include giant awnings to protect the most privileged attendees from the sun, while the rest of the public endured the heat in the upper stands. That seemingly trivial difference reflected the extent to which the experience of attending an event was already marked for money and status long before modern stadiums existed. Show business in Ancient Rome. Long before modern stadiums like the Bernabéu turned sport into a crazy revenue machine, the Roman Empire had already understood the economic potential of gathering crowds and charging for access. At that time, amphitheaters were not only leisure spaces, but political and commercial tools where prestige and money mixed openly. In fact, businessmen like Atilio They saw the games as a direct opportunity for profit, betting on filling venues at all costs and maximizing every available seat. In that context, the logic of squeezing capacity (with privileged areas for the elites and crowded stands for the rest) not only existed, but was central part of the model. Raised to make quick money. In this context, it is born the Fidenae project with a clear idea: build a lot, quickly and cheaply to start earning money as soon as possible. Attilius, a freedman with entrepreneurial ambition, decided to build a huge wooden amphitheater on the outskirts of Rome, reducing costs in the most critical elements. The structure was supported on unstable ground and was assembled with poor joints, while more seats than planned were added to increase revenue. The result was a building that appeared grand from the outside, but was actually designed more to maximize profits. that to ensure safety of those who were going to occupy it. Spectacle turned into tragedy. What happened? That the inauguration attracted tens of thousands of people who came with the expectation of witnessing gladiatorial combats after a period in which these spectacles had been rather rare. That amphitheater was filled to the limitthere was no room for a pin, with the public distributed by social classes and areas, replicating a hierarchy that also had its economic reflection. Thus, in a matter of seconds, what seemed like a festive day he happened to enter sadly in the Guinness Book of a total sporting catastrophe when the structure began to give way and collapsed simultaneously inwards and outwards. It was not just an accident, since the magnitude of the collapse trapped both those who were inside and those who were trapped. were in the surroundingsleaving a balance of victims that, according to sources, ranged between tens of thousands of dead and injured. The worst sports disaster in history. From then until now, because of its scalethe collapse or collapse of Fidenae was not only a local tragedy, but the biggest sports disaster that has ever been documented, surpassing even many modern episodes in number of victims. The figures, although imprecise at the time, point to a catastrophe comparable to major battles in terms of human losses (they were counted about 50,000 deadsome lost their lives instantly, while others were buried under the rubble), something totally exceptional for an entertainment event. The speed of the collapse, the absence of evacuation measures and the fragility of the construction made any reaction impossible, turning the amphitheater into a mousetrap, a death trap in a matter of seconds. What should have been a profitable business ended up being the most extreme example of how the search for profit can multiply risk to catastrophic limits. From greed to the first rules. There is no doubt, the impact of that disaster shook the Roman Empire and forced an institutional reaction that marked a before and after in the construction regulation. The Senate persecuted the person responsible, Attilius, and sent him into exile, but, more importantly, established rules that They demanded economic solvency to those who wanted to organize shows and forced them to build on safe land. Those measures can be considered one of the first attempts to regulate structural safety in public spaces, born directly from a tragedy caused by negligence. Ultimately, the episode left a lesson that is still very valid: when business prevails over security, the show not only cannot be guaranteed, it can end up becoming in his own catastrophe. Image | Wikimedia C. In Xataka | In 1995, South Korea suffered one of the great architectural disasters of the century. The culprit: the air conditioning In Xataka | If you’re hot at home, remember that Disney made an auditorium with a huge mistake: turning a neighborhood into an unbearable oven

They have kidnapped agents from Anthropic, Google and Microsoft for the sake of science. The three companies ended up paying

In some development teams it is already becoming common to rely on artificial intelligence agents to review incidents, analyze code changes and move through tasks that were previously left in human hands. The problem appears when these systems not only read information that may come from outside, but also operate in spaces where they coexist. sensitive keys, tokens and permissions. That is what recent research puts on the table: we are not simply facing a useful tool that can make mistakes, but rather an architecture that can also become dangerous if it is deployed without very clear limits. The alarm has been turned on Aonan Guan and Johns Hopkins researchers Zhengyu Liu and Gavin Zhong after demonstrating attacks against three agents deployed on the aforementioned platform: Claude Code Security Review, from Anthropic, Gemini CLI Action, from Google, and GitHub Copilot Agent, a GitHub tool under Microsoft. According to your documentation, The failures were communicated in a coordinated manner and ended in financial rewards paid by the companies, but what is relevant is that they point to a broader problem. This is how they managed to twist the agents from within The name that Guan gives to the discovery helps a lot to understand what this is all about: “Comment and Control.” The idea is simple to explain, although the substance is not so simple. Instead of setting up an external infrastructure to direct the attack, GitHub itself acts as an entry and exit channel: the attacker leave the instruction in a titlean incident or a comment, the agent processes it as if it were part of normal work and the result ends up reappearing within that same environment. Everything stays at home, and that is precisely the key to the problem. And that “everything stays at home” is not a minor detail, but the basis of what the research describes. The three agents share a very similar logic: they read normal content from GitHub, incorporate it as a work context, and from there, execute actions within automated flows. The clash appears because that same space not only contains text sent by third parties, but also tools, permissions and secrets that the agent needs to operate. The first case Guan details concerns Claude Code Security Review, an Anthropic GitHub action designed to review code changes and look for possible security flaws. Up to this point, everything is within what was expected. The problem, as the researcher explains, is that it was enough to introduce malicious instructions in the title of a pull requestwhich is the request that someone sends to propose changes to a project, so that the agent will execute commands and return the result as if it were part of your review. The team then managed to go a step further and demonstrate that it could also extract credentials from the environment. The interesting thing is that the same scheme also appeared in the other two services, although with nuances. At Google, Gemini CLI Action could be pushed to reveal the GEMINI_API_KEY from instructions snuck into an issue and its comments; In GitHub Copilot Agent, the variant was even more worrying, because the attack was hidden in an HTML comment that a person did not see on the screen, but the agent did process when another person assigned it to the case. In both scenarios, the background was the same again: apparently normal content that ended up twisting the behavior of the system until exposing credentials or sensitive information within GitHub itself. Guan assures that the pattern made it possible to leak API keys, GitHub tokens and other secrets exposed in the environment where the agent ran, that is, just the credentials that can later open the door to much more delicate actions. Who does this affect? Especially to repositories that run agents in GitHub Actions on content sent by untrustworthy collaborators and, in addition, give them access to secrets or powerful tools. The researcher himself clarifies that the risk depends a lot on the configuration: by default GitHub does not expose secrets to pull requests from forksbut there are deployments that open that door. And here another layer of the matter appears, less technical but just as important. As published by The RegisterAnthropic, Google, and GitHub ended up paying bounties for the findings, but none of the three had published public notices or assigned CVE at the time of that information. Guan was quite clear about this: he said he knew “for certain” that some users were still stuck on vulnerable versions and warned that, without visible communication, many may never know that they were exposed or even being attacked. So although there were mitigations and changes in documentation or in the internal treatment of reports, there was no equivalent public notice for all those potentially affected. Anthropic settled the case on November 25, 2025 and paid $100 Google rewarded the discovery on January 20, 2026 with $1,337 GitHub closed the case on March 9, 2026 with a payment of $500 What makes this case especially delicate is that GitHub does not seem like the end of the road, but rather the first visible showcase. Guan argues that the same pattern can probably be reproduced in other agents who work with tools and secrets within automatic flows, and there he mentions from Slack-connected bots to Jira agentsmail or deployment automation. The logic is the same again: if the system has to read external content to do its job and also has enough access to act, the field is fertile for someone to try to twist it from within. The conclusion that Guan reaches is not about selling a magic solution, but about returning to a fairly classic idea in security: giving each system only what is essential to do its job. If an agent reviews code, they shouldn’t have access to tools or secrets they don’t need; If you’re just summarizing issues, it wouldn’t make sense for you to write to GitHub or touch sensitive credentials. That … Read more

There were four workers in a van and they ended up crashed

The new Madrid Formula 1 circuit He has already had his first accident, and he has not even raced a single-seater on its asphalt yet. A van with four occupants he sneaked into the construction siteand at high speed ended up going off the track. Although they had a good scare, it seems that there have been no serious injuries. What happened. A person who was walking through the Valdebebas area, next to the IFEMA exhibition center where the MadRing is being built, heard a loud noise coming from inside the circuit. When he looked out he saw a van driving along the already paved part of the route at a very fast speed, especially for a construction site. He took out his cell phone, recorded, and He sent the video to the ‘MadriZonaNorte’ account in X. In the images you can see how the vehicle ends up overshooting a curve and crashing, triggering the airbag. The four occupants, according to El Motor, were workers working on the construction of the circuit, and they all left on their own. rolling. The MadRing is still far from finished. The asphalting process should be completed by May 31and that includes three layers: base, intermediate and tread. Right now there is only a first layer on a good part of the route. Filming at high speed inside such an enclosure, with the work underway, has been irresponsible that, in this case, ended without serious consequences due to pure luck. A project against the clock. To understand the magnitude of the nonsense, it is worth taking into account the pace at which the work progresses. The Director of Operations of IFEMA Madrid, Carlos Jiménez, counted to Motorsport media that the works are even going a week and a half or two weeks ahead of what was planned for the track. The construction companies ACCIONA and Eiffage Construction are working non-stop to meet a deadline that does not allow for delays, since after the delivery of the circuit, the FIA ​​homologations will come, with two official visits planned during the works, and all of this must be ready before September arrives. Taking into account that the bun oven is not there, any carelessness within the premises is an unnecessary risk that could further complicate a work that has already caused quite a few headaches. What’s to come. When completed, the MadRing will have 22 curves and 5.4 km in length, the estimated lap time is 1 minute and 32 seconds, and on the 589-meter finish line, drivers will be able to reach up to 340 km/h. The jewel in the crown is La Monumentala 550 meter long curve with an extreme bank of 24%, the highest on the calendar, in which the drivers will have to face an inclination of up to 10 meters high for six seconds. A particularity that does not exist in any other circuit in the world in the competition. The when. The Madrid Grand Prix is ​​scheduled for the weekend of September 11-13 this year. More than 80,000 tickets have already been soldwhich is equivalent to 72% of the total capacity. A few weeks ago, A Red Bull car traveled the tracks of the Madrid Metro as part of an advertising campaign for the event. Now the route has made headlines again, although for very different reasons. a test. Before the debut in F1, IFEMA is working on organizing an internal race, probably without an audience and in a lower category, to verify the state of the circuit. The real cars will arrive in September, so the fewer vans there are testing the Stranjis layout, the better. Cover image | MadriZonaNorte, MadRing In Xataka | It turns out that the “good, pretty and cheap” electric car does exist and is manufactured in China. So Citroën has stepped up

Ten years ago, Bnext was the great hope of fintech. They ended up crashing

Founded in 2016 by Guillermo Vicandi, Bnext It was born as a fintech alternative to traditional banking. In fact, their visible heads assured that it was not a bank, despite offering an account and card. The growth was as fast as the fall. After the collapse of its cryptocurrency, The app announced its closure on April 13. What was Bnext. It was not a bank, that’s what its creators constantly said. It was an electronic money entity (EDE) alternative to traditional banking. In practice, it offered what a bank offers: account, card, loans, insurance, currency purchases, investment plans. The difference was the model: Bnext always acted as an intermediary, connecting the user with the best products on the market through a single app. No offices, no paper, no queues. The golden age. In 2019, Bnext was one of the most visible projects on the Spanish fintech scene. became the fintech that grew the most in Spainwith more than 156,000 registered users and more than 100,000 active clients holding a Bnext VISA. Your second round of financing It closed with 22 million eurosthe highest figure seen in Spain (in 2019) since the Valencian Hawkers raised 55 million euros. That same year, they partnered with giants like MyInvestor to offer financial products. The stumble. Bnext’s first setback comes a year later, in 2021, after its landing in Latin America. Its partner, Cacao Paycard, did not obtain authorization to operate from the National Banking and Securities Commission (CNBV), which translated into a fine of 2.6 million Mexican pesos (about 150,000 euros at the current exchange rate) to Bnext for misleading communication. There was no plan B. Bnext had to cease operations in Mexico, close all its accounts and lose more than 230,000 clients who had trusted the company prior to the sanctions. Meanwhile. In Spain, alternatives like Revolut were growing like wildfire, and Bnext was beginning to run out of oxygen. In 2021, they decided to ally with Algorand, a blockchain firm that became one of the company’s main shareholders. After the alliance they announced their own token: B3X. The play didn’t go well. On March 1, 2022, it was launched to the public with a starting price of two euro cents. Today it cannot even operate from the app, since the service has been dismantled. Its price before the debacle: 0.00006 US cents. What happens to Bnext users. Bnext accounts and cards have already been canceled and the product is no longer marketed. No payments, transfers or receipts can be uploaded. Payroll cannot be received The balance of the account may be requested during a repayment period of 20 years Cryptocurrency management is referred to Onyze… via email User data will be deleted in accordance with the GDPR You will no longer have access to the marketplace services Bnext was once the great hope of Spanish fintech. Now rest in peace. What will become of the company. The company gives the finishing touch to its app, but does not completely cease its operations. “The fintech business and market has changed considerably, and with this, we have had to pivot our value proposition. After several years offering products to the end consumer and in an increasingly competitive environment and with more complex regulation, we have decided to take a step towards the future, focusing on helping companies launch their own payment products.” Guillermo Vicandi, CEO of Bnext. Bnext closes as a neobank, but pivots towards financial infrastructure services. In Xataka | Europe had been asking for a big hit on the table for some time. Revolut just gave it a huge valuation

This engineer found 1,351 loose photos in his grandmother’s house. He ended up building a personal Wikipedia of his entire life

It all started with a closet full of old loose photos. Last year an engineer named Jeremy visited his grandmother’s house for the first time since the pandemic and unknowingly came across a treasure. 1,351 on paper, without order, without dates and without context. Some were in black and white, from when his grandparents were 20 years old. Others were from his mother as a baby. The last ones were from him in high school, just before smartphones arrived and everything moved to the cloud. What began as a family organization exercise became a fascinating project over the weeks: a personal encyclopedia. A Wikipedia of his own life. First, the physical photos and the grandmother. The first problem he encountered when starting his project is that physical photos do not have EXIF metadata. There is almost never a capture date (although some cameras superimposed it), there are no GPS coordinates and there is no information that allows them to be easily sorted. What Jeremy did was resort to a much more direct solution: sit down with his grandmother and ask her about the photos. Remembering that it is a gerund. In that conversation she rearranged the photos of their wedding and narrated the details while he took notes. Names, places, who was sitting where, what each ritual meant. With those notes, he set up a local instance of MediaWiki, the same software that Wikipedia uses, and wrote a page about the wedding following the same format that was used on Wikipedia to royal wedding between Prince William and Kate Middleton in 2011. Within two afternoons I had a complete article with scanned photos, captions, links to empty pages about each person mentioned, and links to the real Wikipedia to give historical context to the events. Digital photos and Claude Code to get the job done. Jeremy realized that things could get worse and took the opportunity to do tests with digital photos, which do have EXIF data with date and time and even GPS coordinates. With that information he wanted to see how far he could go without interviews, so he took 625 photos from a family trip to Coorg (India) in 2012, put them in a folder and opened Claude Code in that directory with a simple instruction: compose a Wikipedia page by browsing the images. The model used ImageMagick to create contact sheets that allowed him to process multiple photos at once, and the magic of AI did the rest. The result was a detailed draft chronicling the trip organized by time of day. Without location data, just with timestamps and visual content, the AI ​​model was able to identify the places that appeared in the photos, including some that Jeremy himself had forgotten. It even detected the means of transportation used between destinations just with what it saw in the images. When AI starts remembering for you. Then came the most ambitious experiment, when he wanted to go further with a trip he took to Mexico City in 2022. He had 291 photos and 343 videos taken with an iPhone 12 Pro with GPS coordinates in the metadata, but he also exported his Google Maps location history, his Uber trips, his banking transactions and his Shazam history. By including all that data and sources, the model was able to cross-reference banking transactions with location data to identify the restaurants where he had eaten. For example, he found images of a soccer match in the photos but did not remember which teams were playing, but he found out that information by crossing those photos with bank transactions in which he found a Ticketmaster invoice with the name of the tournament and the teams, and incorporated them into the page. He also used Shazam’s history to describe the music playing in each location. From photos and memories to a personal encyclopedia. A wonderful project that now anyone can replicate thanks to the whoami.wiki website. First the trips, then the friendships. What started as a travel documentation project evolved into something more personal. The Facebook, Instagram and WhatsApp archives contained some 100,000 messages and several thousand voice notes exchanged with close friends over a decade. The AI ​​model managed to convert all this information into a unique biography, identifying vital episodes of the protagonists, then converted into pages that, according to Jeremy, “read as if they were written by someone who knew us both.” When he shared the pages with those friends, they couldn’t stop reading those stories and wanted more. MediaWiki as a master ingredient. One of the most interesting decisions of the project is the choice of software. MediaWiki, Wikipedia’s engine, turned out to be an extraordinarily suitable tool for that use case. AI models understand this perfectly because they have been trained with millions of Wikipedia pages and know their structure and functioning. Discussion pages serve to control the development of those pages, categories group pages by topic, and revision history monitors the evolution of each page. All of this infrastructure already existed, and it was not necessary to create a new platform to organize the information that Jeremy was providing. Surpriseyes. At the end of his story, Jeremy explains that after the process: “I realized that I was no longer alone working on a family history project. What I had been creating, page by page, was a personal encyclopedia. A structured, navigable, interconnected record of my life compiled thanks to the data I already had around me.” Documenting her grandmother’s life revealed things she didn’t know: her years as a single mother or the decisions she had to make, for example. Going through the history of his friendships allowed him to recover moments that he had almost forgotten and made him call some of them to remember them together. “The encyclopedia not only organized the data, it made me pay more attention to the people in my life,” he explained. you can do it too. The project has been so rewarding for him that he … Read more

Meta has ended up firing its developers to pay for AI

Mark Zuckerberg’s company is not having its best week. To the sanctions imposed by a US court for not protecting users of the addictive consequences of their platformsjoins a new round of layoffs that affects hundreds of people in five business areas. It’s not the first time so far this year, and it probably won’t be the last either. We cannot say that the measure has caught Meta employees by surprise, because a few days ago Reuters I was already ahead that the parent company of Facebook, Instagram and WhatsApp was planning to cut staff due to the increased costs of AI development. Now have materialized eliminating the departments closest to the metaverse. 700 employees on the street and a metaverse that goes out. According to published NBC Based on sources close to the company, Meta will lay off about 700 employees in this round. The cuts will affect Reality Labs, the division that for years was the flagship of Zuckerberg’s big bet on the metaverse, which just a few days ago announced the Horizon Worlds closure on Quest headsetsas well as some in the human resources departments, sales and Facebook employees, as pointed out The New York Times. Those affected are a small fraction of the nearly 78,000 employees that Meta currently has on staff, but the reason given by the company is already a classic in big tech: “Meta’s teams restructure or implement changes periodically to guarantee that they are in the best position to achieve their objectives,” said a Meta spokesperson. in a statement to which you have had access NBC. Layoffs down, bonuses up. Hours before these layoffs were announced, Meta presented a new stock compensation program for six of its senior managers. The message between the lines has not gone unnoticed. While the company cut staff with the argument of reducing costs to face the huge investments in AIwith a forecast of expenses of between 162,000 and 169,000 million dollars for 2026, the executives closest to Zuckerberg saw their compensation increased by up to 921 million dollars each for the next five years. Meta justifies the increase to its managers as a tool to retain talent in the middle of the war for the best AI profiles, but the temporal coincidence between both announcements could not have been more unfortunate. ​Layoffs without financial hardship. Historically, a company laying off its employees was a clear sign of financial problems. Instead, in the age of AI, each round of layoffs is celebrated on the financial markets with increases in the price of shares because it is a clear sign that the company is restructuring to adapt to changes in strategy for the development of AI and continue generating million-dollar income. In fact, one of the phenomena that is occurring In the latest rounds of layoffs in large technology companies, while hundreds of employees are being laid off from certain departments, new vacancies are opening up. to hire new employees with another profile more AI oriented. ​Meta is not an isolated case. What happens in Meta is part of a dynamic that is repeated throughout the sector. Amazon, Microsoft and other big tech companies have announced massive cuts in recent months, and in all cases the AI appears as the main justification for layoffs. According to data From the consulting firm Challenger, Gray & Christmas, AI has been the argument for 12,304 layoffs so far in 2026, the equivalent of 8% of all layoffs recorded in that same period.​ In Xataka | Mark Zuckerberg spent millions on a “superintelligence” team. He is dedicating it to creating a personal AI agent for you Image | Goal

Spain has been ignoring dozens of products that it sells daily in its supermarkets for decades. But that just ended

You may have read or heard it somewhere: “goodbye to turkey ham and stuffed olives.” And what a joke, can you imagine a world without anchovy-flavored olives? Having to live only on ham or chicken breast? Luckily, you don’t have to imagine it. They don’t disappear. What the Royal Decree does has unleashed All this controversy is something a little more complicated: putting in order the enormous food mess that has been growing for decades in Spanish pantries. What food mess? On February 27 Royal Decree 142/2026 was published that seeks to modify (or repeal) more than a dozen food quality provisions. It seems somewhat minor, but some of which (such as the cookie regulations) are more than 40 years old. The interesting thing, however, is that this new legislation removes from legal limbo numerous products that have not been ‘thought of’ at a regulatory level for many years. In that sense, the decree affects dozens of daily consumer products, but it does not affect them in the sense that ‘they are going to change’: it affects them in the sense that now the rules are going to be clearer. The case of turkey ham and stuffed olives are paradigmatic: the former now has a clear definition and the latter will have the obligation to specify the characteristics of the filling. But what is interesting is not what is important. The important thing, clearly, is the inclusion of gluten-free bread in the bread quality standard. Not only is it a historic demand of the celiac community, but it closes a very tough debate at a regulatory and fiscal level. Until now, at a technical level, the standard did not contemplate that bread made with gluten-free flour could be called bread. This ‘nonsense’ made celiacs They will pay more VAT than they would pay on normal breadbut it’s already over. Something similar happened with horchata without added sugar, the clarification of cider, the types of sangria or the acidity of vinegars. What does disappear. The bologna mortadellawhich until now was a category and which will now have to be called something else to avoid confusion with the designation of origin of the true Bologna mortadella. The central issue is that the agri-food industry has changed a lot. And as usual, the legislation has been dragging its feet, generating piecemeal regulations and creating completely inexplicable holes. So yes, we have taken a step forward. And without having to give up even the turkey ham and stuffed olives. Image | Xavi Cabrera In Xataka | This is how ultra-processed foods have been invading our diet: the evolution of three decades in a single graph

the exhibition ended with the plane in flames

If you’ve ever flown within Europe on a short or medium-sized journey, there’s a good chance you’ve spent several hours inside a Airbus A320. This model has become one of the most common aircraft on the continent and is part of the daily landscape of airports and airlines. Today it is difficult to imagine European air transport without it, but there was a time when he A320 was an absolute novelty that was just beginning to be shown to the public. One of those first public flights took place in 1988 and was intended as a demonstration for spectators, press and guests. It was also the first flight with passengers on an Airbus A320. The plane belonged to Air France and was part of the first units of the model. That presentation was to serve to show the new Airbus aircraft in a simple maneuver on a small airfield. What should have been an exhibition ended up becoming one of the most remembered episodes of the early years of the A320. The premiere that went wrong and went down in history The demonstration was part of an aeronautical event held at the Habsheim airfield in eastern France. Air France agreed to participate in the exhibition and took advantage of the occasion to publicly display its new Airbus A320 in the company’s colors. The plan was to perform a flyby at very low altitude on the runway with the landing gear deployed so that attendees could observe the plane before it continued its trajectory. The flight did not depart directly from that small airfield. The plane had taken off from Paris Charles de Gaulle airport and subsequently flew to Basel-Mulhouse, where a press conference was held before boarding. According to Aviation Safety Networkwhen the device took off again it had 130 passengers and six crew members on board. Among the occupants were journalists and people who had won a seat on the flight through a lottery. In the cockpit were two commanders with extensive experience within Air France. One of them was heading the company’s training division and the other was involved in the introduction of the A320 into the airline’s fleet. Three minutes after takeoff, with the airfield already in sight, the pilot began the descent which had to place the plane at the altitude planned for the maneuver. However, the decline continued below that level. According to data collected later in the investigation, the plane first passed about 50 feet (about 15.20 meters) and just a few seconds later it descended to about 30 feet (about 9.15 meters) above the ground. At that moment the power was increased to try to overcome the maneuver, but the reaction came too late. At that point, the room for maneuver was minimal. As we can see in a videothe Airbus A320 continued to advance at very low altitude until skim the treetops located at the end of the Habsheim airfield. The accident ended with the plane engulfed in flames in front of those attending the aeronautical event. After the accident, an investigation was opened in which Air France and Airbus participated together with the Bureau d’Enquêtes et d’Analyses pour la sécurité de l’aviation civile, the BEA, the French body in charge of investigating air accidents. The objective was to accurately reconstruct what happened during the maneuver and determine why the plane had ended up hitting the trees located at the end of the airfield. In its report, the BEA pointed out several factors that, combined, explained the accident. Among them, he mentioned carrying out an overflight at a height lower than that of the obstacles present in the area, a very low speed during the maneuver and the late application of the power needed to start the comeback. According to the investigation, this combination of circumstances left the plane without sufficient margin to regain altitude before reaching the tree line. Commander Michel Asseline rejected part of the investigation’s conclusions. In his defense, he maintained that both he and the other pilot, Pierre Mazières, had only received the flight plan on the morning of the accident. He also stated that the crew did not have maps of the aerodrome or detailed information about the configuration of the flight field where the demonstration was to take place. Asseline also questioned the interpretation of the moment in which the comeback attempt was attempted. According to your version, the fly-by-wire control system of the A320 would have prevented the application of power and lifting the plane with the necessary speed. In addition, he went on to claim that the black box data could have been manipulated and that four seconds were missing from the recording. Despite these allegations, the case ended up going to court. The judicial process ended with several convictions for involuntary manslaughter. Commander Michel Asseline, the first officer, two Air France officials and the president of the flying club that organized the event were found guilty. The case put an end to one of the most controversial episodes of the early years of the Airbus A320. As time went by, the relationship between Air France and the A320 continued to develop normally. According to data from ch-aviationthe airline currently operates about 40 Airbus A320-200. It also previously flew another 61 A320-200 and 13 A320-100, the variant involved in the 1988 accident. Today the A320 remains one of the most common aircraft on short and medium-haul routes within Europe. Images | Wikimedia Commons In Xataka | China has just found a hole in the US’s quietest weapon: an algorithm has hacked its B-2s in Iran

Sierra was the second most powerful supercomputer in the world. When its time came it ended up in the shredder, literally

Supercomputers represent the extreme of modern computing: machines capable of performing enormous amounts of calculations every second and supporting scientific or strategic projects of enormous complexity. Saw He was one of those giants. For years he operated in the Lawrence Livermore National Laboratorywhere he was in charge of highly sensitive simulations for the United States Government. At the time he came to occupy second place in the TOP500 rankingwhich ranks the world’s fastest supercomputers. But in high-performance computing, even the most advanced systems have a limited lifespan. After seven years of service, Sierra has been retired. A giant for simulations. When Sierra began operating in 2018 at the Livermore facility, it was incorporated into the center’s high-performance computing infrastructure to support the nuclear arsenal maintenance program managed by the National Nuclear Security Administration. Instead of resorting to real nuclear tests, scientists use computer simulations capable of reproducing the behavior of the weapons and materials involved in their design. This work requires extraordinary computing power and also has implications in areas such as nonproliferation and counterterrorism. Almost at the top of the ranking. As we noted above, for several years the Sierra was among the fastest machines on the planet. According to the TOP500 ranking, it recorded 94.64 petaflops, that is, tens of quadrillion floating point operations per second. To achieve this, it used an unusual architecture at the time, based on IBM Power9 processors combined with NVIDIA Volta V100 graphics accelerators. This design allowed work to be distributed among thousands of computing nodes and offered a notable leap over previous generations of supercomputing. When the hardware starts to fail. Supercomputers do not escape a reality common to any technological infrastructure: over the years, the hardware begins to deteriorate. In this type of systems, The usual useful life is usually around five to seven yearsa period after which the failure rate begins to grow and maintaining the system becomes more complex. As these machines accumulate hours of operation, the likelihood increases that certain components will fail or need to be replaced. In the case of Sierra, furthermore, part of the problem was already very specific: some of its components had stopped being manufactured and the version of the operating system it used had lost support. The successor. Sierra’s retirement is also related to the arrival of a new generation of supercomputing at the center. In 2025 it began operating The Captainthe system destined to take its place within the laboratory’s computing infrastructure. Although at first glance both may seem similar facilities, the difference is inside. El Capitan uses an architecture based on the AMD Instinct MI300A APUs and a shared memory system between CPU and GPU, which allows it to achieve much higher performance. According to data released by the lab, this machine can reach 1,809 exaflops, about 19 times faster than Sierra at its peak according to TOP500. Disassemble a supercomputer piece by piece. The end of Sierra was not simply about shutting down the system and leaving it out of commission. The process was carried out in several phases that began with the progressive removal of computing nodes and internal components. Technicians dismantled entire racks, extracted batteries and separated different elements for recycling or controlled destruction. Some parts, such as system plates or metal structures, were sent to specialized facilities for shredding. Since Sierra had worked with simulations linked to the US nuclear arsenal, the laboratory had to prevent any possibility of partial data recovery or reconstruction of sensitive information, hence the storage devices received even stricter treatment. Images | United States Department of Energy In Xataka | Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.