Sierra was the second most powerful supercomputer in the world. When its time came, it ended up in the shredder. Literally

Supercomputers represent the extreme of modern computing: machines capable of performing enormous numbers of calculations every second and supporting scientific or strategic projects of enormous complexity. Sierra was one of those giants. For years it operated at Lawrence Livermore National Laboratory, where it ran highly sensitive simulations for the United States government. At one point it occupied second place in the TOP500 ranking, which lists the world's fastest supercomputers. But in high-performance computing, even the most advanced systems have a limited lifespan. After seven years of service, Sierra has been retired.

A giant for simulations. When Sierra began operating in 2018 at the Livermore facility, it was incorporated into the center's high-performance computing infrastructure to support the nuclear arsenal maintenance program managed by the National Nuclear Security Administration. Instead of resorting to real nuclear tests, scientists use computer simulations capable of reproducing the behavior of the weapons and materials involved in their design. This work requires extraordinary computing power and also has implications in areas such as nonproliferation and counterterrorism.

Almost at the top of the ranking. As noted above, for several years Sierra was among the fastest machines on the planet. According to the TOP500 ranking, it recorded 94.64 petaflops, that is, tens of quadrillions of floating-point operations per second. To achieve this, it used an architecture that was unusual at the time, based on IBM Power9 processors combined with NVIDIA Volta V100 graphics accelerators. This design allowed work to be distributed among thousands of computing nodes and offered a notable leap over previous generations of supercomputers.

When the hardware starts to fail. Supercomputers do not escape a reality common to any technological infrastructure: over the years, the hardware begins to deteriorate.
In this type of system, the useful life is usually around five to seven years, a period after which the failure rate begins to grow and maintaining the system becomes more complex. As these machines accumulate hours of operation, the likelihood increases that certain components will fail or need to be replaced. In Sierra's case, moreover, part of the problem was very specific: some of its components were no longer being manufactured and the version of the operating system it ran had lost support.

The successor. Sierra's retirement is also related to the arrival of a new generation of supercomputing at the center. In 2025, El Capitan began operating, the system destined to take its place within the laboratory's computing infrastructure. Although at first glance the two installations may seem similar, the difference is inside. El Capitan uses an architecture based on AMD Instinct MI300A APUs and a memory system shared between CPU and GPU, which allows it to achieve much higher performance. According to data released by the lab, this machine can reach 1.809 exaflops, about 19 times faster than Sierra at its TOP500 peak.

Disassembling a supercomputer piece by piece. The end of Sierra was not simply a matter of shutting the system down and leaving it out of commission. The process was carried out in several phases, beginning with the progressive removal of computing nodes and internal components. Technicians dismantled entire racks, extracted batteries and separated different elements for recycling or controlled destruction. Some parts, such as system boards or metal structures, were sent to specialized facilities for shredding. Since Sierra had worked on simulations linked to the US nuclear arsenal, the laboratory had to rule out any possibility of partial data recovery or reconstruction of sensitive information, so the storage devices received even stricter treatment.
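The "about 19 times faster" figure is easy to sanity-check from the two TOP500 numbers quoted above (94.64 petaflops for Sierra, 1.809 exaflops for El Capitan); a minimal sketch:

```python
# Sanity-check the ~19x speedup of El Capitan over Sierra,
# using the performance figures quoted in the article.
PETA = 1e15  # floating-point operations per second in one petaflop
EXA = 1e18   # one exaflop

sierra_flops = 94.64 * PETA      # Sierra's TOP500 result
el_capitan_flops = 1.809 * EXA   # El Capitan's reported result

speedup = el_capitan_flops / sierra_flops
print(round(speedup, 1))  # 19.1, consistent with "about 19 times faster"
```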
Images | United States Department of Energy

The Supreme Court ended up seeing the obvious

The last straw for a worker is to have been doing the same job for more than 16 years in the same place and with the same colleagues, and then to be fired for not having passed the trial period. It sounds like a joke, but it is exactly what happened to an employee of a notary's office in Madrid. What came next was a judicial battle that reached the Supreme Court, which ended up ruling in his favor. Although taken to the extreme, this case is no exception. In Spain, companies terminated more than a million contracts in 2025 alleging that the worker had not passed the trial period, and the data suggests that behind many of these dismissals there is something other than employees who did not perform well enough.

Sixteen years in the same place. As detailed in the Supreme Court's ruling, the worker had been employed at the same notary's office since May 2004, chaining contracts with the different notaries who occupied the position throughout that period. In September 2019, the incumbent notary was assigned to another location, and the worker was offered a choice: move with him to Jávea and keep his contract in force, or collect compensation for the cessation of activity. The employee opted for compensation of 10,071.20 euros. A few months later, the new notary contacted him to enlist his services and those of his former colleagues at the same office. In February 2020, he signed a permanent contract with this new notary, with a trial period of six months. It should be noted that the new notary employed most of the previous staff and continued in the same office, with the same furniture, computers and software as all his predecessors.

The pandemic and the layoff. With the state of alarm due to COVID-19 newly declared, the worker and two colleagues went to the notary's office to remind him that he had to apply the health measures dictated by the authorities: shifts, gel, masks and limiting activity to urgent matters.
The notary's response, recorded verbatim in the ruling, was that "this is not a cooperative." That same afternoon, the three received their dismissal letters for not having passed the six-month trial period contemplated in their permanent contracts.

The case took five years to resolve. In December 2023, a first court ruled in favor of the worker: if the new notary had taken over the staff and resources of the previous one, there was a transfer of undertaking and the agreed trial period was void. There is no point in testing the ability of someone who has held the same position for 16 years. Finally, after several appeals before different instances, in January 2026 the Supreme Court confirmed the verdict: the trial period was not valid, and the notary was ordered to reinstate the employee or pay compensation of 54,294.42 euros.

One million layoffs a year. This case is striking because of how extreme and obvious the situation was, but it is just one example of a growing trend among companies to avoid compensation for unfair dismissal. According to data from the report 'Balance of the labor market in 2025', prepared by the USO union with sources from the SEPE, INE and Social Security, Spanish companies terminated 1.02 million contracts alleging that the worker had not passed the trial period. This represents an increase of 2.34% compared to 2024 and 79% more than in 2021, before the last labor reform, which reinforced permanent contracts over temporary ones. What makes that data especially relevant is that it is not a general increase: it is mainly permanent contracts that are behind this growth. Before the labor reform, in 2021, only 13% of all dismissals for not passing the trial period corresponded to permanent contracts. In 2025, that percentage had already risen to 75%.
To put this figure in context, dismissals of permanent workers grew by 137% in the same period, while dismissals of permanent workers in a trial period grew by 864%, exceeding 720,000 cases. "At USO we have always said it was more than suspicious data. Suddenly, there are many people who are not up to the job for which they were hired. It is clear that the trial period is an escape route to hire people temporarily and not even have to compensate them. But it has been seen that, even so, its use is not only abused, but twisted and used illegally," warns Joaquín Pérez, general secretary of USO.

The loophole in the rules. To understand the reason behind this sudden use of the trial period to justify dismissals, we must take into account a detail in the regulations governing severance pay: when an employee does not pass the trial period, the company does not have to pay any compensation or justify the termination of the contract. A dismissal, on the other hand, must be justified and, depending on the case, compensated. Firing someone with the excuse of not having passed the trial period is even cheaper for the company than letting a temporary contract expire, since in that case compensation of 12 days per year worked applies. Therein lies the trap that the unions have been denouncing for some time. With the latest labor reform, companies can no longer chain temporary contracts so easily and are forced to hire permanently in many more situations. Terminating a permanent contract during its trial period is cheaper than any other form of dismissal, and hardly requires any paperwork. As The Economist pointed out, this fraud of law would be making permanent jobs more precarious. The Ministry of Labor launched inspection campaigns in 2024 and 2025, but in January 2026, layoffs for this reason were still growing, up 1.3% compared to the same month of the previous year.
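The cost gap described above can be made concrete with a toy calculation. This is an illustrative sketch, not legal advice: the daily salary is a made-up example, and it only compares the two figures the article mentions (zero compensation for a failed trial period versus 12 days of salary per year worked when a temporary contract expires):

```python
# Illustrative comparison of the two termination costs mentioned in the text.
# The daily salary is hypothetical; real cases depend on many more factors.

def temp_contract_expiry_compensation(daily_salary: float, years_worked: float) -> float:
    """Compensation when a temporary contract expires: 12 days of salary per year worked."""
    return daily_salary * 12 * years_worked

def failed_trial_period_compensation() -> float:
    """Termination for 'not passing the trial period': no compensation, no justification."""
    return 0.0

daily_salary = 50.0  # hypothetical example, in euros
years = 1.0

print(temp_contract_expiry_compensation(daily_salary, years))  # 600.0 euros owed
print(failed_trial_period_compensation())                      # 0.0 euros owed
```

The asymmetry is the whole incentive: for the same one-year relationship, one exit route costs the employer twelve days of salary and the other costs nothing.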

Before, stars were born in movies and ended up on Netflix. Now they are born in streaming and end up in movies

'War Machine', the war science fiction film starring Alan Ritchson, accumulated 39.3 million views in its first three days on Netflix, becoming the most viewed title on the platform globally right now. The second most viewed film that week, 'Jurassic World Rebirth', trailed by a huge margin: 6.7 million. The result is also a symptom of how the star factory has changed: the new star system is born on the platforms, not in the multiplexes.

Other figures. The opening of 'War Machine' is the second best of the year on Netflix to date. If it keeps up the pace, it could aspire to enter the platform's all-time Top 10 in the English-language film category. To gauge the magnitude: of the 87 countries tracked during that four-day window, the film ranked number one in 80.

What is it about? The film is not especially original in its premise, and its makers do not intend it to be. Directed by Patrick Hughes (of the weak 'The Expendables 3' and the fun 'The Hitman's Bodyguard') and produced by Lionsgate, it follows a group of candidates for the US Army Rangers during the final selection phase. Their training maneuver becomes a fight for survival when a robotic threat of alien origin appears. Alan Ritchson plays the character known only as 81, a traumatized combat engineer even more silent and introverted than his famous Jack Reacher. Although critics have stressed its derivative and unpretentious nature, the truth is that its two-hour chase structure finds an enjoyable middle ground between 'Predator' and Heinlein's 'Starship Troopers' (Heinlein's, not Verhoeven's: there is no irony here, as seen in a 'to be continued' ending that echoes, without venom, the recruitment spots of that 1997 masterpiece). 'War Machine' embraces its spirit of effective, direct B-movie with a healthy brainlessness, and it makes perfect sense that it has found an audience of millions eager to disconnect and let themselves be dazed.

The star.
It has taken Alan Ritchson almost two decades to become a star. He debuted in 'Smallville' as Aquaman and then passed unnoticed through multiple series in supporting roles until, in 2022, he landed the lead in 'Reacher' on Prime Video. The series, which championed the return of "dad TV" (of which 'War Machine' is also an excellent example), is one of the biggest hits on the Amazon platform and is already preparing its fourth season. In just a few weeks, Ritchson has managed to position himself as the number one actor simultaneously on Netflix and Prime Video with different projects. The distinction that for years existed between the streaming star and the one who can sell out a blockbuster in theaters on their mere presence is blurring.

It is not the only case. Although Ritchson's case, as an exclusively streaming star, is particular due to the almost total absence of films in his filmography, there are many other names who owe a good part of their fame to the platforms. Pedro Pascal is now a global star whose fame was born entirely in streaming hits ('Game of Thrones', 'Narcos', 'The Last of Us', 'The Mandalorian'). Henry Cavill and Chris Hemsworth were born as movie stars, but consolidated their fame in streaming ('The Witcher', 'Extraction'). Dave Bautista and John Cena are also finding a second home in streaming thanks to hits like 'Trap House' or 'Peacemaker'.

Unmistakable signs of the change of times. Stars germinate in different places, but they generate hits with figures that rival the biggest blockbusters on the big screen.

AI-assisted translation on Wikipedia ended up sneaking in errors and references that didn't add up

Artificial intelligence has become an everyday tool for millions of people. Many now use it to write emails, summarize documents or translate texts in a matter of seconds. However, this speed has a less visible side: generative systems can also make mistakes, invent data or alter sources without the user immediately noticing. When these errors appear in one of the largest encyclopedias in the world, the situation changes completely. That is precisely what has happened on Wikipedia with a series of translations carried out with the help of AI.

The opening episode. It all started within the Wikipedia community itself. Some editors began reviewing recent translations and noticed something strange: certain texts included phrases that did not appear in the cited sources, or references that did not seem to fit what the article stated. According to 404 Media, these translations were part of a project promoted by an organization that sought to expand the presence of Wikipedia content in different languages, using language models to speed up the process.

When the translation invents. As editors examined these translations in more detail, the problems became more evident. One of the cases cited by 404 Media is a draft article about the French noble family La Bourdonnaye. The translated text included a reference to a book and a specific page to explain the origin of the family. However, when editor Ilyas Lebleu, known on Wikipedia as Chaotic Enby, reviewed that source, they discovered that the page cited was incorrect. Lebleu added that a quick review of several translations also turned up swapped references, phrases without a source, and cases in which paragraphs had been added based on material unrelated to what was being written.

Published or still in draft. The case also raised a relevant question: whether these errors had appeared in already published articles or had been detected during the review process.
At least one of the problematic examples was identified in a draft translation, allowing editors to revise it before it went live. From the material available, however, it cannot be stated how many problematic translations were published and how many remained under review.

Who is behind these translations. Here appears the name of the Open Knowledge Association (OKA), a non-profit organization that claims to work to improve Wikipedia and other open platforms. As the organization itself explains on its website, its model consists of offering monthly stipends to collaborators and translators who work full-time expanding the encyclopedia's content, "taking advantage of AI (large language models) to automate most of the work." According to 404 Media, editors who investigated the project concluded that it relied on contractors.

The editors' response. As more problematic examples appeared, the Wikipedia community decided to intervene. The editors reviewed how the translation project operated and ended up establishing new restrictions for its participants. OKA-linked translators who accumulate four strikes for unverifiable content within a six-month period may be blocked without further notice if a new case appears. Additionally, content added by a translator who ends up being blocked may be removed preventively, unless another reputable editor takes responsibility for reviewing it.

OKA explains. The organization mentioned in the debate also offered its version of events. Jonathan Zimmermann, founder and president of the Open Knowledge Association, explained to the aforementioned outlet that the project's translators work on an hourly basis and that there is no fixed goal of articles per week. He admitted that "errors happen," although he maintained that the system includes human verification and review of sources.
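The restriction described above — four strikes for unverifiable content within a six-month period — is essentially a sliding-window counter. A minimal sketch of how such a rule could be evaluated (the function and data layout are hypothetical illustrations, not Wikipedia's actual tooling):

```python
from datetime import datetime, timedelta

# Hypothetical evaluation of the "four strikes in six months" rule described above.
WINDOW = timedelta(days=182)  # roughly six months
LIMIT = 4

def should_block(strike_dates: list[datetime], new_strike: datetime) -> bool:
    """Return True if, counting the new strike, a translator reaches
    LIMIT strikes inside the sliding six-month window."""
    recent = [d for d in strike_dates if new_strike - d <= WINDOW]
    return len(recent) + 1 >= LIMIT

strikes = [datetime(2025, 1, 10), datetime(2025, 2, 3), datetime(2025, 5, 20)]
print(should_block(strikes, datetime(2025, 6, 1)))      # True: fourth strike in window
print(should_block(strikes[:1], datetime(2025, 6, 1)))  # False: only two strikes in window
```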
Following the discussion on Wikipedia, he added, the organization is introducing a second review pass with another AI model to detect possible errors before publishing, and is studying the possibility of adding peer-review mechanisms if necessary. Images | Oberon Copeland @veryinformed.com | Luke Chesser

Meta was building its own AI chips to stop depending on NVIDIA. It has ended up bowing to the evidence

Meta faces a crucial year. While its competitors were laying the foundations of AI, Meta was burning money on the metaverse. That, along with an approach totally different from what Google or OpenAI were doing with AI, left Zuckerberg's company spending a few years in the gutter. After putting its house in order and signing an AI A-team, Meta was preparing both a large model and new in-house chips for training. Things... haven't turned out as expected.

MTIA. Among Meta's teams focused on artificial intelligence there is one known as MTIA, for 'Meta Training and Inference Accelerator'; its objective was to research and design in-house chips for AI training. Having your own chip makes all the sense in the world, since it is designed around your own needs. It has another advantage: you do not depend on anyone else. If NVIDIA doesn't have enough chips, it doesn't matter, because you have yours and can keep scaling your data center systems (and Meta's are immense) to continue training and inference tasks. Meta was not going to handle manufacturing itself, something the highly reputable TSMC would take care of, but the program got off to a bad start.

This is very difficult. Reuters already reported it last year. After testing its first in-house developed training chip, Meta realized that things were not going well. It underperformed expectations and was also worse than the competition. They did not throw the chips away, but redirected them to other systems (such as the algorithmic recommendation engines of Facebook and Instagram). The problem is that the performance of the training chip, the one that really matters for the AI race, was not enough.

Strategy change.
The Information echoes a statement from Meta saying that the company remains committed "to investing in different silicon options to meet our needs, which includes the advancement of our MTIA division," and urging us to stay tuned for news to be shared throughout this year. However, the same outlet notes that Meta has greatly lowered its expectations for its chips. The idea was to have two chips. On the one hand, Iris, a single-instruction training chip that is easy to design but from which it is difficult to extract all the juice in AI training tasks. On the other hand, Olympus, a chip that would be completed toward the end of this year and would be the central piece of Meta's training clusters. According to The Information, there were many internal doubts about Olympus's stability, its intricate design and its profitability, so it has been left in the drawer in favor of simpler chips.

The evidence. In the end, if you can't beat your "enemy", join him. The sources consulted by The Information point out that, in addition to other complications, the training software was not as stable as the alternatives offered by the likes of NVIDIA. All of this has ended up producing two multimillion-dollar agreements. In a span of just a few days, Meta signed agreements with both AMD and NVIDIA for both to supply it with chips to train its AI. It is a win-win for everyone: Meta receives what it needs, NVIDIA adds another client to a list it dominates, and AMD continues to make a name for itself in the sector thanks to agreements like this one, or the one it signed last year with OpenAI. In addition, Meta secures several sources so as not to depend on a single company. In fact, it is also estimated that it has signed an agreement to rent TPUs from Google.

The competition.
Meta's objective, therefore, is to diversify its portfolio of AI chip suppliers as much as possible while continuing to investigate its own chips, about which, supposedly, we will learn details later. It may continue developing Olympus or a variant, or decide on another approach, because what is clear is that it must develop something of its own. NVIDIA and AMD are suppliers, not competitors as such. The real competition is OpenAI, xAI and Google, and the last two have their factories at full capacity: Google with its TPUs, processors designed exclusively for AI, and xAI with its own chip plans, abandoned and picked up again more recently.

Objective: dethrone NVIDIA. And all of this occurs in a world in which everyone is 'friends', but enemies at the same time. As I said, NVIDIA is a hardware supplier, but it practically controls the AI computing market and is moving in both hardware and software. It is logical that other companies investigate alternatives to boost their own AI. Added to the list are Amazon, which is also building chips of its own, such as the Trainium3 UltraServer, and OpenAI, with its agreement with Broadcom to manufacture chips. It is, as I say, a curious scenario: everyone needs everyone else, and there is the "circular economy" of AI, but at the same time everyone wants to be independent. The problem is that NVIDIA has a huge head start here, with the technology, the contracts with memory companies... and the contacts with the firm that ends up manufacturing the best chips: TSMC.

He wanted to control his robot vacuum with a PS5 controller. He ended up having access to 6,700 devices around the world

You don't have to have a house full of devices to depend on the cloud. All it takes is a connected robot vacuum for some of its information to pass through external servers so we can manage it from anywhere. The model has become standard and, in principle, it works. But that normality breaks down when questions arise about who can see what. That is what an American technology publication reported regarding the DJI ROMO: a user claimed to have accessed data and activity from thousands of devices around the world before the issue was fixed.

Curiosity and risk. The story begins with something much more trivial than one might imagine. Sammy Azdoufal, an AI strategy manager at a vacation rental company, only wanted to control his own DJI ROMO with a PS5 controller "because it was fun," as he explained to The Verge. To do this, he developed a homemade application that began to communicate with DJI's servers. The unexpected part was that it was not just his vacuum cleaner that responded. Instead of a single device, thousands began to appear, spread across different countries, all recognizing him as if he were their owner.

What he could see and control. What came next is what really changes the tone of the story. During a live demonstration, Azdoufal showed how his tool detected devices in real time: in just nine minutes he had cataloged 6,700 robots in 24 countries and collected more than 100,000 messages sent by them. Each one reported information every few seconds through a protocol called MQTT, common in connected devices, indicating its serial number, which room it was cleaning, how far it had traveled or when it returned to the charging base. As Azdoufal himself explained, he did not need to "hack" the company's servers in the classic sense. What he did was analyze how his own ROMO communicated with DJI's infrastructure and extract the private token associated with his device, that is, the credential that allows it to authenticate to the system.
To decipher these protocols, he turned to the well-known AI tool Claude Code, which he used as support in the reverse-engineering process. The problem, according to his account, is that once a client was authenticated as valid, the servers did not properly limit which messages it could subscribe to.

The official version and the patches. The company maintains that it detected the vulnerability in late January through an internal review and began remediation immediately. According to its statement, it deployed a first patch on February 8 and a second update on February 10 to cover nodes that had not received the initial fix. DJI admits "a backend permission validation issue" related to MQTT communication between device and server, although it says unauthorized access was "extremely rare." It also highlights that the transmission was encrypted using TLS and that data from European devices is stored on AWS infrastructure located in the United States.

Questions on the table. If a user was able to detect that level of exposure almost by accident, one might wonder how these systems are audited internally and what controls are in place before a product hits the market. We are not talking about just any appliance, but about a device with sensors, a camera and permanent connectivity inside the home. Azdoufal himself even questioned the presence of a microphone in a vacuum cleaner. It is not a new debate: in recent years, other manufacturers have faced similar incidents with robots capable of transmitting video or storing images.

A change of scenery for DJI. After years dominating the air with drones and stabilization systems, the company decided to apply its engineering to domestic ground. The result was the DJI ROMO, a robot vacuum that combines optical and LiDAR sensors to generate precise maps and avoid obstacles, supported by planning algorithms and the DJI Home app for managing zones, modes and alerts.
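The class of flaw DJI describes — a broker that authenticates clients but does not check which topics they may subscribe to — can be illustrated with a toy authorization check. This is a generic sketch of per-device topic ACLs (the topic layout and function names are hypothetical, not DJI's actual scheme): in MQTT, a client allowed to subscribe to a wildcard filter like `devices/+/status` would receive every device's messages, so the broker must restrict subscriptions to the client's own subtree.

```python
# Toy server-side check: a client authenticated as one device should only be
# able to subscribe to topics under its own device ID. Topic layout is hypothetical.

def subscription_allowed(client_device_id: str, requested_topic: str) -> bool:
    """Allow subscriptions only under 'devices/<own-id>/...'.
    Any topic outside that subtree -- including wildcard filters such as
    'devices/+/status' or '#' that span other owners' devices -- is rejected."""
    own_prefix = f"devices/{client_device_id}/"
    return requested_topic.startswith(own_prefix)

# The vulnerable behavior: accepting any subscription from any authenticated client.
def subscription_allowed_vulnerable(client_device_id: str, requested_topic: str) -> bool:
    return True  # authenticated, therefore trusted -- the missing check

print(subscription_allowed("ROMO-123", "devices/ROMO-123/status"))      # True: own device
print(subscription_allowed("ROMO-123", "devices/+/status"))             # False: spans everyone
print(subscription_allowed_vulnerable("ROMO-123", "devices/+/status"))  # True: the flaw
```

In production this check lives in the broker's ACL layer rather than application code, but the principle is the same: authentication says who you are, authorization says what you may read.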
It is not a simple mechanical appliance, but a connected platform that depends on a continuous flow of data to function with that precision. And that is where security takes on a decisive role. Images | DJI

There was a time when Japan was the king of TVs. All its giants have ended up bowing to the evidence

Not so many years ago, talking about Japanese televisions was talking about the kings of the market, not so much in volume as in quality. Sony's Trinitrons were (and, for playing retro video games, still are) legendary, but there were also the technologies of Sharp and Toshiba, and Panasonic's plasmas. However, first South Korea and now China have run over the Japanese brands, and Panasonic is the latest "victim". And it may be for the best.

The Panasonic case. Bluntly: Panasonic, once on the podium of the great Japanese manufacturers, has just announced that the Chinese company Skyworth will from now on be in charge of producing and selling its televisions. At the presentation event for this year's catalog, representatives of the Japanese brand commented that the new partner "will lead sales, marketing and logistics while Panasonic provides expertise and quality assurance." Speaking to FlatpanelsHD, Panasonic said Skyworth will take care of everything, but the resulting product will still carry the "Panasonic" name.

A turn toward China. The company had for years been outsourcing the production of its mid-range and entry-level models, but now that loss of identity is complete. With the move, the firm hopes to once again become one of the largest players in both Europe and the United States, and the curious thing is that this announcement comes just a few weeks after Sony outsourced the production of its televisions to TCL. It is a symbolic turn, because the Japan that once led the technological conversation was gradually eclipsed by South Korea, Taiwan and, now, China. Both TCL and Skyworth are Chinese companies and, although TCL is much better known, Skyworth is not exactly small: headquartered in Shenzhen, it has intermittently squeezed into the conversation among the main manufacturers of Android TV televisions.

It makes... sense.
In statements to FlatpanelsHD, the two companies said they will jointly develop the high-end OLED TVs, and the move has a very clear reading: it is a win-win for both, but, as in the Sony-TCL case, one wins much more than the other. Chinese companies have invested very heavily in recent years in plants capable of producing enormous quantities of large panels. Televisions are cut from what is known as "mother glass", sheets from which, the larger the size, the more large-diagonal televisions can be produced. And if more televisions can be produced at a time, they can be sold at a lower price. TCL has state-of-the-art factories focused on that large-size production, which helps explain why it sells 65- and 75-inch models at ridiculous prices. With these partnerships, therefore, the Japanese hope that Chinese muscle will help them achieve greater market penetration. But, of course, it is undeniable that the names 'Sony Bravia' and 'Panasonic' are much more powerful than those of any Chinese brand, and now it is TCL and Skyworth that can exploit them in the market.

Tears in the rain. In the end, as they say, from those muds come these muds. Panasonic, once one of the spearheads of television technology thanks to plasma, had not made much of a splash for years in a conversation dominated by LG, Samsung and, by leaps and bounds, the Chinese. Along with Sony, it was the stronghold of a Japanese industry that had already watched giants like Sharp, Pioneer or Toshiba fall by the wayside, in some cases to be rescued by Chinese companies (Toshiba by Hisense) or Taiwanese ones (Sharp by Foxconn). As they say, 'mistakes were made': Panasonic clung for too many years to a plasma technology that was impressive but also very expensive to produce, a huge ship that could not correct course when better LCD and OLED panels began to appear.
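The mother-glass economics above can be made concrete with a little geometry. A Gen 10.5 sheet measures roughly 2,940 × 3,370 mm, and a 16:9 panel's width and height follow from its diagonal; the sketch below estimates how many panels fit per sheet under a simplified grid cut that ignores cutting margins:

```python
import math

# Simplified yield estimate: how many 16:9 panels of a given diagonal
# fit on a Gen 10.5 mother glass (~2940 x 3370 mm), grid-cut, no margins.
GLASS_W, GLASS_H = 2940, 3370  # mm

def panel_size_mm(diagonal_inches: float) -> tuple[float, float]:
    """Width and height of a 16:9 panel from its diagonal."""
    d = diagonal_inches * 25.4
    w = d * 16 / math.hypot(16, 9)
    h = d * 9 / math.hypot(16, 9)
    return w, h

def panels_per_sheet(diagonal_inches: float) -> int:
    """Grid-cut yield, trying both panel orientations and keeping the better one."""
    w, h = panel_size_mm(diagonal_inches)
    a = (GLASS_W // w) * (GLASS_H // h)
    b = (GLASS_W // h) * (GLASS_H // w)
    return int(max(a, b))

print(panels_per_sheet(65))  # 8 panels per sheet
print(panels_per_sheet(75))  # 6 panels per sheet
```

Eight 65-inch or six 75-inch panels per sheet is why the bigger the mother glass, the cheaper each large television gets: the fixed cost of processing one sheet is spread over more finished screens.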
As we say, we will have to wait to see what this translates into in terms of market share, but in Japan it is a blow. With the Sony-TCL joint venture alone, it is estimated that 50% of the Japanese market will be controlled by Chinese capital. The last source of pride they could hold on to was Panasonic.

Robotaxis don't need a driver, but Waymo has ended up paying delivery drivers to close doors left ajar

What until not long ago seemed the exclusive province of science fiction is beginning to appear on the streets: cars capable of moving from one point to another without a driver. And you don't need to buy one to experience it. In some cities around the world, it is enough to order a robotaxi from an app and watch the vehicle arrive to pick you up, identifying you, in certain models, with your initials on an LED screen mounted on the roof, as our colleague Javier Lacort confirmed in San Francisco almost two years ago.

Futuristic scene, present-day problems. In the midst of this transformation of transportation, which aims to offer more safety and comfort, its weak points are also beginning to emerge. We are not talking about the traffic jams caused by connectivity failures, nor about the cars that, for some reason, start honking their horns at four in the morning. The issue is even more basic: if a user closes a door incorrectly, the vehicle cannot continue operating.

The problem is not driving, it is being able to get going again. In the cases described by CNBC and TechCrunch, Waymo's vehicles are blocked if, at the end of a trip, a passenger leaves a door ajar. Waymo confirmed to both outlets that this detail prevents the car from resuming travel and completing new routes until someone closes it properly. It is a basic, almost domestic friction that turns a simple oversight into an operational problem, and it explains why the company has had to resort to human support to return its vehicles to service as quickly as possible.

Paying delivery drivers. The company is testing a system in Atlanta that alerts nearby drivers from delivery apps such as DoorDash when one of its vehicles has been left with a door open. The proposal is simple: approach, close it and allow the robotaxi to operate again. The outlets even cite the case of a driver who was offered $11.25 for that specific task. They also detail a similar job split between $6.25 for the trip and another $5 paid after verifying the closure.
It is not an isolated case. The Atlanta pilot is not the only example of this peculiar dependence on human help. Waymo has also turned to users of Honk, a roadside assistance platform, to resolve similar situations in other American cities. In that case, some collaborators received offers of up to $24 to close the door of a stopped robotaxi. More than a local anecdote, these examples reveal a clear operational pattern: when a vehicle is immobilized by a minor detail, the quickest solution is still to send a person.

Automatic doors, on the way. Today Waymo operates a fleet made up entirely of electric Jaguar I-PACE vehicles adapted for autonomous driving, which still depend on human intervention in situations like this. But the Google-owned company assures that this gap has an expiration date, although it has not specified when: it has announced that its future robotaxis will have self-closing doors. Meanwhile, the present of the autonomous car continues to show that double face: sophistication in driving, and dependence on humans for the simplest details.

Images | Xataka

In Xataka | When San Francisco suffered a blackout, its streets were plunged into chaos for one reason: stranded self-driving cars

Bithumb wanted to give its users a small reward. It ended up giving away $40 billion in bitcoins

One of South Korea's largest cryptocurrency exchanges wanted to reward its users with a symbolic promotion for their trading activity. However, a mistake has put the company on everyone's lips today, and not exactly for good reasons. For a few minutes, several hundred Bithumb customers saw their accounts filled with bitcoins worth several billion dollars. What should have been a small promotional prize turned into an error that, on screen, made billionaires of ordinary users.

A conversion error. Bithumb's original idea was to offer a reward of 2,000 won (approximately $1.37 at the exchange rate) to users who participated in a company promotional event; the equivalent of a welcome coupon for newcomers. The problem came when, instead of sending that small amount in won, the system ended up sending bitcoins to the new clients' accounts. According to the BBC, the failure occurred when an employee entered the code "BTC" in the payment field instead of "Korean won", so the platform paid the reward in cryptocurrency instead of local currency. That single misplaced field led the company to mistakenly transfer some 620,000 bitcoins, a figure that, at current prices, is around $44 billion.

A mistake that destabilized the market. Bithumb estimated that about 249 users of its platform received bitcoins by mistake, and that the failure affected about 695 clients operating on the platform. It is estimated that, on average, each recipient was credited with about 2,490 bitcoins, which represents a value of around 144 million euros. Seeing the new bitcoin balance in their accounts, several of these new "accidental millionaires" rushed to sell, generating an avalanche of orders that caused the price of bitcoin on Bithumb itself to fall about 10% in a matter of minutes. Bithumb hit the panic button.
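The headline figures reported around the incident can be cross-checked with a few lines of Python. All inputs come from the article itself; the per-bitcoin price is only implied by the totals, not stated in the piece.

```python
# Sanity check of the reported figures (all inputs taken from the article).
total_btc = 620_000        # bitcoins mistakenly transferred
recipients = 249           # users who received them
total_value_usd = 44e9     # approximate total value cited

avg_btc_per_user = total_btc / recipients
implied_btc_price = total_value_usd / total_btc

print(round(avg_btc_per_user))   # 2490 -- matches the ~2,490 BTC per user in the text
print(round(implied_btc_price))  # 70968 -- price per bitcoin implied by the totals
```

The average checks out against the article's own numbers; the implied price of roughly $71,000 per bitcoin is simply what the stated totals assume.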
When the company realized the error, it began applying restrictions to the affected clients, temporarily limiting trades and withdrawals to stop the leak of funds. In its assessment of the incident, Bithumb says it managed to recover approximately 99.7% of the 620,000 bitcoins sent out by mistake, which would leave about 125 bitcoins still pending recovery. The company also points out that what has been recovered includes some 1,663 bitcoins that users managed to sell before the platform's "panic button" was pressed, activating transaction blocks. Lee Jae-won, the company's executive director, said that the company will take the incident as a lesson and will prioritize "customer trust and peace of mind" over external growth.

"Paper" bitcoin. The case has reopened the debate about so-called "paper bitcoin": transactions that exist within exchanges' internal systems but are not always backed by the real assets they represent. The sum of bitcoins that suddenly appeared in the accounts far exceeds the $5.3 billion in bitcoin assets that Bithumb claims to hold in custody, making it clear to what extent much of that "wealth" existed only on paper in its internal books.

It's not the first time this has happened. It is not the first time that a banking or financial entity has made its users rich in a "magical" way. The Financial Times reported a few days ago how Citibank made one of its clients a trillionaire by transferring $81 trillion when it intended to send a payment of $280. As with the South Korean bitcoin exchange, the bank noticed the error and fixed it (unfortunately for the client) within 90 minutes.
However, the simple fact that a human error when entering a figure or a currency code can shake the entire bitcoin market has set off alarms at the South Korean Financial Supervisory Service, which has announced reviews and does not rule out opening formal investigations if it detects serious failures in internal controls or signs of illegal activity.

In Xataka | Cryptocurrencies were supposed to become "independent" from the power of states. The US just killed that idea with a stroke of the pen

Image | Unsplash (Michael Fortsch)

The most brutal rains in the history of Andalusia have already ended. Now the real problems begin

Storm Leonardo is slowly beginning to fade from the maps, leaving in its wake mainly alerts for strong gusts of wind in certain parts of Andalusia. The problem is that its footprint on the ground is only beginning to show its true dimension: the main danger is that even as rainfall decreases, the water keeps rising in the rivers. And this produces the feared floods, which have already forced numerous evacuations.

Extreme saturation. To understand why the authorities and AEMET are maintaining emergency level 2 and red warnings despite lulls in the rainfall, we have to look under our feet. Under normal conditions, the soil works like a sponge capable of retaining large volumes of water. After weeks of constant rain, however, Andalusia has reached its saturation point. The ground cannot absorb any more water, which raises the runoff coefficient across the territory. This means that each new liter that falls, however small, will barely filter into the ground; instead it will run over the surface, turning slopes and hillsides into giant slides toward the rivers.

Rising rivers. This is why 14 rivers are under red warning today and another 31 under orange. Rivers such as the Guadalete, the Genil, the Guadiaro and the Guadalhorce are not responding only to today's rain, but to the basins' inability to drain what has accumulated over the last 48 hours. One example is Huétor Tájar, in Granada, where the Genil overflowed and turned the entire town into a large lake. This is the main risk we face even as rainfall begins to lose intensity.

The reservoirs. The other major front of this crisis is hydraulic engineering. Reservoirs act as buffers during floods, holding back water to prevent it from devastating the towns downstream. But Leonardo has managed to fill these reservoirs to their maximum limits.
This has forced managers to begin controlled releases of ever-larger volumes of water to avoid breaches or uncontrolled overtopping of the dams. The problem is that doing so injects more flow into rivers that are already at the limit of their capacity, keeping towns like Ubrique and the lower reaches of the Guadalquivir in suspense.

Sierra Nevada. The severity in the Genil basin is not due solely to precipitation, but also to thermodynamics. Leonardo is not a cold storm of polar origin, but an Atlantic storm loaded with moisture, and it is causing the snow accumulated in previous weeks to melt at high speed. The result is clear: greater flow in the rivers that drain the Sierra, adding to all the factors mentioned above.

Landslides. In the coming hours, in addition to rising rivers, we must also bear in mind the risk of landslides. Water saturation increases the weight of the soil and reduces its internal friction, which translates into a greater risk of landslides on roads and slopes, especially in mountain areas of Cádiz or the Axarquía in Málaga.

More rain on Saturday. With the soil overwhelmed, the last thing you want is more rain. But the reality is that this very Saturday a new storm arrives, and it has already triggered an orange alert in Grazalema, an area badly punished by Leonardo. Accumulations of up to 80 liters per square meter are again expected, which could further aggravate the situation.

Images | Ted Balmer

In Xataka | We have always believed that London is very rainy and that Barcelona is not. The only problem is that it's a lie
