Two decades of success with the most stable audiences on television

The same week in April 2006 that La Sexta began broadcasting, ‘El Intermedio’ started its career as one of the channel’s flagship programs and Antena 3 launched the current stage of ‘La ruleta de la suerte’. Two decades later, both programs celebrate their anniversary (one with special panels, the other with Pedro Sánchez wearing borrowed suspenders) and stand out as true anomalies on a grid in eternal motion. The wheel’s luck. ‘La ruleta de la suerte’, a word-guessing contest broadcast at two in the afternoon, manages to nearly double the share of its own network. In the 2025-2026 season, the program presented by Jorge Fernández registers a 21.6% screen share, with an average of 1,564,000 viewers and more than 3.1 million unique viewers. Antena 3, as a network, has averaged a 12.4% share. Its main rivals in that slot trail far behind: ‘Mañaneros 360’ hovers around 12%, and ‘El Precio Justo’ timidly approaches 9%. And it is no one-off: the format has spent four consecutive years above 20% and has held uninterrupted monthly leadership since May 2020. Humble beginnings. When the current stage of the contest started in 2006, it returned to the small screen with a 26.9% share and 1,318,000 viewers: promising figures that deteriorated in the following years. However, its viewer numbers skyrocketed when it began to occupy the 2:00 p.m. to 3:00 p.m. slot that ‘The Simpsons’ had occupied until then, where television consumption is significantly higher. Curiously, ‘La ruleta’ has seen its viewer base shrink since the 2020/2021 season, yet its share has gone up. The explanation is that linear television as a whole is losing audience, but ‘La ruleta’ is losing it more slowly than its rivals. Not just weekdays. The robustness of the format also extends to weekends. Since 2020, Antena 3 has been broadcasting reruns of the contest on Saturdays and Sundays, and the program leads in that slot too: in the current season it reaches a 17.8% share and 1,258,000 viewers.
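The fewer-viewers-but-higher-share dynamic described above is simple arithmetic: share is the program's audience divided by everyone watching TV in that slot. A minimal sketch with invented, purely illustrative numbers (not the actual audience data):

```python
def share(program_viewers: int, total_tv_viewers: int) -> float:
    """Screen share: percentage of all active TV viewers watching the program."""
    return 100 * program_viewers / total_tv_viewers

# Hypothetical earlier slot: 1.9M watching the show out of 10M watching TV.
before = share(1_900_000, 10_000_000)   # 19.0%

# Linear TV shrinks 30%, the program only 18%: fewer viewers, higher share.
after = share(1_560_000, 7_000_000)

assert after > before
```

If the program loses audience more slowly than linear TV overall, its share rises even as its absolute viewer count falls, which is exactly the curve the article describes.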
All this for a format that, born in the United States, has had some 60 international versions since 1975, arrived in Spain in 1990 with the birth of Antena 3, lived a brief stint on Telecinco between 1993 and 1997 and then went on hiatus until Jorge Fernández took up the baton in its current stage. The ‘El Intermedio’ case. The satirical news program draws a different, but equally striking, curve. In 2026 it averages about 848,000 viewers and between 6% and 7% share (typical figures in La Sexta’s modest access prime time). All this with occasional spikes such as the 20th-anniversary special broadcast last Thursday, April 16, with which it scored its best share since February 2022 (10.2%) in an unusually long episode of more than two hours, one that brought together active politicians (several ministers, Gabriel Rufián and Pedro Sánchez himself) and musicians such as Kiko Veneno, Ana Belén and Víctor Manuel. A complicated hour. ‘El Intermedio’, unlike ‘La ruleta’, has never moved from its original slot, perhaps the most hostile strip of the entire schedule: the moment when the viewer chooses where to spend the night. It faces ‘El Hormiguero’ on Antena 3 and ‘La Revuelta’ on La 1 as its latest direct competitors, but Wyoming has been holding out for two decades. According to an analysis by Barlovento Comunicación, in 2012 La Sexta designed a strategic positioning of “vertical programming” focused on political and social news. The programs created then (‘El Intermedio’, ‘Al Rojo Vivo’, ‘Más Vale Tarde’ or ‘Salvados’) had the objective of providing street-level coverage of current events. The Atresmedia umbrella. The two programs share more than their year of birth: both belong to Atresmedia, and both operate in time slots where at least part of the competition has found no alternative.
In the case of ‘El Intermedio’, Wyoming’s program has seen over these twenty years how ‘El Hormiguero’ went from competitor to ally, how ‘First Dates’ on Cuatro took shape as a more stable competitor during the last decade and, finally, how ‘La Revuelta’ appeared and notably politicized the strip. The numbers. Atresmedia closed 2025 with revenue of 1,002 million euros and maintained its hegemony on television for the fourth consecutive year, with a screen share of 26.1% and a historic advantage of 1.7 points over Mediaset. Antena 3 was once again the most watched channel of the year. These two anniversaries indicate that the strategy of cheap programs with rigid structures and loyal audiences continues to be profitable. ‘La ruleta’ tapes five days a week with a small team, and ‘El Intermedio’ works with the daily news as raw material, which reduces content-development costs. And two decades of holding on. In Xataka | The “audience war” with ‘La Revuelta’ has been very good for ‘El Hormiguero’. Eight million euros of good

The success of Starship V3 accelerates the race to the Moon

SpaceX, Elon Musk’s space company, is almost ready to launch its next-generation Starship in May. But before this launch it needs to carry out some static tests, such as firing the engines. The first test of this type was carried out just a month ago, with a small incident at the end, but the second one went perfectly, so the launch plans are moving forward. A complete ignition test. On April 14, SpaceX performed the static fire of its upper stage’s engines. Although the first stage’s ignition test had to end early due to a failure in the ground equipment, this time all the engines ignited, demonstrating that this enhanced version of Starship is ready for its first flight. Why is it necessary? Rocket engines are, logically, key and very sensitive parts of a launcher. They are among the components that most often fail in launches, along with fuel-loading systems. Hence the importance of testing before liftoff. In static fire tests, all engines are started to check that there is no anomaly. In the case of Starship version 3, none were detected. Everything is on track. A beefed-up version of the previous one. Starship version 3 measures 124.4 meters, 1.2 meters longer than its predecessor. It is much more powerful, thanks to its Raptor V3 engines. For this reason, SpaceX has announced that it will be capable of carrying payloads of more than 100 tons to low Earth orbit. Version 2 could only carry 35 tons. Ready for the Moon? After the success of Artemis II, NASA already has its sights set on Artemis III, which will be the final test for landing a new batch of humans on the Moon. To do this, the American agency needs a rocket to match. Never better said. For now, two private companies are working on it: Blue Origin, with Blue Moon, and SpaceX, with Starship.
Although at first everything pointed to SpaceX being the company to take the next humans to the Moon, a series of delays has led to thinking that Blue Moon could overtake it on the right. The fact that Starship version 3 has advanced this way is therefore good news for Elon Musk’s company. In May we will know whether it really lives up to expectations. Images | SpaceX In Xataka | In 2018, Elon Musk put his own car into orbit. Eight years later it is still circling the Earth

Apple is dying of success with the MacBook Neo. So much so that its manufacturing is in danger

Apple has a problem with the MacBook Neo: it is selling too much. The first Mac with an iPhone processor is proving an overwhelming success, and it hits the keys that mobilize the average user: it is cheap, it covers practically every use and… it is a Mac. The problem? That this laptop carries the Apple A18 Pro is no coincidence, and that it is selling so much is a problem for the supply chain. Why the A18 Pro. Apple is not manufacturing new A18 Pro chips for the MacBook Neo; it is recycling processors from the original production run. If we look at its technical details, the MacBook Neo incorporates a five-core GPU, not a six-core one. When processors are manufactured in batches, not all of them come out perfect. Some may have specific defects in one of the CPU or GPU cores. Instead of discarding them, Apple deactivates the defective core and sells a trimmed version of the chip. This allowed Apple to create a laptop whose processor cost practically nothing, a pillar of the product’s profitability. The problem. Demand for the MacBook Neo is exceeding Apple’s expectations, and the stock of A18 Pro chips is starting to run out. According to Tim Culpan, production of this device is divided equally between Quanta and Foxconn, with an initial plan to produce about six million units. As of today, the suppliers are not sure they can produce more MacBook Neos with the remaining stock of A18 Pro processors. The dilemma. The Apple A18 Pro is manufactured on TSMC’s N3E process, a three-nanometer technology whose production capacity is practically exhausted. Among Apple’s options would be paying a premium to order urgent batches from TSMC, something that would allow production to resume but would kill the Neo’s key advantage: being an economical product with a profit margin. The second plan involves reallocating wafers that Apple uses for other devices to Neo production, another solution that does not seem ideal.
If we add the current storage and RAM costs on top of this, Neo production becomes complicated. No solution in sight. If demand for the MacBook Neo remains above expectations, Apple will have a decision to make. Raise Neo prices? Eliminate the budget 256 GB option? Offer new colors to revitalize the product? Be that as it may, the Neo makes one thing clear: the strategy of selling MacBooks at the lowest possible price works. And even more so now that we are at the point where a mobile processor is, literally, a PC processor. In Xataka | The MacBook Neo is the biggest existential threat to the Windows laptop market. And the manufacturers have no answer
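The binning logic described above (fuse off a defective core, sell the die as a lower-tier part) is how chipmakers in general salvage imperfect silicon. A minimal sketch of that idea; the tier names and thresholds are hypothetical, not Apple's actual process:

```python
def bin_die(working_gpu_cores: int, full_spec: int = 6) -> str:
    """Classify a die by how many of its GPU cores passed testing.

    A fully working die ships at full spec; a die with one dead core
    has that core fused off and ships as a trimmed part; anything
    worse is scrapped.
    """
    if working_gpu_cores >= full_spec:
        return "6-core GPU (full-spec part)"
    if working_gpu_cores == full_spec - 1:
        return "5-core GPU (trimmed part)"
    return "scrap"

assert bin_die(6) == "6-core GPU (full-spec part)"
assert bin_die(5) == "5-core GPU (trimmed part)"
assert bin_die(3) == "scrap"
```

The economic point of the article follows directly: a die that would otherwise be scrap becomes a sellable processor at near-zero marginal cost, which only works while the stock of such dies lasts.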

How a relay in Gipuzkoa saved Europe while the Spanish system died of success

Next April 28 will mark exactly one year since the biggest collapse in our recent history: the great blackout that turned the Iberian Peninsula dark and left 55 million people in Spain and Portugal without electricity for 12 hours. Almost twelve months later, we finally have the official autopsy. The final report. The European Network of Transmission System Operators for Electricity (ENTSO-E) has made public the long-awaited final report. Across 472 pages, the panel of experts dissects an unprecedented event down to the millisecond. The document, which warns from its preamble that it does not seek to assign legal responsibilities but rather to learn from mistakes, reveals a chilling diagnosis: the blackout was the perfect storm, caused by the rigidity of new technologies, the ineffectiveness of manual procedures in a crisis measured in milliseconds, and an infrastructure incapable of keeping pace with the energy transition. The anatomy of collapse. To understand the verdict, you have to look south. According to the European report, at 12:03 p.m. on April 28 a local oscillation of 0.63 Hz was recorded, caused by instability in the electronic converters of renewable plants. Minutes later, at 12:19, the oscillation was amplified, affecting the entire continent. The technical investigation points to what could be called “operational blindness.” The report notes that much of the renewable generation in Spain operated under a “fixed power factor.” That is, the solar and wind plants were blind to the needs of the grid; they could not absorb reactive power dynamically. When the voltage rose, these plants simply disconnected for safety. When they stopped generating electricity, their reactive absorption also stopped abruptly, causing a rebound effect that sent the voltage soaring out of control. Furthermore, while the crisis demanded millisecond reflexes, the control of the reactors (the machines that absorb excess voltage) was carried out manually.
Operators needed vital minutes to assess the situation. The blackout that could have been avoided. The European report not only acts as a notary of what failed, but also puts on the table what should have happened. Diving into the technical simulations of the ENTSO-E document, sector experts such as Joaquín Coronado have drawn a devastating conclusion: the collapse of the Spanish electrical system was not inevitable, but the result of ineffective voltage-control management by the System Operator (Red Eléctrica). The European analysis is blunt. In its sensitivity simulation (named Analysis 7), the report concludes that if the connection of the reactors, such as the 400 kV Caparacena shunt reactor, had been automated instead of depending on the slow human factor, the voltage rise would have been limited and the cascade effect avoided. In addition, ENTSO-E simulates alternative scenarios showing that the electrical zero could have been stopped cold with measures that should already have been operational: an increase in reactive power margins, the requirement that conventional generators absorb more voltage, or the use of the eight new synchronous condensers already included in the 2021-2026 planning. Without this automated reactive power reserve or dynamic support, the grid was orphaned at the worst possible moment. The rescue from Gipuzkoa. The continental disaster was avoided thanks to Gipuzkoa. At 12:33, the high-voltage substation in the Osinaga neighborhood of Hernani detected that the Spanish chaos threatened to drag down all of Europe. In milliseconds, its out-of-step protection relay severed the connection with the French Argia substation. This “trip” left Spain in the dark, but it shielded the continental grid. Barely ten minutes later, Hernani became the rescue route, allowing France to inject energy to resurrect the peninsular system from the top down. The structural problem of the market.
The finger-pointing at clean energy in the moments before the blackout has raised eyebrows, but the sector defends itself by pointing directly at regulatory inaction. In an interview with Xataka, Héctor de Lama, technical director of UNEF (the photovoltaic industry association), is blunt: “A plant, no matter how large, cannot cause a blackout. Many other factors must come together.” De Lama explains that the inverters currently installed in Spain meet very demanding European technical requirements, but places the structural problem at the door of the Ministry (MITECO) and the CNMC for not financially incentivizing renewables to provide security services to the grid. “The current remuneration of €1/MVArh is not enough to encourage renewables to provide this service (voltage control) when we are paying combined-cycle plants between 100 and 200 times more for the same thing,” De Lama details. The UNEF expert also recalls a historical administrative negligence that took its toll on April 28: while Portugal approved regulations in 2019 to take advantage of voltage control from its renewables, Spain took years to implement vital mechanisms such as Operating Procedure 7.4. We were playing by the rules of the past in the face of a crisis of the future. “A gold mine without a road.” This diagnosis fits with the voices of the industry. During the VI Economic Forum of elDiario.es, Patxi Calleja, director of regulation at Iberdrola Spain, defined the national system as “a gold mine without a road.” We have enormous cheap generation capacity, but the electricity grid is the great limitation due to lack of investment compared to our European neighbors. And this green shield also has cracks. As we already analyzed in Xataka, very high renewable penetration shields us from geopolitical crises (such as the increase in gas prices due to the war in Iran) during daylight hours, sending prices plummeting to zero.
However, as soon as the sun goes down, the lack of mass battery storage sends us back to square one, leaving us at the mercy of combined cycles and fossil-fuel volatility. A war without quarter. While the technicians analyze the ENTSO-E simulations pointing to operational failures, a fierce battle is being waged in the offices. The president of Redeia (parent company of Red Eléctrica), Beatriz Corredor, has used the Brussels report in her appearances before the Senate to entrench herself … Read more
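The failure chain the report describes (voltage rises, fixed-power-factor plants trip offline, their reactive absorption disappears, voltage rises further) can be illustrated with a toy loop. All the numbers and the linear voltage model below are invented for illustration only; a real grid study uses full power-flow equations:

```python
# Toy cascade: eight plants, each absorbing 50 Mvar of reactive power,
# with staggered overvoltage trip limits. Not real grid data.
plants = [{"absorb_mvar": 50, "trip_kv": 416 + 3 * i, "online": True}
          for i in range(8)]

def grid_voltage(base_kv: float, kv_per_mvar: float) -> float:
    absorbed = sum(p["absorb_mvar"] for p in plants if p["online"])
    # Invented linear model: the less reactive power absorbed, the higher the voltage.
    return base_kv + kv_per_mvar * (400 - absorbed)

base, sensitivity = 417.0, 0.1
for _ in range(10):
    v = grid_voltage(base, sensitivity)
    tripped = False
    for p in plants:
        if p["online"] and v > p["trip_kv"]:
            p["online"] = False   # plant disconnects for safety...
            tripped = True        # ...and its reactive absorption vanishes
    if not tripped:
        break

# Each trip removes absorption, pushing voltage higher and tripping the next plant.
print(f"plants online: {sum(p['online'] for p in plants)}, "
      f"voltage: {grid_voltage(base, sensitivity):.1f} kV")
```

In this toy run every plant ends up offline: one disconnection raises the voltage enough to trip the next, which is the rebound effect the report blames on fixed-power-factor operation without automated reactive support.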

must find a way not to die of success

The latest-generation GPUs for artificial intelligence (AI) being designed by NVIDIA, AMD or Huawei, among other companies, are a technological prodigy. However, their performance is deeply conditioned by the performance of the memory chips they coexist with. And the most advanced GPUs are so fast that they are often forced to wait until the memory hands them the data they need to continue computing. HBM4e (High Bandwidth Memory) chips seek to end this bottleneck in AI hardware once and for all. The three largest designers of this type of integrated circuit (Samsung, SK Hynix and Micron Technology) are working on their HBM4e solutions, and the two South Korean companies will presumably deliver the first samples to their customers during the second half of 2026. The American company Micron will arrive a little later, in 2027. SK Hynix currently leads this market with a share close to 70%, leaving the remaining 30% split between Samsung and Micron Technology. However, the future of its HBM4e memory does not rest in its hands alone. To sustain its current market share, SK Hynix must produce its future HBM4e memory on a cutting-edge lithography node, so, according to DigiTimes Asia, it has decided to play it safe: it is evaluating the possibility of having TSMC manufacture the core of these memories on its 3 nm node. The SK Hynix–TSMC alliance is a seismic move in the AI industry. Traditionally, memory chip designers have also been responsible for manufacturing their own integrated circuits. However, SK Hynix has three good reasons to leave the production of its HBM4e memory in the hands of TSMC. The first is that the Taiwanese company manufactures the GPUs designed by SK Hynix’s main clients, so if it is also responsible for producing the memory, the final assembly of these two components using CoWoS (Chip-on-Wafer-on-Substrate) advanced packaging is much simpler.
Additionally, HBM4e memory must be produced using extremely small and fast transistors, and SK Hynix’s management knows that its current integration technologies are not as advanced as TSMC’s most sophisticated lithography. Finally, HBM4e memories will not only store information; they will also be able to carry out basic operations on the data before delivering it to the GPU, in order to optimize hardware performance for AI. In a sense these memories will be more like processors than ever, and TSMC has much more experience than SK Hynix in manufacturing such chips. Be that as it may, this alliance poses a problem: TSMC’s N3 node is absolutely saturated. NVIDIA, Apple and other clients of the company monopolize it, so TSMC is having very serious difficulties meeting demand. In fact, it has faced this problem since it started manufacturing 3 nm chips. Its second generation of this integration technology, known as N3E, refined N3B lithography enough to make its per-wafer yield significantly higher. N3E eliminates some steps of the extreme ultraviolet lithography transfer process and reduces transistor density in order to minimize manufacturing costs and improve per-wafer yield. The third generation of TSMC’s 3 nm lithography is called N3P. It increases transistor density by 4% and speed by 5% while reducing consumption by between 5 and 10% at the same clock frequency. Not bad at all. Additionally, N3P lithography is fully compatible with N3E design rules, so NVIDIA, Apple and TSMC’s other customers can move their designs from N3E to N3P with virtually no changes. However, despite the improvements TSMC has made to its 3 nm manufacturing nodes, there are not enough wafers for everyone. It is currently unclear how the Taiwanese company is going to handle SK Hynix’s arrival on this node.
Presumably it will have to maximize yield per wafer and increase its production capacity, but doing so is by no means a piece of cake. This is undoubtedly the biggest challenge TSMC faces, because it cannot leave its best customers hanging. Image | Generated by Xataka with Gemini More information | DigiTimes Asia | SemiAnalysis In Xataka | The Government of Taiwan warns TSMC in the midst of its international expansion: its best technology will stay on the island

Science had been looking for an alternative to laboratory mice for years without success. Until it found the moths

In the world of science, the mouse has for decades been the undisputed king of the laboratory. However, it is an expensive, slow and, above all, ethically complex reign. That is why alternatives have been sought for years, and the answer may lie not in a silicon chip but in an insect you have probably seen eating the wax of a beehive. The advance. This is what researchers at the University of Exeter have arrived at, achieving a milestone that promises to change the rules of the game in the fight against superbacteria: they have genetically “hacked” wax moth larvae to function as real-time biological indicators. The most impressive thing is that they even provide a very visual readout: they glow when infected, and the glow fades when the medicine is working correctly. The biological traffic light. The study, published this week in Nature, details how the research team achieved what seemed impossible: applying advanced genetic-editing tools to these moths with unprecedented precision. And this is very important, since using insects to model human diseases had limitations, but this team has combined two key techniques. The techniques. The first is the PiggyBac system, used to insert genes that produce fluorescent proteins into these moths, basically turning the larvae into biological “neon lights.” In this way, if bacteria or fungi are injected, the fluorescence makes it possible to monitor the infection in vivo under the microscope. In addition, the famous CRISPR-Cas9 technique was also used to deactivate specific genes in the insect’s body. This is tremendously positive, as it allows scientists to manipulate the larva’s immune system to see how it reacts to different pathogens, mimicking complex human conditions. The key data. The bottom line is that the modified larvae let us see in real time whether an antibiotic is working.
The indicator is fluorescence: if it decreases, the bacteria are dying from the antibiotic and the larva is surviving. All this in a visual, fast and cheap way. Why the moth. It may sound strange to compare a moth with a mammal such as the mouse, which is far more like us, but Galleria mellonella has an ace up its sleeve: its body temperature. Unlike the fruit fly, these larvae can be bred and survive comfortably at 37°C, the average human body temperature, which is crucial because many human pathogens only activate their virulence genes at that temperature. Furthermore, their innate immune system is surprisingly similar to that of mammals in the structure and function of phagocytes, the cells that literally ‘eat’ pathogens entering the body. And with this animal model, the use of 10,000 mice per year in the United Kingdom alone can be avoided. Against the clock of resistance. The context of this advance is not trivial, since we are in a race against bacterial resistance to our antibiotics. Right now we need to test thousands of new compounds fast, and doing it in mice is a brutal bottleneck, both for the time it takes and for the ethical questions it raises. These transgenic larvae, on the other hand, allow massive screening. Instead of waiting weeks to see results in mice, scientists can test hundreds of compounds in larvae and get immediate visual readouts on toxicity and efficacy. Images | Wikipedia, Kalyan Sak In Xataka | Researchers removed Instagram and TikTok from 300 young people to see if their anxiety decreased. The results speak for themselves

We will have to wait to test the future with the Meta Ray-Ban Display outside the US: they are victims of their own success

The Meta Ray-Ban Display are not perfect glasses, nor do they pretend to be, but it’s hard not to feel that there is something different here. The idea of having a color screen integrated into the lens, with speakers, microphones, cameras and even artificial intelligence, still sounds more like a prototype than an everyday product. And yet, it is happening. Compared with other attempts to “kill” the smartphone that fell by the wayside, such as the AI Pin, these glasses aim for something more subtle. Screen on the lens, bracelet on the wrist. Beyond the concept, the Meta Ray-Ban Display rests on an unusual combination. The information appears on a small color screen located outside the axis of vision, designed for brief, non-continuous consultations. Control is not done by touching the glasses but through the Meta Neural Band, a bracelet that interprets muscle signals in the wrist area to execute minimal gestures. Meta presents this system as a more natural way to interact, reducing the need for buttons, touch surfaces or visible commands. Overwhelmed demand, expansion on pause. As the company explained this Tuesday, the volume of interest has far exceeded the available inventory, to the point of generating waiting lists that extend well into 2026. Given this scenario, Meta has chosen to freeze the international rollout it had planned in the short term and focus all its efforts on fulfilling orders already placed in the United States, while reviewing its availability strategy. This stoppage directly affects the plans communicated months ago. The company had indicated that the Ray-Ban Display would arrive in early 2026 in markets such as the United Kingdom, France, Italy and Canada, a first wave outside the United States that is now on hold. Meta does not speak of cancellation and sets no new dates, a prudent position that confirms the immediate stop but does not clarify how long it will last.
Does the queue get longer here too? Although Spain was not in that first group of countries, the slowdown has obvious consequences on this side of the map. Launches of this type usually expand in waves, starting with the United States and continuing, in some cases, through key markets. In this case, Meta never confirmed plans for Spain, neither for this first generation nor for possible later ones. The only thing that can be stated is that, if these glasses ever reach this market, the international delay makes it reasonable to expect an even longer wait. In Xataka | Meta is so serious about smart glasses that its catalog is already a mess: this is how the new models differentiate themselves. News for the Meta Ray-Ban Display. The halt in expansion has not stopped Meta from continuing to show progress. During CES, the company presented new functions designed to expand the uses of the glasses, such as a teleprompter mode for reading prepared texts or the possibility of writing messages by drawing letters with your finger on any surface, which are then transcribed into digital format. These are improvements that reinforce the idea of a product in continuous evolution, even while its availability remains limited by supply and inventory. Images | Meta In Xataka | Two weeks with the Oakley Meta.
Technically impressive, but in no man’s land. – The news We will have to wait to test the future with the Meta Ray-Ban Display outside the US: they are victims of their own success was originally published in Xataka by Javier Marquez.

Senna has given us back the passion for a Formula 1 that no longer exists. And its sound is key to understanding its success

March 1, 1981. Brands Hatch, United Kingdom. He had fought for two karting world championships but was still a complete unknown to the general public. Not even in England, where the passion for motorsport is several steps ahead of other European countries, were people aware of what they were seeing. A Brazilian with curly hair. The face of a child on the body of a 21-year-old. The arrogant look of someone who knows he is superior. And he was superior. That day he finished fifth at the controls of his Van Diemen. Two weeks were enough for him to take his first victory. With the circuit flooded, Ayrton Senna da Silva asked his team to put as much pressure as possible in his tires. They say that no one on the team believed in that decision but, as he was a driver who paid to have a guaranteed seat, the mechanics followed orders. The rest is history. The Brazilian driver began to string together victories. Six races held that year in Formula Ford 1600, with four victories. Twelve victories out of the 19 rounds he started. At the end of that same year, Ayrton Senna fulfilled his family commitment and his promise to Lilian de Vasconcelos Souza, then girlfriend and later briefly wife of the man considered the most talented Formula 1 driver in history. Senna returned to his country to run the family business. But he had already tasted what it was like to win. He had already experienced what it was like to be the best. And he came back to win it all. They exist, they are somewhere. More than 40 years after that Brands Hatch race, Netflix released Senna. “While we were still searching, we recorded a Formula Ford in Sweden, an FF 1600.” The speaker is Gabriel Gutiérrez, sound designer of the six-episode series in which the driver’s life is recreated using, among other tools, Dolby Atmos. Senna deals with the human side of the driver, his private life and his path to becoming a triple world champion.
But if something attracts an enthusiast, it is the montage of the images, the recreations aboard legendary single-seaters. Recreations that would be nothing without their sound. “I received a call from a post-production supervisor from Brazil, Gabriel Queiroz, who told me about a new project by Vicente Amorim, with whom I had already worked on Holy. From the beginning, we started looking for cars worldwide and for ways to get models from that era out on track to record them,” explains Gutiérrez about how Senna was built. “The filming was going to be done with replicas of the cars, custom-built models, fantastic, with enormous precision, but their engines were not Formula 1 racing engines,” Gutiérrez clarifies. Ayrton Senna in the Formula Ford 1600 in 1981. And there the challenge begins: to record the most iconic models driven by Ayrton Senna, and those he raced against, throughout the 1980s and early 1990s. “Many people told us that we were crazy, that we would never achieve it, that those cars had been dismantled and no longer existed.” But boy, do they exist. Anyone who has ever gone to see a Formula 1 race does not forget one thing: the sound. The current V6 hybrids have nothing to do with the brutal howl of the V10s of the late 90s and early 2000s that Senna himself would never hear. What he did have in his hands were cars from a time that will not return. Between his Formula 1 debut in 1984 and the fateful May 1, 1994, when he lost his life at the Tamburello curve of the Imola circuit (San Marino), the turbo V8 and the naturally aspirated V10 and V12 paraded through Formula 1, the latter with a brutal sound, hoarser than the V10s that returned from 1995 onwards. Pure sounds, without a trace of electrification, that danced inside the cockpit to the metallic tapping of the gear lever.
From stomping on the clutch to downshift, to playing with the accelerator to synchronize the revolutions of an engine spinning above 10,000, 11,000, 12,000 rpm. The engine backfiring before the first chicane at Monza, where the Ferraris of Berger and Alboreto watched in shock as Ayrton Senna retired after colliding with Jean-Louis Schlesser, handing them the only victory anyone would scratch from the McLarens in all of 1988. The hit of the accelerator at the start and the howl with each gear change before reaching the Parabolica and flying down the finish straight. The no less powerful cry of the tifosi in the stands on seeing Ferrari return to the top of the podium at Monza when, just three laps earlier, it had seemed impossible.

They were years of pure driving, of the senses. Sight, smell, touch… and hearing. For the protagonists and for those who admired them. For those who saw a Brazilian debutant weaving between the guardrails in Monaco in 1984, jeopardizing the victory of an already renowned Alain Prost, who managed to get the race stopped before its end, with half points awarded, in a decision that would end up costing him the World Championship at the end of the year in favor of Niki Lauda.

Image | Ayrton Senna aboard the Lotus 97T

"We were able to record Ayrton Senna's original Toleman from 1984 and the original Lotus, the 97T model, at the Lotus Classic Track in Oxford, which was a fantastic recording. The Toleman was positioned as the new leading car for us, the favorite," explains Gutiérrez. By then, they had already obtained a good handful of the cars that marked an era. How? By feeling their way through the fog. Senna's sound designer explains that his first idea was to talk to Frank Kruse, who held that same position on Rush, Ron Howard's film about the duel between Niki Lauda and James Hunt in the 1976 World Championship.

We have searched for dark matter with the most sensitive detector in history and we have found nothing. And that is a success

The search for dark matter increasingly resembles a game of hide-and-seek in which, as we improve our vision, the target seems to become ever more invisible. The latest attempt to find it involved burying a detector 1,500 meters underground; in the end it came up empty-handed, although it did allow us to find things we were not looking for.

Dark matter. It is, without a doubt, one of the great mysteries of physics. While many researchers hold that this matter surrounds us and is the main component of the universe, others believe we got it wrong and that it does not exist, although evidence is gradually emerging that it does, which would let our own theories fit together. The whole mess comes down to the fact that we have no way to detect this matter directly. We know it is there, but we do not 'see' it. That generates great confrontation within the world of physics, which is why experiments of this type try to shed light on a question that would let us understand far better the composition of what surrounds us.

New tools. Science has deployed the LUX-ZEPLIN (LZ) experiment, a very sophisticated tool built by humanity to hunt down these ghost particles. In essence, it is a sensor that had to be buried 1,500 meters deep, in the facilities of the Sanford Underground Research Facility (SURF) in South Dakota. The reason? To use the rock as a shield that blocks the cosmic radiation bombarding the surface.

The concept. The scale of this experiment is considerable: at its core it houses 10 tons of ultrapure liquid xenon. The theory is that if a dark matter particle passes through the Earth, it should occasionally collide with a xenon atom, producing a tiny flash of light. In total, LZ has analyzed data collected over 471 days, between March 2023 and April 2025, a period that makes this the most exhaustive search carried out so far.
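The arithmetic behind this kind of search can be made concrete: if the detector sees no collisions over the whole exposure, the "zero" itself becomes a measurement, because the bigger the exposure, the lower the interaction rate that zero is compatible with. A minimal Poisson-counting sketch of that logic follows; it is an illustration only, not the LZ collaboration's actual statistical machinery (which uses far more sophisticated profile-likelihood fits), and the exposure figure simply multiplies the 10 tonnes and 471 days quoted above.

```python
import math

def poisson_upper_limit(n_observed: int, cl: float = 0.90) -> float:
    """Smallest mean mu such that P(N <= n_observed | mu) <= 1 - cl.

    For n_observed = 0 this is the classic -ln(0.10) ≈ 2.3 events.
    """
    mu, step = 0.0, 0.001
    while True:
        p = sum(math.exp(-mu) * mu**k / math.factorial(k)
                for k in range(n_observed + 1))
        if p <= 1 - cl:
            return mu
        mu += step

# Toy exposure from the figures in the article: 10 tonnes, 471 live days.
exposure_tonne_days = 10 * 471

mu_limit = poisson_upper_limit(0)              # ≈ 2.3 events at 90% CL
rate_limit = mu_limit / exposure_tonne_days    # events per tonne per day
print(f"< {mu_limit:.2f} events over {exposure_tonne_days} tonne-days")
print(f"-> rate < {rate_limit:.2e} events/tonne/day at 90% CL")
```

The point of the sketch is the inverse relationship: doubling the exposure with the same null result halves the allowed rate, which is why the plan to keep running to 1,000+ days tightens the limits even if nothing new is seen.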
The sound of silence. The main result is that no direct interaction with the particles has been detected. However, this null result is practically worth its weight in gold in physics: by finding nothing, scientists have been able to rule out a huge range of possibilities about what dark matter is and what it is not. In short, tighter margins have been established for detecting dark matter, and we now have the world's strictest limit on the cross sections of dark matter particles for a very specific mass range. It is precisely that tiny mass that makes these particles so hard to detect.

The surprise. The most fascinating thing about these results is not what was missing, but what appeared. Although the detector did not see dark matter, it did validate its extreme sensitivity by recording something incredibly difficult to capture: solar neutrinos. This marks a bittersweet milestone: the experiment has officially entered what physicists call the 'neutrino fog'. It means we have reached such extreme sensitivity that neutrinos (which pass through everything without flinching) begin to generate background noise that could be confused with dark matter. And that is a real problem, since the technology will have to find a way to distinguish dark matter from neutrinos.

The future. The experiment does not stop here. Although these results run until April 2025, the official plan is to keep taking data until 2028, with the aim of accumulating more than 1,000 days of observations. And many experts keep pointing to the same thing: 85% of the matter in the universe is dark matter, and although it escapes us, we are getting closer to knowing what the universe is made of.

Images | Karo K.

Smart glasses find their "iPhone moment" in China. The key to their success: payments

In China, AI glasses let you pay by looking at a QR code and giving a voice command. Alibaba itself launched its Quark glasses for $268, integrated with Alipay for payments and Taobao for shopping. Xiaomi presented its AI glasses in June, and they became the third best-selling in the world in the first half of 2025 despite being available for only one week. The Chinese smart glasses market is growing exponentially in the second half of the year, according to a study by BigOne Lab.

Why is it important. After more than a decade of unfulfilled promises, smart glasses have finally found their reason for being. And it is something as prosaic as paying without taking your phone out of your pocket. It is working in China like nothing in this sector has before. From payment adoption, the rest of the value proposition is built.

The context. China's digital infrastructure, where even the elderly use their smartphone for everything, eases adoption. QR codes are in every shop, and Meta does not operate in China (its services cannot be reached without a VPN), which has left the field clear for local companies to experiment without direct competition.

Yes, but. Price is decisive. Chinese glasses cost between 200 and 300 dollars, a price that is not too high. Xiaomi, RayNeo, Thunderobot, Kopin, Baidu and Alibaba compete in the Chinese domestic market. The payment functionality does not require very sophisticated screens or complex optics: all you need is a basic camera, voice recognition and a connection to the payments ecosystem. That makes production much cheaper.

The big question. Will we see something similar in Europe with Bizum? Mobile payments here are less ubiquitous than in China, but Bizum has achieved enormous penetration in Spain. If businesses adopted Bizum QR codes, as some already do, smart glasses could find their practical use here as well.
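The interaction loop described above (camera decodes a merchant QR, user confirms by voice, a payment request goes to the wallet) is simple enough to sketch. Everything in this sketch is hypothetical: the `pay://` payload format, the function names and the `PaymentRequest` structure are invented for illustration and correspond to no real Alipay or vendor API.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    merchant_id: str
    amount: float

def parse_merchant_qr(qr_payload: str) -> str:
    """Extract a merchant id from a decoded QR payload (toy format)."""
    assert qr_payload.startswith("pay://"), "not a payment QR"
    return qr_payload.removeprefix("pay://")

def handle_voice_command(command: str, merchant_id: str):
    """Turn a spoken 'pay 12.50' into a request for the merchant in view."""
    words = command.lower().split()
    if len(words) == 2 and words[0] == "pay":
        return PaymentRequest(merchant_id, float(words[1]))
    return None  # any other utterance: do nothing, never pay by accident

# The glasses' loop: QR in view + explicit voice confirmation = request.
merchant = parse_merchant_qr("pay://shop-4711")
req = handle_voice_command("pay 12.50", merchant)
print(req)  # PaymentRequest(merchant_id='shop-4711', amount=12.5)
```

The design point the sketch captures is why this works with cheap hardware: decoding a QR and matching a two-word voice command need no display or advanced optics, only a camera, a microphone and a network connection.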
The European ecosystem has advantages: stricter privacy regulation, greater consumer trust in traditional banking, and a population accustomed to incremental innovation. But it lacks the density of QR codes that makes China the perfect terrain for this experiment.

Between the lines. Chinese companies are not just developing hardware. They are creating the use case that justifies wearing smart glasses all day, and instead of looking for something spectacular and complex, they have found something much simpler and more everyday: not having to take your phone out of your pocket. Rokid boasts that its glasses are not tied to a single generative AI model: they work with OpenAI, Llama, Gemini and Grok. They also offer simultaneous translation into English while someone speaks in Chinese. But none of that matters as much as the payment feature.

And now what. Meta dominates the global market with a 73% share in the first half of 2025, according to Counterpoint. Its success with the Ray-Ban Meta is explained by a design almost indistinguishable from normal glasses. Western manufacturers also retain advantages in chips. But Chinese companies have obvious advantages of their own: many brands and models, rapid iteration, and the ability to adapt quickly to market changes.

Featured image | Xiaomi
