Movistar Plus+ was making a comeback after four years of losing customers. Telefónica has decided to cut its workforce

Telefónica has set 119 final departures at Movistar Plus+, part of the ERE that will eliminate 4,554 positions in Spain. It is a reduction compared to the more than 200 losses initially planned, but it comes at the worst moment: just when the platform was finally adding clients again.

Why it matters. Movistar Plus+ has 3.75 million subscribers (the most recent data is from September 30), its best figure since 2018 after years of collapse. It lost almost 650,000 clients between 2019 and 2023, hit rock bottom, and was beginning to recover. Now Telefónica is cutting muscle just when it needed to step on the accelerator.

The paradox. The company bet a lot of money on buying Canal+ and launching its own productions to compete with Netflix and Prime Video. Now that the numbers are improving, it is reducing the workforce. The inevitable question: how do you keep up with global giants with fewer people and a tighter budget?

Yes, but. Subscriber growth does not guarantee profitability. Telefónica has reoriented Movistar Plus+ towards a more flexible and cheaper offer, decoupled from convergent packages. That adds customers but compresses margins. And competing in streaming without global scale is very expensive.

The unequal context. Netflix already has more than 300 million subscribers worldwide. Prime Video exceeds 200 million. Disney+ is around 120 million. Movistar Plus+ had 3.75 million in Spain at the end of the third quarter of 2025. The difference in scale is brutal and translates directly into budget for content, technology and distribution.

What works. Football continues to be the lifeline. LaLiga and the Champions League keep many subscribers hooked who, without that content, perhaps would not have stayed so long. But a platform cannot be built only on sports rights, which also increase in price every cycle, as we saw a few days ago.

What deserves better luck. Movistar Plus+'s own series and documentaries have objective quality. 'Poison', 'The Messiah', 'The Plague', 'Riot Police', 'The Pioneer' or 'Rapa' demonstrate the ability to find powerful stories with local cultural sensitivity. Netflix and Prime also produce Spanish content, but Movistar Plus+ has built its own catalog that transcends obvious trends and connects with the public in another way. The problem is not the quality of the content. Quality is sometimes not enough when you compete against near-infinite budgets and recommendation algorithms fine-tuned with data from hundreds of millions of users.

The big question. What will become of Movistar Plus+ if it continues to shrink? It was beginning to regain ground, but doing so with 119 fewer people makes it difficult to keep up the pace. Without the investment capacity to match the Netflix-Amazon-Disney triumvirate, the room for maneuver narrows every quarter.

The background. This ERE is not an isolated case. Telefónica has been thinning its workforce for years while it pivots towards infrastructure and sheds unprofitable Latin American subsidiaries. Marc Murtra, president for one year, has overhauled its entire leadership. The 2024 ERE cost 1,300 million euros and eliminated 3,421 positions. This new adjustment will be more expensive and deeper.

Between the lines. The unions have ended up accepting forced dismissals in minority companies such as Movistar Plus+, despite having set that as an initial red line. The pressure from the workforce to guarantee early retirements in other subsidiaries has weighed more than keeping those positions. UGT and CCOO have appealed to "common sense" and "responsibility", common euphemisms to justify a capitulation.

In Xataka | Telefónica is preparing a tough ERE, but for many veterans it will be like a prize

Featured image | Xataka with Mockuuups Studio

Europe is looking for a place to light its “artificial sun” and Spain only has to defeat Italy and Germany to achieve it

For decades, nuclear fusion has been the distant horizon of energy: an almost mythical promise, always thirty years away. A future without a map. In the middle of the electrification of the economy, with demand pushed by the digital industry and data centers, Europe has begun to set coordinates for that promise: where to build the first commercial plants. For the first time, the "artificial sun" is no longer just a scientific experiment and is becoming a problem of territory, infrastructure and industrial planning. And on this new European energy map, Spain appears among the best-positioned countries.

A new path. Gauss Fusion, the European company created to power the first generation of commercial fusion plants on the continent, has completed the first comprehensive European study of potential sites for this technology, in collaboration with the Technical University of Munich (TUM). The study culminates in a map that did not exist until now: one that identifies 150 industrial clusters and up to 900 potential sites spread across nine European countries. Behind each point there is an analysis of geology, seismicity, meteorology, cooling, access to the electrical grid and existing infrastructure, aligned with standards of the International Atomic Energy Agency (IAEA).

Spain on the horizon. Spain appears as the third country with the most identified clusters: 17, only behind Germany (53) and Italy (22), and ahead of France, Austria, the Netherlands and Switzerland. This is not a political decision or a formal candidacy, but a strictly technical diagnosis: where it would be possible to build a first-generation fusion power plant if it had to be done tomorrow. "That Spain appears as the third country with the most potential clusters is due solely to technical criteria," emphasizes Milena Roveda, CEO of Gauss Fusion, in an interview with Xataka. "The study follows an objective methodology consistent with international standards. 
There are no strategic weightings or quotas per country," she emphasizes. And that nuance is key. The map does not pick winners or distribute investments: it identifies where the minimum physical and industrial conditions already exist to host a fusion power plant.

But why Spain? On the one hand, its fusion ecosystem. Spain is one of the European countries with the greatest historical involvement in ITER: it houses the headquarters of Fusion for Energy in Barcelona, and its national companies have won key industrial contracts. Added to this is the role of CIEMAT, universities with leading groups in plasma physics and materials, and the start of construction of IFMIF-DONES in Granada, a critical infrastructure to validate materials for future reactors. On the other hand, its regulatory experience. "Spain has a nuclear regulatory body with long-standing prestige and experience," highlights Roveda. From an industrial point of view, Roveda insists that Spain should not limit itself to being a host: "It has the potential to be a key piece in the fusion value chain. Companies like IDOM have already demonstrated that they can design and deliver extremely complex systems."

Where could these clusters be? The map does not draw isolated points, but broad areas. The study identifies regional clusters capable of containing multiple viable locations. In Spain, they appear spread over a good part of the territory, from Andalusia and Extremadura to Castilla y León, Aragon, Catalonia, Galicia, the Basque Country and the Valencian Community, and are concentrated in industrial areas with high electrical demand, good network connectivity and, in some cases, proximity to old energy enclaves that could reuse part of their infrastructure. 
Frédérick Bordry, CTO of Gauss Fusion, explains to Xataka that the objective of the map is not to select a specific place, but "to have a broad database that allows collaboration with authorities, companies and other interested parties." The final decision, remember, will not come until the end of 2027.

What would a commercial fusion plant be like? Talking about commercial fusion is no longer talking about experiments like ITER. Gauss Fusion works with the concept of a GIGA plant, capable of producing 1 gigawatt of electricity. This implies very specific industrial requirements. "Assuming an efficiency of 30%, a plant of this type must safely evacuate about 2 GW of heat," explains Bordry. In practice, this requires access to rivers, reservoirs or the sea, as well as robust electrical infrastructure. Unlike fission, fusion does not produce chain reactions, is self-limiting, does not emit CO₂ and does not generate long-lived radioactive waste. "Due to its safety features, it could and should be integrated near urban and industrial centers," says Bordry, even supplying waste heat for industrial uses or district heating. This connects with a trend already visible in Europe: heat recovery in district heating networks, as happens in Finland with data centers, or the use of large industrial heat pumps.

The process now enters a delicate phase. According to Gauss Fusion, the goal is to narrow the European map down to between two and five final locations by the end of 2026, and make the final decision in 2027. But technical criteria will not be the only ones. "Political will, the regulatory framework and social acceptance will be essential," emphasizes Roveda. In her opinion, Europe needs policies that promote fusion as a new industrial engine, and regulations "adapted to the real risk of these facilities." Social acceptance will also be key. "Transparency and citizen participation are essential," she says. 
"We have to explain well what fusion is and what it is not."

A project bigger than any one country. For Bordry, no European country can tackle a project of this magnitude alone. Fusion will require a continental industrial alliance, something that Roveda defines as a "fusion Eurofighter", in which Spain should play a central role, not only as a location but as a technological and industrial supplier. In a context in which European electricity demand could grow by up to 75% by 2050, fusion is beginning to be seen not as a distant promise, but as one more piece of the energy puzzle, complementary to renewables, storage and electrification. An open closure, but with a … Read more
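Bordry's "about 2 GW of heat" figure follows from a simple energy balance. A quick sketch, taking the 30% thermal-to-electric conversion efficiency he assumes:

```python
# Energy balance for a fusion plant delivering 1 GW of electricity at an
# assumed 30% thermal-to-electric conversion efficiency (Bordry's figure).
P_electric = 1.0         # electrical output, GW
efficiency = 0.30        # thermal-to-electric conversion efficiency

P_thermal = P_electric / efficiency    # total thermal power the plant produces
P_rejected = P_thermal - P_electric    # heat to evacuate (rivers, sea, towers)

print(f"Total thermal power: {P_thermal:.2f} GW")   # 3.33 GW
print(f"Heat to evacuate:    {P_rejected:.2f} GW")  # 2.33 GW, i.e. "about 2 GW"
```

The roughly 2.3 GW of rejected heat is why the siting study weighs access to rivers, reservoirs or the sea so heavily.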

The cosmos has sent us a series of blue flashes for more than a decade. We now have a clue as to what they really are.

For more than a decade, the cosmos has been sending us mysterious flashes of ultra-bright blue light that appear out of nowhere and disappear in a matter of days. The phenomenon has a rather odd name: these events are known as 'luminous fast blue optical transients' (LFBOTs), and they have baffled astronomers since their discovery. Now, thanks to the analysis of one that has become the brightest ever detected, scientists believe they have solved the enigma: they are black holes devouring companion stars, and the process is extremely violent.

The discovery. A team led by researchers from the University of California, Berkeley analyzed an LFBOT discovered in 2024 and named 'AT 2024wpp'. The phenomenon turned out to be between five and ten times more luminous than any other of its kind previously observed. Astronomers used a range of space and ground-based telescopes (including Chandra, Swift, NuSTAR, ALMA, and the Keck and Gemini observatories) to study it at multiple wavelengths, from X-ray to radio. The data revealed that the energy released by AT 2024wpp was 100 times greater than that of a normal supernova. As Natalie LeBaron, a graduate student at Berkeley and first author of one of the studies, explains, "the absolute amount of energy radiated by these bursts is so large that you can't feed them with the collapse and explosion of a massive star, or with any other type of normal stellar explosion."

An extreme cosmic feast. The researchers propose that these flashes are produced by what they call an "extreme tidal disruption." This process occurs when a black hole (with a mass up to 100 times that of our Sun) completely destroys its companion star in a matter of days. According to the team's reconstructions, the black hole had been absorbing material from its companion for a long time, surrounding itself with a halo of gas. 
In the case studied, the scientists report that, when the star got too close and was torn apart, the new material violently collided with the pre-existing gas as it fell towards the black hole, generating the intense blue and ultraviolet light characteristic of LFBOTs. According to Robert Sanders of UC Berkeley, some of the gas was ejected in jets from the poles of the black hole at about 40% of the speed of light, producing the radio emissions that scientists later detected.

Intermediate-mass black holes, a separate enigma. The black hole's inferred mass places these objects in a particularly interesting category: intermediate-mass black holes. Although experiments like LIGO have detected black hole mergers involving more than 100 solar masses, such objects have never been directly observed and their formation process remains a mystery. "Theorists have proposed many ways to explain how we get these large black holes," points out Raffaella Margutti, associate professor of astronomy and physics at Berkeley and lead author of both studies. "LFBOTs allow us to approach this question from a completely different angle. They also allow us to characterize the precise location where these things occur within their host galaxy, which adds more context to trying to understand how we ended up with this configuration: a very large black hole and a companion."

A family of phenomena with curious nicknames. The first LFBOT with sufficient data for analysis was detected in 2018 and received the official designation 'AT 2018cow'. Its designation led researchers to nickname it "the Cow", a tradition that continued with later events: the Koala, the Tasmanian Devil and the Finch. AT 2024wpp, the subject of this study, has already been informally named the Woodpecker. To date, just over a dozen of these events have been identified, all located in galaxies with active star formation at distances of hundreds of millions to billions of light-years. 
The companion star destroyed in AT 2024wpp was more than 10 times the mass of the Sun and could have been a Wolf-Rayet star, that is, a very hot, evolved object that had already consumed much of its hydrogen.

Hunting for LFBOTs. Researchers hope that the upcoming ultraviolet space telescopes ULTRASAT and UVEX, scheduled to launch in the coming years, will revolutionize the detection of these phenomena. "Right now we find only one LFBOT a year or so. But once we have UV telescopes in space, finding LFBOTs will become routine, like detecting gamma-ray bursts today," explains Nayana AJ, a researcher at Berkeley and first author of the X-ray and radio analysis.

In Xataka | When nuclear energy orbited the Earth: the day a Soviet satellite with a reactor fell in Canada and sparked a crisis

There was a reason for airports to avoid solar panels, and Malaga has just dismantled it

In our daily lives we are increasingly accustomed to seeing solar panels on balconies or roofs. Even when we travel by car it is common to find fields covered with panels or large wind turbines. However, there is one place where until now solar energy seemed out of place: airports. For years, sun reflection was an unsolved problem in the airport environment. The fear that a flash could affect a pilot on approach stopped any attempt to install solar panels. In Malaga, that fear is no longer an obstacle.

In short. Malaga-Costa del Sol Airport adds, for the first time, self-consumption photovoltaic installations promoted by private companies. Europcar and Goldcar were the first to take the step, with a project developed by the Malaga engineering company Ubora Solar. As La Opinión de Málaga highlights, it is not a project promoted by Aena, but a direct commitment by private companies to generate their own clean energy in one of the most regulated and monitored spaces in the country.

The big obstacle: glare. The main challenge of the project was not technical or economic, but air safety. The possibility that the solar panels could generate annoying reflections or glare for pilots and controllers was a critical concern, also regulated by Aena rules. The answer involved an exhaustive analysis of visual risk. Ubora Solar carried out aeronautical glare studies following the standards of the Federal Aviation Administration (FAA) and the European Union Aviation Safety Agency (EASA), taking into account everything from actual flight trajectories to visibility from the control tower. All of this served to precisely define the orientation and inclination of the panels within the airport complex. The results were conclusive. Luminance values were well below the European threshold of 20,000 cd/m², and any possible reflection coincided with the position of the sun, being "masked by its own brightness", a phenomenon known as sun masking. 
In other words: the reflection exists, but it is imperceptible and does not pose an operational risk.

In other countries it was already a reality. Although solar installations already exist at airports in other countries, the case of Malaga is especially relevant because of its private nature. In the United States and in different parts of Europe, airport photovoltaics has been a reality for years, always subject to strict glare and air safety studies. The difference, as various media emphasize, is that in Spain this step had not yet been taken without a direct push from the airport operator. Malaga thus acts as a laboratory and precedent for a model that could be replicated at other airports in the country.

A success that does not blind. For years, the sun was seen as a risk at airports. In Malaga, it has become an ally. The project shows that the greatest fear, glare, is not fought with prohibitions, but with rigorous studies, planning and technology. Malaga-Costa del Sol Airport not only manages takeoffs and landings. It has also opened a new path for the energy transition in one of the most complex environments that exists. And it has done so without losing sight of the most important thing: safety.

Image | Ubora Solar and Unsplash

In Xataka | When the December sun surpasses that of April: the luminous paradox of a vertical panel on the balcony
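At its core, the pass/fail logic of a glare study like this one reduces to comparing the predicted reflected luminance at each observer position against the regulatory threshold. A minimal sketch: the 20,000 cd/m² threshold comes from the article, while the observer positions and luminance values below are invented for illustration:

```python
# Simplified glare check: compare predicted reflected luminance at each
# observer position against the European threshold cited in the article.
# Observer names and luminance values here are invented for illustration.
THRESHOLD_CD_M2 = 20_000  # European glare threshold, cd/m^2 (from the article)

predicted_luminance_cd_m2 = {
    "approach_path_A": 4_200,
    "approach_path_B": 6_800,
    "control_tower": 2_150,
}

for observer, luminance in predicted_luminance_cd_m2.items():
    verdict = "OK" if luminance < THRESHOLD_CD_M2 else "REVIEW"
    print(f"{observer}: {luminance} cd/m^2 -> {verdict} "
          f"(margin: {THRESHOLD_CD_M2 - luminance})")
```

A real study also models sun position, panel orientation and actual flight trajectories to produce those luminance values in the first place; this only shows the final comparison step.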

Private astronaut Jared Isaacman will be NASA's new director

Space has once again become a question of power, and NASA is once again in the spotlight. While China officially maintains that it will reach the Moon with astronauts in 2030 and lists concrete advances in its crewed program, the United States faces a change at the head of its space agency after a turbulent year. It is not a simple name change on the organizational chart: it is confirmation that the lunar race has entered a phase in which leadership, the calendar and today's decisions carry a lot of weight.

The United States Senate has just confirmed Jared Isaacman as NASA's new administrator, ending a long and unusual process even by Washington standards. The vote passed with 67 votes in favor to 30 against, according to C-SPAN, and clears up an uncertainty that had weighed on the agency's management for months. With this confirmation, NASA leaves behind a period marked by interim management and once again has a confirmed leader.

New command at NASA and a lunar race that once again sets the course

Jared Isaacman doesn't fit the traditional mold of NASA administrators. Founder of the payments company Shift4, private pilot and astronaut, his name became known for commanding two commercial crewed missions aboard SpaceX's Crew Dragon: Inspiration4 in 2021 and Polaris Dawn in 2024. This trajectory, halfway between the private sector and direct flight experience, explains why his arrival is interpreted as a sign of continuity in the agency's growing openness to commercial actors. The process started more than a year ago, when Donald Trump announced his intention to place him at the head of NASA, but it was interrupted on May 31, when the White House withdrew his nomination. Months later, after meetings between the two and in a context of growing frustration within the administration over the performance of the acting administrator, according to industry sources cited by SpaceNews, Trump announced on November 4 his decision to re-nominate him. 
All this comes as China reaffirms its goal of landing astronauts on the Moon by 2030, listing a chain of tests already carried out and others in preparation. The program includes tests of the Lanyue module, validations of the Long March-10 rocket and the Mengzhou spacecraft, in addition to the development of extravehicular suits and a crewed lunar rover, within a schedule that the organization itself describes as demanding.

From Washington, this Chinese progress has been translated into an increasingly explicit message. Sean Duffy, Secretary of Transportation and acting administrator of NASA during the interim period, publicly congratulated Isaacman on his confirmation and framed his arrival as part of a clear political objective. "I wish Jared success as he begins his tenure and leads NASA as we return to the Moon in 2028 and defeat China," he wrote in a post. In his appearance before the Senate, Isaacman made it clear that the lunar calendar is not a secondary issue for NASA. Asked about the accumulated delays in the Artemis program, he warned that any additional delay could have strategic consequences. "There is no doubt that the top priority in the short term is returning American astronauts to the Moon," he stated, emphasizing that postponing that objective opens the door for the United States to lose the initiative.

Images | NASA | SpaceX

In Xataka | We have filled the Earth's orbit with satellites. And now the risk of a catastrophic collision is very high

We thought talking to ChatGPT and other AIs was private. We didn't count on these extensions stealing our conversations

There are matters we would not post on social networks or say out loud. And yet, there they go, flowing in a waterfall of messages towards an artificial intelligence (AI) chatbot, as if it were our best friend. There are no glances, no judgment, no awkward silences. There are answers that, many times, do little more than tell us we are right or go along with us. But beyond that, an uncomfortable question appears: what if everything we have shared could end up in the hands of a third party? What if someone else is reading those conversations? Opting out of model training or locking down our account may not be enough. There is another threat reaching millions of users these days, and they may not even be aware of it: browser extensions that spy on and steal what is said to chatbots.

At the top of the list is Urban VPN Proxy, a Chrome extension with more than 6 million users, rated 4.7 stars, which, until the publication of the cybersecurity report we discuss today, showed a "Featured" badge on Google, something we can still verify in a version archived at the Internet Archive.

The discovery. What has set off the alarms is a report published by Koi, a company specialized in cybersecurity. It is not a generic warning or a hypothesis, but the result of analyzing what these tools do in the background while we browse. When looking at popular extensions, the kind installed precisely to gain privacy or security, its researchers detected a worrying pattern: some were capable of reading conversations held with artificial intelligence chatbots and sending them outside the browser.

A much larger attack surface. The investigation indicates that Urban VPN Proxy did not target a single AI provider, but a broad set of popular platforms. ChatGPT, Claude, Gemini and Microsoft Copilot appear among the monitored services, greatly expanding the volume and diversity of data potentially captured. 
These conversations are not trivial: they often include intimate questions, financial information, or details of ongoing projects. Access to this type of exchange therefore implies a very delicate level of exposure.

How conversations are captured. According to the research firm, the mechanism does not depend on vulnerabilities in the chatbots themselves, but on the privileged place that extensions occupy within the browser. Urban VPN Proxy monitors active tabs and, when the user accesses an AI platform, injects code directly into the page. This code intercepts the requests and responses exchanged with the server before the browser displays them on screen, allowing access to the full content of the conversation in real time. What Urban VPN Proxy extracted were not jumbled fragments, but entire conversations with their associated context. Koi documents the systematic capture of user messages, AI responses, identifiers for each chat, and timestamps that allow them to be ordered and related to each other. This type of information, correlated over weeks or months, makes it possible to draw very precise usage patterns, from work habits to personal concerns; the value of the whole lies precisely in its continuity, not in any single message.

It does not depend on activating the VPN. One of the most important nuances of the report is that conversation capture is not tied to the use of the VPN service itself. The mechanism, they explain, works independently, even when the VPN is disabled. It is enough to have the extension installed for the code responsible for intercepting conversations to keep operating in the background. There is no user-accessible switch that disables this collection without completely removing the extension.

Conversation collection was not present from the beginning. According to the analysis, Urban VPN Proxy did not include this behavior in previous versions of the extension. 
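The interception pattern Koi describes, wrapping the page's transport layer so every request/response pair is silently copied before the user sees anything unusual, can be sketched in a few lines. This is an illustrative Python analogue of that technique, not the extension's actual code; all names here are invented:

```python
# Illustrative analogue of the interception pattern described by Koi:
# wrap a request function so each exchange is copied to a third party
# while the caller still receives a normal response. All names invented.
captured = []  # stand-in for the attacker's collection server

def exfiltrating_wrapper(send_request):
    """Wrap a request function so each exchange is silently recorded."""
    def wrapped(url, payload):
        response = send_request(url, payload)   # normal behaviour preserved
        if "chat" in url:                       # only target AI endpoints
            captured.append({"url": url, "prompt": payload,
                             "response": response})
        return response                         # caller notices nothing
    return wrapped

def fake_chat_api(url, payload):
    """Stand-in for a chatbot backend."""
    return f"echo: {payload}"

chat = exfiltrating_wrapper(fake_chat_api)
print(chat("https://chat.example.com/api", "my secret question"))
print(len(captured))  # 1 exchange captured without the caller noticing
```

In a real extension the same idea is applied inside the page (for example by replacing the browser's networking functions from an injected content script), which is why no vulnerability in the chatbot itself is needed.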
The turning point came on July 9, 2025, when an update was released that activated the capture of conversations with AI platforms by default. From then on, any user with the extension installed and automatic updates enabled began executing that new code, without an explicit notice commensurate with the change in behavior and without having to expressly accept the modification.

What does "AI protection" promise? In the extension's listing and in its messages to the user, Urban VPN Proxy presents this feature as an additional layer of security. According to its description, it serves to alert when personal data is entered into a chatbot or when a response includes potentially dangerous links. The problem is that this layer of notifications is unrelated to the collection of conversations. Activating or deactivating the warnings does not prevent messages from being intercepted and sent to the company's servers.

The investigation did not stop at Urban VPN Proxy. By tracing the origin of the code and its behavior, Koi found that the same conversation-capture logic appeared in other extensions published by the same publisher. Some present themselves as VPNs, others as ad blockers or browser security tools. Together they add up to more than 8 million users across Chrome and Edge, which expands the scope of the problem and explains why the researchers talk about an ecosystem and not an isolated anomaly.

Identified extensions for Chrome:
- Urban VPN Proxy
- 1ClickVPN Proxy
- Urban Browser Guard
- Urban Ad Blocker

Identified extensions for Microsoft Edge:
- Urban VPN Proxy
- 1ClickVPN Proxy
- Urban Browser Guard
- Urban Ad Blocker

Who is behind it. Urban VPN Proxy is operated by Urban Cyber Security Inc., a company linked to BiScience, a data intermediation firm, a data broker, as Koi describes it. Koi recalls that BiScience had already been the subject of previous investigations by other cybersecurity experts for the collection and commercialization of browsing data. 
The report frames this case as an evolution of these practices, going from collecting browsing habits to capturing complete conversations held with artificial intelligence systems. The finding also puts the focus on how the user is informed. The extension generically mentions the processing of data related to AI services … Read more

The Basque Country and Navarra exported 35,700 qualified professionals who would like to return. The problem is how and where

Companies argue that one of their main problems when filling job vacancies is finding qualified workers. However, the data suggests that these qualified profiles are forced to leave the country to find better job opportunities outside Spain. In fact, a recent study by the Artizarra Foundation and Deusto Business School puts precise figures on this mismatch. Thousands of professionals trained at Spanish universities, with consolidated careers outside the country, would be willing to return, but the system does not offer them an attractive enough setting to come back to.

The talent that left. According to the report, more than 42,000 young people between 25 and 40 years old, trained at universities and higher education centers in the Basque Country and Navarra, currently work outside their territory of origin. These are not profiles in transition: they are highly qualified professionals, with training in engineering, STEM disciplines, business management or research. However, the key figure in this report is the return intention of these professionals. More than 85% of the participants in the study affirm that they would like to return if they found working and living conditions comparable to those they have achieved abroad. If this scenario materialized, the study estimates that up to 35,700 qualified professionals could be recovered.

A career developed abroad. Six out of every ten professionals consulted have already accumulated more than six years working in other countries, which means they have consolidated professional trajectories there, competitive salaries and international work experience that is difficult to replicate in the short term. From an economic point of view, their impact is significant. 
We are not talking about talent in training, but about already qualified personnel, with deep technical knowledge and productive capacity, who were trained in public schools and universities in Spain but whom Spanish companies have not known how to retain. This lack of job opportunities is the key to their departure.

Able to train talent, not to retain it. The contrast appears when crossing the data from the Deusto Business School report with the Cotec Foundation Talent Map, which analyzes 55 indicators on talent creation, attraction and retention. In its latest edition, from 2023, and using the same territorial framework as the Deusto study, the Basque Country reaches 66.4 points, well above the national average (49.1 points) and only behind Madrid (67.7 points). The conclusions drawn from these data are clear. The Basque Country stands out for the quality of its higher education, technical qualification and productive environment. The educational system works well at training talent. The problem comes when that training period ends and that talent compares what it finds at home with what is offered abroad.

They do not return for the same reason they left. The reasons for the talent drain are recurring: better salaries, greater professional prospects, access to cutting-edge projects and, in the case of scientific profiles, more opportunities to develop a stable research career. As the authors of the Deusto Business School report point out, these factors do not disappear when the return of that talent is considered. On the contrary: accumulated experience raises expectations and makes those reasons more visible. The study by Artizarra and Deusto identifies barriers that go beyond employment and connect with structural problems common to an entire generation.

Return, yes, but where. The price and conditions of housing are one of the main factors slowing the return of this talent. 
Returning implies assuming high prices, both for renting and for buying a home, and facing them with salaries that do not always compensate for the difference compared to other European markets. For those who have already built a life abroad, the opportunity cost is high. The second major barrier to return is the quality of employment. It is not so much the absence of work for these qualified profiles as the difficulty local companies have in matching salaries, professional autonomy and recognition of talent. The comparison with international markets is inevitable. A paradox that remains open. The study's data confirm this talent's desire to return: it has not cut ties with its territory and maintains its roots. Most want to come back. However, as the authors of the study point out, the biggest problem is the lack of an environment that allows them to do so without giving up professional and life expectations. From an economic point of view, recovering part of those 35,700 profiles would be a hard-to-match investment for a labor market that claims the shortage of skilled labor is the obstacle preventing it from moving forward. As Joe Biden once said: "Pay them more". In Xataka | Spain has such good nurses that it exports them to other countries. The problem is that public health needs 100,000. Image | Unsplash (Philipp Hubert)

The V-27 signal warns you where no one else does. Or, at least, where the DGT's V-16 beacon cannot

Much has been said about the V-16 beacon. We have explained where this invention, which the DGT will make mandatory in just a few days, comes from, and also who is doing business with it. It has even been questioned whether the beacons we are buying are really legal. But of all the controversies, the one that has gained the most traction, for a purely practical reason, concerns its visibility and effectiveness. Experts themselves have warned that the light is barely visible and may be insufficient. At Xataka we have expressed our own doubts about whether it is really advisable to get rid of the triangles. Those doubts arise, in part, from the beacon's visibility in complicated environments such as a secondary road, especially near a change in gradient. In such cases we still think triangles are the better tool, but the DGT also offers an aid that, of course, will not always work. We are talking about the V-27 signal. A notice in exchange for the triangles. As we have told you before, when we activate a connected V-16 beacon, we have 100 seconds before our position is first transmitted. This data is sent through IoT networks, the so-called "Internet of Things". Our position reaches the DGT 3.0 platform so that the incident can be located. Once the DGT receives notice that something is happening, it activates a warning on nearby light panels that an accident or breakdown has occurred in the vicinity so that vehicles drive with caution. This, obviously, makes little sense on a secondary road where such panels do not exist. But the DGT notes that, in exchange, we will have the V-27 signal.
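The flow just described (beacon activation → position sent over an IoT network → DGT 3.0 → warnings on light panels and, where available, in-car signals) can be sketched roughly as follows. This is a minimal illustrative model: all class and function names are invented, and the real DGT 3.0 platform exposes no public API like this.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    lat: float
    lon: float
    road_type: str  # "highway" or "secondary"

@dataclass
class Dgt30Platform:
    """Toy stand-in for the DGT 3.0 platform (hypothetical)."""
    panel_alerts: list = field(default_factory=list)
    car_alerts: list = field(default_factory=list)

    def receive_position(self, incident: Incident) -> None:
        # On a highway, the warning goes to nearby light panels;
        # on a secondary road there are no panels to notify.
        if incident.road_type == "highway":
            self.panel_alerts.append(incident)
        # Cars registered with the National Access Point receive
        # the in-car warning regardless of road type.
        self.car_alerts.append(incident)

def activate_beacon(platform: Dgt30Platform, lat: float, lon: float,
                    road_type: str) -> Incident:
    # A connected V-16 beacon transmits its position within
    # 100 seconds of activation, via an IoT network.
    incident = Incident(lat, lon, road_type)
    platform.receive_position(incident)
    return incident
```

On a secondary road the sketch produces no panel alert, only the in-car notice, which is exactly the gap the V-27 signal is meant to cover.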
This signal takes the shape of a triangle with an exclamation mark inside and three stripes on its right side representing its "connectivity". Once DGT 3.0 detects the incident, in addition to sending warnings to the light panels, it will also activate this V-27 signal, which will light up on the instrument panel of our car to warn us that we are approaching a risky situation. Likewise, the agency can activate it for any other reason it considers dangerous, not just the activation of a V-16 beacon. Of course, as noted in the Royal Decree that regulates its arrival, "this signal, of a voluntary nature, will only be displayed in those vehicles that are connected by telematic means, directly or through a service provider, with the National Access Point for Traffic and Mobility Information". Therefore, it will not be available in all cars. Is the V-27 signal useful? Yes, because the driver will receive more information about what is happening around them. Bear in mind that on a highway it is easier to spot a problem, and easy to pass one of the illuminated panels that alert us to any type of emergency. On a secondary road, however, it can be very useful, since it will be activated before we reach the scene of the accident, precisely on a type of road where visibility is reduced and where the focus has been placed on the limitations of the V-16 beacon. We must not lose sight, however, of the fact that in order to receive this notice we must have a connected car and, in addition, the car or its provider must be registered with the National Access Point for Traffic and Mobility Information. If not, it will be of no use. Photo | Andri Klopfenstein and DGT In Xataka | The V16 wanted to replace the triangle and reduce risks. They have ended up proving that they can also create them

We have searched for dark matter with the most sensitive detector in history and we have found nothing. And that is a success

The search for dark matter increasingly resembles a game of hide-and-seek in which, as we sharpen our vision, the target seems to become more invisible. The latest attempt to find it involved burying a detector 1,500 meters underground; the search came up empty, although it did allow us to find things we were not looking for. Dark matter. It is without a doubt one of the great mysteries of physics. While many researchers hold that this matter surrounds us and is the main component of the universe, others believe we are wrong and it does not exist. Little by little, however, evidence is emerging that it really does exist, which is what makes our theories fit. The whole mess centers on the fact that we lack the ability to detect this matter. We know it's there, but we don't 'see' it. That generates a great confrontation within the world of physics, and that is why these kinds of experiments try to shed light on a substance that would let us understand much better the composition of what surrounds us. New tools. Science has deployed the LUX-ZEPLIN (LZ) experiment, a very sophisticated tool built by humanity to hunt down these ghost particles. In essence, it is a sensor that had to be buried 1,500 meters deep, at the Sanford Underground Research Facility (SURF) in South Dakota. The reason? To use the rock as a shield against the cosmic radiation that bombards the surface. The concept. The scale of this experiment is considerable: its core houses 10 tons of ultrapure liquid xenon. The theory is that if a dark matter particle passes through the Earth, it should occasionally collide with a xenon atom, producing a tiny flash of light. In total, LZ has analyzed data collected over 471 days, between March 2023 and April 2025, making this the most exhaustive search carried out so far.
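Before looking at the results, it helps to see why such a long exposure matters. In a simple counting experiment, even observing zero events constrains the interaction rate. Below is a minimal sketch using the article's figures (10 tons, 471 days); the real LZ analysis is a far more elaborate likelihood fit involving backgrounds and detection efficiencies.

```python
import math

def poisson_upper_limit_zero(cl: float = 0.90) -> float:
    """Upper limit on a Poisson mean when zero events are observed.
    P(0 events | mu) = exp(-mu) = 1 - cl  =>  mu = -ln(1 - cl)."""
    return -math.log(1.0 - cl)

# Exposure quoted in the article: 10 tons of xenon, 471 live days.
exposure_ton_days = 10 * 471              # 4,710 ton-days

mu_limit = poisson_upper_limit_zero()      # ~2.30 events at 90% CL
rate_limit = mu_limit / exposure_ton_days  # events per ton-day
```

The longer the detector runs without seeing anything, the larger the exposure in the denominator, and the smaller (stricter) the allowed interaction rate becomes — which is why a null result after 471 days still "tightens the margins".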
The sound of silence. The main result is that no direct interaction with the particles has been detected. Yet this null result is practically worth its weight in gold in physics. By finding nothing, scientists have been able to rule out a huge range of possibilities about what dark matter is and is not. In short, we now have tighter margins for detecting dark matter, including the world's strictest limit on the cross sections of dark matter particles for a very specific mass. It is precisely that tiny mass that makes these particles so hard to detect. The surprise. The most fascinating thing about these results is not what was missing, but what appeared. Although the detector did not see dark matter, it did validate its extreme sensitivity by recording something incredibly difficult to capture: solar neutrinos. This marks a bittersweet milestone: the experiment has officially entered what physicists call the 'neutrino fog'. It means we have reached a point of such extreme sensitivity that neutrinos (which pass through everything unperturbed) begin to generate background noise that could be confused with dark matter. And that is a real problem, since the technology will have to find a way to distinguish dark matter from neutrinos. The future. The experiment does not stop here. Although these results run through April 2025, the official plan is to keep taking data until 2028, with the aim of accumulating more than 1,000 days of observations. And many experts keep pointing to the same thing: 85% of the mass of the universe is dark matter, and although it eludes us, we are getting closer to knowing what the universe is made of. Images | Karo K. In Xataka | The strangest event that humanity has witnessed occurred in 2019 under a mountain in Italy

Google just changed the rules of the lightweight model game

Something unusual has just happened in the race to lead the development of artificial intelligence. Gemini 3 Flash, Google's new model, has surpassed GPT-5.2 Extra High, OpenAI's highest-reasoning variant, in several performance tests. And that forces us to rethink some rules we took for granted. A fast model that also reasons. Google's new model comes with a very specific promise: to demonstrate that "speed and scalability do not have to come at the expense of intelligence". Although it has been designed with efficiency in mind, both in cost and speed, Google insists that Gemini 3 Flash also excels at reasoning tasks. According to the company, the model can adjust its thinking effort: it is able to "think" for longer when the use case requires it, yet it also uses 30% fewer tokens on average than Gemini 2.5 Pro, measured with typical traffic, to complete a wide variety of tasks with high precision and without penalizing response times. The truth is in the benchmarks. Are benchmarks perfect? No. But they remain one of the most useful tools we have for comparing AI models, pitting them against each other and detecting in which scenarios they perform better or worse. And in this area, Gemini 3 Flash comes out well. In SimpleQA Verified, a test that measures reliability on knowledge questions, Gemini 3 Flash achieves 68.7% compared to 38.0% for GPT-5.2 Extra High. In multimodal reasoning, within MMMU-Pro, Google's model scores 81.2% compared to OpenAI's 79.5%. In Video-MMMU, Flash achieves 86.9% compared to 85.9% for GPT-5.2 Extra High. Looking at multilingual and cultural capabilities, Flash is again ahead, at 91.8% compared to 89.6% for GPT-5.2 Extra High. In Global PIQA, focused on common sense in 100 languages, the difference holds: 92.8% for Flash versus 91.2% for the OpenAI model. Everything indicates that Gemini 3 Flash is specially optimized to capture nuances outside of English and reason more fluently in global contexts.
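The scores above can be gathered into a small structure to compare the two models side by side. The percentages are those quoted in the article; the model keys are shorthand labels, not official identifiers.

```python
# Benchmark scores (%) as quoted in the article.
SCORES = {
    "SimpleQA Verified": {"gemini-3-flash": 68.7, "gpt-5.2-xhigh": 38.0},
    "MMMU-Pro":          {"gemini-3-flash": 81.2, "gpt-5.2-xhigh": 79.5},
    "Video-MMMU":        {"gemini-3-flash": 86.9, "gpt-5.2-xhigh": 85.9},
    "Global PIQA":       {"gemini-3-flash": 92.8, "gpt-5.2-xhigh": 91.2},
}

def winners(scores: dict) -> dict:
    """Return the higher-scoring model for each benchmark."""
    return {bench: max(models, key=models.get)
            for bench, models in scores.items()}
```

On these four benchmarks, `winners(SCORES)` names Gemini 3 Flash across the board; the picture changes on the pure-reasoning tests discussed next.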
It also excels in the use of tools and agents. In Toolathlon, Flash scores 49.4% compared to GPT-5.2 Extra High's 46.3%. In the FACTS Benchmark Suite, the difference is tighter but still in Google's favor: 61.9% versus 61.4%. In long-running tool-execution tasks, Flash appears to show greater consistency. But it is not the king of pure reasoning. That said, it is worth looking at the complete picture. Although Gemini 3 Flash outperforms OpenAI's best model in several tests, if you are looking for "pure" reasoning, the balance changes. In the most demanding tests in this area, GPT-5.2 Extra High continues to set the benchmark. OpenAI's model leads ARC-AGI-2, focused on visual puzzles, with 52.9% compared to Flash's 33.6%. In AIME 2025, with code execution, it reaches 100% compared to 99.7%. And in SWE-bench Verified, aimed at software engineering, it obtains 80.0% compared to 78.0% for Gemini 3 Flash. What exactly is GPT-5.2 Extra High. The name GPT-5.2 Extra High appears several times throughout the article, and it is natural to wonder whether it is something new or little known. In reality, it is not a model usually mentioned to the general public. Google uses this designation in its comparison table to refer to the maximum reasoning level available in the OpenAI API for GPT-5.2 Thinking and Pro. In the official OpenAI documentation it is identified as "xhigh". Where you can use Gemini 3 Flash. Access to Gemini 3 Flash does not depend on your country. If you have access to the Gemini app, you are already using this model, which has become the default option. It is also reaching developers through the API, AI Studio and Vertex AI. In the United States, the deployment goes a step further: Gemini 3 Flash has become the default model of the AI Mode of the Google search engine. The price of using Gemini 3 Flash. For those who want to integrate Gemini 3 Flash into their applications, the model costs $0.50 per million input tokens and $3 per million output tokens.
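At these rates, estimating the cost of a request is simple arithmetic. A quick sketch, also including the older Gemini 2.5 Flash prices ($0.30 in, $2.50 out) for comparison:

```python
# Token prices in USD per million tokens, as quoted in the article.
PRICES = {
    "gemini-3-flash":   {"input": 0.50, "output": 3.00},
    "gemini-2.5-flash": {"input": 0.30, "output": 2.50},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request, given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 10,000 input and 2,000 output tokens costs
# $0.011 on the new model versus $0.008 on the old one.
cost_new = request_cost("gemini-3-flash", 10_000, 2_000)
cost_old = request_cost("gemini-2.5-flash", 10_000, 2_000)
```

As with most APIs, output tokens dominate the bill, so the jump from $2.50 to $3 per million output tokens is the part that matters for long responses.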
This is a slight increase over Gemini 2.5 Flash, which cost $0.30 per million input tokens and $2.50 per million output tokens. An increasingly tight race. Gone are the days when Google tried to confront ChatGPT with Bard, or when OpenAI seemed to be years ahead of the rest. Today, the distances between the big players in AI have shrunk drastically. The competition is more direct, more technical and, above all, much closer. Images | Google In Xataka | Amazon is preparing an investment of 10 billion in OpenAI because if you can't beat your enemy, the best thing is to join him
