Anthropic is winning the enterprise AI race, so OpenAI has a new plan: become Anthropic

OpenAI has been shooting at everything that moves in AI. It has launched a bit of everything: a video generator, an AI-powered web browser, an image generator with Studio Ghibli-style filters, e-commerce tools, and more. The logic was simple: whoever tries everything has more chances of getting something right. The result has been the opposite. While OpenAI seemed to be everywhere, Anthropic focused on a single front, and it has managed to eat OpenAI's lunch where it mattered most.

Enough of trying everything. Fidji Simo, the executive Altman hired last summer, recently gathered employees to deliver a message rarely heard at a company growing as fast as OpenAI: their main rival was teaching them a lesson. What Anthropic is doing, Simo explained, should be a wake-up call for OpenAI, which has lost its leadership among software developers and enterprise customers. "We cannot waste this moment because we are distracted by parallel projects," she stressed.

The hidden cost of doing a little of everything. The problem with shooting at everything that moves is not only focus, but the resources it demands. At companies that develop foundation models, the key resource is computing capacity, and at OpenAI that resource jumped from one team to another depending on the priorities of the day. The Sora team, for example, was folded into the research division despite being behind one of the company's most visible products. OpenAI was growing fast in too many directions, and that also created internal tensions over which project should take priority.

Anthropic focused on one thing. As OpenAI diversified, its main rival adopted the opposite strategy: few products, a lot of depth. Claude does not generate images or video, does not have its own browser, and is not trying to create its own chips (for now). Anthropic is dedicated to building foundation models and offering them both as a web service and, above all, through APIs for companies and developers.
Claude Code, its flagship programming product, became a viral phenomenon among software engineers last fall and has consolidated itself as the reference tool among hobbyist developers—vibe coding is still going strong—and, of course, among technical teams at all kinds of companies.

OpenAI strikes back. The response was not long in coming: last month OpenAI launched a new version of Codex, its programming tool, and accompanied it with the new GPT-5.4, which is much more oriented toward professional environments. According to Simo herself, Codex already exceeds two million weekly active users, almost four times more than at the beginning of the year. To drive usage, OpenAI is deploying engineers to consulting firms and business partners to accelerate adoption of these products.

IPO on the horizon. Both OpenAI and Anthropic are taking clear steps toward an IPO, which could in fact happen this year. That makes gaining share in the corporate market—the one that really pays, signs contracts, and justifies valuations—absolutely essential for those IPOs to succeed. The initial share price and real valuation of these companies will depend on how well positioned they are, and OpenAI wants to recover the ground it has lost in the enterprise market. In the meeting with staff, Simo explained that "we are acting as if this were a code red."

The paradox of being the pioneer. OpenAI unleashed the AI fever with the launch of ChatGPT in November 2022 and made generative AI an almost everyday phenomenon. However, being first usually comes with a trap: it forces you to explore and diversify to maintain your position as the reference, and that is very expensive. Anthropic came along later, saw where the real money was, and focused specifically on that sector. The student has surpassed the teacher, it seems, and OpenAI wants to correct course.

What will happen to all that product?
It remains to be seen how this strategy affects OpenAI's entire product catalog. If it focuses on developers and enterprise solutions, what will happen to its image generator, Sora, or Atlas? The structural tension between being a "research laboratory" and being a "product company" can pose a challenge for a company that naturally never stopped exploring new ideas for applying AI.

Image | TechCrunch | Wikimedia Commons

In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's just what you're building

This robot vacuum cleaner has a self-emptying base, 180 minutes of battery life and LiDAR navigation. All for less than 85 euros

Keeping the house clean is as necessary as it is a real chore. For that reason, any technological help is always welcome, and few things are more useful than a robot vacuum cleaner. Do you want one that won't cost you a fortune? Then keep an eye on this iLife A30 Pro: on AliExpress it costs 84.03 euros if we use the coupon 'ESA13'. At this price, it's hard to find anything better.

ILIFE A30 Pro Vacuum Cleaner and Mop, Self-Emptying Station for 60 Days, 5000Pa Suction, LiDAR Navigation, 2.4G WiFi/App

The price could vary. We earn commission from these links.

A robot vacuum cleaner that is surprising for its price. As we have been telling you since yesterday, the AliExpress Anniversary sale is back with a vengeance this year. There are really strong offers, and this iLife robot vacuum cleaner is a perfect example of them. If we look at stores like Amazon or Leroy Merlin, the price of this model is around 200 euros. That makes this AliExpress offer a real treat, and even more so once we look at what the iLife A30 Pro offers.

The first thing is the suction power: 5,000 Pa. In practice, that is more than enough to pick up dust, crumbs and even pet hair, the things that most often cover the floors and carpets of our homes. Plus, it also mops.

Its navigation is also worth pausing on. It has a LiDAR system, something not usually found in robot vacuum cleaners in this price range, which is already a point in its favor. Thanks to it, the robot moves smoothly between rooms and handles the obstacles it encounters, avoiding those awkward bumps that devices of this type sometimes produce.

Beyond all of the above, perhaps one of its greatest assets is its self-emptying base. It empties the robot's dustbin and, since the dirt ends up in a 2.5-liter bag, we don't have to do anything for about six or seven weeks.
And it has plenty of battery life, offering up to 180 minutes in its gentle mode. That figure drops if we use more suction power, of course. This iLife A30 Pro does not aim to be the best robot vacuum cleaner on the market, but it is one of the best options if we want to spend as little as possible. For less than 90 euros, it is very hard to find anything better. And in fact, it is doing very well on AliExpress: it has more than 10,000 sales and an almost perfect average rating.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of unavailability, offers may vary.

Images | iLife

In Xataka | Best robot vacuum cleaners for the money. Which one to buy based on use, and six recommended models

In Xataka | Best cordless stick vacuum cleaners. Which one to buy, and seven recommended models from 139 euros

The question is not whether 2027 will be the warmest year on record, the question is by how much. And the answer lies in the El Niño that is approaching

Not one, not two, but three independent forecasts converge on the same idea: 2027 is shaping up to be a record year. And in recent days events have moved very quickly: El Niño is at the door and, from what we know so far, it may be a historic event. That means next year has a very good chance of becoming the warmest ever recorded, surpassing 2024 and exceeding the 1.5 degrees of the Paris Agreement.

But let's start with El Niño. ENSO (the acronym for El Niño–Southern Oscillation) is a cyclical, though irregular, climate phenomenon with large effects on the global climate. Very large, in fact. If we exclude the seasons, it is the most important source of year-to-year climate variability on the entire planet. During the warm phase (the one that is now approaching), the weakening of the trade winds that cool the surface of the equatorial Pacific causes temperatures to skyrocket. And so, through various atmospheric teleconnections, it disrupts weather systems across the Earth. The effects on precipitation vary by region ("drier than normal conditions in certain parts of the world, while in others it causes more precipitation. Some countries have to deal with major droughts and others with torrential rains," says AEMET); but no one escapes the temperature effect.

What is happening with it? Between the forecasts for December 2025 and those for March 2026, everything has accelerated radically. Although La Niña is officially still with us, the chances that we will end up with a strong or very strong El Niño keep growing. Above all since researchers detected a massive surface warming of the equatorial Pacific, driven by Kelvin waves, which has already irreversibly eroded the pocket of cold water we associate with La Niña. This is the most interesting part because, as Severe Weather Europe and Climate Impact Company point out, the parallels with other "super El Niño" events are more than evident.

What does all this mean?
That, barring a miracle, temperatures are going to skyrocket past the red lines we had set for ourselves. Each of the last three years has surpassed 1.4 degrees above the pre-industrial period: 2026 will continue along the same lines, but 2027 has everything "in its favor" to settle above 1.5 degrees. Translated into plain language, that means 'problems'.

Problems? ENSO is a highly variable phenomenon and, broadly speaking, each phase is unpredictable in intensity, duration, timing and its various interactions. The effects, however, are sharp. On the one hand, El Niño causes flooding in California, Central America, northern Peru, Ecuador and large areas of northern and southeastern South America, and torrential rains in the central-eastern Pacific islands and central Asia. On the other, it is synonymous with droughts in southern Africa, the Sahel, Southeast Asia and, apparently, the Valley of Mexico. In Spain, in addition to higher temperatures, it usually coincides with a small increase in rainfall.

Could this rapid warming be indicative of something else? Beyond all this, there is something that worries researchers: that this sudden warming is a symptom of transitions between the three ENSO phases that are faster than they have been until now. Nothing is clear, obviously, but the mere possibility makes experts around the world nervous.

Image | Climate Reanalyzer

In Xataka | We don't know anything about El Niño at this time of year. That's a meteorological mystery... and good news

The metro has been splitting Rivas in two for decades. The city council has a plan to cover it over, and it has already presented it to Madrid

The Rivas Vaciamadrid City Council has registered with the General Directorate of Infrastructure of the Community of Madrid its project to cover 2.5 kilometers of Metro Line 9B. The project aims to transform part of the town's urban layout, and the period for issuing its technical report has already opened. Here are all the details.

What exactly is this about? As the town hall itself explains, the project consists of burying or covering the stretch of track that runs above ground through Rivas Vaciamadrid between the Cerro del Telégrafo sports center and the Rivas Futura station. The stretch is 2.5 kilometers long and 30 meters wide and, if covered, would stop acting as a physical barrier dividing the municipality in two. On the surface, the plan is to extend the Linear Park, creating a corridor of green spaces for public use. The project also includes the construction of a fourth Metro station in Rivas, located on José Saramago street.

Deadlines. The City Council held a technical meeting on February 27 with the General Directorate of Infrastructure, where it presented the proposal. A week later, on March 4, it was officially registered, and the Community of Madrid now has three months to decide whether to move forward at a technical level. As reported by El Diario, the council has expressly asked the regional administration for "agility."

The political background. The proposed fourth station is at the center of it. According to Diario de Rivas, the Community of Madrid has already pointed out on more than one occasion that this infrastructure "is not justified on a technical level." The City Council, for its part, insists that the project "is the result of months of rigorous and reliable technical work and meets the necessary requirements to move forward toward its execution." The General Directorate of Infrastructure, for now, has limited itself to confirming that the meeting took place.

What the data say.
The City Council backs its push for the project with a survey in which, it says, 78% of Rivas residents recognize the project's importance. The council frames it within its Rivas 2030 Urban Agenda, where it appears as one of the flagship projects to reconfigure the town's urban model.

What happens now? The ball is in the court of the Community of Madrid. Before the end of June, the technical response from the General Directorate of Infrastructure should arrive. That report will determine whether the project can move forward as planned, whether it needs modifications, or whether the proposal (especially the new station) runs into obstacles from the regional administration. The town hall has expressed its confidence that the Community "will facilitate the progress of an action long awaited by the citizens of Rivas," but it seems we will have to wait to find out whether it finally materializes as the city council wants.

Cover image | Google Maps

In Xataka | BYD is already studying entering Formula 1, according to Bloomberg. And it is not a whim, it is a necessary step

its astronauts just harvested them

Imagine for a moment a tomato plant growing hundreds of kilometers above Earth, inside an orbiting space station. The scene might seem like something out of science fiction, but it is already part of the scientific activity aboard China's Tiangong space station. According to China Central Television (CCTV), the Shenzhou-21 mission crew has harvested cherry tomatoes grown in orbit, photographing the ripe fruits before removing and storing them as part of the experiment. Behind this striking image lies a very specific objective: to check whether humans will be able to produce food in space during long missions, something space agencies consider important for future expeditions beyond Earth orbit.

The system. The tomatoes were grown in an aeroponic growing system designed to operate in microgravity, a technology that sprays water and nutrients as a mist directly onto the plants' roots. Sina explains that the equipment was sent to Tiangong in July 2025 aboard the Tianzhou-9 cargo ship and is part of a series of experiments aimed at verifying key cultivation technologies in orbit and expanding the range of species that could be grown in space. After more than three months of growth, the plants completed their cycle and produced ripe fruits, which the crew photographed and removed following the experiment's scientific protocol.

The technology behind the "orbital garden". Cultivation in space requires very different solutions from terrestrial agriculture. Instead of soil, the system used on Tiangong keeps the plants' roots suspended and feeds them with a mist of water and nutrients, a technique known as aeroponics. As astronaut Zhang Hongzhang explained, this method increases water-use efficiency, something especially important in the closed environment of a space station.
The device is complemented by an LED lighting system designed to provide the light spectrum necessary for plant development and to improve the system's energy efficiency.

Experiments that have been underway for decades. Growing plants in space is not a new idea. NASA notes that space agencies have been conducting plant experiments in orbit for decades, although for a long time samples were grown only for scientific purposes and sent back to Earth for analysis. In 2015, for example, astronauts on the International Space Station became the first to eat food grown in space: red romaine lettuce produced in the station's vegetable laboratory. Since then, different microgravity cultivation studies have been carried out, including tomato studies such as VEG-05, conducted at the Veggie facility, and XROOTS.

A key element for living longer in space. If humans want to spend months or even years away from Earth, depending exclusively on shipments from our planet is impractical. For this reason, space agencies have for decades been investigating how to integrate plants into life support systems capable of regenerating part of the resources the crew needs. According to the scientific literature cited in Frontiers in Plant Science, crops can provide fresh food, produce oxygen and absorb carbon dioxide within closed environments such as space stations or future bases on the Moon or Mars. In addition, researchers point out another less visible but important benefit: cultivation activities have been shown to have positive effects on astronauts' psychological state during prolonged missions.

The importance of these experiments goes beyond the curiosity aroused by images of plants growing in orbit. Each crop grown on a space station allows data to be collected on how plants react to microgravity, knowledge that is essential for designing more complete life support systems.
Current research seeks to understand whether these crops can be integrated into bioregenerative life support systems capable of producing part of the resources a crew needs. If that objective is confirmed, technologies like those being tested today on Tiangong could become an important tool for sustaining prolonged human missions in space.

Images | CCTV

In Xataka | China has the Moon in its sights: it has now created the first chemical map of the far side

We believed that machines could only beat us at chess or Go, but now they are preparing to beat us at tennis

Kasparov succumbed to Deep Blue, and that showed that machines could finally surpass humans. Then came defeats in other fields (Go, StarCraft), but always with algorithms as the protagonists. Now it is robots that want to surpass us and, after some disappointments and also some amazing previews, they are setting out to conquer a sport that poses an exceptional challenge: tennis.

Watch out, Alcaraz, the robots are coming. Researchers from Tsinghua University and Peking University, among others, have collaborated to develop a robot capable of playing tennis. The project has been named LATENT (Learn Athletic humanoid TEnnis skills from imperfect human Motion daTa), and it is surprising because the principle is very similar to that of developments like AlphaZero: the machine (the robot) practically learns to play by itself. We have already seen similar advances in sports like ping pong, or with kung fu demonstrations, but this milestone has been achieved in a different and striking way.

Imperfect movements. Until now, getting a robot to react at the speed of a tennis ball was an almost insurmountable challenge due to the lack of perfect movement data, which makes these researchers' advances especially striking. Above all because these machines now use "imperfect" information captured from humans to learn how to play.

Mini tennis. Capturing accurate data from a real tennis match is very expensive and complex, due to the size of the court and the subtlety of the players' wrist movements. To solve this, the LATENT team chose to collect "primitive skills" data. That is, the robot was shown basic movements such as the forehand drive, the backhand, and lateral movements. In addition, an area 17 times smaller than a professional court was used, precisely to reduce the complexity of the initial system. The objective: for the robot to develop its own technique from there.

Learn from your mistakes.
The striking thing about this development is that, with that limited data, the robot was capable of making corrections on the fly while moving or hitting the ball. It was able to maintain its body's stability following the style of human movements, but it could also finely adjust the angle of the racket to strike the ball appropriately.

Nothing weird. The researchers also wanted to prevent the robot from starting to "invent" strange movements during its reinforcement training. To do so, they created a technique that forced the AI to explore only human-like movements based on the initial data distribution.

The Unitree G1 already plays tennis. To bring their system into the real world, the researchers installed it on a Unitree G1 robot. This humanoid robot model has 29 degrees of freedom, and a racket was attached using a 3D-printed part. The physical tests were surprising: the G1 was able to return balls thrown at more than 15 m/s (54 km/h), and it was also able to sustain rallies with human players on a real court. The robot could cover a large part of the court and dynamically adapt its posture to the trajectory of the ball.

The beginning of something bigger. These tennis robots are very far from being able to compete with human players—much less with professionals—but they demonstrate that the reinforcement learning techniques applied in games such as chess or Go may also be valid for physical environments with robots. In fact, this advance raises the possibility that robots could learn any physical discipline (sporting or not) from a limited set of basic movements.

In Xataka | And finally a human beat, with much drama, a robot at ping pong
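LATENT's exact reward design is not detailed here, but the general idea of keeping a reinforcement learner close to human motion data is often implemented as a style penalty added to the task reward. A minimal sketch of that idea (the function names and the weighting term `beta` are illustrative assumptions, not taken from the paper):

```python
def shaped_reward(task_reward, action, human_reference_action, beta=0.5):
    """Combine the task reward (e.g. returning the ball) with a penalty
    for deviating from a human motion reference, which discourages the
    policy from 'inventing' strange, non-human movements."""
    deviation = sum((a - h) ** 2 for a, h in zip(action, human_reference_action))
    return task_reward - beta * deviation
```

An action matching the human reference keeps the full task reward, while a physically effective but unnatural action is penalized in proportion to how far it strays, so exploration stays near the human data distribution.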

TSMC is running out of capacity on the N3 node. And that’s going to affect everything you buy.

There is a bottleneck conditioning everything in the technology industry, and it has a very specific name: TSMC's N3 node. AI has devoured 3nm chip manufacturing capacity faster than anyone anticipated, and right now there aren't enough wafers to go around.

Why it matters. The N3 node is not just the process where the most advanced AI chips are made. It is also where iPhones, Macs, iPads, Qualcomm Snapdragons and Intel laptop processors live. When that capacity is absorbed by data center demand, the impact does not stay in the server room: it reaches phones, computers and any device that depends on the latest-generation chips. In other words, it reaches all of us.

The context. For years, N3 was almost exclusively the territory of consumer electronics. Apple was its first big customer, with the M3, M4 and M5 chips for the Mac and the A17, A18 and A19 for the iPhone. Qualcomm uses it in its Snapdragon 8 Elite; MediaTek, in its most advanced Dimensity chips. That balance, in which everyone was reasonably happy, has been blown up in 2026. According to analysis by SemiAnalysis, this year AI accelerators are going to absorb about 60% of all of TSMC's N3 production. In 2027, that figure could reach 86%, leaving mobile and PC manufacturers with hardly any access to the node.

Between the lines. What has happened is a confluence that no one managed in time. TSMC was slow to expand its capacity: although the big AI investment cycle began in late 2022, with the bombshell arrival of ChatGPT, TSMC's capital spending did not surpass its previous all-time peak until 2025. By then, demand had already caught up. The result is that TSMC today acts as an involuntary arbiter, deciding who can build what and when. NVIDIA secured N3P wafers for its new Rubin architecture before anyone else, displacing other clients.
Google and Broadcom got to N3 even before NVIDIA, with the TPU v7 already in production during 2025 and a big increase in volume this year. AMD, AWS with its Trainium3, and Meta with its MTIA also compete for the same node. And Apple, Qualcomm and Intel are, in this new distribution, the ones left waiting in line.

The big question. Can anyone stand up to TSMC? In the short term, the answer is no. Intel Foundry has the political backing of the Trump administration and could capture orders that score it some points. Samsung has landed some big contracts (including Tesla chips and, according to SemiAnalysis, an entry into NVIDIA's supply chain), but its technology remains behind. Foundry diversification is more a strategic desire than a real alternative, at least for now.

Yes, but. There is one nuance worth remembering: the N3 shortage is accelerating the transition to the N2 node, the next step on TSMC's roadmap. Some mobile manufacturers that planned to stay on N3 are moving ahead of schedule, not by technical choice or because the timing is the most logical, but because they have no other option. The shortage is not only redistributing the present; it is also rewriting the product calendars of half the sector.

In Xataka | Chinese memory manufacturers are no longer secondary players: they are the lifeline of the consumer market

Featured image | Igor Shalyminov

The games of 2026 aim to be graphical marvels. NVIDIA is clear that the solution lies in AI

GDC, the Game Developers Conference, is a very special video game event. It is not focused on announcing new titles, but on presentations and roundtable talks among the people who make video games. Lovers of the industry's most technical topics consider it unmissable, and one company that never skips an edition is NVIDIA. At GDC 2026 it has arrived with all its muscle and a clear idea: the future of gaming runs through artificial intelligence.

DLSS 4.5, the umbrella. Leaving aside the current state of the PC market, squeezed by the demands of artificial intelligence and the unprecedented crisis we find ourselves in, AI applied to video games is something NVIDIA has been pushing for several generations. A lot has happened since the RTX 2000 series and the arrival of real-time ray tracing, along with a solution to keep performance sustainable: DLSS. Deep Learning Super Sampling is an upscaling tool that lets the GPU render the game at a low resolution and then scale it up to the native resolution of our monitor. This improves frame rates while maintaining image quality. Over the generations, DLSS has evolved into a complete neural rendering suite spanning several technologies. It is no longer just deep learning upscaling, but a whole series of techniques to improve both image and performance. For its main job, DLSS 4.5 brings greater understanding of the scene, improving both image quality and performance at higher resolutions. But it has more things up its sleeve.

Frame Generation. One of them, perhaps the most notable, is the enhanced frame generation mode. If in the previous generation DLSS could multiply the frames per second by up to four (through deep learning, additional frames were "invented" alongside each native frame delivered by the GPU), with DLSS 4.5 the figure rises to 6x.
This is crucial for maintaining fluidity in games with extreme graphical loads if we want to play at 4K. At 1,440p the GPU's raw power is usually more than enough, but to play at 4K with all the current effects enabled, frame generation seems key if we want to take advantage of monitors' high refresh rates. According to NVIDIA's data, the jump from 4x to 6x increases performance in path-traced titles at 4K by up to 35% on RTX 50 GPUs. It uses Reflex, also an NVIDIA technology, to keep latency minimal, and the scenario is a curious one: we can be playing a game in which most frames are reconstructed rather than native, without noticing the latency.

Multi Frame Gen, the "magic". Within that frame multiplier there is a very interesting technology: DLSS 4.5 Dynamic Multi Frame Gen. Its name is fairly self-explanatory. Basically, it is an algorithm that picks the best frame multiplier at each moment depending on the image, the performance of the GPU, and even whether vertical sync is enabled. It switches automatically, all the time, between 2x and 6x (passing through intermediate multipliers), with the goal of always maintaining the highest possible frame rate without wasting resources. That is to say: if we have a 120 Hz monitor, the GPU changes the multiplier depending on the situation to always try to guarantee those 120 FPS, but without squandering resources. If we are in a phase of the game with a low graphical load (an interior, for example), a 4x multiplier may be enough. If we step outside, we may need that 6x push, and the system switches automatically. The next time we go inside, it drops back to 4x, and so on, constantly. The explanation is simple: the aim is to make the experience as consistent as possible, without generating frames needlessly, so that native frames are prioritized over AI-generated ones.
"New" word: path tracing. All these technologies exist to power games that will soon start consuming more and more of the PC's native resources. Because if ray tracing is already demanding, we are going to have to get used to a new term: path tracing. It is not actually new, but it is essentially a more complete form of ray tracing that tries to simulate even more realistically how light interacts with the game's geometry. Ray tracing can be applied to everything (shadows, reflections, global illumination) or to each separately, but path tracing is a unified solution. In short: it is like applying all possible ray tracing at the same time. This consumes a lot of resources, as we can see in games like 'Cyberpunk 2077' or 'Resident Evil Requiem', and it is the reason for DLSS 4.5's rendering techniques and 6x frame generation.

The games are ready. In the end, it is about AI making possible performance that the GPU, on its own, might not achieve. On top-of-the-range graphics cards like the RTX 5080 or RTX 5090 we may prefer to lean on native rendering, but with others like the 5070 or the 5060, these AI "helpers" let us stretch a game's visual quality further while maintaining good performance. And all these tools together will be necessary considering what is coming. We have already mentioned some games, but over the next few months others will arrive, such as '007 First Light', 'Resonant Control', 'Star Wars Galactic Racer' and 'Directive 8020', which promise to be visual wonders and will integrate these technologies.

In Xataka | Nintendo has not been just a video game company for thirty years. But it is now that it is showing it, with dividends

Behind them is North Korea

A European company publishes an offer for a remote tech position and, after several filters, hires a candidate who fits the profile perfectly. The resume is solid, the interviews go off without a hitch and, on paper, the new hire integrates into the team like anyone else. But there is a possibility that until recently many companies did not even consider: that this worker is not who he says he is. Cybersecurity experts maintain that this phenomenon comes almost exclusively from North Korea, a practice documented in the United States whose first signs are now also beginning to appear in Europe.

The problem of fake employees in Europe. To understand why it is now starting to cause concern in this part of the world, it is worth first looking at what has already happened in the United States. There, authorities and cybersecurity specialists have been investigating a very specific pattern for years: supposed technology professionals who were actually part of networks linked to Pyongyang. According to data from the Department of Justice, these operations managed to infiltrate more than 300 companies between 2020 and 2024, generating at least $6.8 million in income for North Korea.

How the deception works. The process usually begins with building a convincing professional identity. According to the Financial Times, operatives can take over inactive LinkedIn accounts, or even pay their owners to use them, and from there create apparently legitimate profiles with falsified resumes and recommendations generated by other members of the network. Language models also help them create culturally appropriate names, plausible email addresses and messages that smooth over the linguistic or cultural cues that could previously give them away.
In the interview phase, technology plays an increasingly important role: these networks can resort to digital masks, avatars or video filters, and when companies tighten controls, they even go so far as to pay real intermediaries to appear on video calls in their place.

The success of this scheme is not explained solely by the technological tools the fake candidates use. It also stems from a structural weakness inside many organizations. According to cybersecurity experts cited by the Financial Times, the hiring process has rarely been treated as a corporate security front. For years it has been managed mainly by human resources, with controls designed to evaluate talent, not to detect infiltration operations. That approach has left a vulnerability these networks are exploiting.

Once inside the company. Getting through the hiring process is only the first phase of the operation. Some of these schemes include intercepting the laptops that companies send to their new remote employees. After gaining access to the equipment, operatives can connect from other locations and carry out their work using tools based on language models and chatbots. This setup allows them to complete the tasks the company assigns and, in some cases, to hold several technology jobs at the same time. Moreover, the risk is not limited to collecting salaries; some operatives also steal information or infect systems with malware.

For threat analysts, the first signs of expansion toward Europe are already visible. According to reporting by the Financial Times, researchers have identified indications that networks linked to North Korea are trying to reproduce in the region the same model previously observed in the United States.
One of the elements that has attracted attention is the appearance in the United Kingdom of so-called laptop farms: spaces where remotely connected laptops are concentrated so that operatives can work as if they were physically in the country. This type of infrastructure suggests that the scheme may also be starting to be replicated in Europe.

Images | Xataka with Nano Banana

In Xataka | We knew that North Korea has been infiltrating workers into Western companies for years. Now we know how they do it

We believed that AI was killing jobs in the tech industry. It is actually changing the rules of the game: Crossover 1×41

It is possible that in the future AI will take our jobs, but for the moment it is taking very few of them. That is what a recent Anthropic study on the impact of AI on the labor market concluded, and it is the perfect starting point for the debate at hand in Crossover 1×41. This is a special edition, because our guest is Jordi Arrufí of Talent Arena. That event, held within the framework of the Mobile World Congress in Barcelona, is aimed at future developers as well as senior profiles, and it gave us the opportunity to talk about how AI is changing the rules of the game for professionals in the sector.

To begin, we must dispel some myths. At least for now, because although there was a time when AI was supposedly going to replace programmers, what is actually being observed, according to Arrufí, is that the demand for technology talent keeps growing. In fact, the expectation is that the impact of AI will create jobs we cannot even imagine yet. We also could not have imagined, at the dawn of the Internet, that there would be frontend and backend developers or web designers; the same applies here.

Many professionals may fear that future, and here the recommendation for being prepared is that they combine their technical skills (‘hard skills’) with human capabilities (‘soft skills’) such as critical thinking, leadership or communication. The frenetic pace of AI also makes continuous learning and adaptability key in these changing times. Vibe coding has changed the paradigm and has opened this field even to users without basic programming knowledge.

There is also something striking here: a real opportunity for current and future professionals, because if anything is clearly taking off it is the interest in technological sovereignty. Europe is seeking to recover ground against the US and China through investments in chips, for example.
Public funding is especially critical for retaining talent and preventing professionals from emigrating in search of higher wages.

We also had the opportunity to talk about another area of great promise: robotics. An imminent adoption of humanoid robots is expected in industry and in logistics processes. Domestic robots will no doubt take longer, but what seems clear is that by 2035 the world will be shaped by AI agents and massive advances in fields such as biotechnology. This is not just about AI: it is about talent, money and who adapts faster and more accurately.

On YouTube | Crossover

In Xataka | A startup from Malaga is the most used European AI app in the world according to Andreessen Horowitz. It’s called Freepik
