More job offers, but it is more difficult to find work

The technology sector has never had so many open vacancies, and yet finding a job there has become harder than ever. This apparent contradiction is not just a feeling: the data confirms it, and it has everything to do with how AI is redrawing the map of who does and does not have a place in technology companies. A detailed analysis by Lenny Rachitsky, an expert on the tech labor market and host of the popular Lenny's Podcast, offers a picture that invites reflection. His figures are the most optimistic recorded in the four editions of his report on the state of employment in the tech product sector, but the reality of many professionals looking for a new job contradicts that optimism on paper.

The numbers are deceiving (or at least, they don't tell the whole story). According to data collected by Rachitsky through TrueUp, a platform that tracks job offers at more than 9,000 technology companies worldwide, there are more than 7,300 open vacancies for Product Manager profiles globally, 75% above the figure recorded at the beginning of 2023 and almost 20% more than at the start of this year. In engineering, the figure is even more striking, with more than 67,000 active offers worldwide and 26,000 in the US alone. However, more vacancies do not automatically make it easier to find a job. Rachitsky himself acknowledges in his report that many people are having a hard time searching, and that the overall numbers being good does not change that. The labor market is growing, yes, but not at the same rate for everyone, nor for every profile.

The boom in roles linked to AI. The great catalyst for this growth is AI. Jobs related to its development and implementation are skyrocketing compared to other technology roles, something Rachitsky describes as a hockey-stick growth curve.
This demand for software engineering profiles comes both from AI-native companies (such as OpenAI, Anthropic or Cursor) and from non-technology companies, which are looking for product managers who specialize in integrating these technologies into their processes. A report from the London School of Economics confirms that more than 76% of product managers expect to expand their investment in AI in 2026, which has triggered demand for managers capable of translating the capabilities of AI models into concrete products. The profile companies are looking for, however, is very specific: not just any candidate with AI on their resume will do, but professionals experienced in implementation and able to make decisions in environments where AI is already part of the development process.

Side B: junior profiles are left out. This is where the other side of the paradox comes in. Anthropic's report 'Labor market impacts of AI: A new measure and early evidence' reveals that overall unemployment among the workers most exposed to AI has not increased significantly since the arrival of ChatGPT, but there is a worrying sign in the hiring data for the youngest. Specifically, the study finds that, since 2024, workers between 22 and 25 years old have become increasingly less likely to be hired in the jobs most exposed to automation. The incorporation rate for these positions has fallen by approximately half a percentage point, reducing by up to 14% the probability that a young worker finds a job in those occupations, relative to levels prior to the launch of ChatGPT. For workers over 25, however, that same drop is not observed.

Design, the great forgotten of the recovery. There is another profile that the recovery of tech employment seems to have left aside: design.
While product and engineering roles have been growing for two years, vacancies for designers have practically stagnated since the beginning of 2023, at around 5,700 global offers compared to more than 7,300 for product. The analysis firm Humbl Design confirms in its January 2026 report that design roles oriented toward routine execution will barely grow between 2% and 3% through 2034, while profiles specialized in strategy and problem solving are projected to increase by 16% over the same period. AI has a lot to do with this stagnation. Its ability to accelerate the work of engineers has reduced dependence on traditional design processes, especially in the prototyping and visual-variant generation phases. In other words, AI has assumed that role, which is now executed from the development departments, so companies no longer need as many designers.

In Xataka | "The world is in danger": Anthropic's security manager leaves the company to write poetry

Image | Unsplash (Mimi Thian)

This is how the most formidable engineering project in urban history was born

The London Underground is one of the most famous public transportation networks in the world. With more than 543 trains, 408 kilometers of track and 274 stations, this treasured piece of the United Kingdom's capital can handle up to five million passengers a day. Of course, the service did not become what it is today overnight. The London Underground has a fascinating history, one that began more than 160 years ago with a project that was completely innovative for its time: the construction of an underground railway.

Let's go back in time. In the 1830s, London was the largest city in the world. It was a rapidly growing global economic epicenter that needed to decongest its streets, so the idea arose of moving trains underground. The problem was that nothing similar had ever been attempted. After many years as a mere proposal on paper, a test tunnel was built in 1855 at Kibblesworth. After this step, which turned out to be a success, work began on the world's first underground railway, a line between Paddington (then Bishop's Road) and Farringdon that entered service on January 10, 1863. The locomotives ran on steam and the carriages were lit with gas. It was basically like installing a traditional railway in an enclosed space, which meant discomfort for passengers, who often had to travel in a polluted, overheated environment. In any case, the metropolis kept growing and there were more and more privately funded transportation initiatives. Thus, in 1868 the first section of the Metropolitan District Railway was inaugurated, a service that ran between South Kensington and Westminster (now part of the District and Circle lines).

Electricity reaches the trains

Both services continued to expand as tunnel construction techniques improved. On December 18, 1890, the City and South London Railway launched the first electric underground railway.
This was a very important advance because it solved some of the main drawbacks of the service. In 1905, electrification came to the District and Circle lines, but the London Underground network still operated as separate systems. This changed after 1906, when the companies digging deep beneath the city began to unify. Through all of this, the name 'Underground' did not yet exist.

Artist's depiction of a Baker Street platform, London, 1906

The companies that had come together for the project proposed different names, including 'Tube', 'Electric' and 'Underground', and the latter won. Thus, in 1908 the name 'Underground' appeared for the first time in the stations, accompanied by the roundel symbol we know today. The technological progress of the London Underground seemed unstoppable. That same year, automatic ticket-issuing machines arrived, and in 1911 the first escalators were installed. In 1929, manually operated doors began to disappear, replaced by pneumatic systems. Until this point, the service was operated by the Underground Electric Railways Company of London (UERL). In 1933, however, the underground services merged with the railroads and bus services under the London Transport brand, overseen by the London Passenger Transport Board. That same year Harry Beck's map appeared, an element designed to guide users. The system had grown so large that some stations were just meters apart, while others were kilometers apart. It is a piece of cartography that was received with skepticism but ended up triumphing.

Aldwych tube station, in 1940

For the first time, decisions about London's public transport services were perfectly coordinated. This made it possible to improve the service and outline an ambitious improvement plan. However, the outbreak of World War II in 1939 meant that the plan could not be completed as originally envisioned.
The underground was converted into a huge air-raid shelter between September 1940 and May 1945. Some stations were also used during the war as warehouses to keep valuable historical items safe, for example pieces from the British Museum. After the war, in 1948, the London Passenger Transport Board took on a public role: it was nationalized and became the London Transport Executive, later renamed the London Transport Board and operating under the orbit of the Ministry of Transport. The system also suffered several tragedies. In 1975, a southbound train failed to stop at the final terminus and crashed into the end of the tunnel; 43 people died and 74 were injured. In 1987, a fire claimed 31 lives at King's Cross station. Later, in 2005, an attack on the London transport system claimed the lives of 52 people. Contactless cards called Oyster were introduced on the London Underground in 2003, and by 2014 you could already pay directly with contactless bank cards. By 2016, some lines offered overnight service on weekends. Today the service is run by an organization called Transport for London (TfL), which comprehensively manages the city's transportation strategy.

Images | Joel de Vriend | Nelson Ndongala | Tomas Anton Escobar | Tom Parsons | Will H McMahan | The Graphic (Wikimedia Commons) | John Jackson

In Xataka | The unfinished dream of the Roman Empire: a 125-kilometer train to link Europe and Asia over the Bosphorus

In Xataka | France has been torpedoing the possibility of the AVE reaching Paris for years: Renfe's plan now relies on regional trains

In Xataka | In 2007, Japan made a cat the station master of a dying train line. Today that line is saved

Paying more for a very fast NVMe SSD is a waste of money if you only store PDFs, but it is the only option if you are also going to work from it

Like me, you have probably at some point faced buying a new storage drive, internal or external, for your desktop PC or laptop. Until a few years ago this was fairly simple: either you chose a 5,400 rpm HDD (revolutions per minute), or you chose a 7,200 rpm one. End of story. Moving on. But since SSDs came onto the scene, the purchasing (and usage) possibilities have changed a lot, and choosing one type or another is no longer so simple. Today, considering the price differences between HDDs (the "old" mechanical disks) and SSDs (the "modern" solid-state drives), the choice is clear: SSDs win by a landslide, offering large capacities and much, much higher speeds. Admittedly, the current context of AI-driven price hikes changes the picture a little, and whatever purchase we make now will entail a greater outlay. But this shouldn't last forever and, under normal conditions, SSDs remain the best value-for-money option for general use.

So you already have one thing clear: to expand capacity, in general terms, the ideal in 2026 is to go for an SSD. However, the choice is not so simple, because different technologies and different models enter the SSD field, each with its own advantages and disadvantages. All of them, mind you, are valid for any use you plan to give them. But they do not all cost the same and, depending on what you need your new drive for, smart purchasing will tip the balance one way or the other. And your wallet, of course, will thank you for choosing carefully. In other words, to put names to it: in a scenario where you need more space for your PC or laptop and have to go through the checkout to expand it with an SSD, you will have to choose between an NVMe SSD or a SATA SSD (the main types of SSD generally sold). The first, more expensive and faster.
The second, cheaper and slower. And each one, in its proper context, shines in its own right. Next we will see how they differ and when each is the better purchase compared to its rival, depending on the context; and thus pay more if the situation requires it, or save as much as possible if you are not going to take advantage of the full potential.

SATA SSD: not as fast, but cheaper

When SSDs burst onto the scene, they did so in a format we know as SATA: drives of various sizes (although noticeably more compact than mechanical HDDs) that are still commonly sold in 2.5-inch models. If you have a laptop or desktop PC from some years back, it probably contains one of these. These SSDs were, at the time, night and day compared to mechanical HDDs. What used to take half an hour of waiting was suddenly done in minutes. And without noise, too. The "problem" is that today, with much more modern and faster drives (spoiler: NVMe), this type of SSD has been relegated to pure storage rather than serving as a device for daily work. That is to say: what we once stored on HDDs, we now store on these SSDs. A digital storage room that, in any case, is much faster and makes it easier (and quicker) to move large amounts of data and copy and paste files. In addition, a SATA SSD is probably the only option for somewhat "old" laptops: today practically all models come with an M.2 connector (where NVMe drives are installed), but if your laptop is a few years old (around 2018 or earlier) it probably lacks that connector, and the 2.5-inch SATA SSD is the one you will have to use. If you are coming from a mechanical HDD, the change will be spectacular. Does this mean they are a bad choice? Not at all; they are still great in 2026... but especially for what we said: storing.
Because if what you need is a "hard drive" on which to install the operating system, applications and games, or on which to work intensively on tasks that require constant reading and writing of data (such as video editing), then you will be limited. Which brings us to the next model: the NVMe SSD.

NVMe SSDs: faster and more expensive

While SATA SSDs are somewhat larger and slower (but cheaper), NVMe SSDs are a rocket. The quickest and most direct way to describe them is: speed, speed, speed. If the former is a one-lane national road, the latter is a highway with eight lanes in each direction. This means that if only the occasional car (the odd file, such as a PDF) is going to travel these "roads", SATA is enough for you; if you need several heavy trucks moving at the same time (video editing, for example, with thousands of MB of data moving at full speed), then that national road will collapse and there is no choice but to take the highway. NVMe SSDs also stand out in design: they are compact, stylish and very small. They are the inseparable companion of any current desktop or laptop PC, but also of video game consoles, offering better performance in all types of tasks while taking up less space (something vital, for example, in the case of consoles). In fact, this is the type of SSD that the PlayStation 5, the Steam Deck and others carry in the M.2 connectors they incorporate. A connector that, by the way, has been present on practically every desktop and laptop motherboard for a few years now. This type of SSD is more expensive than its SATA relatives, but that extra financial effort is worth it if, in addition to storing data, you plan to work on it.
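The buying advice above boils down to a simple decision rule. Here is a minimal sketch in Python, where the function name and the flags are my own illustrative shorthand, not any official guideline:

```python
# Illustrative decision rule summarizing the article's advice.
# Assumption: "boot_or_heavy_io" covers installing the OS, apps and games,
# and workloads like video editing; laptops from ~2018 or earlier often
# lack the M.2 slot that NVMe drives require.

def pick_ssd(boot_or_heavy_io: bool, has_m2_slot: bool) -> str:
    """Return the SSD type this article would recommend."""
    if not has_m2_slot:
        return "SATA"   # 2.5" SATA is the only fit for older laptops
    if boot_or_heavy_io:
        return "NVMe"   # pay extra for speed you will actually use
    return "SATA"       # pure storage: cheaper per GB is enough

print(pick_ssd(boot_or_heavy_io=False, has_m2_slot=True))  # SATA
print(pick_ssd(boot_or_heavy_io=True, has_m2_slot=True))   # NVMe
```

In short: the only inputs that justify paying the NVMe premium are the slot being available and the workload actually needing the bandwidth.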

Living in one city and working in another

Leaving home at five in the morning to travel 200 kilometers to work, and repeating the same route back, is the daily routine of thousands of Spaniards who live and work not just in different cities but in different autonomous communities. The housing market has turned cities like Madrid or Barcelona into places where living is economically unviable for many working families. The phenomenon already has a name: pendulum commuters. And their number keeps growing.

Housing as the driving force of the exodus. According to data from the Tax Agency's labor market mobility survey, in 2019, 166,000 workers changed autonomous communities or provinces. In 2024 there were 236,848, which the survey presents as an increase of 30%. The reason so many people choose to move between communities every day comes down to one fact: in 2024 alone, 54,500 employees left the province of Madrid and 30,475 did the same from Barcelona. The sociologist Sara Porras, a doctor in Applied Sociology from the Complutense University, explained to El Periódico the reason for this migration out of the big cities: these are "expulsion processes caused by the overheating of housing prices, which have made rents unpayable," she said.

A life of early mornings and packed trains. As El Periódico de España reported, Miguel Ángel García has spent years with one foot in Valladolid and the other in Madrid, where he works in the financial sector. He leaves Campo Grande station at 6:45 and returns at 3:40 p.m. "Distance is not measured in kilometers but in time: it is 170 kilometers, but it takes an hour, just as if I lived in Leganés," he says. In his company there are 55 people who commute daily from Valladolid or Segovia, and they attribute their situation to the flexibility brought by teleworking and hybrid schedules, which have reduced the days of mandatory presence in the office.
The economic key is given by Elena Parreño, a journalist who moved from Barcelona to a town ten minutes from Gerona, who told El Periódico that "before, a round-trip Gerona-Barcelona ticket cost 27 euros; now, with the discounted passes, it is just over eight." Begoña, a 40-year-old civil servant, made the same calculation on the other side of the map: she bought a house in Valladolid (something she describes as "impossible in Madrid") and makes the daily journey to the capital in just over an hour on Avant trains.

How much does it cost to leave, and how much does it cost to stay? The gap between housing prices in large urban centers and nearby provinces explains a good part of the exodus from Madrid or Barcelona toward provinces with more affordable housing. Madrid closed 2025 with an average purchase price of 5,914 euros/m2, while in Valladolid the average was around 2,006 euros/m2. The contrast in Catalonia is just as striking: Barcelona reached 5,144 euros/m2, compared to the 2,667 euros/m2 the province of Gerona recorded on average.

The AVE factor. Another decisive factor in this migration toward territories with more affordable housing is the railway backbone, which makes it possible to connect cities far enough apart to ease real-estate pressure, but not so far apart that covering the distance eats up a good part of the day. At that point, the train becomes the only viable alternative. The Renfe single pass, valid since January 2025, allows unlimited use of Cercanías and medium-distance trains throughout Spain for 60 euros per month (30 for those under 26).
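A back-of-the-envelope calculation shows the scale of this trade-off. The prices and the pass cost come from the figures quoted above; the 80 m2 home size is an assumption for illustration:

```python
# Price gap between Madrid and Valladolid, per the article's 2025 averages.
madrid_eur_m2 = 5914
valladolid_eur_m2 = 2006
home_m2 = 80  # assumed home size, for illustration only

purchase_gap = (madrid_eur_m2 - valladolid_eur_m2) * home_m2
monthly_pass = 60  # Renfe single pass, euros per month

# How many years of train passes the purchase saving would fund.
years_of_passes = purchase_gap / (monthly_pass * 12)

print(purchase_gap)            # 312640 euros saved on the purchase
print(round(years_of_passes))  # 434 years of passes for the same money
```

Under these assumptions, the purchase saving on an average home dwarfs the lifetime cost of commuting by subsidized train, which helps explain why the calculation keeps coming out in favor of moving.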
These savings have driven an increase in train use to reach the big cities: according to data from the latest Railway Observatory in Spain, in 2023 the Gerona-Barcelona line registered a total of 2,436,098 passengers, 44.7% more than the previous year, while the Madrid-Valladolid line reached 2,264,882, an increase of 64% compared to 2022. In 2024 the trend continued upward, and the Madrid-Segovia-Valladolid line alone exceeded 2.7 million annual travelers.

In Xataka | A silent phenomenon is brewing in Madrid: people who go to live in Valladolid and return to work by train

Image | Unsplash (Yunming Wang)

The IOC has a new method to exclude trans athletes from the Olympic Games. The problem is that biology doesn’t work like that.

At the end of March, the International Olympic Committee announced what is undoubtedly one of the most controversial decisions in its recent history: starting with the 2028 Los Angeles Olympic Games, no transgender athlete will be able to compete in the women's category. But beyond the social and political debate it may generate, we must also look at the method chosen to enforce this exclusion: a simple genetic analysis that searches for a single gene. And that method is heavily disputed within science.

Its discoverer. The gene in question, which will be tested in athletes who want to compete in the female category, is SRY, the "Sex-determining Region Y". It was discovered in 1990 by molecular biologist Andrew Sinclair, who showed that its presence is a determining factor in male sexual development. He is, literally, the scientific father of the test the IOC has chosen to fold into its Olympic requirements. The thing is, he himself is against using it for this.

His disagreement. The decision is not exactly news: looking back, World Athletics, the body that governs world athletics, adopted this same test in September 2025 for its competitions. Sinclair himself did not hesitate to publish an opinion piece making clear that the result is not definitive, since the only thing the analysis can say is whether the gene is present or not.

Why. Testing positive for SRY tells us nothing about whether the gene is actually working to form a testicle, whether it stimulates the production of testosterone, or even whether the body expresses the receptors needed for that testosterone to be used. Put another way: knowing that an athlete carries the SRY gene tells you nothing conclusive about her physiology, her hormone levels or, by extension, any supposed competitive advantage from testosterone.
The biology of sexual development is infinitely more complex than the presence or absence of a single genetic marker, which will now decide everything before the IOC.

There is more evidence. This researcher is not the only one who opposes the decision: at the beginning of March, an article signed by 34 academics was published in response to World Athletics' move. They pointed to the same thing: this is a test that reduces everything to a single gene when biology is far more complex, and biological sex is the result of a very intricate interaction of genetics, hormones, receptors, tissues and more. Furthermore, while the IOC argues that the test protects competitive fairness, the academics point out that there is no solid scientific evidence that the presence of the SRY gene is directly related to a greater sporting advantage.

It's not something new. Although this decision now looks like a big scandal in the sports world, a look at the newspaper archive shows that something similar was already being done in the 1990s. Thirty years ago, the IOC required women to verify their sex through chromosomal testing and by screening for the SRY gene. The tests were eventually withdrawn due to technical limitations, the absence of medical evidence and the legal problems they could cause.

A Spanish case. Because of tests like these, the Spanish athlete María José Martínez Patiño was disqualified in 1985 after testing positive in a chromosome test, despite having no physiological advantage over her peers. Her career was practically doomed, but she was able to recover it thanks to a geneticist who documented her case with scientific evidence showing that her condition gave her no advantage over the rest of her competitors.

The debate.
If the basis for requiring genetic testing is to protect competitive fairness, we must ask what science says about the real advantages of transgender athletes. And at this point much less is known than the general public believes. One of the most important studies was conducted in 2015 by a transgender researcher who analyzed the running times of eight athletes before and after their transition. Their times slowed, but their relative performance compared to runners of the same sex remained quite stable.

An IOC study. Published in 2024 and partially financed by the committee itself, it produced results that do not fit the discourse we keep hearing: transgender women showed worse results than cisgender women in lower-body strength and lung function. Logically, that does not mean there cannot be residual advantages in certain sports, something that to this day remains an open question.

And now what? We are facing a dispute over which tools are valid for a genuinely complex problem. Right now, science suggests that the SRY gene test is not the best tool, because it does not give a complete answer: the SRY gene may be present while the body does not respond to testosterone. Research must continue in order to obtain evidence that can guarantee fairness, always with a scientific basis behind it.

Images | Umanoid | Erik van Leeuwen

In Xataka | We have accepted that sport is "medicine" for the body. Now science is discovering its side effects

This is how "turbocharged" T cells work

The immune system is a fascinating biological machine that protects our body against all kinds of threats, such as hateful tumor cells. But these cells can be remarkably clever, with the ability to camouflage themselves from our defenses. This is what happens in prostate cancer, where tumor cells look so much like healthy cells that our defenses barely recognize them, or do so with such weak force that the tumor manages to escape.

What were we doing until now? The medical solution was to crank up the "affinity" of the defensive cells to the maximum so that they would destroy the tumor cells. The problem? Sometimes they overshot and attacked healthy tissue, making the treatment more harmful than beneficial. This is what researchers are now trying to solve with the design of molecular-scale "hooks".

The problem. Digging deeper, T cells, which are part of the immune system, carry receptors on their surface that bind specifically to the foreign proteins of tumor cells in order to identify them, like a nightclub bouncer asking for ID to spot those who cannot enter. In prostate cancer, one of the targets is the PAP protein, which signals the immune system to start attacking; the trouble is that the affinity is not very good, so many tumor cells are simply missed. And increasing the affinity of the receptor causes the opposite effect: the cell becomes hyperactive and develops a "cross-reactivity" that mistakenly attacks healthy tissue and generates a high level of toxicity. The affinity had to be increased, then, but without becoming toxic.

The solution. This is where the brilliance comes in: a new study published in Science that builds on concepts proven by previous teams that tried to achieve this better affinity.
What they saw is that the solution was not to increase the cell's "glue" across the board so that it would stick better, but to design what they have called "catch bonds".

Catch bonds. To understand how they work, imagine a seat belt or a fishing hook. Under normal conditions, the bond is quite loose, but when a strong mechanical force is applied, the bond changes its structure and grips much more tightly. The scientists took a weak natural receptor specific for the PAP protein and introduced very precise mutations into it. The result is a genetically modified receptor that acts like a hook.

Better cells. By introducing these catch bonds, the researchers managed to create "turbocharged" T cells, and in tests these cells bound much better to prostate cancer cells and destroyed them more efficiently. Most importantly, cross-reactivity remained very low, so they are able to ignore healthy tissue and deploy their destructive potential only when the mechanical "hook" engages on the specific target.

In Xataka | Neither cure nor die: why the next great revolution against cancer is to make it chronic

"You would have had the same entitlement working 360 days as working 539"

There is a reality in Spain's unemployment protection system that not many active workers know about, and it directly affects their pocketbooks. The extra time you spend working, within certain margins, does not always translate into more days of benefit when it comes time to apply for unemployment, and the "leftover" days of contributions are not saved for a future benefit. The SEPE itself has stated this explicitly on its official website. One example of how the calculation works: someone who has contributed for 420 days ends up receiving exactly the same unemployment benefit as someone who has contributed for 360 days. Not one day more. This is how a system works that does not grow proportionally but in steps, and understanding it can make a real difference in your work decisions.

How the bracket scale works. The contributory unemployment benefit is not calculated day by day from contributions. Instead, the SEPE applies a scale of brackets set out in article 269 of the General Social Security Law. The mechanism is as follows: each range of contribution days corresponds to a fixed block of benefit days. The first bracket, the legal minimum to access unemployment benefit, starts at 360 days of contributions and extends up to 539 days. Whoever falls within that interval, regardless of whether they have 360, 420 or 539 accumulated days, receives exactly 120 days of benefit, that is, the right to four months of unemployment benefit. To make the jump to the next step and access 180 days (six months), it is necessary to have contributed at least 540 days.

The bracket that "swallows" 179 days of contributions. The complete table published by the SEPE, backed by current regulations, shows how the brackets are distributed along the entire scale. In all cases the principle is the same: within each bracket, it does not matter whether you are at the minimum or at the maximum.
This means that, in the first bracket, a worker with 539 days of contributions receives the same benefit as one with 360. The difference between the two is 179 days of contributions which, for unemployment purposes, generate no additional rights. The system openly recognizes this logic and, according to the example given on the official SEPE website, "when you credit a total of 420 days, the bracket that covers between 360 and 539 days of contributions is applied to you, so you are entitled to 120 days of benefit. You would have had the same right if you had worked 360 days or the maximum of the bracket, in this case."

Contribution period → Benefit duration
From 360 (minimum) to 539 days → 120 days
From 540 to 719 days → 180 days
From 720 to 899 days → 240 days
From 900 to 1,079 days → 300 days
From 1,080 to 1,259 days → 360 days
From 1,260 to 1,439 days → 420 days
From 1,440 to 1,619 days → 480 days
From 1,620 to 1,799 days → 540 days
From 1,800 to 1,979 days → 600 days
From 1,980 to 2,159 days → 660 days
From 2,160 days → 720 days (maximum)

The remaining days are not saved. One of the points that generates the most confusion among workers is what happens with the "excess" contribution days within a bracket. The SEPE's answer is clear and allows no nuance: days beyond the minimum of the bracket you access are neither accumulated nor reserved for a future benefit. This means that, in the example of the worker with 420 days of contributions, the 60 days above the 360-day threshold simply disappear once the unemployment benefit is requested and consumed. As the SEPE explains on its own website: "The remaining days cannot be saved for another benefit."

The "step" rule and its practical implications. Understanding this mechanism lets workers identify which bracket they are in and how many days remain before they jump to the next one.
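The bracket table behaves like a simple step function, which a few lines of Python make explicit. The bracket boundaries are taken from the SEPE table quoted above; the function name is illustrative, not an official API:

```python
# SEPE bracket scale: (minimum contribution days, benefit days granted).
BRACKETS = [
    (360, 120), (540, 180), (720, 240), (900, 300),
    (1080, 360), (1260, 420), (1440, 480), (1620, 540),
    (1800, 600), (1980, 660), (2160, 720),
]

def benefit_days(contribution_days: int) -> int:
    """Return benefit days for a contribution count (0 below the 360-day minimum)."""
    entitled = 0
    for minimum, days in BRACKETS:
        if contribution_days >= minimum:
            entitled = days
    return entitled

# 360, 420 and 539 contribution days all land in the same 120-day bracket:
print(benefit_days(360), benefit_days(420), benefit_days(539))  # 120 120 120
print(benefit_days(540))  # 180: one extra contribution day, two more benefit months
```

The jump from 539 to 540 days illustrates the article's point: a single extra day of contributions at a bracket boundary is worth two months of benefit, while 179 extra days inside a bracket are worth nothing.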
Contributing more days without reaching the threshold of the next step adds nothing in terms of benefit: the extra effort is absorbed by the current bracket, with no return in the form of more benefit time when unemployment is requested. In other words, if someone with 500 days of contributions loses their job, they will be in the same bracket as someone with 360 days, yet they were only 40 days away from the 540-day step, which would have entitled them to two more months of benefit. Knowing this bracket logic allows you to make better-informed decisions about your employment situation and to know which benefits you are entitled to.

In Xataka | The 480-euro SEPE subsidy for people over 52: how to apply online for the aid you keep receiving even if you find a job

Image | Community of Madrid, Unsplash (Spencer Davis)
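The step logic described above can be sketched in a few lines of code. This is only an illustration of how the bracket lookup works, not the SEPE's own implementation; the bracket values are taken from the official table reproduced above, and the function name `benefit_days` is our own:

```python
# Benefit-duration brackets from the SEPE scale (article 269 of the
# General Law of Social Security): (minimum contribution days, benefit days).
# Within a bracket, extra contribution days add nothing -- the scale
# moves in steps, not proportionally.
BRACKETS = [
    (360, 120), (540, 180), (720, 240), (900, 300),
    (1080, 360), (1260, 420), (1440, 480), (1620, 540),
    (1800, 600), (1980, 660), (2160, 720),
]

def benefit_days(contribution_days: int) -> int:
    """Return the benefit days granted for a given number of contribution days."""
    if contribution_days < 360:
        return 0  # below the legal minimum, no contributory benefit
    result = 0
    # Keep the last bracket whose minimum has been reached.
    for minimum, days in BRACKETS:
        if contribution_days >= minimum:
            result = days
    return result

# 360, 420 and 539 contributed days all yield the same 120 benefit days;
# one more day of contributions (540) jumps the step to 180.
print(benefit_days(420))   # 120
print(benefit_days(539))   # 120
print(benefit_days(540))   # 180
```

Note how the 60 "excess" days of the 420-day worker produce no extra benefit: only crossing a bracket's minimum changes the result.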

There are many wireless alternatives to the HDMI cable. The problem is that none of them works as well as the cable itself.

Keeping the tangle of cables in our homes well organized is something we have obsessed over for years. Normally, when we talk about cable management we refer to the workspace, but cables can also be a problem in our living rooms, where some of them, no matter how hard we try to replace them, are still there. One of them is HDMI. Although there are technologies that let us watch content on a television or projector without running a physical cable, the HDMI cable remains the best, and sometimes the only, option.

Alternatives to the HDMI cable. HDMI cables have the drawbacks of any cable: they limit mobility, create visual clutter and, depending on the device we want to connect, we will very likely need adapters. Fortunately, there are technologies that allow us to do without them.

Chromecast and AirPlay.

Google TV Streamer, formerly known as Chromecast.

These are the most popular and best-known options, since they have the backing of large companies such as Google and Apple. More and more televisions integrate both systems, so it is no longer necessary to buy a separate device. The classic Chromecast is technically not a fully wireless solution, but it does let us cast the content we want to watch without having to use a cable specifically to connect the phone to the TV.

Miracast.

Smart View on a Samsung phone. Image: Xataka Home.

One of the solutions that has tried to replace HDMI is Miracast, usually known as screen mirroring, or Smart View on Samsung phones. It is a protocol that works over Wi-Fi Direct: two devices detect each other and one can mirror its screen on the other. This point matters, because it only works in mirroring mode, that is, it clones the screen content; it does not extend it or play a video from an app the way a Chromecast does. With Miracast, if you want to watch a video saved on your phone on the TV, you have to leave the phone on with that same video playing.
The advantage is that it is a cross-platform standard and can send Full HD video with almost no latency. That is, when it works well, because connection problems are quite common.

Wireless HDMI kits. If you cannot (or do not want to) run an HDMI cable to a display or projector, one solution is a wireless kit: a transmitter sends the AV signal wirelessly to a receiver connected to the destination device, such as the TV. There are quite a few options at relatively affordable prices, such as this one from UGREEN for less than 60 euros or, somewhat more expensive, this one from VENTION for 119 euros. The problem with these solutions is mainly interference and, above all, latency. They also have limitations such as a lack of HDR support, and many do not handle 4K video.

HDMI is still necessary. Although cable-free alternatives exist, they are just that: alternatives, not substitutes. Yes, there are proposals that improve on HDMI, such as the GPMI standard developed by an alliance of more than 50 Chinese companies. This interface promises transfers of up to 192 Gbps and supports 8K video, but even if it manages to displace HDMI, it still needs a cable. No wireless alternative matches the performance and stability of a physical HDMI cable, especially in scenarios where latency is key, such as competitive gaming. Whether on console or PC, the cable will always be the preferred option there. It is also the best choice if you want the best possible video and audio quality, for example when connecting a home cinema system, or if you prioritize connection stability. Of course, you have to choose the cable, and the port you plug it into, carefully to get the most out of it.

Image | Xataka

In Xataka | The curse of hotels are TVs that do not let you use the HDMI port. The solution is obvious: hack them

Magnesium will not work miracles

Just scroll for a few minutes on TikTok or Instagram and you will come across a video touting the great benefits of magnesium supplementation, promising better sleep or muscles performing better than ever. This has turned magnesium into something almost essential, but the reality is that it does not work miracles, just like other supplements that had their heyday in the past.

Specific cases. The current scientific evidence is clear and matches what nutrition experts warn: in the healthy population, the absolute priority is diet. Magnesium supplements are not a miracle pill and only make real sense when there are deficiencies or very specific clinical situations. The problem is that the general population falls for the promise of quick benefits from a simple pill. You can see this in medical consultations, where leaving without a prescription for a new medication seems unthinkable, since we do not welcome being prescribed a diet or an exercise routine. In the end, we want a pill that cures absolutely everything and delivers the best possible health.

The effect of magnesium. This is what happens with magnesium, and it is understandable when it is sold to us as something so beneficial. But the question we should ask is: do we really lack that much magnesium? The answer varies, because intake data in Europe reveal that between 25% and 40% of adults are below the average requirement, a particularly relevant problem among women. However, running "just short" on magnesium does not mean you have clinical hypomagnesemia (an abnormally low magnesium level), nor that you need to buy the best supplement on the market. The vast majority of these people could solve the problem with small adjustments at the supermarket: eating more legumes, nuts, seeds, leafy green vegetables and whole grains.

What is supported. The European Food Safety Authority (EFSA) endorses that magnesium reduces fatigue and supports muscle and nerve function.
But if we look at recent clinical trials, its supposedly miraculous effect is more nuanced. For example, magnesium oxide has been used as an osmotic laxative for decades, and that effect has been confirmed in the laboratory. Another effect with strong evidence behind it comes from a meta-analysis published in 2025, which shows that prolonged supplementation over 12 to 16 weeks at doses of 250-450 mg per day improves blood pressure and lipid profile and reduces inflammatory markers.

What is less clear. On the other side of the scale are effects without such strong evidence. For example, one oft-repeated mantra is that supplemental magnesium is ideal for migraines. There is some supporting research, but the data are far from conclusive. Magnesium is also sold as the ultimate natural sedative, but here the data are less spectacular than they seem. The reality is that it helps a little with insomnia and mild anxiety, but the quality of the evidence is limited and the effect, according to trials, is not uniform.

Where it falls short. If you have been sold magnesium to avoid cramps while running or to perform better in the gym, the truth is that it is smoke and mirrors. An in-depth 2024 scientific review of magnesium and exercise concluded that the evidence for performance, recovery and cramps is minimal, and so its widespread use among athletes is not solidly supported. Nor is it the solution for the famous hot flashes of menopause, since trials show that magnesium reduces neither their frequency nor their intensity. It is true that some studies are currently underway with specific formulations to see whether they act on other symptoms.

They are not all the same. The most important thing is always to consult your doctor, who can determine whether there really is a deficiency of this mineral (or others) through a simple blood test.
If a deficiency is confirmed, you have to know how to choose well, with magnesium citrate being one of the favorites because it is the form best absorbed in the intestine.

Be careful not to overdo it. Different institutions draw a very clear red line: no more than 350 mg of magnesium should be consumed per day in supplement form. Crossing that barrier, especially above 400 mg daily, opens the door to the most common adverse effects, such as diarrhea, nausea and abdominal cramps. In addition, magnesium supplements are contraindicated in people with severe kidney failure, as their kidneys cannot filter the excess, which can cause severe muscle weakness and neuromuscular blockade.

In Xataka | There are people obsessed with consuming magnesium as a supplement when the best way to get it is through your diet

The southern tunnel of the buried A-5 is already 80% excavated, and one factor has sped up the work: the ground

Allow me, if you don't mind, to use an expression I have been wanting to use for a long time: there is light at the end of the tunnel. That's right: the burial of the A-5 is moving against the clock to meet its deadlines, and the goal is to open to traffic in November. The southern tunnel of the Extremadura highway has already passed four fifths of its route under Madrid, and there is little left, largely thanks to the technical innovations that have made it possible.

The largest work in Madrid right now. The burial of the A-5 is, today, the largest infrastructure project under construction in the capital. Beneath the streets that connect Madrid with the exit towards Extremadura, two tunnel boring machines are working in parallel to bury one of the city's historic entrances and free up surface space for urban use.

Where each tunnel stands. The work runs through two independent galleries. The southern tunnel, which vehicles entering Madrid will use, has been excavated along approximately 80% of its length. The northern tunnel, the exit, is advancing more slowly and has completed about half the route. Although the asphalt has not yet been laid, the interior of the more advanced gallery already gives a fairly clear idea of what the final infrastructure will look like, according to the technical teams supervising the work, as reported by Telemadrid.

The key to the acceleration: the ground. As that outlet points out, a new construction system applied to the tunnel floor has made it possible to speed up both the excavation and the consolidation of the infrastructure. Thanks to that, and to the effort being put into the work every day, the project has reached 80% without major delays, keeping to schedule.

14 emergency exits. Parallel to the main gallery, the work includes the construction of 14 emergency exits, roughly one every 200 meters.
Each of them is accompanied by technical rooms that will house the systems needed to operate the tunnel, including geothermal installations that will improve its energy efficiency.

The jump to the surface. Starting in September, the works are planned to extend above ground, with urbanization actions in the area around the A-5. The idea in this phase is to definitively integrate the infrastructure at street level, with the aim of reducing surface traffic and turning the area into new public spaces.

November, the date marked on the calendar. With the tunnel boring machines still running, the goal is for vehicles to travel through the new tunnel before the end of the year; November is the date currently handled by those responsible for the work. So we just have to wait a few more months to wrap up one of the heaviest construction projects of recent years in the capital.

Cover image | Madrid City Council

In Xataka | Portugal had to choose where to take its AVE first. And between Madrid and Galicia, it is very clear
