The problem is how you want to do it

Mexico is about to make a profound change to its work week, reducing the legal maximum from the current 48 hours to 40 hours per week. The proposal presented by President Sheinbaum aims to bring the Mexican working day closer to OECD standards; today, Mexico has one of the longest working weeks in the world. The problem is that the path to those 40 hours will not be as simple as the first proposals suggested, and the result is a reform born with many nuances, a calendar that stretches to 2030, and one that is already raising doubts among business leaders and accusations of "simulation" from the opposition.

How the working day will be reduced. The draft ruling being studied for approval by the Senate's Constitutional Points and Legislative Studies commissions establishes a progressive reduction of the working day at a rate of two hours less per year until 2030. That is:

2026 will end with a 48-hour work week.
In 2027 it drops to 46 hours.
In 2028, to 44 hours.
In 2029, to 42 hours.
In 2030 the 40-hour goal takes effect.

This design seeks to give companies room to adapt, so that salaries are not affected and productivity adjusts progressively. Congress will have 90 days to amend the Federal Labor Law and align it with the new 40-hour limit, which includes drafting new rules to regulate overtime and mechanisms to monitor working hours.

A single day of rest.
One of the first changes relative to President Sheinbaum's original proposal is that the obligation to provide two days of rest will not apply. Instead, the current text of Article 123 of the Constitution stands, which states that for every six days of work there must be "at least one day of rest." In practice, this means that even with a 40-hour week, the ruling does not explicitly establish a schedule with Saturdays and Sundays off; it preserves the minimum of a single day of rest with full pay.

Eight hours that will be extra. The modifications the Senate is working with also contemplate that the eight hours cut from the ordinary working day may be covered as overtime. The new framework states that overtime may not exceed 12 hours per week (compared to the current eight) and may be distributed in up to four hours a day for a maximum of four days, versus the three hours a day with a limit of three consecutive days that the current Federal Labor Law allows. In other words, the ordinary working day is reduced, but the limits on overtime are raised. Of course, when this limit is exceeded, each extra hour must be paid at 200% of the usual salary.

Controversy over the hours. This expanded overtime design has triggered criticism from the opposition and some specialists, who argue that the reduction in working hours is distorted if a significant part of those hours ends up returning to the calendar as overtime. The national leader of Movimiento Ciudadano, Jorge Álvarez Máynez, summed it up harshly on his Facebook profile: "Alert: Morena wants to simulate the reduction of the working day," maintaining that the ruling "does not reduce the working day to 40 hours until 2030 and extends overtime to 12 hours per week, so the working day does not actually decrease."

Fewer hours, less rest.
For their part, business associations demand that the new regulations reduce rest breaks, so that the eight hours a day are effective working time. Employers are asking to compact the day, cutting breaks during the shift so it can run uninterrupted. The Government has already positioned itself against this, since its commitment was not to cut rights already acquired by workers.

30 million Mexicans will work fewer hours. It is estimated that with the reduction of the working day, some 30 million Mexican employees will work fewer hours, better reconciling work and family life. Mexico is currently one of the countries with the longest working hours in the world, at about 2,124 hours per year, compared to an average of 1,687 hours in OECD countries; that is, around 26% more.

In Xataka | Airbnb and digital nomads brought dollars to Mexico City: they have also brought the biggest housing crisis in years

Image | Unsplash (Clayton Cardinalli, Denise Jans)
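The phased timetable and the new overtime ceilings described in the article above boil down to simple arithmetic. A minimal sketch (the function name and structure are ours; the figures are those cited in the draft ruling):

```python
# Phased reduction in the Senate draft: two hours less per year,
# starting after 2026, until the 40-hour goal applies in 2030.
def max_weekly_hours(year: int) -> int:
    if year <= 2026:
        return 48
    return max(40, 48 - 2 * (year - 2026))

# New overtime ceilings in the draft: up to 12 extra hours per week
# (at most 4 per day on at most 4 days), versus 8 per week today.
MAX_OVERTIME_PER_WEEK = 12
MAX_OVERTIME_PER_DAY = 4

for year in range(2026, 2031):
    print(year, max_weekly_hours(year))
# 2026 48, 2027 46, 2028 44, 2029 42, 2030 40
```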

Genie 3 is awesome at creating worlds for video games. But the problem with video games was never creating worlds

Genie 3 has been with us since August, and its previous versions since long before, but this weekend its fame exploded because Google showed how it can generate interactive 3D environments from a single written phrase. In seconds, Genie 3 materializes a forest, or a city, or a cave, or whatever you want. And there you can walk, jump, or fly. Technically, it is brilliant. However, there is nothing in it to suggest it is going to bring down video game development. It will make development easier, if anything, but it poses no threat to it, because the bottleneck in video games has never been generating polygons.

The hard part of creating a good game is not building a world in which a character can walk or fly. The hard part is building a world in which you, I, and every other player want to keep walking or flying. That difference between space and experience is what separates a demo like the ones we have seen of Genie 3 (a video to marvel at for a few seconds) from a video game we will dedicate hours to. Or at least a few minutes.

Several video game companies fell around 10% after the Genie 3 announcement, but none as much as Unity, which dropped 20%. It is a sign that some Unity investors do not understand what makes a company like Unity valuable. Unity is not Unity because it renders polygons, but because of the invisible infrastructure it sells: making physics behave identically on every device its games run on, building collision systems that do not fail, maintaining debuggers that explain why your game crashes at frame 47,293. Genie 3 generates impressive landscapes, but it cannot explain why your character is clipping through the ground in that particular corner of the map.

From the outside, the visible part seems to be the bulk of the work in a game: the graphics, the models, the environments. But any developer knows that creating assets is, with heavy quotation marks, the "easy" part.
The hard part is what takes years of accumulated effort: designing enemy encounters that are complex but fair, calibrating progression curves, writing dialogue that does much more than convey information (such as revealing a character) without stopping the action to do it. In short, building complex systems that reinforce the narrative and engage the player, interacting in emergent ways. Genie 3 touches none of that.

There is one limitation that perfectly sums up the distance between what Genie 3 does and what a video game demands: spatial memory. The generated worlds tend to forget themselves, which is why a ladder you saw a while ago is no longer there, and not because someone took it. If you go back, the model will likely regenerate it, perhaps somewhere else, perhaps with different geometry. A video game needs exactly the opposite: a persistent state in which every action has consequences. A tree you cut down has to stay down. Spatial consistency is the foundation of a digital world. And that is not solved by updating the model to make it a little more capable; it is inherent to generative systems, which live in an eternal present, with no real memory of previous frames.

This does not mean Genie 3 is useless. We insist: it is astonishing. But for something else. For rapid prototyping, for turning concept art into something interactive, that kind of scenario. Maybe even for an indie developer to show an investor what their game will look like without settling for a PowerPoint. And that is valuable. It will change dynamics and lower costs. But it is one more tool in the pipeline, not a replacement for any part of it.

Google has solved one problem in the world of video games, but the hardest ones remain: making those worlds matter and making the mechanics satisfying. Making us remember the stories they tell and making players want to progress. Ultimately, the soul of a game. That is hardly designed with a prompt.
That is designed, iterated, and polished over a long time by people who understand intentionality. AI can now create the canvas, but that has never been the hardest part of painting.

Featured image | Google

In Xataka | What's happening with Ubisoft: after canceling six games and adjusting its structure, this is the great French studio's plan
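Returning to the spatial-memory point above: in engine terms, what a generative model lacks is persistent world state, a store of entity identities and properties that outlives whatever is currently on screen. A toy sketch of the idea (the structure and names are illustrative, not any real engine's API):

```python
from dataclasses import dataclass, field

@dataclass
class World:
    # Persistent state: entity id -> properties. It survives the player
    # walking away and coming back, unlike a regenerated frame.
    entities: dict = field(default_factory=dict)

    def cut_tree(self, tree_id: str) -> None:
        self.entities[tree_id] = {"kind": "tree", "standing": False}

    def is_standing(self, tree_id: str) -> bool:
        # Trees we have never touched default to standing;
        # once cut, the change is permanent.
        return self.entities.get(tree_id, {"standing": True})["standing"]

world = World()
world.cut_tree("oak_47")
# ...the player wanders off and returns: the consequence is still there.
print(world.is_standing("oak_47"))  # False
print(world.is_standing("oak_48"))  # True
```

A frame-by-frame generative model has no equivalent of this dictionary: each regenerated view is free to contradict the last one, which is exactly the "eternal present" problem described above.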

The problem remains that any hotel will photocopy it for you.

The National Police has announced in a post on X (Twitter) that a new Public Key Infrastructure (PKI) has been deployed for the DNI, with new cryptographic algorithms: the system moves from RSA-2048 to 384-bit elliptic-curve cryptography.

The change is interesting in itself. RSA is a classic for public-key cryptography, but it has drawbacks: it uses large public keys, operations are slower (especially signing), and it consumes more CPU and bandwidth. 384-bit elliptic-curve cryptography (ECC, usually the P-384 curve) provides a great deal of security with small keys. Signatures are much faster, for example, and consume far less CPU and battery, while offering a very high security level. It is a system with much more headroom for the future, and it seems a suitable fit for the "DNI 4.0" the National Police talks about.

What does this cryptography protect in our DNI? The private keys inside the DNIe chip never leave it, and several types of operations are carried out with them:

Authentication: proving that you are you, which lets you, for example, log in to the Tax Agency.
Electronic signature: when we file official paperwork or sign electronic contracts, the DNI "signs" as if it were our handwritten signature.
Key exchange: reinforcing, for example, the TLS security of protocols when browsing, if we need that additional layer.

This new cryptography mitigates impersonation even if passwords are stolen in certain scenarios (such as someone trying to use the DNI for official procedures), because an attacker would need not just "a photo" of the DNI but the physical card, the PIN, and a way to break ECC-384, which is practically impossible today. So the measure is positive, but that is not the problem. The problem is how the DNI is used in Spain.
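A parenthesis for the curious: the jump from RSA-2048 to ECDSA over P-384 is easy to see with the widely used Python `cryptography` package. This is a generic illustration of the two signature schemes, under our own choice of hashes and padding, not the DNIe chip's actual card profile:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, rsa, padding

message = b"official procedure"

# ECDSA over the P-384 curve: small keys, fast signing.
ec_key = ec.generate_private_key(ec.SECP384R1())
ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA384()))
ec_key.public_key().verify(ec_sig, message, ec.ECDSA(hashes.SHA384()))

# RSA-2048: the outgoing scheme; every signature is a fixed 256 bytes.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
rsa_key.public_key().verify(rsa_sig, message, padding.PKCS1v15(), hashes.SHA256())

# DER-encoded P-384 ECDSA signatures run around 100-104 bytes vs. 256.
print(len(ec_sig), len(rsa_sig))
```

Both `verify` calls raise an exception if the signature does not match, which is the property the DNIe relies on when it "signs" a procedure.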
The weakest link. These measures protect the data on our DNI, but the problem is that the document has become inherently vulnerable because of how it is handled in the real world. The DNI has become a document we hand over with extraordinary ease.

Hotels have been asking for, and photocopying, our DNI at check-in for years. They cannot and should not do so. The AEPD published a note in June 2025 addressing the issue, concluding that "a copy of the identity document should not be requested." The document is also routinely photocopied or scanned in proceedings before a notary, for example. There, the usual argument is that reliable documentation must be kept because of anti-money-laundering law; in fact, the AEPD sanctioned the General Council of Notaries (CGN) for this kind of request in the summer of 2025. However, as cybersecurity expert Román Ramírez explained on X, there is another issue: they "do not want to commit themselves by attesting that the document you show is the real one. It is a way of washing your hands," he explains, because if something happens it is no longer your word against theirs.

In fact, we have long recommended that if you have to send a photo of your DNI for any online procedure, you do it with a watermark. Tools like SafeLayer make the task easier, but in some cases the entities or companies requesting the document object to a watermarked copy, or refuse it outright.

The curious thing is that the law theoretically prevents this kind of mass collection of DNIs in contexts such as administrative procedures. Royal Decree 522/2006, of April 28, eliminated the obligation to present photocopies of the DNI in administrative procedures before the General State Administration and its dependent bodies.
This rule obliges administrations to verify identity data internally, prohibiting them from requiring physical copies unless the citizen expressly objects or specific regulations demand it. It does not apply directly to the hotel or notary examples, but there, too, the AEPD has made it clear that showing the document is enough.

No matter what they tell you, do not send your DNI as-is. We should never simply send photos of the DNI: not by email, not by WhatsApp, not through dubious forms, even if the message seems to come from a real company. If a company asks for your DNI after a supposed data leak or theft, the best thing to do is verify it on our own, checking the company's website or calling a legitimate customer service number to find out what is really happening and whether the request is genuine.

The DNI is "gold" for scammers, because it allows them to:

Open fraudulent accounts.
Request credits or microloans.
Validate identities in online services.
Reinforce scams with messages like "we have your ID, we are from this company or bank."

The document becomes a treasure when other data, such as contact details or a phone number, is added to it, because with that information it is possible to mount far more effective identity-theft attacks that can lead to truly dangerous scams and fraud.

The DNI has another peculiar problem: it is used as a kind of "universal identifier" in Spain. If it leaks once, if someone steals it or a photo of it ends up in the wrong hands, you cannot change it the way you change a password. You can only renew it, and even then the "old" one remains extremely useful for phishing attacks. That is why it is important to limit as much as possible where we upload the DNI, who holds it, and in what format: we should never upload the complete photo unless it is absolutely essential.

In Xataka | How to share your ID online safely to avoid dangers

We have turned sadness into a psychiatric disorder. And that is a problem that is devouring us socially.

When Roland Kuhn discovered the first antidepressant in history, imipramine, the directors of Geigy hesitated to bring it to market: depression was so rare that they did not believe it could become a profitable medicine (Healy, 1999). That was the 1950s, but it sounds like an alternate reality. Today, depression is omnipresent. In Spain alone, antidepressant consumption has grown 200% in the last fifteen years, and that is just the reflection of an unstoppable international trend. How is it possible that, in little more than half a century, depression has become "so common"? Are we confusing normal sadness with a psychiatric disorder, as many experts say? Are we pathologizing everyday life?

I am not going to enter into terminological debates, however interesting and necessary they may be. When we talk about the "invention of mental illness" or the "pathologization of everyday life," we run the risk of minimizing problems as serious as depression, and that is not in question. On the contrary, the idea is to understand it better in order to treat it better. As the neurologist Luis Querol put it, "if we stick to the conventional concept of disease, anyone who has seen a melancholic depressive SUFFER (...) will recognize that it is an illness." That is entirely true, and it is enough for now. Depression is a particularly insidious and destructive disorder. According to the WHO, it is not only the leading global cause of disability: it affects 350 million people and is behind 800,000 deaths each year.

Synopsis of an epidemic. None of this explains, however, why depression has become an epidemic. Above all because it is not a disease we have "just" discovered. Melancholy is one of those psychiatric disorders so old that Hippocrates and classical Greek medicine were already diagnosing it.
Since the 19th century, the European diagnostic tradition separated most mood disorders from deep melancholy, counting the latter among the diseases that end up consuming the person (like senile dementia). By the beginning of the 20th century, psychiatric practice clearly distinguished between endogenous or melancholic depression (which affected between 1 and 2% of patients) and reactive or neurotic depression (much more common), a product of stress, loss, or grief. (Unsplash)

In 1980, in the middle of a deep reputational crisis for psychiatric practice, the DSM-III changed the way we think about depression. It moved from an etiopathogenic model (which asked about the cause of the disease) to a semiological one (which, in its claim to atheoretical neutrality, was based on symptoms). A careless eye might think the change was merely terminological, with "endogenous" simply replaced by "major" and "reactive" by "dysthymia"; in reality, the DSM-III expanded the playing field. Melancholia became one of the five subtypes of major depression and, with that, the underlying depressive disorder went from a prevalence of 2% to a prevalence of up to 17% (Kessler et al., 2005).

In recent years, a good number of historians (and activists) have insisted that this change, together with the commercial pressure of pharmaceutical companies (Horwitz and Wakefield, 2007), has brought us to the current overdiagnosis of the disease (Mojtabai, 2013; Parker, 2007). In its strongest form, it is a difficult argument to reject, especially because it does not deny the existence of depression; rather, it argues that the failure of epidemiologists, psychiatrists, and social scientists to differentiate "normal sadness" from "depressive disorder" is producing health policies that condemn many people to taking unnecessary medication and carrying the weight of stigma on their backs.
Whys, doubts, and conspiracy. Essentially, although it is not usually said this clearly, we are talking about iatrogenesis: suffering or damage to health caused by health professionals themselves. The current opioid crisis in the US shows that, far from being pure conspiracy theory, pharmaceutical companies and their balance sheets can create a health problem of colossal dimensions.

Still, we must not be unfair, nor fall into banal Manichaeism. However counterintuitive and paradoxical it may seem, many problems only appear once we have a solution for them. Without antidepressants or effective behavioral therapies, depression was deep sadness, a black sorrow that wells up, a black shadow that haunts us. Something that simply lived among us, and there was nothing we could do to avoid it. (Jacob Sedlacek/Unsplash)

Horwitz and Wakefield say that in the West "tolerance for normal but painful emotions has fallen." That may be true. But they forget two fundamental things: that, for the first time in the history of humanity, we can do without those emotions; and that this is not a personal problem, because the modern world has tended to prioritize productive optimism and has forgotten how to live with sadness. At this point we realize that, if we want to learn to better separate "illness" from "normality," it is not just a matter of challenging depressive overdiagnosis, but of reclaiming sadness. The problem is: why would we want to reclaim sadness? And the answer, honestly, may surprise us.

Sadness, said Lazarus (1991), promotes personal reflection after a loss. It turns our gaze toward ourselves, fosters resignation, invites acceptance (Izard, 1993). It lets us take the time to update "our cognitive structures" (Welling, 2003); that is, to accommodate the loss. That reflective function of sadness allows us to stop: to weigh our actions, review our goals, modify our plans (Bonanno and Keltner, 1997; Oatley and Johnson-Laird, 1996). It makes us more attentive to detail, more precise.
It makes us flee from heuristics and stereotypes (Bodenhausen, Gabriel, and Lineberger, 2000; Schwarz, 1998) and distrust first impressions (Schwarz, 2010). Physiological arousal decreases, making us more prone to slow thinking (Overskeid, 2000). Furthermore, sadness shapes us as a group: it elicits sympathy, empathy, and altruism in others (Keltner and Kring, 1998).

The complex balance between "normality" and "disease". In 1843, Charles Darwin wrote a letter of condolence to a distant cousin in which he said that "strong affections have always seemed to me the noblest part of man's character" …

I have tried Apple Creator Studio and it is clear to me that Adobe has a problem. The key: its price

Trying Apple Creator Studio is relatively simple because, in one way or another, the subscription bundles applications already well known in the creative world. Apple has been smart and put together a package that gives access to Final Cut Pro, Pixelmator Pro, and Logic Pro, for starters, plus more tools such as Motion, Compressor, MainStage, and even AI features in its office suite. In any case, the real value of the subscription comes, at least for me, from Final Cut Pro and Pixelmator Pro. Although my time as a TikToker is now behind me, I still use photo and video editors day to day and, after playing around with the apps included in Apple Creator Studio, I can only conclude that Adobe has a problem. One that costs 12.99 euros and extends across the entire Apple ecosystem.

This is not about AI. As much as AI is one of the interesting additions to Apple Creator Studio, the truth is that the features based on it, useful in some cases, take a back seat in practice. The key to the subscription is the price and the comparison with its direct rivals. By way of example, the monthly prices:

Apple Creator Studio (includes Final Cut Pro, Pixelmator Pro and Logic Pro, among other apps): 12.99 euros
Creative Cloud Pro (includes the entire Adobe suite): 118.96 euros
Adobe Photography plan (includes Photoshop and Lightroom): 24.19 euros
Adobe Photoshop: 26.43 euros
Adobe Premiere: 26.43 euros
Adobe Audition: 26.43 euros
CapCut Pro: 29.99 euros
Canva Pro: 12 euros

Buying all the apps included in Apple Creator Studio separately would come to around 800 euros, yet it is possible to access them for 12.99 euros per month. Not one of the rival subscriptions, not a single one, matches what Apple offers in price, features, and simplicity.

Pixelmator Pro | Image: Xataka

Put in context, something else comes to light: the Adobe subscription that includes all its tools costs 119 euros per month.
Almost ten times what Apple's costs. The problem is that the subscription contains apps not everyone will use. Anyone who wants access to Photoshop and Premiere has no choice but to go through Creative Cloud Pro (119 euros per month) or combine the photography and video plans, which together would cost more than 50 euros. The question is whether the 119-euro subscription actually delivers 119 euros of value to the user, because it probably does not. Anyone who wants to edit photos and videos probably has no interest in Audition, InDesign, or Fresco, so by choosing Adobe they end up paying for tools they do not use.

Apple keeps it simple. Apple knows this is not about big creators with teams behind them, but about aspiring and small influencers, creators who do everything themselves. If you already have an iPad (undisputed king of the tablet world) or a Mac (historical favorite in the creative world), the integration, familiarity, and communication between apps that Apple achieves is unrivaled, and so is the price.

Some of the Apple Creator Studio apps | Image: Xataka

Apple has not bothered with very niche products, quite the opposite. It has taken the four key tools it knows work, added some AI features for office tasks, put them all in the blender, and served the result to the user. Will there be cases where Adobe is worth more? Possibly at the studio or company level (or if you have a Windows PC, of course), but at the individual-user level within the Apple environment, CapCut and Canva in particular are caught between a rock and a hard place.

AI features. At the office-suite level, I think you would have to use Pages, Numbers, and Keynote very heavily for Apple Creator Studio to be worth it on that front alone. Beyond certain utilities such as upscaling a photo, accessing premium templates, and generating images (with OpenAI models in the background, by the way), the office suite remains more or less the same.
It is not the strong point, of course, and if you use these apps for university work you can get by without the subscription.

Here you can see search by transcript. When searching for "iPhone Air", Final Cut Pro returns only the parts where that word is mentioned | Image: Xataka

Little lifesavers. Where AI does play, or can play, an interesting role is in editing. Apple's approach is not so much to have the app edit for you as to assist in the process. A couple of features have caught my attention and strike me as particularly useful. And they are not even half of those included, which puts another reality on the table that we will come back to shortly.

Search by transcript: if you have followed a script and know the exact phrase you are looking for, you can jump to the precise moment simply by typing that phrase into the search box. For a TikTok maybe not, but for a half-hour YouTube video, an interview, or a podcast I find it extremely useful.

Beat detection: one of the first things you learn when editing video is to change shots to the rhythm of the music, for coherence and dynamism. Until now, the best guide was the peaks in the audio track: at each peak, a cut. Final Cut Pro can now flag those changes to make syncing faster and more intuitive. I like it.

Montage creator: I don't edit on iPad because I fell asleep the day they handed out patience, but being able to make quick montages by importing several video clips and an audio track seems quite useful to me, especially for the typical Reels or TikToks that are just resourceful shots cut to the rhythm of the music. For typical b-roll …

The big problem with lithium ion batteries is their degradation over time. A chemical adjustment can change it

It doesn’t matter if it’s a mobile phone, a laptop, the Nintendo Switch or a Dyson: as you use it, the battery life will reduce. Yes, lithium ion batteries they have changed the world and for years they have been the absolute standard in consumer electronics, but degradation over time is their endemic evil. While we look for alternatives To this technology, a research team has found a promising solution in a seemingly simple chemical tweak. The advance. The main idea of ​​this research is not to change the main materials of the battery, but simply to add a small amount of an additive: lithium difluorophosphate. Its existence is not new, but this research led by Professor Chunsheng Wang of the University of Maryland reveals how effective it is in stabilizing batteries. Why is it important. Because lithium ion batteries are present everywhere and this modification would extend their useful life using standard, low-cost chemistry. The result of their experiment is that with this additive, batteries can be optimized to maximize power and energy, or to achieve greater useful life and stability. For practical purposes, the study shows how with this adjustment they maintained a significantly higher capacity after hundreds of charge and discharge cycles. As Wang explains.“It is a relatively simple modification of current batteries.” Or what is the same, after having run security tests and long cycles, “it could realistically reach consumers.” Brief notes on the mechanism of a battery. Lithium ion batteries are made up of a negative anode and a positive cathode and have a porous separator between the two. The assembly is immersed in an electrolyte whose mission is to allow lithium ions to move between electrodes during charging and discharging. With the discharge, the anode releases electrons to the electrical circuit (gives electricity to the device) and ions to the electrolyte, meeting again at the cathode. 
On charging, an external source (the charger) reverses the process, "pumping" the ions back to the anode to store the energy in its chemical structure. Capacity degrades with use because of the irreversible loss of lithium in side reactions and because of mechanical fatigue of the electrodes.

Basic diagram of the operation of a lithium-ion battery. Walter Davison. Via: Wikimedia

In detail. Digging a little deeper into the previous explanation, we reach the solid electrolyte interphase (SEI), a thin layer that forms on the anode during the first charges. In standard batteries this layer is fragile and breaks down with use, consuming lithium and shortening battery life. Through a simple reaction inspired by organic chemistry, the additive makes the electrolyte more prone to accepting electrons, so degradation happens in a more controlled way. In short, it helps form a more robust, elastic, and uniform SEI that acts as a kind of shield, preventing the electrolyte from reacting parasitically with the electrodes. It is also a flexible chemistry that can be tuned to be more or less protective, and the presence of the additive minimizes cracking in the cathode.

In Xataka | They have found a way to turn tall buildings into batteries. And that makes Benidorm our best asset

In Xataka | China sold cheap batteries for years. The problem is that in the meantime no one built an alternative

Cover | John Cameron
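To picture the effect the study reports (slower capacity fade over hundreds of cycles), here is a deliberately crude model in which each cycle irreversibly consumes a fixed fraction of capacity through side reactions at the SEI. The fade rates below are invented for illustration; they are not the paper's measurements:

```python
def capacity_after(cycles: int, fade_per_cycle: float) -> float:
    # Toy exponential-fade model: every full charge/discharge cycle
    # irreversibly loses a fixed fraction of capacity to SEI side reactions.
    return 100.0 * (1.0 - fade_per_cycle) ** cycles

# Made-up rates: a fragile SEI vs. a more robust, additive-stabilized one.
fragile = capacity_after(500, 0.0008)
robust = capacity_after(500, 0.0002)
print(f"after 500 cycles: {fragile:.1f}% vs {robust:.1f}%")
```

Even this sketch shows why a small per-cycle improvement compounds: after 500 cycles the gap between the two hypothetical cells is large, which is the shape of the result the Maryland team describes.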

We have always believed that London is very rainy and that Barcelona is not. The only problem is that it’s a lie

Few peoples are as troubled by the vicissitudes of the weather as the British. During my stay in Cambridge, one of my first conversations with a local revolved around the climate. "Actually, the weather is nice in Cambridge," he told me. "The problem is London, which has a microclimate where it is always raining." According to his testimony, London, the city with the greatest international projection, gave a bad name to the rest of the country. British weather wasn't so horrible after all.

The truth is that it is: most of the United Kingdom is cold, lives under a perennial blanket of gray clouds, and gets more rainfall than the rest of the continent (especially Scotland). His story, in fact, had it backwards. Despite the legend, London is one of the driest parts of the United Kingdom, and a European capital with comparatively little rainfall. So why do we universally believe the opposite?

First, the data. According to figures from the Met Office, the British weather agency, London receives between just under 600 and almost 700 millimeters of precipitation annually, depending on the station (London is a gigantic city). The reference chosen by Wikipedia is Heathrow, west of the megalopolis, where 601.7 millimeters fell in 2014. Without further context, that is a neutral number. How does it compare with the rest of England?

On a map: London, the black spot of low rainfall in the United Kingdom. The bluest areas are the rainiest in Britain (northwest Scotland plays in another league). In general, the North Sea coast is drier than the Atlantic one, and as we approach the south, toward the English Channel, rainfall decreases. That is where we find London: a city where it rains comparatively little next to its island neighbors. My confidant was wrong: it rains more in Cambridge than in London.

"OK, OK, but the United Kingdom is a very rainy country per se.
Just because it rains less in London than in other parts of the island does not mean that it rains little in London." The reasoning is logical, but also incorrect. The truth is that few places in continental Europe have annual rainfall below 600 millimeters. Unlike supposedly rainy London, the Europe below the Channel really does live underwater.

Raining many days does not mean raining a lot. Take Barcelona, without going any further. The beautiful city of Barcelona has a reputation for being sunny; it receives millions of tourists a year thanks to its wonderful, mild and friendly climate. Well, its rainfall is very similar to London's, and in 2014 it was slightly higher: AEMET counted 640 millimeters that year, spread over 72 days. That surprising figure makes Barcelona a rainier city than London.

The same goes for other rather astonishing points of European geography. Croatia, for example. The most recent milestone of European tourism has also built a reputation for "good weather", but the climatic reality of the Adriatic is stubborn: in Dubrovnik alone, the famous citadel popularized by Game of Thrones, more than 1,000 millimeters of precipitation fall per year. That is around 65% more than in tormented London.

With some licenses, places in Europe where it rains less than in London (in yellow). The best way to grasp how wrong our intuition about London's climate is, is the map above, shared a few months ago by a Reddit user: the areas in blue (almost all of central and western Europe, Italy included) receive more rainfall per year than London. Only the areas in yellow are drier, and they are few: specific points in Poland, almost the entire Iberian Peninsula (from the Ebro down, so to speak) and Sicily.

Let's consider two antagonistic places: Helsinki and Lecce, in Puglia, southern Italy.
The first is one of the world's northernmost capitals and spends much of the year buried under snow amid terrifying temperatures. How much does it rain there? Well, not much more than in London: about 655 millimeters annually. The second is a baroque jewel with a very sunny summer, nestled in the heart of the Mediterranean. Its rainfall? Depending on the year, about 590 millimeters. Such geographical disparity does not translate into very different rainfall. That should not be strange, but it does help put London's rain in its proper context.

The London chirimiri, the source of the prejudice. Now, if London is dry, why do we all think it is always raining there? A Basque would have an immediate answer (even though the Basque Country, and Bilbao especially, really is very humid): the chirimiri, that thin veil of drizzle that permanently grips certain cities but is actually very gentle. This is where Barcelona's scant 72 days of rain come into play: a city where rain falls on only a handful of days in the calendar. If you want really humid places in Europe, head for the Alps or the Atlantic fringes. In London the opposite happens: roughly the same amount of water falls, but it is spread over many more days (110, a little less than a third of the year). Helsinki is another story: its rain/snow days range from the 180 recorded in 2010 to more than 200 last year.

Like many other northern European cities (Cambridge included: I barely saw the sun during the month of January I lived there), London often dawns cloudy, under a thin film of rain that never seems to evaporate. The sun comes and goes, the clouds appear and disappear, the rain stops and starts again at regular intervals. It does not rain much, but the sensation of rain and humidity is almost permanent, inescapable. Hence the reputation.

Another factor is the dry reality of most of Europe's capitals.
Berlin, Vienna, Stockholm, Paris, Madrid, Warsaw and even Copenhagen have less annual rainfall than London, or only slightly more (none exceeds 700 millimeters). There are few capitals in Europe where it rains a lot (Amsterdam, …)
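The article's core argument, that many rainy days do not mean much rain, can be checked with quick arithmetic. A minimal sketch using the approximate figures quoted above (illustrative only, not official climate data):

```python
# Approximate annual figures quoted in the article (illustrative only).
rainfall_mm = {"London": 601.7, "Barcelona": 640, "Helsinki": 655}
rainy_days = {"London": 110, "Barcelona": 72, "Helsinki": 180}

# Rain intensity per rainy day: Barcelona's rain is concentrated in far
# fewer days, so each rainy day there is noticeably wetter than London's.
for city in rainfall_mm:
    per_day = rainfall_mm[city] / rainy_days[city]
    print(f"{city}: {per_day:.1f} mm per rainy day")

# Dubrovnik (~1,000 mm) vs London (~601.7 mm): how much more rain falls there?
extra_pct = (1000 / 601.7 - 1) * 100
print(f"Dubrovnik receives ~{extra_pct:.0f}% more rain than London")
```

Run with these inputs, the intensity ranking comes out Barcelona > London > Helsinki, which is exactly the inversion of the cities' reputations.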

The internet has decided that 2016 was great and worth remembering. But there’s a problem: it wasn’t at all.

The aesthetics of 2016 are back in force: filters that imitate the Instagram look of the time (according to Wikipedia, more than 200 million videos use such filters), trends that resurface photos from that year, recreations of the summer of 'Pokémon GO', tributes and memorials to David Bowie. Generation Z users, many of them teenagers at the time, are rebuilding 2016 as a golden age (searches for the term "2016" on TikTok have increased by 450%). The contradiction is obvious: that same year, numerous media outlets declared it one of the worst in recent history.

What happened. On January 10, David Bowie died; Prince, Leonard Cohen, George Michael and Carrie Fisher followed. On June 23, the United Kingdom voted to leave the European Union. On November 8, Donald Trump won the US election. Media outlets like Slate or Newsweek wondered whether it was the worst year in history. Less than a decade later, that same year has become an object of nostalgia.

Starting shot. Bowie's death on January 10 marked the year from the outset. Two days earlier he had released 'Blackstar', an album that today is read as a farewell but whose testamentary dimension went unnoticed at the time. The shock was immediate: an artist who had hidden his cancer for 18 months disappeared without warning, and memes filled that void almost instantly. The artists mentioned above followed, and each death reinforced the same idea: 2016 was cursed.

In Xataka | All the reasons you should listen to David Bowie if you haven't already

Imbalance. Trump and Brexit shattered the expectations of progress and openness that dominated Western political discourse. In 'The Future of Nostalgia', back in 2001, Svetlana Boym distinguished between "restorative nostalgia" (which seeks to reconstruct a mythical home) and "reflective nostalgia" (which enjoys longing without seeking to recover anything). Nostalgia for 2016 is of the first kind: it invents a year that never existed.
Boym noted that restorative nostalgia "does not recognize itself as nostalgia, but as truth and tradition." Which is exactly what happens when TikTok recreates the summer of 'Pokémon GO' as if it had been Edenic.

This has already been said. Some theorists anticipated the phenomenon of remembering 2016 barely ten years later. David Foster Wallace documented in the 1990s what he called "nostalgia for the present": the urge to long for something that is not yet over. 2016 fulfills that paradox: it has become an object of nostalgia before being historically processed, while its political consequences remain active. The temporal distance nostalgia usually requires, two or three decades, has been compressed to the point of almost disappearing.

Retromania. It is inevitable to cite 'Retromania', a 2011 essay in which Simon Reynolds argued that since the 2000s pop culture had reversed direction: instead of generating the future, it devoted itself to reactivating the past. Reynolds documented band reunions, deluxe reissues, revival festivals, nostalgic samples. Fifteen years later his thesis has only intensified: no society has ever been so obsessed with the cultural artifacts of its most recent past. The return to 2016 confirms his diagnosis: a decade is enough to activate nostalgia.

Hauntology. Mark Fisher elaborated on this idea in 'Ghosts of My Life', where he developed the concept of "hauntology" that Derrida had coined: we are inhabited by futures that never materialized. Fisher, who died in 2017, argued that contemporary culture had lost its ability to imagine alternatives to the present. The past cannot be recovered; its ghosts haunt a present incapable of projecting forward. Nostalgia for 2016 materializes this paralysis: we long for a year defined by its catastrophes because we lack the vocabulary to articulate desirable futures.
In Xataka | A rosy past: why our brains can't fight nostalgia

Nostalgia mode. Finally, Fredric Jameson had anticipated this phenomenon in 'Postmodernism, or, the Cultural Logic of Late Capitalism' in 1991, when describing the "nostalgia mode": postmodern culture reproduces styles from the past while emptying them of historical reference and reducing them to an aesthetic surface. Instagram and TikTok accelerate the process. What was the present yesterday is vintage content available for consumption today. The Spotify playlists of 2016 and the summer of 'Pokémon GO' are remembered; the bad parts are not. The algorithm builds a sweetened version of the past that erases conflict.

It could be worse. 2026, without going any further. Nostalgia for 2016 reveals an escape from much more present horrors: those of 2026. 2016's status as a "bad year" has been dwarfed because, a decade later, Trump has returned to the presidency in a far more virulent form, with attacks on international law and invasions of other countries; the war in Ukraine shows no sign of ending; Gaza is going through a humanitarian disaster that shames the planet; political and media polarization has radicalized; housing has become unaffordable…

Carrie Fisher, who died in 2016.

If in 2016 some considered it an exaggeration to talk of authoritarian drift, 2026 materializes that exaggeration: the alarms that seemed like hyperbole turned out to be prophetic. Nostalgia for 2016 is not innocent: it is the implicit recognition that things have gotten worse, that that year, with all its disasters, was preferable to the present.

It's coming. The cycle accelerates. If 2016 is already an object of nostalgia in 2026, which year will we be nostalgic for in 2030? 2020, the year of the global pandemic? 2024? Culture is caught in a loop where the present devours itself before it has been digested, where the capacity to imagine alternatives has atrophied to the point that we can only look back.
Even when what we see behind us is disaster.

In Xataka | People are so fed up with the current internet that they are returning to MySpace. Not out of nostalgia, but out of rebellion

We believed that the US was facing a major energy shortage problem for AI. The data says the opposite

To win the AI race you need several things, but two are especially important. The first is having the best technology and the best chips. The second is having enough energy to power those chips. The US has the first, but everything pointed to it facing a major energy bottleneck. That is no longer so clear.

China has plenty of energy. China's strategic vision (the country has, once again, been investing in energy for decades) is bearing fruit, and it enjoys considerable room for maneuver in its energy supply. That is a factor that seems to tip the balance in its favor: Jensen Huang, CEO of NVIDIA, has already warned that China can win the AI race. According to him, China has more flexible regulation, and its companies receive government subsidies for the energy their data centers need.

But the US has another philosophy. A deep study from the startup Epoch AI, the outfit responsible for the FrontierMath AI benchmark, serves as a counterpoint to these pessimistic theories. In recent months we have seen how the US seems to have a real problem with the energy its AI data centers need. China has not stopped increasing its generation capacity, but the US has not grown its own, for a simple reason: until now it did not need to.

Source: Epoch AI.

However, Epoch AI explains that it is not that the US is incapable of building more capacity: it simply has not needed it until now. While China has prepared for the future (even if that future never arrives), the US has maintained a more conservative attitude: as long as there was no demand, it would make no move. The immediate question, of course, is whether it can make that move now, or whether it is too late. And no, it does not seem to be.

Forecast of the energy capacity data centers in the US will need through 2030 under different scenarios. In the worst of them (in pink), almost 80 GW will be needed. Source: Epoch AI.

The demand is going to be huge.
There is a reality: the ambitious plans to build more and more data centers across the US (with Project Stargate at the forefront) will mean the country's data centers need between 30 and 80 GW of energy capacity by 2030. For those responsible for the study, it is perfectly possible for the US to get its act together and manage to increase its capacity. How? There are several options.

The US has room for maneuver. To supply all the energy those data centers will theoretically need, the Epoch AI study sees several clear alternatives:

Natural gas: it is relatively cheap, and plants can be built quickly. Three large companies can cover this demand: GE Vernova, Mitsubishi Heavy Industries and Siemens. Their combined plans point to more than 200 GW of output by 2030. Even if those plans are not fully met, this supply (without being entirely dedicated to AI) would already be an important part of the solution.

Solar energy: the other big part of the solution, especially because its costs have fallen drastically and because it is very, very scalable. We have already seen how the US has the capacity to install 1,200 GW of solar for AI thanks to its deserts, although for now Big Tech does not dare to use them. Again, estimates point to around 200 GW of installed capacity by 2030, and even if those expectations are not met, this infrastructure will also be a clear part of the solution.

Energy flexibility. The report also talks about a philosophy of dynamic supply. Most of the time the US power grid is oversized for one simple reason: it is built to supply power at demand peaks, like when everyone turns on the air conditioning, but most of the time there is plenty of power to spare, even for large AI data centers. Future infrastructure should be built with that same idea: oversized, but flexible.

And there are other alternatives.
The country is turning to energy solutions it thought were buried in order to power data centers. Among them are the fossil-fuel plants that were theoretically going to close but are returning to operation because of the astonishing increase in demand. There is also talk of turning to military solutions and even more unusual alternatives, such as energy from volcanoes. Not to mention, of course, nuclear power plants and the small modular reactors (SMRs) that some Big Tech companies are already using for their data centers.

Be careful with your electricity bill. The reality is that data centers in the US are growing faster than the electrical infrastructure, and these facilities are draining the country's electricity. The situation has even led grid operators to ask for the ability to shut down data centers at times of high demand. And then there is the other big side effect: AI data centers are sending electricity bills soaring.

When starting up an AI data center, power costs a tenth of what the chips cost. Source: Epoch AI.

There doesn't seem to be a problem. Even with all those obstacles, Epoch AI's conclusion is clear: "we doubt these challenges are significant enough to impede the scaling of AI." In fact, they point out that what is actually expensive is the chips, not the energy, which represents about a tenth of the investment in chips. The report concludes that China's supposed advantage is not necessarily real, and that the hypothetical US energy bottleneck "is much weaker than many people have indicated."

Image | Andrey Metelev

In Xataka | Artificial intelligence has already reached nuclear power plants. And it's going to change them forever
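The figures scattered through the article can be lined up in a quick back-of-the-envelope check. A minimal sketch using the article's own estimates (the dollar amount is a hypothetical round number for illustration, not a figure from the report):

```python
# Ranges quoted in the article (estimates, not precise forecasts).
demand_worst_case_gw = 80   # worst-case US data-center need by 2030
gas_pipeline_gw = 200       # planned gas-turbine output by 2030
solar_pipeline_gw = 200     # estimated solar installs by 2030

# Even the worst-case demand is a fraction of the planned gas + solar supply.
supply_gw = gas_pipeline_gw + solar_pipeline_gw
coverage = supply_gw / demand_worst_case_gw
print(f"Planned supply covers worst-case demand {coverage:.0f}x over")

# Epoch AI's cost point: energy is roughly a tenth of the chip investment.
chip_investment = 100e9     # hypothetical $100B spent on chips
energy_cost = chip_investment / 10
print(f"Energy outlay: ${energy_cost / 1e9:.0f}B vs ${chip_investment / 1e9:.0f}B on chips")
```

The point of the exercise is the order of magnitude: even if both build-out pipelines badly undershoot, they still bracket the projected demand, which is the core of Epoch AI's counterargument.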

In 1925, procrastination was already a problem and someone found the definitive solution: the isolation helmet.

Hundreds of thousands of years of evolution have turned modern humans into perfect machines at one thing: getting distracted. No matter where, when or how you are, whether you are in company or alone, waiting in line at the butcher's or sitting with a book in front of you, chances are your attention will end up wandering off after some nonsense. Maybe the flight of a fly. Maybe the sound you just heard in the next room, or a stain on the wall. It happens today, and it happened a century ago, when a science-fiction-loving inventor designed the ultimate machine to end distractions. His patent dates back to 1925, but it addresses a perennially hot topic: procrastination.

The war of wars. For as long as man has been man, he has done two things, both wonderfully: get distracted and procrastinate. Almost 2,000 years ago Seneca warned us about the risks of wasting our time, and we know, for example, that distractions were one of the big concerns of medieval monks. Some even believed that if our minds wander it is the work of devils. In 2026 things are not very different. A quick Google search turns up a wide (very wide) list of guides and videos with tips on how to focus and stop putting off tasks. And it is understandable: cell phones, social networks and other inventions of modern technology make our lives easier, but they have been wearing down our ability to concentrate. Even science has confirmed that we are losing the capacity to focus amid so many stimuli.

And how do we solve it? Humans have not only been getting distracted for centuries and centuries; we have also spent a long time looking for ways to stop that annoying wandering of our thoughts. Of all the solutions proposed for the problem, perhaps the most astonishing (and bizarre) is the one put forward just a century ago by Hugo Gernsback, an imaginative Luxembourgish-American inventor.
His name may sound familiar because, in addition to registering patents and working in the electronics industry, Gernsback excelled in another field: publishing. Throughout his life he promoted several technology magazines (Radio News among them), but he also shone in science fiction. We owe him Amazing Stories, a milestone of the genre. His contribution to the field was so important that he is considered one of the fathers of science fiction (with all due respect to Verne and H. G. Wells), and every year he is honored through the Hugo Awards.

Adding facets. A century ago Gernsback combined this double facet, his technical ingenuity and his overflowing imagination, to launch a proposal through the pages of Science and Invention, a magazine specializing in technology. In its July 1925 issue, the inventor, editor and novelist presented a creation he named 'The Isolator'. The name is striking in itself, but it pales beside the photographs illustrating the report. They show Gernsback working in his office with his head inside what looks like a gigantic diving helmet: an elongated casque with two small openings for the eyes and a tube connecting it to an oxygen cylinder. Its purpose: to immerse the wearer in absolute isolation, an ideal state for concentration.

When silence is not enough. Gernsback had come to a very simple conclusion: sometimes locking yourself in a noiseless room is not enough to concentrate. Even then, we risk our mind getting carried away by the flight of a fly or starting to wander after spotting a stain on the table. The way to avoid it, he concluded, was to eliminate all those influences "in one fell swoop." How? With a helmet designed to suppress unnecessary noises and visual stimuli. For the first, the noise, Gernsback opted for a robust multi-layer helmet: his first prototype was made of solid wood with inner and outer layers of cork and a felt lining. For the second, vision, he added three small panes of glass.
The design was completed with a device at mouth height that allowed the user to breathe without letting noise creep in. The result, says the inventor, was a helmet with an efficiency of "about 75%". It isolated the wearer from external noise, but not completely. There was room for improvement. And how did he improve it?

Perfecting the design. Gernsback rethought the material and added an air chamber, raising the efficiency of 'The Isolator' to 90 or 95% and "eliminating practically all noise." So that vision would not be a problem either, the glass peepholes in front of the eyes were painted black, leaving only a narrow transparent strip. "When the two white lines on the glass open, the field through which the view can move is relatively small," the inventor points out. "It is almost impossible to see anything but a sheet of paper in front of the user. There is no distraction."

Concentrating… and breathing. It is one thing for 'The Isolator' to live up to its name by sealing the user in a bubble of concentration, and quite another for it to be comfortable, or even bearable. The author explains that after 15 minutes wearing it the user "experienced some drowsiness", so he decided to improve the breathing system by connecting it to a small oxygen tank. This improved breathing and "revitalized the subject." In his article Gernsback included detailed plans of 'The Isolator' and even a sketch of an office with a complete distraction-proof installation, including a noise-proof door and an adequate ventilation system. "With this provision you can tackle an important task in a short time," he boasted. "Building 'The Isolator' will be a huge investment."

The power of paper. If humanity (Gernsback included) has learned anything, it is that paper will hold ideas that reality will not. His helmet may have been eye-catching, it may even have worked, but it never caught on. We don't know to what extent its inventor really expected it …
