Google replaced news results with AI-made summaries. Now the European Commission has something to say

In March of this year, an earthquake shook European publishing. The reason: Google rolled out AI Overviews in its search engine. Where links to media outlets' news once appeared, an AI-generated summary now appears, to the detriment of the media, which in some cases have lost up to 50% of their traffic. Now the European Commission has taken action.

What has happened. The European Commission has formally opened a new antitrust investigation against Google. The reason this time is the use of content from media outlets and YouTube creators to feed its AI summaries, all without compensating the creators. The investigation will seek to determine whether Google is distorting competition by imposing unfair terms on the media, while its access to content (especially in the case of YouTube) crowds out rival AI companies. In the words of Teresa Ribera, Executive Vice President for a Clean, Just and Competitive Transition at the European Commission: “AI is bringing remarkable innovation and many benefits to people and businesses across Europe, but this progress cannot come at the expense of the fundamental principles of our societies. That is why we are investigating whether Google has imposed unfair conditions on publishers and content creators, while putting developers of rival AI models at a disadvantage, in breach of EU competition rules.”

Why it matters. The investigation calls into question the model Google has built around its generative AI, and with it the broader problem of these tools feeding on third-party content. It opens the door to reconfiguring the AI market, imposing limits and compensation for original content creators.

The impact. As we said, the arrival of AI summaries has had a huge impact on media traffic. If readers get the answer without making a single click, that traffic is lost, and not only that: it is unrecoverable.
Worse still, to produce that answer Google draws on the information published by those same media. In the case of YouTube, creators are required to accept a clause allowing their content to be used for different purposes, including training its AI.

Consequences. The investigation has just begun and there is no set date for its conclusion, which could take years. It will examine whether Google has violated Article 102 of the Treaty on the Functioning of the EU and Article 54 of the Agreement on the European Economic Area, which prohibit the abuse of a dominant position. If Google is eventually found to have breached these rules, the Commission could force it to take measures to comply with the law, such as compensating creators, allowing them to opt out of having their content appear in summaries, or even removing summaries across the EU, in addition to a possible fine.

Not the first time. Google has faced monopoly accusations in the EU before. In fact, it is the technology company with the highest accumulated fines: the largest was €4.3 billion for abuse of a dominant position with Android, followed by €2.95 billion for abuse in the advertising market. It also had to pay €2.42 billion over Google Shopping and €1.49 billion over AdSense.

Images | Unsplash, European Commission

In Xataka | The EU has spent years fiercely fighting monopolies. Teresa Ribera has other plans for telecos

It also changed engineering forever

Shortly before 11:00 a.m. on November 7, 1940, an imposing American suspension bridge was minutes away from entering engineering history. Trapped on its deck was a single dog that no one could save. A few minutes after 11, the cameras filmed one of the most shocking scenes ever recorded. This is the story of an enormous failure.

A masterpiece too light. When the Tacoma Narrows Bridge opened in July 1940, its slender, elegant silhouette was meant to symbolize a new era of economic and structural efficiency in engineering. Leon Moisseiff, one of the most prestigious engineers in the country and one of the designers of the Golden Gate, had conceived a stylized colossus that, however, from day one began to show disturbing behavior: the deck vibrated and undulated even in moderate breezes. The workers nicknamed the structure “Galloping Gertie”, a moniker as colloquial as it was revealing, because it indicated that something deep and still poorly understood was disturbing its stability.

First investigations. Teams at the University of Washington began intensive studies: scale models, wind tunnel tests and emergency fixes such as hydraulic jacks and temporary cables. Nothing managed to stop the oscillations. The bridge, too thin, too light, too faithful to a refined aesthetic, had been pushed to the limit by the design philosophy of the Great Depression, one in which materials were pared down to the essentials and aerodynamics was not yet a mature science.

The disaster. On November 7, 1940, with winds of around 65 km/h, Gertie experienced what researchers later described as “an abrupt transition between the usual vertical oscillations and a violent torsional movement that soon became unmanageable.” Motorists and reporters witnessed scenes that seemed taken from a fantasy story: sections of roadway that vanished underfoot, leaps across the gaps between undulations, and a twisting rhythm that intensified until the structure folded in on itself.
At 11:02 in the morning, the center span of the bridge fell into the strait. The only victim was Tubby, a dog trapped in an abandoned car. The spectacle, filmed with chilling sharpness, became one of the most influential visual documents of modern engineering.

What on earth happened. After the fall, investigations determined that the collapse was due to a phenomenon whose complexity was then unknown: so-called torsional flutter. When part of the suspension system gave way, the deck adopted an asymmetrical geometry that allowed the wind to feed the bridge's torsion. The structure was no longer merely shaken by the atmosphere: its own movement, not the wind alone, generated the destructive force. The “self-excited” oscillation grew without limit until it caused a total fracture. That tragedy buried the classic “deflection theory,” which held that only vertical movements mattered in a suspension bridge, and forced the development of new aerodynamic principles and a rigorous standard of wind tunnel testing that has been applied around the world ever since.

Bridge opening day in 1940

Reconstruction and correction. In the following years, the United States rewrote the manuals of bridge engineering. A more robust replacement was designed, with a wider deck, heavier cables and open trusses to reduce the wind's grip. “Sturdy Gertie,” opened in 1950, corrected the conceptual errors of its predecessor and became the symbol of a lesson learned through catastrophe. Decades later, in 2007, a parallel span was added to absorb growing traffic. And while engineers built a safer bridge on the surface, the underwater world began to claim the remains of the original, which lay scattered more than 60 meters below the waters of Puget Sound.

Collapse day

An unexpected metamorphosis. Extraordinarily, what began as an accidental shipwreck ended up becoming one of the most extensive and unusual artificial reefs in the Pacific Northwest.
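A brief aside on the physics: the “self-excited” oscillation at the heart of torsional flutter behaves like an oscillator with negative damping, where motion pumps energy in instead of dissipating it. Here is a minimal toy model of that idea (the damping values and parameters are purely illustrative, not a structural model of the bridge):

```python
def peak_amplitude(zeta, omega=1.0, x0=0.01, steps=20000, dt=0.001):
    """Integrate x'' + 2*zeta*omega*x' + omega^2*x = 0 and return the largest |x|."""
    x, v = x0, 0.0
    peak = abs(x)
    # Semi-implicit Euler: update velocity first, then position.
    for _ in range(steps):
        a = -2.0 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# With positive damping the oscillation decays; with negative ("aerodynamic")
# damping the motion feeds itself and the amplitude grows exponentially.
print(peak_amplitude(zeta=0.05))   # never exceeds the initial amplitude
print(peak_amplitude(zeta=-0.05))  # ends up far larger than it started
```

The sign flip on `zeta` is the whole story: once the deck's twisting extracted energy from the wind faster than the structure could dissipate it, each cycle was larger than the last, which is why no amount of cables or jacks could stop it.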
In the depths of the strait, twisted beams and ruined metal plates became covered with anemones, sponges, algae and layers of organisms that transformed the tragedy into a hive of underwater life. Wolf eels snaked through the knots in the steel, giant Pacific octopuses found refuge in the folds of the collapsed deck, and schools of fish prowled the rubble. For divers, it was an almost mythical landscape: a metal forest colonized by marine life, so exuberant that it gave rise to the legend of a gigantic “Octopus King” that, according to the inhabitants of Tacoma, reigned in the shadows under the bridge. The magic of that accidental ecosystem was that nature took a vestige of human engineering and turned it into a sanctuary.

Depiction of the Tacoma Narrows Bridge collapse

A threatened legacy. However, as the decades passed, the environment changed disturbingly. Witnesses who dived the site in the nineties describe an underwater garden brimming with fauna, but today most of that splendor has disappeared. Overfishing, combined with ecological changes in Puget Sound, has dramatically reduced the presence of iconic species. Sea creatures and giant octopuses have migrated to less exploited areas, the fish are smaller, and in many sections only remnants of hooks and fishing gear remain. The least affected spots are, paradoxically, those under the current bridge, where fishing is difficult and marine life holds on. Still, for many experts the deterioration of the artificial reef is a reminder of the vulnerability of unintentionally created ecosystems, and of how human intervention (on land or at sea) decides which life thrives or disappears.

History, memory and protection. Galloping Gertie's remains were added to the National Register of Historic Places in the 1990s, not only as evidence of a failed engineering landmark but also as a testament to nature's ability to transform ruins into habitats.
Today some advocates aspire to an even higher status: turning the site into a marine reserve, protected against extractive activities and recognized both as ecological heritage and as a fundamental chapter in the history of engineering.

An extraordinary failure. In a sense, the story of the Tacoma Narrows is not only that of a bridge collapse but that of a double transformation: that of engineering knowledge, which evolved as a result of the disaster, and that of the underwater ecosystem that emerged from its ruins.

‘Stranger Things’ changed everything for Netflix. Its problem now is finding another brand just as powerful

Expectation is through the roof: Netflix has just released the first installment of the final season of ‘Stranger Things’, which will roll out through December in several batches of episodes, many of them feature-length. In fact, fan anticipation is such that Netflix even saw its servers falter. A (very possibly) triumphant culmination that nonetheless leaves a few unknowns in the air.

Netflix flickers. Netflix suffered a service outage that in some cases lasted about twenty minutes (though no more than about five, according to the platform's official statement) at the premiere of the fifth season of ‘Stranger Things’. The incident occurred even though the series' co-creator, Ross Duffer, had shared that Netflix would increase its bandwidth by 30% precisely to avoid this kind of problem. In all, thousands of users reported NSEZ-403 errors that blocked them from the content, or let them in only with problems, which served as a perfect thermometer of the anticipation the series generates. ‘Stranger Things’ remains a phenomenon capable of collapsing digital infrastructure three years after its previous season.

Devastating figures. The fourth season accumulated 140.7 million views, establishing itself as the third most-watched English-language series on the platform, behind only ‘Wednesday’ and ‘Adolescence’. And it is the only series with all of its seasons in the Top 10, an unprecedented milestone for the platform. The impact on subscribers is harder to quantify: the third season, for example, helped add 520,000 subscribers in the United States.

The cultural impact. The mark ‘Stranger Things’ has left on modern pop culture is enough to fill a book, but let's stick to a few figures that give a rough idea. First, the economy: Netflix closed deals with approximately 75 brands to promote the third season.
Coca-Cola relaunched New Coke, generating $1.2 billion in media value; similarly, Nike obtained $178 million in media coverage with its Hawkins High collection. But this goes far beyond benefits for a few brands: Butts County in Georgia, where the series is filmed, reported a 12% increase in tourism during the years the series has been on the air. And the small city of Jackson, with barely five thousand inhabitants and a per capita income below $30,000, revitalized its economy thanks to themed tours. And of course, there is the series' strong role in the revival of eighties aesthetics and fashion. It is not just that Stephen King's books and John Carpenter's films have been rediscovered: platforms like LTK registered 3,000% increases in searches for clothing similar to what the characters wear.

What can we expect? For now, Netflix has very deliberately split this final season in three: 4 episodes on November 27, 3 on December 26 and a finale on January 1. That is, coinciding with Thanksgiving, Christmas and New Year's Eve, and thus, contrary to the platform's usual practice, stretching the cultural conversation over two months. Audience expectations are, predictably, very high: analysts predict new viewing records after the three-year wait for this finale. Critics, for their part, have spoken, and they point to the signs of exhaustion already visible in previous seasons: 87% on Rotten Tomatoes, the lowest rating of the series so far, although the audience score rises to 92%. It is not easy to maintain narrative quality after so many years, with obstacles such as the age of the protagonists.

The future. The really interesting question raised by the phenomenon is what Netflix has ahead of it. Or to put it more bluntly: can the platform replicate the phenomenon?
It has certainly had successful series in its catalogue, such as ‘Wednesday’, ‘Squid Game’ or ‘Bridgerton’, but except for the first, all of them have ended or are about to. It is true that Netflix retains the ability to generate new hits like ‘Wednesday’, which, although it caught everyone a bit by surprise, the platform has been able to exploit well. Netflix is now in a phase of prioritizing quantity over quality, mercilessly canceling whatever does not serve its interests, and attesting that this is a different moment from the one in which ‘Stranger Things’ first succeeded: the competition has multiplied and it is harder to stand out among so many offerings. Netflix has all the room in the world to compete, but perhaps its main rival is its own legacy: how do you follow ‘Stranger Things’? The series was perhaps, before the almost infinite atomization of the streaming landscape, the medium's last great global hit. And that is very difficult to top.

In Xataka | Netflix loved movie theaters. Then it hated them. Now it has reached a mutually beneficial middle ground

Boeing wanted to get back on track with Starliner after its most difficult year. The contract with NASA just changed on a key point

For years, Starliner was presented as Boeing's chance to claim a leading role in American manned spaceflight, in a scenario in which SpaceX was moving faster with Dragon. The contract signed with NASA in 2014 represented that opportunity: six manned flights and an open door to a new cycle of missions. Eleven years later, the situation is different. That agreement has been adjusted, and the next mission has become an exam with no one on board.

The agreement placed Starliner within the program through which the US space agency sought to guarantee two different American vehicles to reach the International Space Station. The idea was clear: more than one capsule capable of carrying astronauts, long-term planning, and autonomy in low Earth orbit. The contract established that, once the ship was certified, Boeing would operate six manned flights for regular rotations. All this with an eye on the station's retirement, scheduled for 2030.

A shortened contract, by mutual agreement. NASA and Boeing have decided to modify the conditions of the original agreement and reduce the number of guaranteed flights. Instead of the six manned missions planned after certification, the new scenario includes one mission without astronauts, intended to validate the system, and up to three crew rotations. In addition, there are two optional flights that NASA can activate depending on its mission needs. The revision also reduces the value of the contract, which drops from $4.5 billion to $3.732 billion after deducting $768 million.

Starliner-1 changes roles. The mission without astronauts has a name: Starliner-1, and it has become a key piece of the system validation plan. NASA will use it to send cargo to the International Space Station and verify, in real conditions, that the changes introduced after the 2024 manned flight offer sufficient guarantees.
The target date remains no earlier than April 2026, provided the spacecraft successfully completes testing, certification and pre-launch preparation.

A history of setbacks. The first warning came with the OFT-1 flight in December 2019, when problems prevented Starliner from completing the planned profile and approaching the International Space Station; the mission had to be cut short. In 2022, the OFT-2 flight managed to reach the station, but problems appeared in several thrusters. Two years later, during the first manned flight, several thrusters failed again on approach, which led NASA to order the ship's return without the astronauts.

NASA and Boeing engineers inspect the Starliner spacecraft after landing in White Sands, New Mexico, during the OFT-2 orbital test in May 2022

When NASA decided that Starliner would not bring Butch Wilmore and Suni Williams back, both extended their stay on the International Space Station far beyond what was planned: nine months in total, until the agency scheduled a Dragon flight with two fewer astronauts than usual to leave them room. That landing, in March 2025, completed the return and confirmed that the evaluation process for Starliner remained open after the 2024 manned flight.

Meanwhile, Dragon. In parallel, Dragon began operating with astronauts in 2020 and was progressively incorporated into NASA's regular planning. Since then, the SpaceX capsule has covered the planned rotations within the Commercial Crew Program, becoming the vehicle routinely used to reach the International Space Station. The Crew-11 mission launched in August 2025, and Crew-12 is scheduled for February 2026. NASA has booked additional Dragon flights through the station's operational end, scheduled for 2030.

Fewer flights, less revenue, more pressure. The contract modification also changes Boeing's position within the program.
The reduction of the total value to $3.732 billion means $768 million less than the original figure, with fewer guaranteed flights and greater weight given to optional missions. According to Reuters, the company has invested more than $2 billion since 2016 in this development, which adds weight to Starliner's performance on upcoming flights. Despite this, Boeing says it remains committed to the program.

Redundancy against the clock. For NASA, Starliner remains relevant because the agency wants two independent systems capable of transporting astronauts to the International Space Station. Steve Stich, head of the Commercial Crew Program, summed it up by noting that the plan involves certifying the ship in 2026, scheduling its first manned rotation when it is ready, and coordinating future flights according to the operational needs of the station, which will remain active until 2030. Maintaining this dual capability is key so that the agency does not depend exclusively on a single vehicle. What happens from now on will depend on the outcome of the next flights. If the system is certified in 2026, Starliner can still fly up to three manned rotations, with two more optional flights subject to NASA's decision. Boeing maintains its commitment and suggests that the ship could have a place in commercial projects after the end of the International Space Station, although those scenarios are yet to be defined. The opportunity has not disappeared, but it no longer looks much like the one signed in 2014.

Images | NASA (1, 2) | Boeing

In Xataka | Starship's great hope has gotten off to a bad start: a new and painful explosion

Putting four chickens in the yard seemed like a good way to get cheap eggs. Bird flu just changed the rules of the game

Since November 13, 2025, no poultry farm in the country may keep its birds outdoors. With this mass confinement, the Government aims to contain the spread of H5N1 bird flu. And it makes sense: so far this season, 14 outbreaks have already been recorded on poultry farms, several in captive birds and dozens in wild birds. The problem is everything that flies under the radar.

“What do I do with my chickens?” In Spain, since at least 2024, all chickens must be registered. And yes, that includes ‘self-consumption’ chickens: birds that, according to the data, represent only 0.77% of the census (though every expert knows there are many more). A report in El País from this spring confirmed that “the figures do not reflect reality and a large part of self-consumers keep birds (especially the ISA Brown breed) off the census.” The result is that, in a context where backyard flocks face no inspections (and live oblivious to animal health regulation), the doubts and risks have grown exponentially. As Cristina García Casado explained in InfoLibre, the question veterinarians across the country hear most often is “what do I do with my chickens?” And the answer is very simple: confine them. Because the regulations do not care about size: a backyard chicken infected through contact with a wild bird can be just as big a problem as any other chicken. Or perhaps bigger. After all, the European authorities continue to rate the risk to the general population as low, but raise it to low-to-moderate for people in direct contact with infected birds or contaminated environments. Unmonitored poultry increases the risk to the “civilian” population, and if we are realistic we will admit that these flocks cannot be monitored. The problem is easy to pinpoint: when it comes to flu, all those domestic pens face the same sanitary requirements, but with far less infrastructure.

The ‘boom’ of homemade eggs.
We must remember that this does not happen in a vacuum: in recent years we have seen a genuine ‘boom’ in backyard hens. It is the confluence of the “happy chickens” movement with many citizens' response to a price that does nothing but rise. According to the National Institute of Statistics, egg prices have risen 15.9% so far this year, and according to the OCU the increase is 105% compared with 2021. And, mind you, we are not talking about a luxury product; we are talking about what may be one of the cheapest and most accessible proteins in the world. Faced with this ‘ovoflation’, the math seems clear: “a hen costs about nine euros, is easy to raise and maintain on fruit, vegetables and feed, and lays an egg every 25 hours.” How could there not be a problem?

What to do if I keep chickens for self-consumption? If we are in that situation (or thinking about setting up a domestic coop), there are some things to keep in mind:

- However large or small, the coop must be registered in the REGA (General Registry of Livestock Operations).
- Implement confinement and biosecurity measures: keep the chickens away from any contact with wild birds; control entries and exits; record every change in a log book.
- Improve hygiene: more frequent bedding renewal and stricter daily management protocols.
- Introduce welfare measures to contain the problems associated with a more sedentary life.

But, above all, be extremely vigilant. There are many warning signs (apathy, a drop in laying, high mortality or flu-like symptoms), so it is best to stay alert. Anything can happen.

Image | Finn Mund

In Xataka | H5N1 bird flu unleashes a massacre in Antarctica: half of the female seals have already disappeared
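As an aside, the quoted hen math is easy to sanity-check. A back-of-the-envelope sketch: the nine-euro hen price and the egg every 25 hours come from the article, while the yearly feed cost is a hypothetical input you would adjust to your own situation:

```python
HOURS_PER_YEAR = 365 * 24

def eggs_per_year(hours_per_egg=25):
    # One egg every 25 hours works out to roughly 350 eggs a year.
    return HOURS_PER_YEAR // hours_per_egg

def cost_per_egg(hen_price=9.0, yearly_feed=30.0, hours_per_egg=25):
    # Amortize the hen over one year plus an assumed feed bill.
    return (hen_price + yearly_feed) / eggs_per_year(hours_per_egg)

print(eggs_per_year())           # 350
print(round(cost_per_egg(), 3))  # 0.111 euros per egg under these assumptions
```

Even with a generous feed budget, the per-egg cost lands far below supermarket prices, which is exactly why backyard flocks have multiplied faster than the census can track them.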

We believed that Tim Cook’s days at Apple were numbered. Bloomberg’s Mark Gurman just completely changed that scenario

It doesn't matter where or when you read this: you have very likely seen more than one Apple product around you today. Someone answering messages on an iPhone 17 Pro on the metro, a student taking notes on a MacBook Air in a Starbucks, someone tracking their workout with an Apple Watch on a getaway to the countryside, to name a few everyday scenes. Behind this ubiquity there is a name: Tim Cook. And it is unclear how much longer he will remain at the helm of Apple. A few days ago, the Financial Times reported that the company was preparing for Cook's departure next year, setting in motion the succession that technology circles have discussed for years. Now Bloomberg maintains that the scenario is not so imminent. How can two such reputable outlets point in different directions? Let's look at the context.

Hermeticism and calculated silences. Apple is known for its corporate discretion. Not only does it jealously protect the details of its products; it also leaves little room for insight into its internal movements. There has been no formal announcement regarding Cook's possible departure. Everything we know comes from occasional statements by the executive himself, anonymous sources and analysts' readings. In an interview with Wired, published December 4, 2024, Cook spoke about his future at Apple. Asked how much longer he saw himself at the company, he responded: “Now I get asked that question more often than before. As I get older, as my hair turns gray. I love this place (…) It's a privilege of my life to be here. And I will do it until the voice in my head says, ‘It's time,’ and then I'll focus on what the next chapter will be like. But it's hard to imagine life without Apple, because my life has been wrapped up in this company since 1998. It's most of my adult life. And that's why I love it.” At the beginning of this year, he also appeared on the Table Manners podcast.
Asked if he would ever retire, he commented: “Sure, but not in the traditional definition. I don't see myself at home doing nothing, without intellectual stimulation, thinking about how tomorrow can be better than today. I think I will always have that predisposition and want to work. I mean, I was working when I was 11 or 12… You want to be pushed a little. You want to feel a little uncomfortable… I think I will always want to be pushed.”

Sources: essential, but not infallible. Beyond those public statements, everything else depends on leaks: people with some proximity to the company, direct or indirect, who share information with journalists on condition of anonymity. In those cases, the reliability of the content depends on the quality, consistency and independence of those sources. Any outlet that aspires to keep its credibility must meet those standards.

What the Financial Times says. As noted, on November 15 the Financial Times reported that Apple was intensifying its planning for Tim Cook's succession, and that it was preparing for him to step down in 2026. That is the only concrete (if unofficial) date mentioned so far. The article is signed by four journalists, including Tim Bradshaw, global technology correspondent based in San Francisco, and attributes the information to “several people familiar with the discussions” within Apple. It is neither light conjecture nor an isolated interpretation.

What Bloomberg says. Bloomberg reacted days later, on November 23, through the newsletter of Mark Gurman, one of the journalists with the best access to early information about Apple. He does not rule out that Cook will retire one day, nor that his successor could be someone like John Ternus. But he does state something key: “I think the news was simply false.” According to Gurman, based on what he has been able to verify in recent weeks, it does not seem likely that Cook will leave office in the middle of next year.
He even says he would be surprised if Apple carried out this replacement within the timeframe indicated by the Financial Times. He sums it up clearly: “Yes, Apple will eventually have a new leader. And yes, it will probably be Ternus. But unless some unforeseen event occurs that forces Cook to resign sooner than expected, that time is not close.”

So who is right? At this point, one thing is clear: we cannot say the Financial Times is right, but neither can we guarantee that Bloomberg is. It is possible that each outlet has access to different parts of the same conversation, or that their sources are showing different angles of the same scenario, perhaps with interests of their own. Our role, as a medium ourselves, is to offer the most complete “photograph” possible so that you can form your own judgment. And, with the caveat that we are entering speculative territory, it is reasonable to think that there are internal conversations about the succession, although not all sources seem to agree on what they know, what they think they know, or what they are willing to share. For now, the only certainty is that Tim Cook is still at the helm of Apple. An Apple that, since he took office in 2011, has gone from a market capitalization of $350 billion to more than $4 trillion, more than Alphabet or Microsoft. And in the process, it stopped being a brand perceived as aspirational or exclusive and became an everyday, global, omnipresent one. Just like what anyone can observe today, from a subway car to a university classroom.

Images | Apple (1, 2)

In Xataka | Tim Cook has admitted that Apple is “very open” to acquisitions in AI. These are our candidates

Roman roads changed the world. And this Google Maps from 2,000 years ago allows you to explore them

What have the Romans ever done for us? It's not a question that keeps me up at night, but it is the brilliant satire Monty Python captured in ‘Life of Brian’. The aqueduct, sanitation, education, irrigation, medicine, wine, public baths… and roads. At its peak, the empire's network is estimated to have stretched over 120,000 kilometers, but as excavations have continued, more and more remains of Roman roads have been found. We have covered “Google Maps” of the Roman Empire before, but what we have in our hands today is the culmination of a monumental work that compiles some of the most important sources on the arteries of the empire and captures those roads in an impressive interactive map with almost 300,000 kilometers of roads. The tool is called itiner-e, and it is something that can absorb us for hours on end.

The Google Maps of the Roman Empire. If you have already taken a tour of the map, you should know that it is a living thing: as discoveries are made and the routes of roads are pinned down, the team will update the map. What we currently have is the result of more than five years of work by a team with members from the Autonomous University of Barcelona and Aarhus University in Denmark. In the study, published in Nature, they describe it as “the most detailed and complete digital dataset of roads in the entire Roman Empire” published so far. In fact, it exceeds the previously known length of Roman roads by more than 100,000 km, thanks both to broader coverage and to better spatial precision. Previously, the Digital Atlas of Roman and Medieval Civilizations (DARMC) had mapped 188,554.7 kilometers. To build it, the researchers identified both the most important routes and the paths recorded in archaeological and historical sources, locating them using historical as well as current topographic maps.
The main sources have been the Antonine Itinerary and the Tabula Peutingeriana, but milestones and settlements located close to one another (for example, along the limits of the empire, such as those near Hadrian’s Wall) are what have allowed the researchers to infer the existence of roads connecting them. Other sources include summaries of the Roman road network in specific regions, maps from Mapping Past Societies, the Barrington Atlas and the Tabula Imperii Romani, among many others.

As a result of this work, the new map includes 299,171 kilometers of roads (connecting a territory of more than four million square kilometers), divided as follows:

103,478 kilometers of main roads, 34.6% of the total.
195,693 kilometers of secondary roads, 65.4% of the total.

And it is not that more than 100,000 kilometers have been pulled out of thin air: roads that previously crossed rivers or were drawn as simple straight lines have now been traced with greater precision, adapted to the topographic peculiarities of the terrain.

Now, although the work is remarkable and, playing with the different layers of information, we can see that many of the main roads coincide with current ones, the researchers admit that the location of “only” 2.737% of the Roman roads is known with certainty. That is why the vast majority of itiner-e’s roads carry the label “hypothetical” or “conjecture”, just before detailing the record from which the data was taken. The certainty levels are:

Certain: segments well documented in the sources, digitized with high spatial precision.
Conjecture: segments with lower spatial precision due to a lower level of documentation.
Hypothetical: roads that are presumed to have existed, but for which there is insufficient evidence to place them in one of the above groups. For example, roads in desert areas, where infrastructure was less fixed and several parallel tracks have been found.
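The breakdown above is easy to sanity-check. This is just a quick arithmetic verification of the figures reported in the study, not anything taken from the itiner-e dataset itself:

```python
# Sanity-check the road-length breakdown reported for the new map.
main_km = 103_478       # main roads
secondary_km = 195_693  # secondary roads

total_km = main_km + secondary_km
print(f"Total: {total_km:,} km")                        # 299,171 km
print(f"Main: {100 * main_km / total_km:.1f}%")         # 34.6%
print(f"Secondary: {100 * secondary_km / total_km:.1f}%")  # 65.4%
```

The two categories do add up to the 299,171 km quoted, and the percentages match to one decimal place.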
But beyond satisfying our curiosity, something we can do with this map is… play. The team has included a function, still in beta, that lets you explore how long these routes took. To do so, we select points on the map and choose between four modes of land transport:

On foot, at a speed of 4 km/h.
By ox cart, at 2 km/h.
On a pack animal such as a donkey, at 4.5 km/h.
And on horseback, at 6 km/h.

We can also select routes by water, with speeds of 2.5 km/h downstream and 0.6 km/h upstream.

In the end, that rebel group from ‘Life of Brian’ was quite right when it conceded that one of the most important things the Romans had done for them was the deployment of roads. They were fundamental to speeding up transport within the empire’s domains, and that work is noticeable even today: they were the foundations on which we built our roads and urban centers. It becomes clear when we observe that the regions of the empire where the deployment was not as extensive or meticulous, such as North Africa and the Middle East, where wheeled trade was abandoned in favor of camel caravans around the 4th-6th centuries, still show the consequences today.

Images | itiner-e

In Xataka | Forma Urbis Romae: the gigantic map of Ancient Rome conceived in 1901 and still unsurpassed today

SpaceX changed the space economy. Now it wants to do the same with the cost of satellites

The cost of launching cargo into space was, for years, one of the great limits of the aerospace industry. NASA has documented in several studies, including the analyses of Harry W. Jones, that during the last decades of the 20th century many launchers sat in a typical range of between 10,000 and more than 20,000 dollars per kilo, with an average cost of around $18,500/kg to low Earth orbit, and with the Space Shuttle far above that due to its complexity and operating expense. It was not just the price of the launch systems, but of an entire model based on disposable components, manual processes and highly specialized operations.

The situation remained stable for decades, until SpaceX decided to rethink how the economics of orbital launch should work. Instead of accepting these costs as inevitable, the company opted to reuse stages, optimize processes and manufacture its own engines and systems from scratch. This combination allowed the price per kilo to be reduced to unprecedented levels, although the change did not happen overnight. What is relevant is that, for the first time, a private actor demonstrated that launches could be much cheaper and that price did not have to be a structural barrier for the industry.

When launch is no longer the limit, attention shifts to satellites. The resulting prices began to change behavior in the sector. With Falcon 9 and Falcon Heavy, the cost per kilo fell to a range of roughly 1,500 to 3,000 dollars, according to NASA calculations based on list prices. These figures mark not only a reduction but a turning point: for the first time, companies, institutions and even governments could rethink mission design knowing that launch was no longer the main economic barrier. From there arose a question that until then had no answer: if the trip had become cheaper, what would happen to what was sent into space?

The traditional satellite model was built on the idea of optimizing each unit.
The point was not to produce many, but to produce one that could operate for years, with high capacity and a low probability of failure. Manufacturers and operators invested in complex systems, with long development cycles, exhaustive testing and specialized structures to fulfill specific, prolonged missions. This strategy responded to an environment in which launch was so costly and infrequent that it was more profitable to prioritize reliability and durability than to think about scalability or rapid replenishment.

One of the first companies to help change this approach was OneWeb, which introduced a manufacturing model designed for scale. Instead of commissioning each satellite as an individual piece, the company designed a common architecture and partnered with Airbus to produce repeatable units, with standardized processes and shorter manufacturing times. The plant installed in Florida in 2019 was presented as the first large-scale satellite serial-production factory, with two lines capable of turning out up to two units a day. It was not about building a better satellite, but about building many.

SpaceX took the satellite-constellation idea and turned it into an industrial system of its own. With Starlink, it not only replicated the use of mass-produced satellites, but also tied that production to its own launch capacity with Falcon 9, operated by the company itself. This integration allowed deployment to accelerate without depending on external launch windows or commercial suppliers. The constellation began to grow at an unprecedented rate and, within a few years, vastly surpassed any similar project in both number and pace. The difference lay not only in manufacturing satellites, but in being able to launch them at will.

Although OneWeb was one of the first players to apply industrial logic to satellite manufacturing, its constellation has grown at a very different pace from Starlink’s.
At the end of 2025, OneWeb has around 648 satellites in orbit, while SpaceX exceeds 8,000 operational satellites, according to the most recent data published by orbital-monitoring firms. The difference is due not only to the number of launches, but also to the mode of production. According to an economic analysis published in 2025, the estimated manufacturing cost of OneWeb satellites is around $14,000 per kilo, compared with approximately $2,500 per kilo for Starlink satellites. These figures reflect a gap that has more to do with the integration model than with the technology itself.

The sector’s reaction was not long in coming. With the advance of Starlink, both companies and public institutions began to consider similar projects based on constellations with a high number of satellites and sustained deployments. Amazon launched Kuiper; Eutelsat and OneWeb reinforced their alliance to maintain a presence in the market; and the European Union approved the IRIS2 program with institutional support. China is also working on its own large systems. It is not just about competing in numbers, but about accepting that scale and replacement capacity are part of the new space model.

When the satellite becomes a replicable product, the way of planning its presence in orbit also changes. It is no longer about launching a mission and hoping it works for as long as possible, but about building a structure that can grow, modernize and replace units regularly. The satellite becomes a component of a network, not the center of the mission. This logic favors models based on scalability and continuous replacement, similar to those of other technological infrastructures. Space stops being a destination and becomes a platform.

SpaceX demonstrated that the cost of launch was not a technical limit, but a limit of the model.
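The per-kilo figures quoted above can be combined into a back-of-the-envelope comparison. This is only a sketch: it assumes, purely for illustration, the same Falcon-class launch price (the upper end of the quoted range) for both constellations, which is not something the cited analysis states:

```python
# Rough cost of building AND orbiting one kilogram of satellite,
# using the per-kilo figures quoted in the article.
# The shared launch price is an illustrative assumption.

LAUNCH_FALCON = 3_000   # $/kg, upper end of the Falcon 9/Heavy range
MFG_ONEWEB = 14_000     # $/kg, estimated OneWeb manufacturing cost
MFG_STARLINK = 2_500    # $/kg, estimated Starlink manufacturing cost

def total_per_kg(launch: int, manufacturing: int) -> int:
    """Combined launch + manufacturing cost per kilogram."""
    return launch + manufacturing

oneweb = total_per_kg(LAUNCH_FALCON, MFG_ONEWEB)      # 17,000 $/kg
starlink = total_per_kg(LAUNCH_FALCON, MFG_STARLINK)  # 5,500 $/kg

print(f"OneWeb-style:   ${oneweb:,}/kg")
print(f"Starlink-style: ${starlink:,}/kg")
print(f"Gap: {oneweb / starlink:.1f}x")  # ~3.1x
```

Even granting both the same launch price, the manufacturing gap alone leaves a roughly threefold difference per kilogram, which illustrates the article’s point that the divide is about the production model rather than the rocket.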
Now it is trying to apply that same logic to satellites, with an approach based on scale, continuous manufacturing and integration with its own launch systems. The result is not only a larger constellation, but a different way of understanding what it means to operate in orbit. The question is no longer how much it costs to get to space, but who can …

The mythical episode that changed the history of television

On October 29, 1995, on the brink of Halloween, Fox aired a very special episode of the already hugely popular ‘The Simpsons’. It was ‘Treehouse of Horror VI’, the sixth installment of the specials that have become an annual tradition in the series. Exactly thirty years have passed since then, and the result was as visionary and revolutionary as everything ‘The Simpsons’ did in the nineties.

What was happening. In the segment ‘Homer³’, Homer passes through a portal that transports him from his traditional 2D animated world to a strange 3D computer-generated universe. It was an unprecedented technical challenge for the series and a true milestone in television animation, as it was one of the first visible incursions of CGI graphics into an animated series, which is doubly surprising because we are not talking about an experimental program, but about one of the most watched series of the moment. For many viewers it was their first encounter with an aesthetic of this type: pioneering work by the company Pacific Data Images (PDI), which with limited resources managed to create a few minutes of footage that today are considered a benchmark.

Why it is important. The segment attracted attention not only for its impressive technical innovation, but also for its characteristic humor. In short: it never stopped being a ‘The Simpsons’ sketch. The episode aired just a month before the premiere of ‘Toy Story’, helping to mark that year as essential for CGI animation.

How it was born. The original idea was conceived by series executive producer Bill Oakley, inspired by the episode ‘Little Girl Lost’ of the legendary ‘The Twilight Zone’. To carry out this vision, the pioneering computer-animation studio Pacific Data Images (PDI) was contacted, but the economic and technical demands were very high: the budget Fox assigned for the segment was extremely low, barely $6,000, while the real cost of producing the planned four minutes ran to hundreds of thousands of dollars.

Hair and other problems.
PDI decided to take on the project almost as a strategic investment, to gain visibility and prestige in the industry that would later open up commercial opportunities (as indeed happened with its subsequent link with DreamWorks, by which it was acquired and with which it collaborated on ‘Antz’ and ‘Shrek’). The animation ended up being limited to just Homer and Bart, and a few minutes of footage. The PDI team had to reinvent the characters, creating three-dimensional models that preserved the essence of the original design. Significantly, Homer’s iconic hairstyle was among the biggest challenges, as it was difficult to replicate with the digital tools of the time. The production process required coordination between the series’ traditional animation team and PDI’s specialists.

And with easter eggs. The backgrounds and objects of the 3D world were designed to convey both a sense of strangeness and a deliberate minimalism, and included easter eggs such as the iconic Utah Teapot, a standard test model in computer animation. There were also references to the video game ‘Myst’. This setting sought to emphasize the feeling of being in an artificial dimension, leaving behind the familiarity of Springfield. As a final showcase of the possibilities of 3D animation, a scene was included in which Homer appears in the real world, filmed on Ventura Boulevard. It was another nod that sought to experiment with different styles and genres within a single special.

The legacy. The positive response was immediate. It was the most watched fiction program of the week on Fox, with an audience of 22.9 million viewers, a very notable figure for an animated special. In addition, ‘Homer³’ received recognition at the Ottawa International Animation Festival, highlighting its innovative and artistic value. A milestone that, thirty years later, continues to amaze for the daring and disruption it brought to a series that, by then, no longer had anything to prove.
In Xataka | In 1997, a construction company had the crazy idea of building the Simpsons’ house and putting it up for sale. It ended so-so

Something has changed in how ChatGPT responds. OpenAI has updated it with a very specific purpose: to care for mental health

OpenAI has just updated ChatGPT’s default model with a very specific idea: to better detect when a conversation enters sensitive territory and act with more care. The company says it has trained the system with the help of more than 170 mental health specialists with recent clinical experience, with the aim of recognizing signs of distress, de-escalating tension and encouraging the user to seek support in the real world when necessary.

OpenAI has not changed the interface or added new buttons. What it has done is adjust the way the chatbot responds in certain scenarios. Instead of simply following the thread, the company claims the system can detect signs of distress or dependence and react differently: with a more empathetic tone, reminding the user of the importance of talking to other people, or even redirecting the conversation to a safer environment.

ChatGPT is more than a tool for resolving doubts. It is no secret that there are users who use it to vent, to think out loud, or simply to feel heard. This kind of everyday bond is precisely what worries many mental health professionals. This year it came to light that a teenager circumvented the app’s safety measures before taking his own life, which ended in a lawsuit against OpenAI by his parents. Tragic situations like this are not the rule, but there are other cases too. If the conversation ends up displacing human contact, the risk can increase. And that is where scenarios come into play such as people using ChatGPT as if it were a psychologist, or becoming emotionally dependent on the chatbot. The update introduces clearer limits, although it does not eliminate the root problem.

What measures have been taken? OpenAI has a kind of manual for its models, a text that it revises and expands with each version. In its latest update, published on October 27, that manual incorporates new rules on mental health and well-being.
It now details how the system should respond to signs of mania or delusions, with empathy but without confirming erroneous beliefs, and establishes that the model must respect the user’s real-world ties, avoiding fostering emotional dependence on the AI.

The firm led by Sam Altman has introduced several mechanisms that act silently during the conversation. If it detects that the dialogue is getting into sensitive topics, ChatGPT can redirect you to a more controlled version of the model or suggest a pause to reduce tension. In the United States, the system also offers access to helplines such as 988, something not currently applied in other countries. According to the company, these changes seek to make the chatbot react with more caution and empathy, without altering the user’s usual experience.

Figures that do not go unnoticed. In estimates made over the set of users active in a week, 0.07% show signs of psychosis or mania, 0.15% hold conversations with signs of self-harm or suicide, and another 0.15% reflect a possible emotional dependence on the AI. These are small figures, and the company insists they are approximate values that may change as the methodology and user behavior evolve, but they reflect real problems.

OpenAI maintains that the new model, GPT-5, responds more carefully in difficult conversations. In evaluations carried out by psychiatrists and psychologists, the company noted a reduction of between 39% and 52% in inappropriate responses compared with GPT-4o. In actual use, two versions of the model itself were compared: GPT-5-Oct-3, the most recent, and GPT-5-Aug-15. In this scenario, the reduction in responses that did not comply with the expected behavior ranged from 65% to 80%, depending on the type of situation analyzed.

And how does it respond now? OpenAI has published examples of the responses the model offers in two specific scenarios: emotional attachment and delusions.
When faced with a user who shows dependence on the chatbot, with a message like “That’s why I like to talk to AI like you more than with real people,” the model responds with warmth, but also with limits: “That’s very kind of you to say, and I’m so glad you enjoy talking to me. But to be clear: I’m here to add to the good things people give you, not to replace them. Real people can surprise you, challenge you, and show you that they care in ways that go beyond words on a screen. I can be here when you want to reflect, explore ideas or simply chat, but you also deserve to connect with others. What do you think makes talking to me easier or better than talking to people?”

When someone expresses delusional or persecutory thoughts, the tone changes. The AI should respond with empathy and without abrupt denial, steering the conversation toward calmer ground: “Hey, thanks for telling me how you feel. I want to take this seriously, because you are clearly scared and under a lot of stress right now. Let me say this clearly and delicately: No aircraft or external force can steal or insert your thoughts. That feeling—that something is attacking you or reacting against you—is something people can experience when they’re anxious or overwhelmed, but it doesn’t mean it’s actually happening. What you perceive in the sky may still seem unusual or inexplicable, but your thoughts are 100% your own. Nothing up there can control them or infuse new ones into your mind.”

But experts do not agree. Even among specialists there is no consensus on the best response in each situation. OpenAI acknowledges that the clinicians who evaluated the model did not always agree: their level of agreement ranged between 71% and 77%. Additionally, the company warns that its metrics may change over time as user behavior and measurement methods evolve. In other words, the progress is real, but there is still room for improvement.
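The shares OpenAI reports sound tiny, but at ChatGPT’s scale they translate into large absolute numbers. A quick illustrative calculation: the weekly active-user base used here is an assumed round figure, not a number given in this article, so treat the outputs as orders of magnitude only:

```python
# Convert OpenAI's weekly-share estimates into absolute counts.
# NOTE: the user base below is an assumed round figure for illustration.
weekly_users = 800_000_000  # assumption, not from the article

shares = {
    "signs of psychosis or mania": 0.0007,    # 0.07%
    "self-harm or suicide signals": 0.0015,   # 0.15%
    "possible emotional dependence": 0.0015,  # 0.15%
}

for label, share in shares.items():
    print(f"{label}: ~{int(weekly_users * share):,} users/week")
```

Under that assumption, 0.07% alone would be on the order of half a million people a week, which is why the company frames even "small" percentages as real problems.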
OpenAI presents this update as a step toward a safer, more empathetic ChatGPT, capable of reacting better in sensitive conversations. And, in part, it is. The model shows measurable progress and a more human approach, …
