Three Chinese astronauts have delayed their return to Earth after a suspected impact on their spacecraft. The prime suspect: space junk

The crew of the Shenzhou-20 spacecraft, which was scheduled to land this Wednesday in Inner Mongolia, has been forced to postpone its return to Earth. The cause is not bad weather, as is usual in crewed flights, but the most feared enemy of modern space exploration: a probable impact from space debris.

Evaluating the risks. The China Manned Space Agency (CMSA) broke the news this morning: the return of the three astronauts aboard Shenzhou-20 has been delayed indefinitely on suspicion that the ship may have been hit by a small piece of space debris. The ship remains docked at the Chinese Tiangong space station, where the crew is safe. The crew and engineers on the ground are analyzing the impact to determine the extent of the damage and assess the risks of the return journey.

The problem is reentry. Three people traveled to the Chinese space station in April aboard Shenzhou-20: Chen Dong, Chen Zhongrui and Wang Jie. The problem is not their immediate survival, but whether their ship can survive the atmospheric reentry maneuver after the impact. In low orbit, objects travel at hypersonic speeds of up to 28,000 km/h. At that speed, even a tiny fragment of metal or paint can release devastating kinetic energy, especially if it hits critical components like the ship's heat shield or its parachutes.

What do we know for now? The CMSA has not specified where it believes the impact occurred or what data alerted them to the event. Engineers on the ground and the crew in orbit will now perform telemetry checks, look for possible leaks, and analyze the guidance and propulsion systems. They will most likely use the Tiangong station's 10-meter robotic arm to conduct a detailed visual inspection of Shenzhou-20. If necessary, an extravehicular activity (EVA, or spacewalk) to assess the damage up close is not ruled out.

A problem China was trying to avoid.
The irony of this incident is that the Shenzhou-20 crew itself is fully aware of the danger. In fact, part of its six-month mission in orbit focused on mitigating this very risk. Two of the astronauts spent six hours in September installing additional shields against orbital fragments on the outside of the Tiangong station. But although they reinforced the station, the impact seems to have occurred on the vehicle that was to bring them home.

Image | CMSA

In Xataka | Three large pieces of space debris reenter every day: "one day our luck will run out and they will fall on someone"
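The kinetic-energy claim above can be checked with a quick back-of-the-envelope calculation. The 28,000 km/h orbital speed comes from the article; the 1-gram fragment mass is an illustrative assumption:

```python
# Kinetic energy of a small orbital-debris fragment: KE = 1/2 * m * v^2.
# The 1 g mass is an illustrative assumption; 28,000 km/h is from the text.
mass_kg = 0.001                  # a 1-gram fleck of metal or paint
speed_ms = 28_000 / 3.6          # 28,000 km/h in m/s (~7,778 m/s)

kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"{kinetic_energy_j / 1000:.1f} kJ")  # about 30 kJ
```

Roughly 30 kilojoules, on the order of ten times the muzzle energy of a rifle bullet, from a fragment the size of a paint fleck, which is why a heat shield or a parachute container cannot simply be assumed intact after a hit.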

What the Richter scale is, how it works, and why you should stop using it when talking about earthquakes

We still often hear about "an earthquake measuring so many degrees on the Richter scale" in the news. This is incorrect for more than one reason. To understand why, we must look at what the Richter scale is, when it is used and, above all, when it is not.

What is the Richter scale. The Richter scale is a scale used to measure the magnitude of an earthquake. As Spain's National Geographic Institute (IGN) defines it, the magnitude of an earthquake is "a measure of the energy released by an earthquake and is determined from the signal recorded in a seismogram." There are several magnitude scales for earthquakes, since seismic waves can vary in their characteristics. Among them, the best known to the public is the Richter local magnitude, or simply ML for "local magnitude." Local, in this context, refers to the fact that this scale is used to measure earthquakes captured from close range: specifically, at less than 600 kilometers, according to the IGN.

Who was Charles Francis Richter. The name "Richter scale" refers to the American seismologist Charles Francis Richter. Born in 1900 in Ohio, this physicist and seismologist left as his legacy the first scale of its kind, a systematic way of measuring the strength of an earthquake. Seismographs had been used for decades to record earthquakes, but it was in 1935 that Richter proposed establishing a magnitude with which to measure these events. Starting from this idea, Richter had the help of the German-American seismologist Beno Gutenberg to put it into practice. Charles F. Richter died in 1985 in California.

The scale. And how are the magnitudes calculated? The scale is based on the logarithm of the amplitude of seismic waves.
That is, the magnitude of an earthquake is proportional (logarithmically) to the height of the waves drawn by seismographs. The calculation must be "corrected" to, among other things, adjust it to a reference seismograph.

What we measure with the Richter scale, and what we don't. We pointed out before that the Richter scale, or ML, is used locally, and for seismologists "local" refers to earthquakes originating no more than 600 kilometers from the seismograph that measures them. But not all earthquakes that occur in "local" contexts are alike, so not all of them are measured with this scale. The use of the ML scale is also limited by the earthquake's magnitude: it is only used for earthquakes of small or moderate size (magnitudes between 2 and 6.5).

The objective of measuring the magnitude of an earthquake is to get an idea of its strength. To do this, scales such as Richter's use the waves generated by the earthquake, as captured by seismographs. The problem, as experts realized, is that the waves of large earthquakes do not always allow the magnitude to be extrapolated with the Richter scale: sometimes the magnitude calculated this way overestimates the strength of the earthquake, and sometimes the opposite occurs. In other words, even if two earthquakes are both recorded less than 600 kilometers from a seismograph, the scale is not always accurate for both: sometimes it works fine, but other times the actual strength of the earthquake is higher or lower than what it measures.

To compensate for the shortcomings of ML, geologists created different scales, such as body-wave magnitude (Mb) or surface-wave magnitude (Ms). Each of these scales works in its own context, but the problem is that none is universally applicable. To solve this, the Mw scale had to be created, which we will talk about below.
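The logarithmic relationship described above can be sketched in a few lines. The amplitude values and the distance correction below are illustrative assumptions, following Richter's definition ML = log10(A) − log10(A0):

```python
import math

# Richter's local magnitude: the base-10 logarithm of the maximum trace
# amplitude, minus a distance-dependent correction log10(A0).
# The values below are illustrative, not from any real seismogram.
def local_magnitude(amplitude_mm: float, log_a0: float) -> float:
    return math.log10(amplitude_mm) - log_a0

m_small = local_magnitude(10.0, log_a0=-3.0)    # 10 mm trace amplitude -> ML 4.0
m_large = local_magnitude(100.0, log_a0=-3.0)   # 10x the amplitude    -> ML 5.0
print(m_small, m_large)
```

The sketch makes the key property visible: each tenfold jump in wave amplitude adds exactly one unit of magnitude, which is what "logarithmic scale" means in practice.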
Magnitude and intensity. To avoid confusion, we have to keep concepts such as earthquake intensity clear. The intensity of an earthquake has its own scale, but it does not measure the strength of the earthquake, only its impacts: the European Macroseismic Scale grades earthquakes from I to XII based on the damage caused.

The ML scale and the Mw scale. As the United States Geological Survey (USGS) explains, the limitations of the existing scales made it necessary to create a new scale to cover them. This is how the moment magnitude scale, or Mw, was born. This scale, although adjusted to "coincide" with the local magnitude scale where the latter applies, is based on a very different principle. Where the Richter scale converts recorded seismic waves into a magnitude, the Mw scale uses geological properties of the tectonic movement. It starts from the measurement of the seismic moment: the product of the area of the fault that slipped, the distance of that slip, and a measure of the rigidity of the rock that makes up the fault. This measurement is transformed through a logarithmic formula to obtain the moment magnitude (Mw) of the earthquake. This scale is the closest thing we have to a universal one, since it was created to be used for all earthquakes, even those with magnitudes greater than the Richter scale supports. It is thus the most used today, although in the news we will keep hearing about Richter's.

Saying degrees when they are magnitudes. Another common mistake when talking about earthquakes and their scale is to speak of degrees, for example saying "an earthquake measuring 5.5 degrees on the Richter scale." The origin of this common error is not clear, but some attribute it to the fact that there are scales (such as the one used to study the intensity of earthquakes) in which degrees are used.
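The seismic-moment calculation described above can be sketched as follows. The fault dimensions, slip and rock rigidity are illustrative assumptions; the conversion Mw = (2/3)·(log10 M0 − 9.1), with M0 in newton-meters, is the standard Hanks–Kanamori formula:

```python
import math

# Seismic moment M0 = rigidity x fault area x slip distance (units: N*m).
# The example fault below (10 km x 10 km, 1 m of slip) is illustrative.
rigidity_pa = 3.0e10                  # typical crustal rock rigidity, in pascals
fault_area_m2 = 10_000 * 10_000      # a 10 km x 10 km rupture surface
slip_m = 1.0                          # one meter of slip along the fault

m0 = rigidity_pa * fault_area_m2 * slip_m   # 3e18 N*m

# Hanks-Kanamori moment magnitude:
mw = (2.0 / 3.0) * (math.log10(m0) - 9.1)
print(f"Mw = {mw:.2f}")
```

For this hypothetical fault the result is a magnitude of roughly 6.25, and the point of the formula is that it depends only on physical properties of the rupture, not on how tall the waves look on any particular seismograph.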
In Xataka | 0.2 magnitude points and 70 years of disaster preparedness: what differentiates the deadly tsunami of 1952 from the one that occurred …

Rosalía's new album has leaked two days before its release. Actually, that suits her just fine.

One of the most careful and meticulously planned promotional campaigns in recent times has received a bucket of cold water: Rosalía's new album, 'Lux', which was scheduled for release this Friday, November 7, has leaked on social networks. The question is not so much "How did it happen?" as "How could it have happened at this point?"

First warning. A first sign of the leak came with the sudden appearance on Spotify of the album's second preview single, 'Reliquia', this Tuesday. It lasted only a few minutes on the streaming platform and was removed almost immediately. What was initially read as a marketing maneuver was soon identified by the artist's label, Columbia Records, as an error by the company's parent in the United States. Apparently, 'Reliquia' was planned as a preview ahead of the release of 'Lux', but not that day.

What does 'Reliquia' tell us? The quickest listeners also had time to look at the song's credits: no fewer than seven co-writers including, besides Rosalía herself, the American Ryan Tedder, a member of OneRepublic who has gained fame as a successful songwriter for stars of many different styles, such as Adele, Beyoncé, Taylor Swift and Ed Sheeran. Of course, there are already theories of a premeditated leak, one that can be read as a message in tune with the spirituality planned for 'Lux'.

Another leak. After 'Reliquia', the entire album has circulated on social networks and platforms such as Telegram in files titled 'Lux Leak', clashing with the meticulous promotional campaign Columbia Records was preparing. Among the planned actions are an appearance next Monday on Broncano's 'La Revuelta' and a private presentation party this Thursday, to which will be added a performance at the LOS40 Music Awards Santander 2025 gala this Friday, the day of the album's release.
The record label has not made an official statement.

She's not the only one. Rosalía is, of course, not the only artist to suffer leaks recently. In 2024 and 2025, someone had access to almost all of Drake's new album, leaking seven songs and various unreleased materials; in 2024, 100 GB of unreleased content leaked. In October, Taylor Swift's 'The Life of a Showgirl' leaked hours before its official launch, and even some physical copies reached fans ahead of time. Previously, Swift had suffered leaks of demos and unreleased songs. The almighty K-pop group BTS suffered them too in 2025, although in that case we do know the culprit: a producer at their record company, who leaked demos of an upcoming album and forced the label to change its strategy for that future release.

Why it suits her. Although officially, and beyond fan theories, the leak was an accident, the truth is that Rosalía will not be harmed by this situation; quite the opposite. The fact that it happened only two days before the worldwide launch not only does no material damage, it means we will have spent several days talking about the album before it arrives. The conjecture is not far-fetched: Madonna herself, who today praised Rosalía's work, is suspected of having done the same (supposedly) with 'Rebel Heart', as are Korn with 'Untouchables' and Beyoncé with '4' and 'Renaissance'. The line between the leak and the mystery teaser is very fine.

What will come on Friday. Just a couple of days should not greatly affect Friday's launch. 'Lux' is Rosalía's fourth studio album and contains 18 songs in its physical edition. It was recorded in collaboration with the London Symphony Orchestra and promises to be an ambitious work where classical music and experimental pop collide, and where themes linked to feminine mysticism and a certain desire for transcendence come into play.
The lyrics are in 13 different languages, including Catalan, Spanish, Arabic, English, French, German, Hebrew and Japanese.

In Xataka | Rosalía's revolution with her score is not an isolated case: pop artists have turned suspense into the best marketing

If you really want to understand China (and how it sees the future), it’s easy: read its five-year plans

Today's China bears little resemblance to that of the mid-20th century, when, in the time of Mao Zedong, the People's Republic decided to launch its first five-year plan. It was 1953 and the country was preparing for the Great Leap Forward, an attempt at industrial modernization that ended in a famine with tragic consequences. Since then China has strung together five-year plans almost without interruption, documents that help to understand its evolution. Reading them is interesting now that the Central Committee of the Communist Party has set the machinery in motion to produce a plan for 2026-2030.

Playing the short or the long game? On Monday Isaac Stone Fish, founder of Strategy Risks, opened an interesting debate on X: what horizon does China use when drawing up strategies? Does it focus on the long term, or does it think only a few years ahead? It is not a minor issue. Stone himself brought up a video released by the White House, a fragment of an interview Trump gave to CBS in which it was claimed that the Chinese "are playing the long game."

"A recommended read." "Let's stop saying that the Chinese are playing the long game. This is orientalist nonsense that we must eradicate from our discourse on China. Read the five-year plan from five years ago and you will see how different China has become from what its leaders predicted. The Chinese think, like everyone else, mainly about the challenges they face today and in the years just ahead," claims the analyst, who argues that long-term rhetoric serves other purposes, such as the party's self-affirmation. He is not the only one who believes it. "If you are interested in reality, read the Chinese five-year plans. They are instructive," added another user on X. "Read a plan from five years ago. It is recommended." But what are five-year plans?
Economic and social guides: five-year guidelines that the Chinese authorities set for themselves and that essentially fix objectives in development, industry, innovation or welfare, along with the paths to reach them. The first dates back to 1953 and they have followed one another since then (with almost no pauses), with greater or lesser success, but exerting a key influence on the country's evolution over the last 70 years. In fact, it is common to hear that the turning point in China's modern development came in 1978, with the economic reform promoted by Deng Xiaoping, which was followed shortly after by a five-year plan for the period 1981-1985.

"A macro guideline." "The five-year plan serves as a way for leaders to take stock, examine challenges and tasks, set directions and move forward. It deserves close attention, as strategic thinking and planning have become a rarity among governments," Nomura analysts explain to EFE. "It is a macro-level instruction or guideline for the market (including investors, state-owned enterprises and the public) to have the correct expectation of what government policy will be in the future," Li Lun, professor at Peking University, tells AP. Its role is important because, as Neil Thomas, researcher at the Asia Society Policy Institute, points out, it marks a key difference with Europe or the US: "Western politics operates through electoral cycles, but Chinese policymaking operates through planning cycles."

In the spotlight. That the Chinese five-year plans are being talked about right now is no coincidence. The country is immersed in preparing the new roadmap that will mark its steps until 2030, against a complex backdrop marked by the real estate crisis, the weakening of domestic consumption, trade tensions, youth unemployment and the aging of the population, among other challenges.
A few days ago the Central Committee of the Chinese Communist Party met behind closed doors to discuss the new five-year plan, a document that will not be approved until March 2026 but whose broad strokes Beijing has wanted to advance. Among other goals: technological self-sufficiency, keeping manufacturing at a "reasonable" level and raising life expectancy to 80 years.

Why is it important? Because although the approval of the new five-year plan is still some way off, in the past this roadmap has been key to understanding the priorities of the Chinese government, and their development. At the end of October Nick Marsh published an analysis on the BBC detailing three occasions on which the plans have influenced the world economy: the reformist, opening trend of 1981-1985, the commitment to "strategic emerging industries" during 2011-2015, and "high-quality development" (2021-2025).

Images | Dominic Kurniawan Suryaputra (Unsplash) and Chinese Communist Party

In Xataka | Xi Jinping wants two things: first, to create a global center that regulates AI. Second, that it be in Shanghai

There is an extensive system to keep you from losing signal in the 48 underground kilometers of the M-30. It's time to renew it

Madrid City Council will completely renew the radio communications network of the M-30 tunnels, a system that has been in operation for almost two decades and is vital for coordinating emergency services and keeping drivers informed underground. The tender for the project opens today, Wednesday, with a budget of 4.8 million euros.

Why it needs a renovation. The 48 kilometers of M-30 tunnels register 488 million users a year, according to El Mundo. The current radio communication system is practically the same one with which these underground galleries were inaugurated almost 20 years ago and, like any technology, it requires updates to keep providing service and guarantee quick responses to any incident.

Which systems keep communications up. As the newspaper reports, the infrastructure has two differentiated networks that allow coordination between security forces: the TETRAPOL system covers the National Police and Civil Guard, while TETRA connects the Municipal Police, Firefighters and SAMUR. In addition, according to El Mundo, these systems guarantee the operation of analog emergency channels, Madrid Calle 30's communications and commercial FM stations. All of this works thanks to two radiating cables installed on the roof of the tunnels: one exclusively for emergency services and another for the rest of the transmissions. "What makes it possible for this system to respond are radio communications. When there is an incident, the emergency services and the Police have an effective way of coordinating among themselves," Antonio Jesús Tocino, managing director of Madrid Calle 30, explains to El Mundo.

When will the works start and what improvements will they bring? According to the same newspaper, work will begin in spring 2026 and will continue for 13 months, until April 2027.
The intervention will cover control centers, technical rooms and the more than 200 emergency exits, without affecting traffic except for specific outages to replace amplifiers. Among the notable changes is the increase in FM stations, which will go from the current 12 to 48, expanding the ability of the M-30 Radio system to give drivers safety recommendations in the event of an emergency. "We went looking for the most modern thing on the market," Tocino assures.

A broader technology plan. This renovation is part of a comprehensive modernization of the facilities that has already mobilized 34 million euros. Recent improvements include Bluetooth beacons that keep the GPS signal alive inside the tunnels, as well as a project for centralized management of the control center financed with Next Generation European funds.

Cover image | Google Maps

In Xataka | 171 million euros later, Metro de Madrid wants to reopen line 7B. The big question is whether the tenth time will be the charm.

30,000 lightning strikes, orange warnings and severe events: don’t call it ‘squall line’, call it ‘new normal’

During the early hours of this Wednesday, November 5, the arrival of a cold front over the Peninsula has spawned a very active squall line across the southwest. And in this case "very active" is no exaggeration: the images arriving from Portugal are incredible and, at the moment, the system is heading toward Extremadura and western Andalusia. The interesting thing is that this is no longer just weather news.

Okay, but what is a squall line? It is an organized storm system that, often ahead of a cold front, arranges itself in a line. Because of its structure, the phenomenon produces strong and destructive winds, torrential rain, hail and lightning. These systems are also known for advancing very quickly and being able to cause significant damage.

In Xataka | The "tropicalization" of the atmosphere is going to change Spain, and not exactly for the better

And so it has been. Portugal's Civil Protection recorded more than 150 overnight incidents and, as the Portuguese press explained, it is not just the problems caused by rain and wind: tens of thousands of lightning strikes have been recorded, about 30,000 in a few hours. Given this, AEMET activated orange warnings in Galicia, Extremadura and Andalusia, and Extremadura's 112 service is braced for rains of 5-20 l/m² in very short periods. It is not a lot of water, but in these circumstances it can cause a lot of problems.

Aren't we talking about autumn showers? No, these are not scattered showers: this is organized convection capable of producing severe gusts, hail and wet downbursts, formations that raise the risk for urban areas, electrical networks and mobility. It is another episode of "this is not just an Atlantic storm," something that has been with us for weeks now. It is true that November is a typical month for these storm corridors in the southwest, but the data suggest we are facing something more.
What is really happening? In technical terms, we are talking about the arrival of an Atlantic trough and a cold front with enough wind shear to organize convection and force a quasi-linear system. Ambient humidity does the rest, and that is the key. As we said a few days ago, that area of the peninsula is prone to low-level convergences which, with adequate shear and sufficient humidity, organize convectively very easily. As connections with the Gulf of Mexico (the famous "rivers of moisture") become more common and the available humidity grows with them, these systems will become more frequent and more intense. It is the same thing that happens in the Mediterranean with DANAs: whether or not climate change causes more of them, the amount of "available fuel" turns any spark into a fire. Meteorologically speaking, of course.

Image | Carlos Virazón

The news "30,000 lightning strikes, orange warnings and severe events: don't call it 'squall line', call it 'new normal'" was originally published in Xataka by Javier Jiménez.

Swapping hordes of tourists for undersea cables

If national capitals are the cornerstone around which their economies revolve, in Portugal there is not much debate, although there is a certain weariness. Years ago, Lisbon set out to be a tourist capital, and this summer it was confirmed that it has become the biggest tourist hell in Europe, with housing prices soaring while the urban center lost a good part of its population. But Portugal has a simple plan that is difficult to execute: swap hordes of tourists for submarine cables.

The new horizon. Sines, a seemingly modest coastal municipality, is once again at the center of Portugal's strategic ambitions. After decades in which tourism became the country's main economic engine (representing almost a quarter of GDP), the Portuguese government now wants to rebalance its production model, drawn by an opportunity that mixes geography and technology. How? Sines is the point where the submarine cables connecting Europe with America and Africa come ashore, and it will soon also link to the United States through a Google line to South Carolina.

Portugal as a data center. This combination of global connectivity, available space and energy infrastructure has driven projects such as an 8.5-billion-euro mega data center, a 2-billion-euro battery factory and the expansion of the deep-sea port managed by the Port Authority of Singapore: investments equivalent to 4.6% of the country's GDP that could generate more than 5,000 jobs. For Lisbon, Sines is not an experiment, but the link that could transform the Portuguese economy into an Atlantic logistics and technology platform.

Ambitions interrupted. However, the municipality carries a legacy of broken promises.
In the 1970s, the authoritarian regime tried to turn it into the country's industrial hub, building a commercial port, a refinery and a power plant in the expectation of processing fuels from the Portuguese colonial empire. After the Revolution of 1974 and the loss of the colonies, the project deflated: the port was underused, the refinery survived with difficulty and the power plant ended up closing in 2021, undercut by cheaper renewable energy. The region grew up expecting a boom that never materialized, and many of the newcomers ended up leaving. That memory weighs heavily today on its inhabitants, who watch this new wave of investment with a mixture of excitement and caution.

Pressure. Bloomberg reported that the arrival of thousands of workers linked to the construction of new data centers, factories and port expansions is straining the urban fabric of a city that remains small, with limited services. The housing supply is insufficient, some workers sleep in cars, and residential projects advance slowly for lack of financing. Basic services (schools, health centers, municipal infrastructure) show signs of saturation. This mismatch between investment and everyday infrastructure feeds the fundamental doubt: whether Sines will this time be a city that retains wealth or whether, as in the past, the activity will arrive, the works will be completed and the value generated will once again flow to other regions and companies.

Logistics hub. As we said, the port of Sines occupies a strategic position between Europe, Africa and America, and its expansion seeks to turn it from a transshipment point between ships into a port that moves goods into the interior of the peninsula. But this transition requires fast connections with Spain and central Europe, and the road corridor to the border is incomplete, there is no passenger rail connection, and freight transport is slow. The solution?
The government is studying improvements that would cut up to three hours from logistics routes to Spain, which would allow it to compete with ports such as Valencia or Algeciras. Transport infrastructure is therefore the real turning point: without it, Sines will remain a peripheral port; with it, it could become one of the central pieces of European Atlantic trade.

Technology, energy and capital. The new projects in Sines are marked by international investment. The Start Campus data center runs on renewable energy and has secured 1.2 gigawatts (a capacity comparable to Lisbon's consumption), reusing the old thermal power plant's seawater cooling systems. The CALB battery plant, partially controlled by Chinese capital, will receive up to 350 million euros in public support and aims to produce batteries for 200,000 electric vehicles per year by 2028. The combination of available clean energy, seawater for cooling, physical space and direct access to submarine cables makes Sines a privileged node in a world where digital infrastructure weighs as much as industrial infrastructure.

The great opportunity. For many inhabitants, this transformation may be the opportunity that never came; for others, it is a new cycle in which large companies will take center stage and the local community will be left out. The difference between one outcome and the other will depend on three levers: accessible housing, infrastructure that connects Sines with the rest of the country, and the state's ability to capture and redistribute the value generated. What is at stake is not only the future of a coastal city but the Portuguese economic model as a whole: if the municipality goes from being a tourist backdrop and a transit port to a European technological and logistics node, the country could leave behind decades of dependence on tourism as an economic monoculture. If it does not, Sines will once again be a symbol of unfulfilled promises.
Image | Kalboz, MaritimeGoogle

In Xataka | Years ago, Lisbon set out to be a tourist capital. Now it has become the biggest tourist hell in Europe

In Xataka | If the question is "can a country sustain itself with renewable energy alone," the answer is right here: Portugal

There are green, orange and even purple USB ports. The color rule that indicated their generation is extinct

There was a time when everything was easier: if the USB port was white, it was slow; if it was black, it was standard; and if it was blue, it was the fastest. That rule, which helped us easily identify USB-A generations, is gone. The arrival of new standards, charging functions and brand marketing means that today we find a wide range of green, orange and purple ports that no longer mean much.

Image: StorageReview

The original color code. The current chaos, as we explain in our guide to the USB standard, was not planned. The USB-IF organization tried to standardize the colors: white corresponded to USB 1.x, black to USB 2.0 (480 Mbps), and blue (or turquoise) to the fast USB 3.0 (5 Gbps).

First confusion. It came from the charging ports: the first problem arose when colors began to be used to indicate power functions, not just data transmission. This is how yellow, orange and red ports arrived. These usually indicate an "Always On" or "Sleep & Charge" function, meaning the port keeps providing power even when the computer is turned off or asleep.

More speed, more colors. To differentiate USB 3.1 (10 Gbps) and 3.2 (20 Gbps), the standard suggested turquoise blue or, failing that, red. Here the system began to break down. And the final blow came from marketing. A purple USB cable on a Huawei device indicates that it supports SuperCharge, its fast-charging technology.

Image: Reddit

The rule went completely extinct when brands decided to use colors as corporate identity. The most famous case is Razer, which dyes its ports a characteristic lime green. Likewise, if you see a purple port, it is probably from Huawei: the Chinese manufacturer uses them to identify devices compatible with SuperCharge (its fast-charging system), although technically it is still a USB 3.1 port.

Chaos also in names. If the colors are already a mess, so are the names, and USB-IF itself has contributed by renaming the standards.
USB 3.0 was renamed "USB 3.1 Gen 1" and is now "USB 3.2 Gen 1." In parallel, USB 3.1 is now "USB 3.2 Gen 2." This makes it almost impossible for a user to know what they are buying without reading the fine print, a mess that the Wi-Fi Alliance solved much more elegantly with names such as Wi-Fi 5, 6 or the most recent Wi-Fi 7.

The real culprit: USB-C. The final nail in the color-coding coffin is the USB-C connector. It is just a reversible connector, but what is inside is chaos: the same USB-C port can be a slow USB 2.0, a USB 3.2 or a very fast Thunderbolt 4. The only way to differentiate them is to look for the Thunderbolt lightning-bolt logo, or read the device's spec sheet, because color, unfortunately, no longer means anything.

Image | Xataka

In Xataka | How to prepare a USB to use it on your mobile phone, tablet or Smart TV and expand its memory
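The renaming maze can be untangled with a small alias table. A sketch in Python, using the USB-IF renames described above (plus "USB 3.2 Gen 2x2," the official name of the 20 Gbps mode):

```python
# Map every marketing name a buyer may encounter to its real speed in Gbps.
USB_ALIASES = {
    "usb 3.0":         5,
    "usb 3.1 gen 1":   5,   # first rename of USB 3.0
    "usb 3.2 gen 1":   5,   # second rename of USB 3.0
    "usb 3.1":         10,
    "usb 3.1 gen 2":   10,
    "usb 3.2 gen 2":   10,  # rename of USB 3.1
    "usb 3.2 gen 2x2": 20,  # the only mode that is genuinely new in 3.2
}

def speed_gbps(name: str) -> int:
    """Resolve any of the USB-IF marketing names to its speed in Gbps."""
    return USB_ALIASES[name.strip().lower()]

# Three different labels on the box, one identical 5 Gbps port:
print(speed_gbps("USB 3.0"), speed_gbps("USB 3.1 Gen 1"), speed_gbps("USB 3.2 Gen 1"))
```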

ChatGPT projects: what are they, what can you do in them and what are they for?

Let’s explain what ChatGPT projects are, a feature for creating work environments with the artificial intelligence. This is one of the advanced options available in the paid versions of ChatGPT, starting with the Go subscription. We will start by explaining what exactly ChatGPT projects are, so that you can tell them apart from conventional chats. Then we will tell you what you can do with them, and we will finish with some examples.

What are projects in ChatGPT. Projects in ChatGPT are a personal and persistent work environment within the AI itself. With them you can organize, save and develop complex or long-term work without ChatGPT forgetting things you have done in the past. In normal chats, conversations are more ephemeral or thematic: at most, ChatGPT remembers what you have said in that chat, but if you open a new one, almost everything may be lost. Projects, on the other hand, are like a kind of separate memory, where ChatGPT will always remember what you have been doing and saying, and will always keep all the context, data and instructions. Another advantage is that projects also work like a kind of portfolio: you can upload your own files so that ChatGPT uses them when generating responses. In fact, you can make its responses rely solely on those internal sources.

What you can do within a project. Within a project you can create and edit files, whether code or written documents. You can also upload your own external files for the AI to analyze or transform, and then interact with ChatGPT so that it bases its responses on them. In a project you can also run Python code, generate graphs, process data or convert the formats of the files it contains. You can also connect automatic tools and flows, such as recurring queries, reports or scheduled tasks.
You can also define custom instructions for the project, so that everything is focused on specific goals. For example, you can dedicate a project exclusively to producing weekly sales reports from the files you give it. Projects also use contextual memory specific to what you do in them: ChatGPT will remember the decisions and instructions you have given within that project, and will keep remembering them no matter how many questions, even similar ones, you ask in conventional chats. In fact, in the project configuration you can decide whether the project shares memory with external chats in both directions, or close the memory so that only the project's own memory is used: from inside you cannot access the external memory, and from outside you cannot access the project's.

What are these projects for? ChatGPT projects give you an independent workspace where you can work in a more structured and persistent way. Their main advantage is that they help you maintain long-term context, remembering the files you upload as well as your settings, purposes and instructions, so you do not have to upload and configure everything each time as in normal chats. Projects are like folders where you can upload and organize multiple files (PDF, CSV, DOCX, code, etc.). Within the same project, ChatGPT can then analyze and cross-reference the information in these uploaded files, or directly modify them. But the most important thing is that you can use the sources you upload to develop content or create products: anything from a book to a technical article based on the documents you have manually uploaded, as well as apps, data analyses, guides or courses. For example, you can upload several documents with data that act as sources, and then ask ChatGPT to summarize them, classify the content or draft an article for you.
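The weekly-sales-report task mentioned above is typical of what you might ask a project to run for you. A minimal sketch of what such a script could look like, assuming a hypothetical uploaded CSV with `date` and `amount` columns (simulated inline here so the example is self-contained):

```python
import csv
import io
from collections import defaultdict
from datetime import date

# Simulated contents of a hypothetical uploaded "sales.csv".
SALES_CSV = """date,amount
2024-01-01,120.50
2024-01-03,75.00
2024-01-08,200.00
2024-01-10,50.25
"""

def weekly_totals(csv_text: str) -> dict:
    """Sum the 'amount' column per ISO week."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        iso = date.fromisoformat(row["date"]).isocalendar()
        totals[f"{iso.year}-W{iso.week:02d}"] += float(row["amount"])
    return dict(totals)

for week, total in sorted(weekly_totals(SALES_CSV).items()):
    print(f"{week}: {total:.2f}")
```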
Then you can polish everything and even create a final PDF. But the most important thing is being able to pick up the project where you left off whenever you want. If you do these things in a normal chat, ChatGPT may later lose context because of other chats you have opened, or because the information gets mixed up with other conversations. In a project, everything stays within the package; it is like a separate room.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence

AI data centers consume too much energy. Google’s ‘moonshot’ plan is to take them to space

Training models like ChatGPT, Gemini or Claude requires more and more electricity and water, to the point that the energy consumption of AI threatens to exceed that of entire countries. Data centers have become real resource sinks. According to estimates by the International Energy Agency, the electricity consumption of data centers could double before 2030, driven by the explosion of generative AI. Faced with this prospect, technology giants are desperately looking for alternatives. And Google believes it has found something straight out of science fiction: sending its artificial intelligence chips into space.

Conquering space. The company has revealed Project Suncatcher, an ambitious experiment that sounds like science fiction: placing its TPUs, the chips that power its artificial intelligence, on satellites powered by solar energy. The chosen orbit, sun-synchronous, guarantees almost constant light. In theory, these panels could work 24 hours a day and be up to eight times more productive than the ones we have on Earth. Google plans to test the technology with two prototype satellites before 2027, in a joint mission with the company Planet. The objective will be to check whether its chips and communication systems can survive the space environment and, above all, whether it is feasible to perform AI calculations in orbit.

The engineering behind the idea. Although it sounds like science fiction, the project has a solid scientific basis. Google proposes building constellations of small satellites, dozens or even hundreds, orbiting in compact formation at an altitude of about 650 kilometers. Each one would carry Trillium TPUs on board, connected to each other by optical laser links. These light beams would allow the satellites to "talk" to each other at speeds of up to tens of terabits per second, an essential capability for processing AI tasks in a distributed manner, as a terrestrial data center would.
The technical challenge is enormous: at these distances, the optical signal weakens quickly. To compensate, the satellites would have to fly just a few hundred meters apart. According to Google's own study, keeping them so close will require precise maneuvering, but calculations suggest that small orbit adjustments would be enough to keep the formation stable. In addition, engineers have already tested the radiation resistance of their chips: in an experiment with a 67 MeV proton beam, Trillium TPUs safely withstood a dose three times higher than they would receive during a five-year mission in low orbit. "They are surprisingly robust for space applications," the company concludes in its preliminary report.

The great challenge: making it profitable. Beyond the technical problems, the economic challenge is the real focus. According to calculations cited by The Guardian and Ars Technica, if the launch price falls below $200 per kilogram by the mid-2030s, an orbital data center could be economically comparable to a terrestrial one. The comparison is made in terms of energy cost per kilowatt per year. "Our analysis shows that space data centers are not limited by physics or insurmountable economic barriers," says the Google team. In space, solar energy is practically unlimited: a panel can produce up to eight times more than on the Earth's surface and generate almost continuous electricity. That would eliminate the need for huge batteries or water-based cooling systems, one of the biggest environmental problems of today's data centers. However, not everything shines in a vacuum. As The Guardian recalls, each launch emits hundreds of tons of CO₂, and astronomers warn that the growing number of satellites "is like looking at the universe through a windshield full of insects." Furthermore, flying such compact constellations increases the risk of collisions and space debris, an already worrying threat in low orbit.

A race to conquer the sky.
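The "up to eight times" figure can be sanity-checked with a back-of-envelope calculation. The numbers below are illustrative assumptions (the solar constant, a near-continuous sunlight fraction for a sun-synchronous orbit, and a typical mid-latitude capacity factor), not Google's actual model:

```python
# Back-of-envelope: average power of a 1 m² panel in orbit vs on Earth.
# All inputs are rough assumptions for illustration only.
SOLAR_CONSTANT = 1361          # W/m², irradiance above the atmosphere
SUNLIT_FRACTION = 0.99         # sun-synchronous orbit: almost never in eclipse
GROUND_PEAK = 1000             # W/m², standard test-condition irradiance
GROUND_CAPACITY_FACTOR = 0.17  # clouds, night and seasons, typical mid-latitude

space_avg_watts = SOLAR_CONSTANT * SUNLIT_FRACTION
ground_avg_watts = GROUND_PEAK * GROUND_CAPACITY_FACTOR

advantage = space_avg_watts / ground_avg_watts
print(f"Orbital panel yield advantage: ~{advantage:.1f}x")  # ~7.9x
```

Under these assumptions the orbital panel delivers roughly eight times the average power of the same panel on the ground, which is consistent with the figure quoted in the article.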
Google’s announcement comes amid a fever for space data centers; it is not the only company looking up. Elon Musk recently stated that SpaceX plans to scale its Starlink satellite network, which already has more than 10,000 units, to create its own data centers in orbit. "It will be enough to scale the Starlink V3 satellites, which have high-speed laser links. SpaceX is going to do it," Musk wrote on X. For his part, Jeff Bezos, founder of Amazon and Blue Origin, predicted during Italian Tech Week that we will see "giant AI training clusters" in space in the next 10 to 20 years. In his vision, these centers would be more efficient and sustainable than terrestrial ones: "We will take advantage of solar energy 24 hours a day, without clouds or night cycles." Another unexpected actor is Eric Schmidt, former CEO of Google, who bought the rocket company Relativity Space precisely to move in that direction. "Data centers will require tens of additional gigawatts in a few years. Taking them off the Earth may be a necessity, not an option," Schmidt warned in a hearing before the US Congress. And Nvidia, the AI chip giant, also wants to try its luck: the startup Starcloud, backed by its Inception program, will launch the first H100 GPU into space this month to test a small orbital cluster. Its ultimate goal: a 5-gigawatt data center orbiting the Earth.

The new battlefield. The Google project is still in the research phase. There are no prototypes in orbit and no guarantees that there will be any soon. But the mere fact that a company of such caliber has published orbital models, radiation calculations and optical communication tests shows that the concept has already moved from the realm of speculation to that of applied engineering. The project inherits the philosophy of the company's other moonshots, like Waymo's self-driving cars or its quantum computers: explore impossible ideas until they stop being impossible.
The future of computing may not be underground or in huge industrial warehouses, but in swarms of satellites shining in the permanent sun of space.

Image | Google

In Xataka | While Silicon Valley seeks electricity, China subsidizes it: this is how it wants to win the AI war
