Elon Musk often promises impossible things like Terafab. The problem is that sometimes he manages to turn them into reality.

Elon Musk revolutionized the automotive industry with Tesla and the electric car when hardly anyone believed he could. He then did the same to the aerospace industry with SpaceX, which also seemed impossible. Musk may be many things, and he promises plenty that he does not always deliver on schedule (hello, autonomous car), but he has achieved the unimaginable before. That is why, when he talks about Terafab, perhaps we should give it a chance, because it seems almost as impossible as his other feats.

Terafab and Musk's master plan. On Saturday night, from a power plant that has long been out of use, Elon Musk announced the final component of his master plan: Terafab. The goal is a chip factory on which Tesla, SpaceX and xAI will collaborate. According to Musk, the plant will be capable of manufacturing between 100 and 200 GW of computing capacity per year on Earth, eventually reaching 1 TW in space. The problem, as always with Musk, is distinguishing which part of the plan is engineering and which part is theater and fireworks.

He doesn't do it just because. At the event, the magnate explained that semiconductor manufacturers do not produce enough chips for his AI and robotics needs. And since TSMC and the other manufacturers cannot meet Musk's demand, he has proposed manufacturing them himself. He needs them for his robotaxis and for his humanoid robots, Optimus, which he hopes will end up multiplying the production rate of his cars by 10 or 100. He also needs chips so that xAI can compete in the field of AI, and SpaceX needs them for its satellites. In short, he needs a lot of chips. A lot.

Chips from space. At Terafab they intend to create two types of chips. On the one hand, those intended for autonomous vehicles and Optimus robots.
On the other, chips that already have their own name, D3, designed specifically for space: the products that use them will operate in low Earth orbit, powered by solar energy. For Musk, the idea "becomes an obvious decision": there will come a point at which putting payload into orbit is so cheap that hosting data centers in space will be cheaper than doing it on land, because solar energy up there is practically unlimited.

Too many unknowns. It was all very nice and promising, but once the speech and the promises were over, the questions began. Building a state-of-the-art semiconductor factory is a colossal challenge. It is not just a matter of money: advanced chip manufacturing is in the hands of three companies worldwide (TSMC, Samsung and Intel), and it requires EUV photolithography equipment that only the well-known Dutch company ASML manufactures. And here is the thing. Musk:

- did not announce any agreement with ASML
- has shown no orders proving he will obtain that equipment
- has not named a technological partner for the project
- has given no estimated dates or timeline

And he hasn't talked about the budget either. It is all a gigantic unknown.

The most ambitious vertical integration in tech history. Musk repeated several times that Terafab intends to cover development, manufacturing, packaging, testing and improvement all in the same facilities. If he fulfills that promise, we would be facing another unprecedented achievement, because the semiconductor industry has spent decades doing exactly the opposite: hyperspecialization across different suppliers, where some design, others manufacture, others package. Musk wants to do it all, and if he succeeds he will become a direct rival to Samsung and TSMC, which a priori he would no longer need.

Promises and realities. This project seems especially diffuse, but with Musk, as we have said, anything is possible.
In recent years, yes, we have seen several of his ideas fail, suffer delays, or languish in no man's land. The robotaxis still haven't arrived, the Cybertruck arrived late and isn't catching on, and companies like The Boring Company or products like Solar Roof have had less reach than promised, at least for now. Terafab looks like another impossible Musk project. We'll see whether it ends up not being so.

Image | Tesla

In Xataka | 8 years ago Elon Musk launched a Tesla Roadster into space: it continues to orbit and was mistaken for an asteroid

Nuclear waste is a problem, so Germany is looking for the solution in a Jurassic rock in Switzerland

Nuclear energy can generate clean electricity, continuously and in large quantities. A marvel, except for two small details: the risk of a possible leak, and what to do with its waste. The most widespread solution is to bury the waste in a nuclear repository and wait. For how long? It depends, but it could be hundreds of thousands of years, until it is no longer dangerous. The million-dollar question is where. An international research team led by Germany has started drilling a hole in a Swiss mountain to try to answer it.

The project. It is called DEBORAH (Deep Borehole to Resolve the Mont Terri Anticline Hydrogeology), that is, deep drilling to understand the hydrogeology of the Mont Terri anticline, and that is exactly what it does. Its goal? To document in great detail the rock layers and their properties. One material is especially interesting: Opalinus Clay. This deep-drilling experiment involves the German Geosciences Research Centre GFZ, the German Federal Institute for Geosciences and Natural Resources (BGR), Nuclear Waste Services (NWS) of the United Kingdom and Swiss researchers from the University of Bern.

Why it matters. Because it may be the ideal rock in which to build a radioactive waste repository. As GFZ details, Switzerland has already made its decision, but Germany and the United Kingdom (the other parties to the project) have not. The key is what the analysis of the drilling says: details such as how much water the rock lets through, at what speed and where will be decisive. It is not trivial: a leak, no matter how slow and small, can contaminate aquifers.

What makes it special. Opalinus Clay is a clay rock dating back to the Middle Jurassic, with an estimated age of approximately 175 million years. Simply put, it is clay that has been compacted into rock. And it has a property that makes it a good candidate for nuclear storage: very low permeability.

Context.
The study of Opalinus Clay is by no means new: it has been on GFZ's radar for 30 years because, in addition to its very low permeability, it has properties such as plasticity (under pressure it warps instead of breaking, convenient for a radioactive repository) and an ability to retain certain radionuclides. Switzerland has already chosen it, but it remains to be seen how it behaves under the conditions found at much greater depths, where, for example, temperature and pressure change noticeably.

How they do it. In the Swiss canton of Jura, near the municipality of Saint-Ursanne, lies Mont Terri. In its bowels there is an underground laboratory, accessed through the safety gallery of a highway tunnel, some 150 to 200 meters underground. A drilling platform works there continuously, advancing meter by meter until it reaches a depth of 800 meters. The drill uses a hollow crown bit that extracts intact rock columns, the cores that are later analyzed in the laboratory. Each core acts as a witness, revealing the age, composition and fracturing of the rock, and the crucial question: how it behaves with water. In addition, the researchers use seismic and gravimetric techniques to obtain a complete x-ray of what lies hundreds of meters down.

In Xataka | Ships have been damaging the oceans with noise for centuries. Germany is working on silent propellers to solve it

In Xataka | 700 tons of nuclear waste have arrived in Germany from England. The Germans are not entirely happy

Cover | Ilja Nedilko and Evangelos Mpikakis

The rain has transformed the driest desert on the planet into a sea of flowers. It's a sight to behold and a problem for experts

The Atacama Desert bloomed again this spring. After the August rains, more than 200 species in the Chilean region were activated, producing the first major flowering since 2017. The internet filled with impressive photos, but (beyond the hype) there is a central problem: these are increasingly clear signs of a destabilized climate system.

What has happened? In August 2025, a storm left between 40 and 60 mm of accumulated rain in Chile's Atacama Region, specifically in the south: in Huasco, Freirina, Vallenar and the Llanos de Challe National Park. As a consequence, flowering started in the third week of September and peaked between the end of September and mid-October. The show was amazing: a mantle of red and yellow añañucas, of suspiros, huilles, guanaco paws and lion's claws.

And why are we talking about this now? Good question. Historically, desert blooms occurred every 5 to 7 years, typically linked to El Niño events. In the last 40 years, Chile has recorded about 15 superblooms. The striking thing about this case (as also happened in 2022 and 2025) is that it is linked to La Niña conditions. One may be a coincidence, but three so close together mark a trend. And the problem is that more blooms are not always good news.

Why not? As María Fernanda Pérez, an ecologist at the PUC of Chile, explains, out-of-season blooms create a mismatch between flowering and pollinators. What's the point of having pollen if there are no bees to do their job? Exactly: none at all. Worse, if climate change makes this kind of bloom a regular occurrence, the deregulation could cause very serious problems. After all, a guanaco paw seed can spend fifteen years on the desert floor waiting for its moment; if it germinates and there is no one to pollinate it, there will be no next seed. Climate change is going to cause us more problems than we can imagine.
Because the serious part is not sea level, melting glaciers or rising temperatures (that too). The most important thing is these little things that change everything, things so small we haven't even thought about them.

Image |

In Xataka | The Atacama Desert is one of the driest places on the planet. And right there a bunch of "crazies" are trying to get water out of the fog.

OpenAI's big problem all these years has been a chronic lack of definition. Now it wants to solve it with a super app

OpenAI spent much of 2025 announcing new things: not just new models (those too), but new products. We saw it with its Sora 2 video generator and with the ChatGPT Atlas browser. Now the company recognizes that it was diversifying too much, and its plan is... to launch another app.

The super app. The Wall Street Journal has the exclusive: OpenAI is preparing a desktop tool that will unify the ChatGPT app, its Codex coding platform and the Atlas browser. This super app will offer agentic capabilities, oriented not only to code but also to productivity. It is aimed squarely at the business market, a field in which its rival Anthropic is well ahead.

Too many products. The company's goal with this move is to simplify the experience and reduce fragmentation between products. Speaking to the Wall Street Journal, a company spokesperson said it will allow them to unify the different teams, which will be able to focus their efforts on one product instead of several. In an internal note, OpenAI explicitly acknowledged that it was spreading its efforts across too many apps and needed to simplify. The change will be led by Fidji Simo, OpenAI's head of apps, who recently gathered the employees to deliver a message: "We cannot waste this moment because we are distracted by parallel projects." Diversifying consumes many resources, both money and computing capacity, and OpenAI cannot afford to waste either.

Without direction. OpenAI has the most used chatbot in the world, but what it lacks is a clear product strategy. It has wanted to be too many things at once, and half-abandoned products have been left along the way. The Atlas browser is the best example: it had all the potential to be a serious alternative to a Chrome that had not yet integrated Gemini. The reality is that, five months after its launch, ChatGPT Atlas is still a Mac exclusive and has even lost functions.
Something similar happened with Sora 2: it got the viral moment it was looking for, but today the app remains exclusive to users in the US and Canada.

Competition where it hurts most. While OpenAI launched its video memes and its browser, the competition moved forward with a much less flashy but better thought-out plan. According to a Menlo Ventures report, in 2023 OpenAI had a 50% share of the enterprise segment, while Anthropic had only 12%. In 2025 the tables turned: Anthropic had 32% and ChatGPT 25%. Looking only at programmers, 42% prefer Claude and only 21% ChatGPT. ChatGPT still has many more users, but the vast majority are personal accounts. Financially, business users are much more valuable because they have no qualms about paying for subscriptions that often exceed $200 per month.

Image crisis. As if Anthropic eating its lunch were not enough, then came the image crisis caused by the agreement with the Pentagon. ChatGPT began to lose users at a worrying rate, while Claude climbed to the top of the most-downloaded charts. Just what OpenAI needed.

Image | Amparo Babiloni, Xataka

In Xataka | There was a time when ChatGPT was a magical and free tool. That time is about to end

Apple's problem with AI is not just being very late. It's that even allying with Google will not be enough

We still do not have a date for Apple to finally release the new Siri it has been promising for two years. But the biggest problem with being late is not the delay itself: it is arriving at a moment when not even everything you have put on the table is enough.

Comet has arrived. Perplexity, quietly, is beginning to conquer an important piece of mobile territory. Its latest alliance is with Samsung, which will implement its artificial intelligence natively in flagship models like the Galaxy S26. One of Perplexity's most powerful tools is its browser, Comet, which has just landed on iOS. A browser that uses Google as its default search engine, but whose technology is above what Gemini manages to offer today.

Why it matters. Comet is not vaporware. Nor is it a browser with minor functions that merely adorns our iPhone's home screen:

- The interface is simply outstanding
- It blocks ads by default
- It finds information for us
- It manages tabs
- It allows voice searches with interactive answers
- It can play a video for us and summarize it without us having to watch it
- It summarizes websites

Comet stops short of being fully agentic AI, but it replaces the browser with a more reliable solution than chatbots like Gemini or GPT: you are using AI inside a browser, not an AI that goes out to the internet to find (or invent) links.

And even so. 2026 is proving a wild year for AI. In fact, it is exhausting to open the computer every morning and see that practically every day a new model has come out that surpasses the previous one. 2026 is a year in which AI advances day after day, and nobody knows how Apple will be able to launch something at the level of what may already be obsolete today. Even though the iterations are incremental, we are seeing spectacular phenomena such as OpenClaw. While Chinese brands like Nubia begin implementing it on their phones, Apple has only the promise that Siri will be smart one day soon. Soon, supposedly.
According to Gurman's leaks, we will see the new Siri sometime in the first half of this year. The "according to" is important, because the rumors pointed to February, and we have yet to see a trace of it. Apple has been accumulating delays ever since it promised an Apple Intelligence that disappointed, and beyond the announcement of its alliance with Google, there is no more relevant news.

Image | Xataka

In Xataka | What have Apple and Google agreed on for the new Siri? Nobody knows because Google doesn't even want to mention it.

The problem is different, and it is much closer

Bitcoin has presented itself for years as a decentralized system, resilient by design and less exposed to the single points of failure that affect traditional banking. The idea is powerful and, to a large extent, true. But it has an important nuance that is usually left out of the conversation: to function, Bitcoin still relies on a very specific physical infrastructure that connects the world and that also conditions its real resistance.

The study that puts figures on resilience. A study by the Cambridge Centre for Alternative Finance, based on eleven years of network traffic and 68 real cable incidents, explains something very interesting: the threshold for significant disconnection of Bitcoin's clearnet lies between 72% and 92% of submarine cables in random-failure scenarios. However, the same work introduces a decisive nuance: this solidity changes noticeably when the problem is no longer random.

Decentralization, but not isolation. That Bitcoin has no central authority does not mean it works independently of other infrastructures. Its network is made up of distributed nodes that constantly exchange information, but they do so through providers, routes and physical systems that also support the internet. The Cambridge study itself highlights this interdependence between layers, where the logical and the material coexist. For this distributed network to work, the nodes need to continuously exchange data, and that happens over a global infrastructure shared with the rest of the internet: submarine cables, terrestrial links, service providers and routing systems that determine where information circulates. Bitcoin's resilience, according to the study, depends largely on how all these components are organized and connected.

Where everything changes is in targeted attacks.
Compared with the resistance shown in random scenarios, the study warns of a much more accessible vulnerability when the attack focuses on large ASNs (autonomous systems, the networks that make up the internet's routing fabric) or key routing infrastructure. Damaging cables indiscriminately is not the same as hitting specific surfaces of the network, and that difference paints a very different picture from one of massive, indiscriminate failures. The researchers support their conclusions with documented events. One of the most significant is the cable cut recorded on March 14, 2024 off the Ivory Coast, which affected multiple countries in the region. On a global scale the impact on the Bitcoin network was minuscule, although at the regional level the consequences were much more visible.

Tor's role in resilience. The study identifies another element that influences the robustness of the network: the growing use of the Tor protocol. According to its data, by 2025 around 64% of Bitcoin nodes already operate through this network, and in the four-layer model used by the researchers this evolution does not weaken the infrastructure; rather, it increases its resilience against cable cuts under the current geography of the relays.

So, overall, the study paints a less intuitive scenario than is usually proposed. Bitcoin does not seem particularly exposed to a collapse caused by massive, indiscriminate failures in the global infrastructure, but rather to much more focused disruptions. The key, according to the researchers, lies not so much in the scale of the damage as in where it occurs, which forces us to rethink how we understand its resilience.

Images | Jen Titus | Erling Løken Andersen

In Xataka | Seedance 2.0 has used Hollywood intellectual property to go viral. Hollywood has used the courts
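The random-failure versus targeted-attack asymmetry the study describes is a general property of networks with hubs. A minimal, self-contained sketch illustrates it: we build a toy preferential-attachment graph (purely illustrative, not real Bitcoin or cable topology data) and compare how much of the network stays connected when the same number of nodes fail at random versus when the best-connected nodes are taken out.

```python
import random

def make_hub_graph(n=300, m=2, seed=7):
    """Toy preferential-attachment graph: each new node links to m existing
    nodes chosen proportionally to degree, so a few well-connected hubs
    emerge (loosely mimicking large ASNs or cable landing points)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    adj[0].add(1); adj[1].add(0)
    repeated = [0, 1]                  # node i appears deg(i) times here
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(rng.choice(repeated))
        for t in chosen:
            adj[new].add(t); adj[t].add(new)
            repeated += [new, t]
    return adj

def largest_component(adj, removed):
    """Size of the largest connected component once `removed` nodes fail."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, size)
    return best

adj = make_hub_graph()
k = len(adj) // 10                     # knock out 10% of the nodes

random_removed = set(random.Random(1).sample(list(adj), k))
targeted_removed = set(sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k])

print("largest component after random failures:", largest_component(adj, random_removed))
print("largest component after targeted attack:", largest_component(adj, targeted_removed))
```

With the same number of nodes knocked out, random failures barely dent the reachable network, while removing the best-connected nodes fragments it far more severely, the same asymmetry the Cambridge study reports for cables and routing infrastructure.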

The problem is that no one can agree on what they are.

The James Webb Space Telescope has been targeting the most remote regions of the universe for years and, with each new observation, it has revealed something that doesn't quite fit. In its images appear small, bright red dots that repeat with a frequency that is difficult to ignore. They are not a one-off anomaly or an observation failure: they are objects astronomers have been studying for some time without yet reaching a convincing explanation of their nature.

The novelty. A study recently published in The Astrophysical Journal, led by Devesh Nandal and Avi Loeb of the Harvard-Smithsonian Center for Astrophysics, opens an alternative to the most widespread interpretation. Specifically, it suggests that some of these red dots might not be systems dominated by active black holes, but rather supermassive stars formed in the early universe. Speaking to Live Science, Nandal argues that this type of star can explain key features of these objects without requiring the presence of growing black holes.

Before this turn, the so-called "little red dots" had already been on astronomy's radar for some time. The term began to take hold in studies published in 2024, when several teams started analyzing them systematically after the first Webb observations. This is not a recent discovery but an accumulated enigma: at Xataka we have already covered it as a phenomenon that is difficult to fit into current models, with very compact, extremely luminous objects present in the early universe.

The dominant hypothesis. During the first years of analysis, the explanation that gained the most traction was that these red dots were powered by growing black holes. At first, some researchers attributed their red color to dust in the environment, although later work has shifted part of that focus to hydrogen gas.

What is starting to not fit. Over time, some observations have complicated this initial interpretation.
Several of these objects show no clear X-ray emission, one of the most common signatures of active black holes, and their spectra lack strong metal lines beyond hydrogen and helium. Added to this is "The Cliff", one of the objects analyzed by the RUBIES program, which fits neither as a conventional galaxy nor as a dust-dominated system.

The new study fits into this context, proposing a different reading for at least part of these objects. Instead of active black holes, some little red dots could be supermassive stars formed from primordial gas, composed almost exclusively of hydrogen and helium, and observed just before collapsing. According to the model developed by the team, this scenario reproduces both their extreme brightness and specific features of their spectra, without assuming the presence of a growing black hole.

The new study does not close the debate; rather, it expands it. The researchers themselves acknowledge that directly demonstrating what lies behind these objects remains extremely difficult, and other voices in the scientific community insist that none of the hypotheses can yet be ruled out. The presence of black holes in these systems remains to be demonstrated directly and, for now, is inferred mainly from their brightness and abundance.

Images | NASA/ESA/CSA (1, 2)

In Xataka | The Zoo Hypothesis: Why Aliens Likely Know About Us and Don't Want to Contact Us

Beyond prices and vacation rentals, housing in Madrid faces a huge problem: irregular developments

Beyond price escalation, the pressure of vacation rentals, or the mismatch between the speed at which new households form and new buildings rise, Madrid's real estate market faces a tricky challenge: irregular developments. The latest data from the Community of Madrid reveal that the region contains dozens of settlements of illegal origin, together accounting for thousands of homes in an irregular situation. All in all, a hot potato for the administration.

What has happened? The data was revealed by El Periódico. The Community of Madrid has registered almost 200 developments built without the necessary permits, settlements of illegal origin that add up to thousands of homes. The count is based on an update of an inventory from the 1980s, when 136 irregular settlements were identified. The figure has changed since then for two reasons: first, some nuclei have managed to regularize their situation; second, the technicians have added others that (for one reason or another) did not appear in the catalogue that accompanied the 1985 regulations.

What do the figures say? Walk around Madrid and you can find dozens of housing developments built in breach of the regulations, some of them heavily populated. Specifically, El Periódico counts 184 urbanizations or settlements of illegal origin and some 10,500 homes. The figure is partly explained by the fact that the update of the 1980s census incorporated almost a hundred new consolidated residential areas. The regional Ministry of the Environment clarifies that in most cases they are the result of "urbanization processes outside the law" and "lacking planning", which explains why they often fail to offer "minimum conditions of urbanization".

Are all cases the same? Not at all. Not all the urbanizations identified by the Community of Madrid are alike, nor do they have the same dimensions.
Particularly noteworthy is the settlement of La Vega del Tajuña, which accounts for a large share of the irregular residences detected by regional technicians: specifically, 5,513 homes spread over more than 2,700 hectares. With those dimensions it would be the largest settlement of its kind in the region, though not the only one housing hundreds of people. In Camino Viejo de Madrid and Vega Baja del Guadarrama there are also more than 1,400 buildings, and there are others, such as El Rondelo, Pico Valsarón or Dehesa Nueva, with hundreds of homes. The Community has also noted constructions very close to the capital, such as in Mejorada del Campo.

How is that possible? The circumstances and context are not always the same, but a few days ago EPE visited a nucleus in Mejorada del Campo that helps explain how settlements like this can form in the heart of Madrid. Specifically, the newspaper visited a nucleus that began to form in the 1980s, driven by developers who parceled out rural land and sold the plots at affordable prices, marketing them as ideal spaces for "urban gardens" with access to water. Time, use and the growing pressure on housing prices in Madrid did the rest: what started as huts for storing tools gave way to more ambitious constructions.

Is it something new? Not at all. And not only because the history of these settlements goes back a long way. At the end of 2025, the Community of Madrid issued a statement recalling that in just four years it had inspected 1,906 "irregular constructions" on protected land. To be precise, the regional government spoke of 5,334.3 hectares "affected by this type of settlement", identified across 56 municipalities.
"Of these, about 80% are concentrated in the plains of Madrid's main rivers, the majority in the areas of the Tajuña River (2,712.5 hectares), followed by the Jarama (1,019.5), Guadarrama (363.2) and Tajo (150.2)," explains the Madrid government, which warns of the "risk" they represent "both for people and for the environment". Hence, this type of construction appears among the objectives of the Urban Inspection and Discipline Plan.

Does it only happen in Madrid? No. Settlements of this type are also common in other parts of Spain, such as Catalonia. "There are many urbanizations that were built in the 60s, 70s and early 80s of the 20th century, which were marketed without the necessary planning, urban management or basic public services," the Catalan Generalitat acknowledges. "Of the 1,433 identified in the 2015 catalogue, there are 730 with urban deficits. Many are concentrated in small municipalities, and the tendency to convert housing estates into primary residences aggravates their situation," the regional government adds. The topic is complex because, as EPE recalls when discussing the Madrid case, the legal framework varies over time: if a home built on non-developable land stays off the authorities' radar long enough, the offense expires and the home can no longer be demolished.

Images | Community of Madrid

Via | El Periódico

In Xataka | Madrid believed itself immune to the tuk-tuk plague afflicting the world's most touristy cities. Now someone wants to ban them

There are many wireless alternatives to the HDMI cable. The problem is that none of them work as well as the HDMI cable.

Keeping the tangle of cables in our homes well organized is something that has obsessed us for years. Normally, when we talk about cable management we refer to the workspace, but it can also be a problem in our living rooms, where some cables, no matter how hard we try to replace them, are still there. One of them is HDMI. Although there are technologies for watching content on a television or projector without running a physical cable, the HDMI cable is still the best, and sometimes the only, option.

Alternatives to the HDMI cable. HDMI cables have the drawbacks of any cable: they limit mobility, cause visual clutter and, depending on the device we want to connect, we will very likely need adapters. Fortunately, there are technologies that let us do without them.

Chromecast and AirPlay. Google TV Streamer (formerly known as Chromecast) and AirPlay are the most popular and well-known options, since they have the backing of giants like Google and Apple. More and more televisions integrate both systems, so it is no longer necessary to buy a separate device. Classic Chromecast, technically, is not a fully wireless solution (the dongle itself plugs into an HDMI port), but it does let us launch the content we want to watch without needing a cable expressly to connect the phone to the TV.

Miracast. Smart View on a Samsung phone. Image: Xataka Home. One of the solutions that has tried to replace HDMI is Miracast, usually known as Screen Mirroring, or Smart View on Samsung phones. It is a protocol that works over Wi-Fi Direct: two devices detect each other and we can mirror the screen of one onto the other. This point is important, since it only works in mirroring mode, that is, it clones the screen content; it does not extend it or play a video from an app the way a Chromecast does. With Miracast, if you want to watch a video saved on your phone on the TV, you will have to leave the phone on with the same video playing on it.
The advantage is that it is a cross-platform standard and can send Full HD video with almost no latency. That is, when it works well, because connection problems are quite common.

Wireless HDMI kits. If you can't (or don't want to) run an HDMI cable to a display or projector, one solution is a wireless kit: a transmitter sends the AV signal wirelessly to a receiver connected to the destination device, such as the TV. There are quite a few options at relatively affordable prices, such as one from UGREEN for less than 60 euros, or slightly more expensive ones, like one from VENTION for 119 euros. The problem with these solutions is mainly interference and, above all, latency. They also have limitations such as lack of HDR support, and many do not support 4K video.

HDMI is still necessary. Although cable-free alternatives exist, they are just that: alternatives, not substitutes. Yes, there are proposals that improve on HDMI, such as the GPMI standard developed by an alliance of more than 50 Chinese companies. This interface promises transfers of up to 192 Gbps and supports 8K video, but even if it manages to displace HDMI, it still needs a cable. No wireless alternative matches the performance and stability of a physical HDMI cable, especially in scenarios where latency is key, such as competitive gaming. Whether on console or PC, the cable will always be the preferred option there. It is also the best choice if you want the best video and audio quality, for example when connecting a home cinema system, or if you prioritize connection stability. Of course, you have to choose the cable, and the port you connect it to, carefully to get the most out of it.

Image | Xataka

In Xataka | The curse of hotels is TVs that do not let you use the HDMI port. The solution is obvious: hack them

In 1987 he had a problem displaying images on his Mac, so he created an app. Today it is the most used image editor in history

Maybe with Nano Banana some people have banished Photoshop, but the image editor is the tool that has accompanied photography professionals for decades, almost on a par with their cameras. In fact, it achieved something within the reach of very few technology products: becoming a verb and even entering the dictionary. We Photoshop an image just as we Google something on the internet. Like many other milestones, Photoshop was born by chance: it was the result of a screen that did not know how to show grays.

In figures. In these almost 40 years of Photoshop's life, the editor has accumulated astronomical numbers. Its launch price in 1990 was $895; no joke, that would be equivalent to about $2,100 today. It has never been home software but professional software. Adobe closed last year with record revenue of $23.77 billion. In 2024 revenue was $21.51 billion, of which subscriptions represented $20.52 billion. In 2013 Adobe bet everything on the subscription model. Time has proven it right: in twelve years it went from $4 billion in annual revenue to almost $24 billion in 2025.

How it all started. It's 1987 and Thomas Knoll is pursuing a doctorate in computer vision at the University of Michigan. Then he ran into a problem: his Mac Plus had a monochrome screen unable to display grayscale images, only pure black and white. So he wrote a few lines of code to fix it. He called it Display. His little program did the trick, but that was it: he had no intention of commercializing it. The one with a nose for business was his brother John, who at the time worked at Industrial Light & Magic (George Lucas's company, in charge of the Star Wars special effects): he convinced Thomas to develop it into a full program. Brothers and partners, they licensed it to Adobe Systems Incorporated in 1988.

From layers to AI.
Photoshop 1.0 saw the light of day in February 1990 as an editor that required only 2 MB of RAM and an 8 MHz processor to run, the minimum specifications of a Mac at the time. To put that in context: today Photoshop recommends 16 GB of RAM, about 8,000 times more. It included tools as iconic to its users as the lasso and the magic wand. But if there was one technical leap that made the difference, it was the enormously useful layers: they arrived in 1994 with Photoshop 3.0. Before layers, the editor was destructive: each change overwrote the original image. Almost 30 years later, another functional milestone arrived: AI, with Generative Fill, that is, the ability to add or remove objects with a prompt. Despite the controversy over authorship and the future of retouching, its numbers were incontestable: by April of last year it had already generated more than 22 billion images since launch, according to Adobe.

The risky move to the subscription model. Before the tricky decision to include AI in its suite, Adobe made another risky move: in 2013, when we had not yet succumbed to the subscription-ocracy, it announced that it would stop selling Photoshop as a perpetual license and start renting it. At the time, almost 50,000 customers signed a petition against the decision and Adobe's shares fell 12%. Once again, time and pocketbooks seem to have proven it right: it has multiplied its revenue by six.

In Xataka | 16 years ago a student from Barcelona was looking for an easy way to edit PDFs. The website he created is one of the most viewed on the internet

In Xataka | 30 years ago he created a player for the university: today his app has more than 6 billion downloads and is still free and without ads

Cover | University of Michigan
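The two growth figures quoted in the article are easy to verify with a line of arithmetic each (using the article's own numbers, and binary units for RAM, 1 GB = 1,024 MB):

```python
# Sanity-checking the article's figures (all numbers taken from the text above).
ram_1990_mb = 2            # Photoshop 1.0 minimum RAM, in MB
ram_today_mb = 16 * 1024   # 16 GB recommended today, in binary MB

ratio = ram_today_mb / ram_1990_mb
print(f"RAM recommendation grew {ratio:.0f}x")  # 8192, i.e. the "8,000 times more"

revenue_2013 = 4.0    # billions of dollars, annual revenue before the switch
revenue_2025 = 23.77  # billions of dollars, record revenue
growth = revenue_2025 / revenue_2013
print(f"Revenue multiplied by {growth:.1f}")    # ~5.9, the "multiplied by six"
```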
