If AI-made ads strike you as awful and turn you against the brand, you are not alone: science backs you up

“The most profitable ad in Pepsi's history.” The top-voted YouTube comment on the AI-generated ad Coca-Cola released for Christmas 2025 suggests something: a popular rejection of advertising made with AI. Is it real? A new study from the University of Zaragoza on the effect of artificial intelligence in advertising points in that direction. The researchers conclude that customers avoid services advertised with AI-generated images, especially from companies selling pleasurable experiences, such as hotel vacations, or ones that demand high-involvement decisions. The reason? Artificially generated images are read as untrustworthy. Its four authors told Xataka that “consumers value real images more because they show a faithful picture of the product or service, and they distrust companies that use AI-created images because they seem less professional or appear to hide reality.” However, they add, recent studies show that AI-created images can be just as effective, and easier for companies to obtain, especially when consumers do not know they are not real.

What AI gives you, AI takes away. Using AI in an ad conveys the feeling that “the brand makes little effort, especially for luxury and beauty brands,” explains Lucía Caro Castaño, professor in the Department of Marketing and Communication at the University of Cádiz. After the Christmas controversy, Coca-Cola was forced to show how it made the ad, to demonstrate “all the effort and investment it had required in terms of people.” Caro points to staff savings as one of the reasons AI-made content provokes distaste. Coca-Cola has acknowledged to The Wall Street Journal that producing its typical Christmas ad has gone from taking a year to taking a month, admitting savings in cost and time. Even so, creating the spot required enormous human work to fine-tune the AI-generated images. Coca-Cola is not the only company to have discovered AI's limitations in advertising.
Dell shares its experience: “We're very focused on getting the most out of a device's AI capabilities, but what we've learned this year, especially from a consumer perspective, is that they don't buy based on AI. In fact, I think AI probably confuses them more than it helps them understand a specific outcome,” argued Kevin Terwilliger, Chief Product Officer at Dell, a few months ago. There are several reasons for this rejection of AI advertising: the feeling of déjà vu, which penalizes the lack of originality and creative effort, and the perception of “dehumanization” conveyed by excessively robotic content, explains Patricia Coll, PhD in Communication and professor at EAE Business School. Diana Gavilán, professor of Marketing at the Complutense University of Madrid, highlights AI's benefits in automatable tasks in advertising and digital marketing: “The problem is when it replaces a human. If a robot serves me but you try to convince me it is human, trust drops.” According to the University of Zaragoza researchers, their study shows that real images are particularly effective for high-involvement products or services; that is, consumers want faithful images when the decision they are making matters. Real images are also better than AI-generated ones at promoting hedonic products or services, because they allow “a better assessment of what the personal experience will be like.” Conversely, for utilitarian, low-involvement products, AI-generated images work. In some sectors, commercial images made with AI are even advisable: schools and social organizations, for example, can avoid showing real children in order to protect their privacy, the scientists note.
Ana Belén Casado, professor of Marketing at the University of Alicante, adds that not all consumers or all brands reject AI: “It depends a lot on the type of product, good, service or idea being marketed and on each brand's differential value proposition.” For Gavilán, AI is like a Thermomix: a tool you don't use for everything in the kitchen, “but it is at your disposal, and depending on how you use it, the result will be better or worse.” In her opinion, the Coca-Cola ad was “a strategic mistake” for trying to make the same old ad with AI instead of telling a different story with the technology.

Are brands stepping back from AI? Before Coca-Cola, the clothing brand H&M had already launched a campaign mixing real models with AI-generated “digital twins.” Although all the AI-generated images are labeled so they are not confused with real ones, and the models hold image rights over their digital copies, Caro notes that “we do not know exactly what those contracts look like in terms of image rights beyond the models themselves, nor how this will affect photographers and the rest of the workers who make these campaigns possible.” The campaign was fairly small, built around a line of denim, and the Swedish multinational's head of AI, Linda Leopold, left the company shortly afterward. “We don't know where H&M will go next, especially after all the controversy,” says Caro. Gavilán's view is that AI will keep being deployed, applied most in areas “where it is very relevant.” Despite AI's water and energy consumption, the environmental NGO WWF in Denmark launched a campaign made entirely with AI, “The hidden cost,” in April 2025 to denounce the environmental impact of eleven different products. In Spain, the first advertising agency focused on AI, AI::gency, has worked with brands such as Nissan, Seat, Cushla and Ebro.
Other brands have chosen to publicly reject AI. In its February 2024 campaign “Why don't we get on the AI bandwagon?”, the browser Vivaldi announced that it would not be incorporating AI “for the time being.” The reasons the company gave were copyright and privacy violations, as well as AI-generated “plausible-sounding lies.” On the advertising side, Dove, Unilever's personal care brand, has …

Data centers in space are a horrible idea

Artificial intelligence has turned energy into the new technological bottleneck. Faced with that limit, some of the largest companies in the world have begun to look up. To give a few examples: Jeff Bezos has spoken of “giant AI clusters orbiting the planet” within a decade or two. Google has experimented with running artificial intelligence calculations on solar-powered satellites. Nvidia backs startups that want to launch GPUs into space. Even OpenAI has explored buying a rocket company to secure its own path off Earth. The promise is seductive: solar-powered data centers running around the clock, with no power grids or cooling towers. The problem is that when you move from the story to the physics, the engineering and the numbers, the idea begins to break down.

Data centers in space. One question hangs over this whole issue: why do technology companies want to send data centers to space? The motivation, at first glance, is clear. According to data from the International Energy Agency, data center electricity consumption could double by 2030, driven by the explosion of generative AI. Training and running models like ChatGPT, Gemini or Claude requires massive amounts of electricity and huge volumes of water for cooling. In many places, these projects are already running into local opposition or the physical limits of the grid. In that context, space looks like a tempting solution. In certain orbits, solar panels receive almost constant light, with no clouds or night cycles. Besides, as Bezos and other advocates explain, the vacuum of space seems to offer an ideal environment for dissipating heat without resorting to cooling towers or millions of liters of fresh water. By this argument, space data centers would be more efficient, more sustainable and, over time, even cheaper than terrestrial ones. For some executives, they would be no eccentricity but the “natural evolution” of an infrastructure that began with communications satellites.
When engineers raise their hands. Against the enthusiasm of corporate statements, several space engineering experts have been far more blunt. In one of the most cited texts on the subject, a former NASA engineer with a PhD in space electronics and direct experience with AI infrastructure at Google sums up his position: “This is a terrible idea and it doesn't make any sense.” His criticism is not ideological but technical, and it starts with the first great myth: the supposed abundance of energy in space.

Solar energy is not magic. The largest solar array ever deployed off Earth belongs to the International Space Station. According to NASA data, its panels cover about 2,500 square meters and, under ideal conditions, generate between 84 and 120 kilowatts of power, part of which is used to charge batteries for periods in shadow. To put that in context, a single modern AI GPU consumes on the order of 700 watts, and in practice around 1 kilowatt once losses and auxiliary systems are counted. With those figures, an infrastructure the size of the ISS could barely power a few hundred GPUs. As this engineer explains, a modern data center can house tens or hundreds of thousands of GPUs. Matching that capacity would require launching hundreds of structures with the size, and complexity, of the International Space Station, and even then each one would be equivalent to just a few racks of terrestrial servers. Nor does the nuclear alternative solve the problem: the nuclear generators used in space, RTGs, produce between 50 and 150 watts. In other words, not even enough to power a single GPU.

Space is not a refrigerator. The second big argument against orbital data centers is cooling. It is often repeated that space is cold and that this would make it easier to dissipate heat from the servers. According to engineers, this is one of the most misleading ideas in the entire debate.
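The power-budget arithmetic above can be sketched in a few lines. The figures for the ISS-class array and the per-GPU draw come from the text; the 100,000-GPU target is an assumption of mine for a large terrestrial cluster, consistent with the "tens or hundreds of thousands" the engineer mentions:

```python
# Back-of-the-envelope orbital power budget, using the figures cited
# above: an ISS-class array (~2,500 m^2 of panels, 84-120 kW) versus
# ~1 kW per modern AI GPU once losses and auxiliaries are included.

ISS_POWER_KW_BEST = 120     # upper end of the ISS array's output
GPU_POWER_KW = 1.0          # per-GPU draw including overheads
TARGET_GPUS = 100_000       # assumed size of a large terrestrial cluster

gpus_per_array = int(ISS_POWER_KW_BEST / GPU_POWER_KW)
arrays_needed = TARGET_GPUS / gpus_per_array

print(f"GPUs one ISS-class array can power: ~{gpus_per_array}")
print(f"ISS-class structures for {TARGET_GPUS:,} GPUs: ~{arrays_needed:.0f}")
```

Even taking the array's best-case output, this lands on roughly 120 GPUs per structure and over 800 ISS-class structures for one large cluster, which is the "hundreds of structures" scale the text describes.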
On Earth, cooling relies on convection: air or water carries heat away. In the vacuum of space, convection does not exist; all heat must be removed by radiation, a far less efficient process that requires enormous surfaces. NASA itself offers a compelling example: the active thermal control system of the International Space Station, an extremely complex network of ammonia loops, pumps, heat exchangers and giant radiators. Even so, its dissipation capacity is on the order of tens of kilowatts. According to the calculations of the aforementioned engineer, rejecting the heat generated by high-performance GPUs in space would require radiators even larger than the solar panels that power them. The result would be a colossal satellite, larger and more complex than the ISS, performing a task that is solved far more simply on Earth.

And there is a third factor: radiation. In orbit, electronics are exposed to charged particles that can cause bit errors, unexpected reboots or permanent damage to chips. Although some tests, such as Google's with its TPUs, show that certain components can withstand high doses, the failures do not disappear; they only multiply. Shielding reduces the risk but adds mass, and every extra kilo raises the launch cost. Furthermore, AI hardware has a very short useful life: it becomes obsolete within a few years. On Earth it is replaced; in space, it is not. As critics point out, an orbital data center would have to operate for many years to amortize its cost, yet it would do so with hardware that is left behind much sooner.

So why do they keep insisting? The answer seems to lie less in current engineering than in long-term strategy. All of these projects hinge on launch costs falling drastically. Some estimates put the threshold at about 200 dollars per kilo for space data centers to compete economically with terrestrial ones.
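The radiator problem can be made concrete with the Stefan-Boltzmann law, which sets how much heat a surface can radiate into vacuum. The radiator temperature, emissivity and absorbed environmental flux below are illustrative assumptions of mine, not figures from the article:

```python
# Rough radiator sizing in vacuum via the Stefan-Boltzmann law:
# radiated power per unit area = emissivity * sigma * T^4.
# Assumed inputs (not from the article): emissivity 0.9, a 280 K
# radiator (an electronics-friendly coolant temperature), and
# ~100 W/m^2 of absorbed environmental flux (sunlight, Earth glow).

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.9            # assumed radiator surface emissivity
T_RADIATOR_K = 280.0        # assumed radiator temperature
ABSORBED_W_M2 = 100.0       # assumed absorbed environmental flux
HEAT_LOAD_W = 120_000.0     # reject an ISS-class ~120 kW load as heat

emitted = EMISSIVITY * SIGMA * T_RADIATOR_K**4   # W/m^2 per radiating side
net_per_m2 = emitted - ABSORBED_W_M2             # net rejection per m^2
area_m2 = HEAT_LOAD_W / net_per_m2

print(f"Net heat rejection: {net_per_m2:.0f} W/m^2")
print(f"Radiator area for 120 kW: ~{area_m2:.0f} m^2 (single-sided)")
```

Under these optimistic assumptions, rejecting an ISS-class 120 kW load already takes hundreds of square meters of radiator. Real systems need considerably more once you account for plumbing, redundancy, coating degradation and chips that want coolant well below 280 K, which is how the engineer arrives at radiators rivaling the solar panels themselves.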
That scenario depends on fully reusable rockets like Starship, which have not yet demonstrated that capability at operational scale. Meanwhile, terrestrial renewables keep getting cheaper and storage systems improve year after year. The space story also fulfills another function, because it positions …

The megafires of Ourense, Zamora and León have halted the Galician AVE. It is yet another setback in a horrible year for Renfe

The Madrid-Galicia AVE had become an oasis for Renfe. The Spanish operator, the target of all kinds of criticism during 2025, had watched the Galician corridor even win out over the airlines. The fires are now the latest unexpected twist, leaving a trail of cancellations, delays and travelers desperate to find an alternative to the train.

Fires. They are, without any doubt, the story dominating the news in Spain today. As we write these lines, there are more than 40 active fires in the country. Of those, two out of three are located in Castilla y León, and in Ourense (Galicia) alone more than 60,000 hectares have burned. A lack of resources, forests short on maintenance and a heat wave that seems to have no end have been the perfect fuel for a situation that has spiraled out of control.

Cut off. With the provinces of Zamora, León and Ourense ringed by fire, the AVE has stopped completely. This morning, Renfe confirmed on its X account that service was interrupted “until the competent authorities allow resumption.” It is the fifth day Renfe has kept the Madrid-Galicia high-speed line suspended: rail service was halted for the first time last Thursday, August 14, and the previous day, Wednesday, August 13, the company already had to cancel some services during the afternoon.

From Madrid to Zamora. For now, the only solution Renfe has offered is a train running between Madrid and Zamora. With an eye on how the fires evolve, that is the only alternative it is keeping open. In replies to the announcement, Renfe confirms there is no alternative plan for travelers who had bought tickets to any of the Galician cities.
No bus alternatives have been arranged, for example, and Renfe does not guarantee when service will return to normal.

Yet another problem. The interruption of the Galician corridor is more bad news for a company living through a 2025 to forget; the list of everything that has happened in the first seven months of the year is a long one.

A line that was flying. The situation is all the more frustrating given that the Galician corridor was the line working best for Renfe. As high speed has bedded in, the data show a shift of travelers from plane to train, which offers a more flexible and attractive alternative for stays in Madrid of under 24-36 hours. Besides, the line's particularities and the order backlog at manufacturers have given Renfe a dominant position over its competitors. Although the line is liberalized and any company may operate on it, the truth is that operators such as Ouigo lack suitable rolling stock and do not expect to have it in the short term.

What can travelers do? For now, little, if they planned to use the Madrid-Galicia high-speed line. As we have seen, Renfe is only offering partial service, between Madrid and Zamora; from there, the only option is to continue by car or bus.

Sky-high prices. The other option, of course, is to fly from Madrid to one of the Galician airports. But booking from one day to the next, flying from Madrid to Vigo, A Coruña or Santiago de Compostela means paying more than 100 euros per ticket (there are only three options under that price, and none of them to Vigo), and in some cases fares exceed 300 euros per trip. The trip is even harder in the opposite direction.
Booked from one day to the next, the cheapest flight from Vigo to Madrid costs 369 euros; from A Coruña you can travel from 272 euros, and from Santiago de Compostela you have to pay 379 euros. All figures are taken from the flight search engine Skyscanner.

Photo | Hugh Llewelyn and Ume

In Xataka | Every new piece of data we learn about the AVE in Spain points in the same direction: it is beating Barajas

Playing video games in the middle of summer, in this heat, is a horrible experience. I have found the solution in the cloud

PC gaming is great until summer arrives. To an already hot room, my office, add the hot air pushed out by the six fans in my tower as they fight, desperately, against Córdoba's heat. The feeling is… middling. If I leave the tower on the floor, it ends up warming my legs; if I lift it onto the desk, the fan noise becomes even more obvious and the temperature rises just the same. The same goes for the ROG Ally, a console I love but which already runs quite hot and suffers even more in summer. I have found the solution in a technology that generates the most varied opinions and that, personally, strikes me as the future: the cloud.

The heat. Córdoba is currently on orange alert. The asphalt has hit 57.3ºC, the grass in the shade nearly 27ºC, and daily highs exceed 40ºC. I am writing these lines at 10:40 on Wednesday and we are already at 30ºC. It does not cool down at night, and opening the windows is like opening the gates of Mordor. So the solution comes down to air conditioning, fans and infinite patience.

Image: Olivier Collet

The components. Electronic components and heat do not get along. Let's say they are forced to live together, but excess heat is not good for a computer or any other device. Air cooling systems, which the vast majority of computers and consoles use, cool relative to the ambient temperature: if the room is hot, cooling will be less effective than if the room is cold.

Staying cool. Besides cleaning the fans and the inside of the PC, which should be done anyway, a quick fix would be to run the air conditioning for a while to cool the room. The problem is that not everyone has an air conditioning unit in that room, or the air conditioning is centralized and has to cool the whole house when we only want to cool a single room.

My case.
I have a temperature sensor in the office. In the morning, at 8:30, the ambient temperature is 31ºC. With the air conditioning running I can bring it down to 26ºC; if I leave the room to its own devices, it ends up reaching 34ºC. Considering that the ideal ambient temperature is between 18ºC and 25ºC, you can imagine how pleasant it is to work through the expansions of 'Guild Wars 2' or shoot in 'Delta Force' on a PC that by 18:00 is already begging for mercy. I cannot fight the ambient heat, but I can curb the heat the PC itself generates. How? Roughly speaking, by making it work much less. That, a priori, is incompatible with playing at decent quality, and it is. Unless you play in the cloud.

GeForce Now | Image: Xataka

Chasing the cool. Cloud gaming has some obvious disadvantages: you depend on an Internet connection, you are tied to a subscription, and not every game is available. The advantages, though, are that the cloud is hardware-agnostic (I can play on my tower, but also on my phone, the laptop I used at university, or the ROG Ally) and it does not require the computer to work at full tilt; just an Internet connection. The game runs on a server, and we receive the image and send our inputs in real time.

A matter of temperatures. A quick example: playing 'Guild Wars 2' in a room at 27ºC, the CPU temperature goes from 55.4ºC to over 70ºC, while the GPU goes from 35ºC to around 55ºC. If I launch the same game in the cloud (GeForce Now, in my case), the CPU goes from about 55ºC to 57ºC, and the GPU from 35ºC to 37.9ºC. 'Delta Force' is not on GeForce Now, so I can't play it in the cloud, but it is perfect for showing how temperatures change: the CPU goes from 55ºC to 68ºC, and the GPU from 35ºC to around 60ºC. You notice it in the fan noise, too, with the fans spinning far faster than at idle.
Asus ROG Ally | Image: Xataka

The ROG Ally case. Asus's handheld is my go-to device for titles like 'Diablo IV' and even 'Guild Wars 2', connecting it through a USB and HDMI hub. In performance mode (25W), the console hits 80-85ºC with relative ease, never mind the battery gasping for air. Playing the same game on GeForce Now, I get not only better graphics quality and more fps, but I can also put the console in Eco mode, with a temperature that stays below 55ºC, quieter fans and longer battery life. In the long run, it shows.

In short. If temperature is a problem and we have a good Internet connection, the cloud can be an option to keep playing without adding fuel to the fire. Besides, the subscription doesn't have to be permanent: it can be used during the hottest months and dropped again when temperatures fall.

Cover image | Unsplash

In Xataka | I've been playing video games for years, and now the first thing I do before buying one is simple: check whether it's available in the cloud
