AI already generates photos indistinguishable from reality, and people are using it to claim fake product returns

The photo on the left was taken by me on my bathroom counter; the second is the result of asking Nano Banana to make it look as if the product had been damaged in transport. If it weren’t for the Gemini logo in the lower right corner (which can be removed very easily), it would pass perfectly for a real photo. Online stores have a problem.

What’s happening. There is an increasingly common trend in the world of online sales: some customers are taking advantage of image models like Nano Banana to request product returns. This is what one person did: he ordered a carton of eggs from a fast home delivery service. One of the 24 eggs was broken. He took a photo, asked Gemini to add more broken eggs, sent it to customer service and voilà: refund completed.

Why it matters. Images are no longer proof of anything. We saw it with the first images generated with Nano Banana Pro that went viral, and we are going to have to learn to live with it. Not only will we doubt absolutely every image we see; it also means that anything that could once be verified with images may now be fake. There are many more scenarios: telling your boss you had an accident on the way to work, exaggerating the damage to your insurance company, lying to your partner…

China leads the way. The South China Morning Post reports that deceiving online retailers with AI-generated images is a trend spreading like wildfire. During the past 11.11 shopping festival, several stores received refund requests backed by AI-generated images: a rusty electric toothbrush, a frayed item of clothing, a broken ceramic mug. One case, echoed by Wired, involved a customer who bought live crabs and demanded a refund because many had supposedly arrived dead. To prove it, he sent images and even videos, but the sellers noticed the video was not real because some crabs had too many legs.
The problem with these detected cases is that the images were not credible, which makes us wonder about all the subtler images that will pass as genuine.

There are more cases. China has the largest online retail market, so it makes sense that more cases have been detected there, but the trend is not exclusive to one country. The egg case mentioned above occurred in India, and cases have also been detected in the United States. According to the fraud detection firm Forter, the use of AI images for this type of deception has increased by 15% in 2025, coinciding with the arrival of more capable image models. With Nano Banana Pro, the trend looks set to get a significant boost.

Returns without returning. Obviously, this technique cannot be used with just any product. There are cases in which stores refund our money without requiring us to send the product back: perishable goods, fragile items and, in general, low-value products. If images are no longer proof, stores will have to change their strategy. Shipping back a broken bottle of shampoo may not be practical, but they could require that photos be taken from within the app itself rather than uploaded from the camera roll.

Image | Amparo Babiloni, Xataka

In Xataka | FACUA believes that a lot of V16 beacons “approved by the DGT” are not legal. And there’s a way to sum it up: fraud.

The fake trailers for ‘Avengers: Doomsday’ are indistinguishable from the real thing. In the end, Scorsese was right

Last weekend, YouTube permanently closed Screen Culture and KH Studio, channels based in India and Georgia that accumulated more than 2 million subscribers and one billion views between them. For months they had been making AI-generated trailers so convincing they were indistinguishable from official promotional material. The phenomenon has reached epidemic proportions with ‘Avengers: Doomsday’, where the border between the authentic and the synthetic has become practically undetectable.

What has happened? Marvel’s strategy of screening four exclusive movie teasers ahead of ‘Avatar: Fire and Ash’ (one each week, each focusing on different characters) created the perfect breeding ground for confusion. Without official online distribution, any user who wanted to see them had to rely on an ecosystem of leaks that, as Kotaku states, has been broken for years. And in the midst of this information void, generative AI had its day: images of Doctor Doom as a “Stark clone” began to emerge from under every stone, alongside clips supposedly filmed in theaters and deepfakes refined enough to deceive even expert eyes.

Increasingly sophisticated. A study published in Nature in 2024 revealed that more than 53% of humans can be fooled by digitally altered videos, while recent academic research suggests that deepfake detection tools struggle to identify manipulations outside their training data. With this breeding ground, it is normal that fake trailers and images keep multiplying, immersed in a continuous flood of content of this type: on social networks, 71% of images are generated by AI, and it is estimated that more than 10 billion AI-generated pages have been published since 2023.

Marvel pre-slop. The paradox of this situation is that Marvel did not need AI to become synthetic content. It already was.
When Martin Scorsese stated in 2019 that Marvel films were not cinema but “theme parks” where the actors did “the best they could under those circumstances”, he was really saying that the franchises had replaced the human with the algorithmic: that they were engineering products devoid of the living component that defines cinema. The visionary part: he said it before ChatGPT came into our lives. We all know how Marvel movies are made (which led first to that image of depersonalized acting in mass-produced films, and second to the famous “superhero fatigue”), and they fit perfectly with the idea of “movies made by AI before AI”: effects artists changing entire third acts two months before the premiere, films shot on huge green-screen sets generating sequences where 99% of what we see is digital, different proposals with completely interchangeable narrative decisions (especially in their final sections)…

The first mess. Let’s go back to ‘Spider-Man: No Way Home’ to analyze one of the most striking cases of information and misinformation of this type, the direct precedent of generative AI: deepfakes. For months, supposedly leaked images of Tobey Maguire and Andrew Garfield in Spider-Man suits circulated. Garfield repeatedly denied involvement, stating that the material was Photoshop. Then a YouTuber posted a video claiming to have created a deepfake of the leaked footage, only to later admit that his video was fake and the original was real. Corridor Crew determined that it would be “the most sophisticated deepfake ever created” if it were fake. Sony applied copyright strikes against the leaks, an implicit confirmation of authenticity. Result: fans spent six months not knowing what was real, but spreading it anyway. Studies on fandoms reveal that the search for belonging drives the spread of misinformation as much as it drives legitimate information.
More chaos: the algorithms optimize for popularity more than for quality, and engagement metrics can be manipulated through deceptive behavior: bots, organized trolls, networks of fake accounts…

A mess. The result: an attention market where manufacturing synthetic content about ‘Avengers: Doomsday’ generates more adhesion, diffusion and popularity than bothering to verify its authenticity. But AI did not create this problem: it only accelerated it until it became unsustainable. And we are not even talking about “serious” topics, linked to politics or society, where real interests come into play to falsify content, beyond the more or less hooligan fun of spreading a fake trailer.

The closing of the house of fake trailers. YouTube’s closure of Screen Culture and KH Studio comes after a conflict that began when both channels were demonetized in March. To get around it, they added tags such as “fan trailer”, “parody” or “concept trailer” to their titles and recovered monetization. But those warnings disappeared again, and they created 23 versions of fake ‘Fantastic Four’ trailers, some of them outranking the official videos in search results. There was an additional controversy: a Deadline investigation on the subject revealed that several studios, such as Warner and Sony, had secretly asked YouTube to redirect the advertising revenue from these AI videos to them, which places this not in the field of ethics, but of economic benefit. YouTube tolerated the proliferation of synthetic content for years, even allowing studios to monetize material that misled their own audiences, and only acted when Disney, which owns Marvel, sent a cease and desist letter to Google, owner of YouTube.

In Xataka | OpenAI and Disney have signed an agreement so you can generate AI videos of your favorite characters. It’s what Sora needed

Polymarket and company have made gambling addiction so sophisticated that it is indistinguishable from “investing”

Prediction markets are no longer a niche for internet data nerds; they have become the new obsession of Wall Street and Silicon Valley. Platforms like Polymarket and Kalshi are receiving multi-billion dollar valuations by repackaging traditional bets as sophisticated financial instruments. The image that defines the moment occurred recently in Manhattan, according to Bloomberg: the patriarch of the New York Stock Exchange (70 years old, impeccable suit) closing a multimillion-dollar deal with the founder of Polymarket (27 years old, t-shirt and plastic bottle). That meeting sealed the fate of the sector: betting is no longer a game, it is finance.

Why it matters. We are facing a radical cultural and regulatory change. By redefining bets as “event contracts”, these platforms try to circumvent gambling legislation (which in Spain would fall under the Ministry of Consumer Affairs) and slip into the traditional financial system, with the support of giants such as the owner of the New York Stock Exchange (NYSE).

The big picture. Kalshi is already worth $10 billion and Polymarket is seeking $12 billion. These are not fly-by-night operations: as we said, the owner of the NYSE has invested in them. The hockey league (NHL) and Donald Trump’s media company are already signing deals. It is the traditional financial system embracing chance. It is, above all, legitimation.

Semantic reengineering. Polymarket’s true success is not technological, it is linguistic. They have eliminated the stigma of the gambler by changing the dictionary: it’s not a bet, it’s an “investment”. It’s not a betting house, it’s a “contract exchange”. You are not a gambler, you are a trader who analyzes “market sentiment”. An example of the absurdity of some cases: people betting on Elon Musk entering the race to be president of the United States, oblivious to the fact that Musk was born in South Africa and therefore cannot become president, since the US Constitution reserves the presidency for natural-born citizens.
That is to say: all those bets are money thrown away from minute one.

How it works. Instead of betting 50 euros on Trump winning, you buy a “share” of that outcome that pays 1 dollar if you are right. This allows the same person who would win or lose money at roulette to win or lose it in an app with stock market charts. Although the savings vanish just the same, the user feels smarter and less guilty: he believes he is operating in something closer to the IBEX than to a casino.

What’s coming. There is a civil war brewing. The old guard of gambling (the owners of traditional casinos) sees this as unfair competition. Jay Snowden, CEO of Penn Entertainment (a casino and sports betting company), has already warned that this is a direct threat to his industry: prediction markets and games of chance overlap.

In conclusion. Polymarket has managed to sophisticate gambling addiction for a generation that believes itself too smart to play games of chance. They have created the perfect casino for those who despise casinos, allowing them to risk their savings under the illusion of doing financial analysis.

In Xataka | Five years ago he worked from his bathroom on the brink of ruin. Today he runs a company valued at 8 billion

Featured image | Hush Naidoo Jade Photography, Mockuuups Studio
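The contract mechanics described above under “How it works” can be sketched in a few lines of Python. This is a minimal illustration with invented numbers (the event, shares and prices are hypothetical); real platforms add order books and fees, but the payoff structure of a binary event contract is just this:

```python
# Sketch of a binary prediction-market contract (hypothetical figures).
# A "YES" share pays $1.00 if the event happens and $0.00 otherwise,
# so its market price doubles as the crowd's implied probability.

def payoff(shares: int, price: float, event_happened: bool) -> float:
    """Profit or loss in dollars for `shares` YES contracts bought at `price`."""
    settlement = 1.0 if event_happened else 0.0
    return shares * (settlement - price)

# Buying 100 YES shares at $0.40 implies the market assigns ~40% probability.
# If the event happens, each share settles at $1: profit = 100 * (1.00 - 0.40)
print(payoff(100, 0.40, True))
# If it doesn't, the shares expire worthless: loss = 100 * (0.00 - 0.40)
print(payoff(100, 0.40, False))
```

The structure is identical to a fixed-odds bet: the "trader" risks $40 to win $60. Only the vocabulary (shares, prices, settlement) has changed.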
