Doomsday’ are indistinguishable from the real thing. In the end, Scorsese was right
Last weekend, YouTube permanently closed Screen Culture and KH Studio, channels based in India and Georgia that had accumulated more than 2 million subscribers and one billion views between them. For months they had been making AI-generated trailers so convincing that they were indistinguishable from official promotional material. The phenomenon has reached epidemic proportions with ‘Avengers: Doomsday’, where the border between the authentic and the synthetic has become practically undetectable.

What happened? Marvel’s strategy of screening four exclusive teasers for the movie ahead of ‘Avatar: Fire and Ash’ (one each week, each focusing on different characters) created the perfect breeding ground for confusion. With no official online distribution, any user who wanted to see them had to rely on an ecosystem of leaks that, as Kotaku notes, has been broken for years. And in that information vacuum, generative AI had a field day: images of Doctor Doom as a “Stark clone” popped up everywhere, along with clips supposedly filmed in theaters and deepfakes refined enough to deceive even expert eyes.

Increasingly sophisticated. A study published in Nature back in 2024 found that more than 53% of humans can be fooled by digitally altered videos, while recent academic research shows that deepfake detection tools struggle to identify manipulations outside their training data. With that breeding ground, it is no surprise that fake trailers and images keep multiplying, immersed in a continuous flood of content of this type: on social networks, 71% of images are AI-generated, and it is estimated that more than 10 billion AI-generated pages have been published since 2023.

Marvel pre-slop. The paradox of this situation is that Marvel did not need AI to become synthetic content. It already was.
When Martin Scorsese stated in 2019 that Marvel films were not cinema but “theme parks” where the actors did “the best they can under the circumstances”, he was really saying that the franchises had replaced the human with the algorithmic: that they were engineering products devoid of the living component that defines cinema. The visionary part: he said it before ChatGPT came into our lives.

We all know how Marvel movies are made (which led, first, to that image of depersonalized acting in mass-produced films, and second, to the famous “superhero fatigue”), and they fit perfectly with the idea of “movies made by AI before AI”: effects artists reworking entire third acts two months before a premiere, films shot on huge green-screen sets producing sequences where 99% of what we see is digital, different projects with narrative decisions (especially in their final stretches) that are completely interchangeable…

The first mess. Let’s go back to ‘Spider-Man: No Way Home’ to analyze one of the most striking cases of information and misinformation of this kind, the direct precedent of generative AI: deepfakes. For months, supposedly leaked images of Tobey Maguire and Andrew Garfield in Spider-Man suits circulated online. Garfield repeatedly denied involvement, stating that the material was Photoshop. Then a YouTuber posted a video claiming to have created a deepfake of the leaked footage, only to later admit that his video was the fake and the original was real. Corridor Crew determined that, if it were fake, it would be “the most sophisticated deepfake ever created”. Sony issued copyright strikes against the leaks, an implicit confirmation of their authenticity. Result: fans spent six months not knowing what was real, but spread it anyway. Studies of different fandoms reveal that the search for belonging drives the spread of misinformation as much as that of legitimate information.
More chaos: algorithms optimize for popularity rather than quality, and engagement metrics can be manipulated through deceptive behavior: bots, organized trolls, networks of fake accounts…

A mess. The result: an attention market where manufacturing synthetic content about ‘Avengers: Doomsday’ generates more engagement, reach and popularity than bothering to verify its authenticity. AI did not create this problem: it only accelerated it until it became unsustainable. And we are not even talking about “serious” topics, linked to politics or society, where real interests are at stake in falsifying content, beyond the more or less mischievous fun of spreading a fake trailer.

The closing of the house of fake trailers. YouTube’s shutdown of Screen Culture and KH Studio comes after a conflict that began when both channels were demonetized in March. To get around it, they added tags such as “fan trailer”, “parody” or “concept trailer” to their titles and recovered monetization. But those warnings soon disappeared again, and they went on to create 23 versions of fake ‘Fantastic Four’ trailers, some of which outranked the official videos in search results. There was an additional controversy: Deadline’s investigation into the matter revealed that several studios, such as Warner and Sony, had quietly asked YouTube to redirect the advertising revenue from these AI videos to them, which moves the issue out of the field of ethics and into that of economic benefit. YouTube tolerated the proliferation of synthetic content for years, even allowing studios to monetize material that misled their own audiences, and only acted when Disney, owner of Marvel, sent a cease-and-desist letter to Google, owner of YouTube.

In Xataka | OpenAI and Disney have signed an agreement so you can generate AI videos of your favorite characters. It’s what Sora needed