Fake GTA VI videos made with AI are already circulating on the internet. The implications are more serious than they seem

The strange thing is not that it happened. The strange thing is that it hadn’t happened sooner: the news that we would have to wait a full year (and not the few months originally planned) to play ‘GTA VI’ fell like a bucket of cold water on fans. And after all the inevitable speculation came the equally inevitable string of fake trailers made with AI.

Despair. The hunger for the most anticipated game in recent years has fueled fake trailers, a subgenre that has always existed but which, since the arrival of AI, has reached spectacular levels of verisimilitude. Teaser Universe, a channel specializing in this type of content, published a fake “final trailer” for the game that already exceeds 600,000 views, followed by a second one with less impact.

With the complicity of Google. As Kotaku reports, the credit for its reach belongs not only to the video’s verisimilitude (a trained eye can tell that a fake trailer for ‘Avengers: Doomsday’ is fake, but with digital video game content and animated imagery things get trickier), but also to Google. The search engine spread the video as if it were official Rockstar Games content, even though it was made entirely with AI, as its own creator acknowledged. In this way (and remember we are not just talking about Google’s search engine, but also about YouTube’s own algorithm, which recommends it too), what should have been an anecdote becomes a viral, and disinformative, phenomenon.

Made to confuse. Although the video’s description states that it was made with AI, other elements are more misleading. For example, the title, “Grand Theft Auto VI – Final Trailer (2026) Rockstar Games”, a name calculated to be mistaken for official material. The timing was also deliberately confusing: the video circulated just after the announcement that ‘GTA VI’ had been delayed to November 2026, a moment when players were eager to consume anything related to ‘GTA’, although the anger at the deception was evident in places such as the video’s own comments.

The fake twerk button. It is just one more example of how easy it is to manipulate algorithms and search engines using AI, and not the first related to ‘GTA VI’. This summer, YouTuber Jeffrey Phillips launched a deliberate disinformation campaign, posting on Reddit and TikTok that the Rockstar game would include a twerking button. Phillips even created a fraudulent subreddit, r/TrueFactsOnlyz, whose description would theoretically feed the AI’s search summaries.

The result exceeded his expectations: Google’s AI Overview ended up literally quoting the Reddit comments the YouTuber himself had written, referring to them as if they were speculation from real players. When someone on Reddit asked him for proof, he replied that Rockstar had called him personally. That was enough for Google’s AI, and it became clear that Google prioritizes content from Reddit without any human oversight, allowing malicious lies like this one (even if it was an experiment) to contaminate searches.

A systemic problem. These cases linked to ‘GTA VI’ are not isolated anomalies, but symptoms of a structural failure on YouTube. In 2021, Mozilla published an investigation revealing that 71% of the problematic videos reported by volunteers had been suggested directly by the recommendation algorithm rather than actively searched for. And thanks to those suggestions, this questionable content recorded 70% more daily views than other videos.

The proliferation of so-called mass-produced AI-generated content aggravates the problem. Channels have been identified that accumulated almost 500 million views using AI tools exclusively. NBC News uncovered a network of channels spreading hoaxes about famous African Americans using deepfakes, some of them commercial ads that generated income for both the creators and YouTube. In short, as Jonathan Albright noted on Medium, the technological paradox is plain to see: YouTube can instantly detect copyrighted songs, but it does not identify journalistic articles read aloud with synthetic voices or images lifted from other websites. A real mess.

In Xataka | I tried Gemini Dynamic View: the era of visual and interactive AI that you won’t want to stop using starts here
