A little over a month ago, this unpleasant Instagram video went viral on that social network. In it, a strange half-spider, half-man creature makes an appearance in a shopping mall. As of this writing the video has 3.5 million likes and more than 23,000 comments, but that is not the truly worrying part.
An avalanche of AI-generated videos. What is worrying is that this video is part of an avalanche of AI-generated videos that is overwhelming networks such as Instagram and TikTok.
And broken algorithms. As 404 Media explains, the algorithm of these platforms has ended up broken by a peculiar kind of brute-force attack: one in which these networks receive a nonstop stream of AI-generated content until their recommendation algorithms are saturated.
Brute force. Brute-force attacks try, for example, to crack a password by testing every possible combination one by one. In this case, the goal is to saturate the algorithms until they end up surfacing these AI-generated videos, and it is working. Some already call Instagram and TikTok "AI landfills" because of the sheer volume of this content, which has caused conventional content made by human users to lose much of its relevance to the algorithm.
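To illustrate the analogy the article leans on, here is a minimal, purely hypothetical sketch of a classic brute-force password search: exhaustively trying every combination until one matches. (This is an illustration of the concept, not anything the spammers actually run against the platforms.)

```python
import itertools
import string


def brute_force(target: str, alphabet: str = string.ascii_lowercase, max_len: int = 4):
    """Try every combination of the alphabet, shortest first, until one matches.

    This is exhaustive search: the cost grows as len(alphabet) ** length,
    which is why brute force only works when the search space is small.
    """
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if guess == target:
                return guess
    return None


print(brute_force("cab"))  # → cab (found after testing every shorter guess first)
```

The spam variant described in the article swaps "password guesses" for "uploaded videos": generate enough of them and, eventually, some combination gets through the recommendation algorithm.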
Reality has changed. For many people these networks serve not only as entertainment but as a way to keep up with current events. Videos often try to portray reality, but that reality has now been distorted, and those Instagram or TikTok accounts barely show anything real. Some videos are deepfakes that are hard to detect, and as we have verified in the past, the paradox is that sometimes we are not even able to tell what is real and what is not.
AI seeks virality and ends up finding it. Creators on social networks try to make their videos go viral, dedicating enormous resources and time to it. Even so, success is not guaranteed. Spammers producing AI-generated video content, however, do not need to think so hard: they can generate thousands of videos with very little effort, flood social networks, and wait for some of that content to take off. The virality lottery is not much of a lottery if you hold a lot of tickets.
The influencers making money from the AI landfill. As usual with this kind of phenomenon, influencers have appeared with new formulas to get rich quickly and with hardly any effort. One of them, a 17-year-old named Daniel Bitton, boasts of having already earned two million dollars and has a clear message: "While others invest 5 or 6 hours making a perfect video, we can generate 8 or 10 shorts in less than 30 minutes." How? Using AI tools.
The "sad hot dog" method. One of Bitton's friends is a well-known TikTok spammer called Musa Mustafa. His method for going viral is the "sad hot dog": "When you're hungry at two in the morning, even a sad hot dog tastes better than any meal at a Michelin restaurant. TikTok works in a similar way. Your audience doesn't expect (or even want) perfectly polished videos." Mustafa asks, not without some reason, "when was the last time you saw a viral TikTok video and thought: 'Wow, the color grading in this video is incredible'?" In other words: quantity beats quality hands down.
But the platforms embrace this content. The Guardian already warned us recently: social networks are not stopping this kind of spam; they are benefiting from it, accepting it, and even promoting it. In fact, they offer tools that facilitate the generation of AI content, which means that rather than trying to solve the problem, they are aggravating it.
An example: Facebook. Meta recently launched a tool for advertisers called Advantage+. With it, an advertiser can create different versions of an ad, try them all with A/B testing, and then select the one that performs best. For advertisers (and for Meta, of course), all this is fantastic, because they can get more effective ads with far less investment of time and money.
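The selection logic behind A/B testing ad variants can be sketched very simply. This is a toy simulation of the general idea (show each variant to an audience, keep the winner), not a description of how Advantage+ actually works internally; the variant names and click-through rates are invented for illustration.

```python
import random


def pick_best_ad(variants, impressions_per_variant=1000, rng=None):
    """Simulate an A/B test: serve each ad variant the same number of
    impressions and keep the one with the highest observed click-through rate.

    `variants` maps a variant name to its true CTR, which in a real campaign
    is unknown; here we simulate user clicks with that probability.
    """
    rng = rng or random.Random(42)  # fixed seed so the toy run is reproducible
    observed_ctr = {}
    for name, true_ctr in variants.items():
        clicks = sum(rng.random() < true_ctr for _ in range(impressions_per_variant))
        observed_ctr[name] = clicks / impressions_per_variant
    return max(observed_ctr, key=observed_ctr.get)


# Hypothetical campaign with three headline variants:
best = pick_best_ad({"headline_a": 0.02, "headline_b": 0.05, "headline_c": 0.03})
print(best)
```

The point the article makes follows directly from this loop: once generating new variants is nearly free, you can run it over as many candidates as you like, and something will always "win."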
Are there limits? There will undoubtedly be users who reject this kind of content: networks such as Bluesky or Mastodon move away from the algorithm and are closer to how Twitter or Facebook were years ago. But it seems clear that a vast majority of users have no problem with AI-generated content, which in fact has a success story in porn built on impossible combinations (an AI-generated fish kissing an AI-generated woman, an AI-generated orc marrying an also AI-generated bride) that is also going viral.
More fuel for the dead internet theory. There has long been talk of how the growing presence of bots on the internet will end up making the human presence in this content marginal. What we already saw with AI-generated texts and images flooding the internet, we are now seeing with videos flooding social networks and with AI-generated virtual avatars. The AI landfill keeps spreading, and the worst part is that neither the users (who help make this content viral) nor the companies (which, as we said, not only fail to stop it but actively promote it) seem to have much of a problem with this situation.
Image | Kenneth Schipper
In Xataka | Meta follows in the footsteps of X: we don't just work for it by writing, now we will also work for it by moderating