We have reached a point where artists have to explain that they made their works without the help of AI, and not the other way around.

“I spent 40 hours making a digital painting and the first comment I get says: nice AI art,” a user recounts on Reddit, and it is not an isolated case. Not too long ago we thought the solution was to tag all AI-made content, but we quickly realized that was a huge challenge. Today, it is human artists who have to defend that their art is real.

What is happening. More and more artists are being accused of having used AI in their works, especially works that lean toward realism and have a high level of detail. Many artists choose to share their entire work process on social networks, and some deliver layered files to their clients to cover their backs and leave no room for doubt. It is not something that happens only in the visual arts: video game developers and writers have been accused too.

If I can’t tell whether it’s AI, then everything is AI. AI image generation has reached a level where the eye can no longer distinguish a real image from a generated one. Our ability to capture and interpret visual information is suffering a shock in real time, and the natural response is distrust: since we can no longer trust what our eyes see, we question everything. Is something too well drawn? It must be AI. Is a text suspiciously well written? Surely it was done with ChatGPT. It is a defensive posture that also responds to the fact that if you fall for something fake, you look gullible, while if you question something real, you are simply a skeptic.

Label the human. Labeling AI content sounded good, but it hasn’t worked. Much of the blame lies with the platforms for not having enforced it more strictly. Take Etsy: a platform that was once a refuge for handmade crafts has ended up a bazaar of AI slop passing itself off as real. In this context, the solution seems to be the opposite: labeling what is made by humans, as a kind of quality seal. Adam Mosseri, head of Instagram, said it a few months ago:

Platforms like Instagram do a good job of identifying AI-generated content, but their effectiveness will decrease over time as AI improves. It will be more practical to identify real content than fake content.

AI detectors are not reliable. It is a fact and we have seen it on several occasions: universities falsely accusing hundreds of students of using AI because a piece of software (also AI, of course) said so, AI detectors convinced that ‘One Hundred Years of Solitude’ was written with a chatbot… The quality of AI-generated content is advancing so quickly that it is becoming nearly impossible to distinguish from the real thing, which is why the proposal of human-content labels makes sense. Something like the ‘denomination of origin’ seal on food.

There are several proposals. The Verge reports that there are quite a few initiatives aiming to highlight human-made content, offered by different organizations such as Not By AI, ProudlyHuman, Human Authored or Human Made. The problem is that many of these labels have no rigorous authentication process behind them; they rely on simple trust. For a label to be reliable, the work process needs to be verified through sketches or drafts, something far more laborious to achieve.

In Xataka | Crocheting was a peaceful refuge from the stress and information overload of the internet. Until AI arrived
