We need to establish "red lines" so that AI doesn't get out of hand

On Monday, more than 200 public figures and more than 70 organizations joined a new initiative called the Global Call for AI Red Lines. The objective: to establish clear limits that AI should never cross.

Why it matters. Advances in generative AI are frantic, but once again the priority is the development and commercialization of these models without much restraint. According to the signatories of the initiative:

"Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to act and make decisions in the world. Left unchecked, many experts, including those at the forefront of AI development, warn that it will become increasingly difficult to exercise meaningful human control in the coming years."

What is being asked. The initiative, launched during the 80th session of the United Nations General Assembly, calls on governments to act "decisively" and reach "an international agreement on clear and verifiable red lines to prevent universally unacceptable risks."

What those red lines are. The proposal is specifically to prohibit certain uses and behaviors of AI that could end up being dangerous, among them, for example:

Who has signed. The group of more than 200 figures includes ten Nobel laureates, AI experts, scientists, diplomats and even former heads of state. Among them are well-known names such as scientists Geoffrey Hinton and Yoshua Bengio, who have long been warning about these dangers. The list is remarkable and also includes experts such as OpenAI co-founder Wojciech Zaremba and one of DeepMind's leading scientists, Ian Goodfellow.

And who has not. Although the list includes very relevant names, it is also telling that the initiative has not been signed by the CEO of any of the large technology companies involved in AI. Although at times they have given speeches suggesting they too were worried about this issue and that AI needed to be regulated, in this case they have not joined the initiative.

Better to prevent than to cure. As Charbel-Raphaël Segerie, head of the French Center for AI Safety (CeSIA), put it: "the objective is not to react after a major incident occurs, but to prevent large-scale and potentially irreversible risks before they happen."

The European AI Act moves in that direction. The European Union already created its own regulation, which took effect in August 2024, with the idea of establishing a series of restrictions based on risk levels. So far the impact of this regulation has been negative, especially because it has restricted the use and development of AI models in the EU. So much so that the EU has decided to backtrack and soften its rules.

And we already have a precedent. Just a few months after the launch of ChatGPT, several experts made a similar request. Among them were Elon Musk, who has not signed this initiative, and Steve Wozniak; they asked for a six-month pause in the training of AI models. That went nowhere, and without an explicit prohibition the development of AI models has continued unabated.

In Xataka | "Dear passengers: comply with the rules to avoid negative points": China is implementing its social credit system
