
What will be left when AI ends humanity

In a mansion with a view of the Golden Gate, the elite of artificial intelligence met last Sunday to discuss a disturbing question: the end of humanity and what will come after. Over (alcohol-free) drinks, the roughly 100 attendees, among them philosophers, entrepreneurs and researchers, imagined a future in which humans no longer exist, replaced by an intelligence created by us. What should our successor be like?

The end of the end of the world. The event, which we learned about through this Wired report, was called ‘Worthy Successor’ and aimed to discuss exactly that: defining a successor worthy of the name for when humanity no longer exists. The idea is tied to the creation of an Artificial General Intelligence, or AGI. A (for now) hypothetical superintelligence that would surpass human beings in every facet of knowledge, so good that, in the words of Daniel Faggella, host of the party: “you would gladly prefer that it (not humanity) determine the future path of life itself.”

Who attended. First things first: who is this guy and why should we listen to him? Faggella is the founder of Emerj Artificial Intelligence Research, an AI consulting and analysis firm. In 2016 he wrote in TechCrunch about the risks of AI, and he is currently focused on promoting a moral and philosophical approach, specifically the creation of this ‘Worthy Successor’, an idea that had been floating around for a long time. According to his account on LinkedIn, he had been contacting various prominent figures in the industry in order, two years later, to hold this meeting.

At the party, three talks were given: by New York writer Ginevra Davis, by the philosopher Michael Edward Johnson, and by the host himself. The complete guest list has not been made public, but Faggella boasts that attendees included founders of AI companies valued at up to 5 billion dollars, people from the laboratories working to create an AGI, and some of the most important philosophers and thinkers in the field.

The superintelligence that will end everything. In Faggella’s words: “The big labs know that AGI will probably end humanity, but they don’t talk about it because the incentives don’t allow it.” It sounds like a conspiracy theory, but he is not the first to warn of something like this. A decade ago, Bill Gates told us we should fear AI. Shortly after, Musk demanded regulation to mitigate the dangers of what was coming. More recently, various AI experts signed a statement warning of the “risk of extinction from AI”. OpenAI has also considered the risks of AGI. There is even talk that Altman’s statements about the creation of an AGI may have been the reason behind his high-profile dismissal months later.

What’s true in all this. We cannot know for sure, but we do know that most of the arguments about the imminence of AGI and its risks are based on opinions and speculation, not on empirical evidence or concrete advances. For example, recent research has shown that current systems still fail at basic reasoning tasks, which contradicts the idea of a short-term superintelligence. Moreover, there are indications that generative AI could be close to its ceiling. Nor is there consensus among experts: there are detractors who consider the idea ridiculous, but of course that is less ‘viral’ than saying AI will wipe us out.

And most importantly: we cannot ignore the fact that those making these statements are businesspeople, like Altman, and the business is very expensive and needs financing. Stirring the pot by insisting on the imminent arrival of AGI could be a way to raise more money for their companies.

What we will leave behind when we disappear. The central theme of the party was not so much how humanity will go extinct (that seems to be taken for granted), but what kind of intelligence we should create to be our successor. Attendees heard presentations revolving around the values and capabilities that this new, superior intelligence should have. For Faggella, humanity has the responsibility of designing a successor that is conscious and capable of evolving.

The philosopher Michael Edward Johnson highlighted the risks of creating a conscious AI, beyond possible extinction: “We risk enslaving something that can suffer, or trusting something that cannot be trusted,” he said during his presentation. Rather than forcing AI to obey, he proposed a joint education of humans and AIs to “pursue the good”, whatever that may be. In short, an interesting debate from an ethical and philosophical point of view, but with little grounding in reality. At least for now.

Cover image | Gemini

In Xataka | The secret weapon of the fashion industry in China: this startup uses AI to predict the next trend
