
What happens when its creator makes it biased

Elon Musk has always defined himself as a free speech "absolutist." One of his stated goals when he bought Twitter was precisely to turn it into a "public square" where anything could be discussed without taboos. He seems to be achieving that, but now he is going further: now he wants us to believe that Grok 3 will always tell us the truth.

Politically incorrect. According to Musk, Grok 3 "is an AI that seeks the truth to the fullest, even if that truth is sometimes at odds with what is politically correct." As TechCrunch points out, he said so yesterday during the presentation of this new family of AI models.

Of censorship, little or nothing. Previous versions of Grok already showed a tendency toward a tone and "personality" different from those of their competitors. The first version was especially striking for its sarcastic tone, and with Grok 2 things got particularly interesting in the field of censorship. Above all, because there was hardly any: it is possible, for example, to generate deepfakes of famous people from X without any problem.

Grok 3 and the problem of telling the truth. Musk's promises here are striking, but hard to deliver on. The reason is simple: whether absolute truths even exist is itself a matter of debate, and there is a powerful argument against them: truths are usually relative and depend on context or point of view. The facts are there, but their interpretation is enormously variable and subject to biases. Human biases, yes, but also, downstream, those of AI.

A neutral AI? Grok 3 seems to aspire to be a neutral AI that sticks to the facts and tells us the truth. One point in favor of these models is that lately they cite the sources their answers are based on. Perplexity does this, for example, and that helps us contrast those answers.

OpenAI leans (a little) left. Studies in 2023 detected that AI models like ChatGPT had certain biases and seemed to lean toward positions supporting environmental protection, libertarian ideologies, and the political left. Gemini, Google's model, was criticized for generating "woke" images such as Black Vikings or Black Nazis, for example.

Grok 3 could "talk" like Donald Trump. A politically incorrect AI leads us to expect unusual responses. Dan Hendrycks, director of the Center for AI Safety and an advisor to xAI, has raised the question of how an AI model might respond the way Donald Trump does: direct, unambiguous, and without regard for political correctness. That can be irritating, but it would also make room for unpopular opinions and controversial points of view. Biases, for better and for worse. All of this, in theory, without violating social norms and ethical principles, as xAI explains in its "risk management framework."

Partisan models. Hendrycks's study evaluated which US politicians models such as GPT-4o, Grok, or Llama 3.3 were most aligned with, and according to his tests they all seemed closer to the vision of Biden, Kamala Harris, or Bernie Sanders than to Trump's. He develops the idea of what he calls a "citizen assembly," in which US census data on political issues would be collected and those answers then used to modify the values of an open-source LLM. Right now, such a model would have values closer to Trump's than to Biden's, for example.

But. The truth is that Grok had to be updated last August precisely because it was not telling the truth. Five US Secretaries of State warned that the chatbot was spreading misinformation when users asked about the then-imminent elections. The change did not affect Grok's ability to generate deepfakes of the candidates, although in September 2023 X removed the ability of some users to report electoral misinformation.

Training is key. An AI learns from the data we provide during the training process, so whatever biases an AI has will originate in that data. If we train an AI with data that slants its answers toward one ideological current and criticizes another, it will theoretically replicate that position in its output. No details have been given about how Grok 3 was trained, and we may never know, but if xAI wants to, it can "adjust" its perspectives and biases.
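The idea that a model's bias is inherited from its training data can be illustrated with a deliberately tiny, hypothetical sketch. This has nothing to do with how Grok or any real LLM is actually trained; it is just a toy word-counting classifier, with made-up labels like `policy_A` and `policy_B`, showing how a skewed dataset mechanically produces a skewed "opinion":

```python
from collections import Counter

# Toy, deliberately skewed dataset: everything mentioning "policy_A" is
# labeled positive, everything mentioning "policy_B" negative.
training_data = [
    ("policy_A is great for everyone", "positive"),
    ("we support policy_A strongly", "positive"),
    ("policy_B is a disaster", "negative"),
    ("policy_B ruins the economy", "negative"),
]

def train(data):
    """Count word/label co-occurrences (a crude bag-of-words model)."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Pick the label whose training words overlap most with the input."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(training_data)
# A "neutral" question inherits the skew baked into the data:
print(predict(model, "what do you think of policy_A"))  # -> positive
print(predict(model, "what do you think of policy_B"))  # -> negative
```

The model has no opinion of its own; it only echoes the distribution of its training labels, which is the same reason the article argues that whoever curates the training data can "adjust" an AI's perspective.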

In Xataka | X has been flooded with very strange photos of celebrities. The culprit is Grok, which has no limits on its image generation
