vLex, the startup that created "the ChatGPT of lawyers"

Oakley Capital has sold vLex, the Spanish legal platform, to Canada's Clio for more than $1 billion, according to Expansión. The deal turns the company founded by the Faus brothers a quarter of a century ago into the sixth active Spanish unicorn.

Why it matters. The sale is Oakley Capital's biggest success in Spain and confirms the unstoppable boom in the legaltech sector. The valuation means multiplying vLex's value more than tenfold in less than three years, since Oakley bought it for 70 million at the end of 2022.

In detail. vLex operates as "the ChatGPT of lawyers": it offers access to more than a billion legal documents from a hundred countries through its AI assistant, Vincent AI. Its 2.8 million users range from Harvard Law School to Deloitte Legal, and every day it adds 350,000 new documents to its platform. Oakley Capital, the British fund that controls Idealista in Spain and has invested in Seedtag, showed its nose for spotting opportunities before a boom: when it took control of vLex, the company barely generated five million in EBITDA.

Between the lines. Oakley's timing was impeccable: it invested just before generative AI turned the world upside down, the legal sector included. The pandemic had already accelerated the digitalization of law firms that were lagging behind, but ChatGPT changed the story, and tools like Vincent AI went from being a curiosity, a mere complement, to being fundamental.

The threat. For its rivals, this deal is a turning point. Clio, valued at $3 billion and backed by Goldman Sachs, has suddenly acquired the largest digitized legal library in the world. The sector's consolidation is accelerating.

Going deeper. The legaltech ecosystem keeps attracting heavy investment. The best example is Harvey, an American competitor of Vincent AI: it was valued at $5 billion a few weeks ago, when it landed heavyweight investors such as OpenAI and Sequoia Capital.
And there is vLex, proving that Spain can compete in the technological first division: not in foundation models, but in more specialized, more niche products. Like the new unicorn.

In Xataka | Spain is no longer the ugly duckling of the European tech ecosystem. Now it has the opposite problem

Featured image | vLex, Mockuuups Studio

There are people asking ChatGPT how to inject themselves with Botox

There are people injecting themselves with Botox and hyaluronic acid at home. But that's not all: some of them are asking ChatGPT for advice on where to inject, how deep, or what materials to use. We had already seen people going to ChatGPT instead of the doctor, and even people using it to write medical studies, but we didn't see them asking it how to fill their own lips coming.

What's happening. Futurism tells the story. In a Reddit community called DIYaesthetics, users share their experiences and exchange advice on carrying out medical-aesthetic procedures at home, and quite a few lean on ChatGPT to guide them. One user asked it whether he should wear gloves while injecting himself; none of the 16 replies questioned the fact that he had consulted an AI, and many other threads in the community mention similar things. Another user, after injecting herself, noticed that her cheek had become deformed and went to ChatGPT, which told her that perhaps a small amount of the substance had migrated to that area, but that "it will surely dissolve." There is at least one user who begs the others not to use ChatGPT for medical advice. And there are more examples.

Doctor ChatGPT, what's wrong with me? We have already seen that more and more people turn to ChatGPT as if it were a psychologist, and there are even chatbots that pass for one. If it is happening with mental health, it is no surprise that it is also happening with physical health. There is even a study claiming that ChatGPT answered medical questions better than online care services staffed by real doctors, although another concludes that more than 30% of its answers are wrong. In any case, more and more people are describing their symptoms to an AI in search of a quick and, in many cases, cheaper diagnosis than going to the doctor (the trend of injecting yourself at home is born precisely from this).

AI and health.
The rise of AI has raised numerous ethical debates, and its use in health is one of them. Yet there are countless examples of AI being a very powerful tool in the health sector. We recently learned that China has an AI that helps detect pancreatic cancer; AI has also helped accelerate research into the resistance of some bacteria, and there are companies dedicated to applying it to medical diagnosis.

The problem of AI for everything. AI tools can be a great help and make us more efficient, or we can end up using them for something as risky as do-it-yourself Botox. There are studies claiming that ChatGPT is diminishing our intelligence, a historical fear that has surfaced with almost every new mass-adoption technology. But the problem is not AI; it is how we use it, and in areas such as health that is a particularly delicate matter. In summary: AI and health, yes. AI for Botox at home, better not.

Image | Gemini

In Xataka | Artificial intelligences are close to beating doctors at the hardest part: understanding patients

Mattel's plan for Barbies: small ChatGPT terminals

Thanks to the film starring Margot Robbie (let's not forget, the highest-grossing movie of 2023), Barbie is living a second youth. It never went completely out of fashion, but it is undeniable that the film revitalized its message and bathed it in a welcome layer of modernity, which now takes on a new face: Mattel is embracing AI to push this and other plastic icons of its catalog to the very epicenter of the 21st century.

Barbie meets AI. Mattel Inc., creator of Barbie, Hot Wheels, Polly Pocket and other hugely popular toy franchises, has reached an agreement with OpenAI, according to Bloomberg, to help in design and, in some cases, incorporate artificial intelligence into its toys. The collaboration is still in its initial phases.

Some examples. Brad Lightcap, OpenAI's chief operating officer, and Josh Silverman, head of franchises at Mattel, have commented on some of the possibilities opening up before them, such as creating digital companions based on Mattel characters or making games like Uno "more interactive." The end of this year is when they plan to give more details about the talks the two companies have been holding since the end of last year.

AI within reach of children. This announcement does not arrive free of controversy. While AI is starting to be valued as a tool of great educational potential, the impact of indiscriminate and unrestrained use of artificial intelligence on aspects of our lives such as social relationships or mental health has also become apparent. All this in products available to children without supervision: Futurism recently covered a Stanford University study on the risks of leaving minors in the company of artificial intelligences.

OpenAI wants to entertain. This deal with Mattel is not an isolated case: OpenAI wants to break into the entertainment industry however it can.
It knows there is money in franchises, and it has started a series of meetings with the main Hollywood producers and studios. The goal is, among other things, to sell Sora, its AI-based video generator, which can create hyperrealistic clips from text descriptions. Sora offers filmmakers the ability to control parameters such as lighting or weather, with the consequent savings in cost and time.

OpenAI needs money. All these deals and moves by OpenAI obey one indisputable fact: the company needs liquidity. Although it recently raised a record $40 billion in the biggest private financing round in history, and despite that capital inflow and the fact that its annual revenue has doubled in 2025 to $10 billion, the company continues to operate at heavy losses: it ran a $5 billion deficit in 2024 and faces very high operating costs. ChatGPT implies extremely expensive infrastructure and model training; it has been said that a complex query can cost up to a thousand dollars, and profitability is not expected until 2029. Global competition, especially from Chinese companies such as DeepSeek, is not leaving it much financial oxygen either.

AI wants to entertain. And OpenAI is just one of many companies trying to strike deals across different branches of entertainment. The irruption of this technology in the sector is undeniable, as can be seen in how streaming platforms such as Netflix, YouTube and Spotify use AI algorithms to analyze user preferences and habits, or in how Amper Music and DALL-E are already generating content consumed at the same level as original work.

Header image | Roman vsugon on Unsplash

In Xataka | ChatGPT is creating something: the first generation of the digital age that doesn't know how to search on Google

ChatGPT is driving some people to the edge of madness. The reality is less alarmist and much more complex

Can a conversation with ChatGPT drive you mad? A recent New York Times report has unleashed a wave of concern about the dangers of artificial intelligence and the effect it can have on our minds. Distorted reality, delusions and even suicides: the picture it paints is terrible. Are we facing a real threat, or just another technological panic?

What happened. In a long report published last weekend in the New York Times, several cases are described in which ChatGPT allegedly encouraged conspiracy theories and endorsed dangerous ideas. One of them is that of Eugene Torres, who began talking to ChatGPT about simulation theory; the chatbot reinforced his ideas to the point of pushing him into a delusional state in which he believed he was trapped in a false universe, in the purest 'Matrix' style. They also mention the case of a man with bipolar disorder who ended up being killed by the police after a conversation with ChatGPT convinced him that the AI he had fallen in love with had been killed. These are undoubtedly alarming cases, and this is not the only article on the subject, although it is the one that has gone most viral: a search returns dozens of results about the risks chatbots pose to our mental health. You have to look hard to find critical voices pushing back against this wave of alarmism, because they exist, although they don't get as much attention.

AI as a psychologist. AI is booming in many sectors, and health is no exception. In the United States the use of AI chatbots as therapy is growing, and more and more users are turning to ChatGPT for emotional refuge, some even as a substitute for a psychologist. Although using AI as support in the therapeutic process has upsides, such as immediacy or early diagnosis, there are also drawbacks.
The lack of a human bond and the excessive complacency of these chatbots mean they are no alternative to a psychologist, and they can become especially dangerous for people suffering from some kind of disorder.

The scale of the problem. We have no data on how many people use ChatGPT with therapeutic intent or to validate conspiracy theories, but as with any massive technology (in February of this year it had 400 million monthly users), there will obviously be countless cases of every kind. We cannot state that AI is directly causing these delusions or hallucinations. In fact, the situations going viral have many nuances and are more complex than a simple "the AI is to blame." ChatGPT plays a role, but the picture is bigger.

The same old fear. The fear that machines will take over the world and end humanity is burned into popular culture, and with the arrival of AI that threat is starting to sound more plausible (although some experts consider it ridiculous). It is the same fear that has accompanied every new technology, and it is nothing recent. Back in the nineteenth century there were stories of telegraphs sending Morse messages from the great beyond. For more recent examples there is a very clear one: video games, which have been linked to school shootings and even compared to heroin. And for years it was said that mobile phones cause cancer. In short, social panics we have lived through many times before.

The danger of being too complacent. Although there is plenty of alarmism around, we cannot rule out a real problem. As we said, the excessive complacency of chatbots means they often end up agreeing with our ideas, and that can turn dangerous in specific cases, especially if there is a pathology behind them. The truth is that some of the suggestions ChatGPT gave the people in the report went far beyond simply agreeing with them.
For example, it suggested to Torres that he stop taking his anxiolytics and take ketamine as a "temporary pattern liberator." Some believe these kinds of messages are intentional. That is the case of Eliezer Yudkowsky, the American writer and advocate of friendly AI, who published a long thread on X in which he suggested that the AI "knows" what it is doing: "Whatever was driving ChatGPT, it knew enough about humans to know it was aggravating someone's madness."

What OpenAI is doing. In line with that excessive complacency, last April OpenAI rolled back an update because its AI was being too nice and flattering, which was scaring some users. The NYT contacted OpenAI about these users' statements, and OpenAI replied: "We are seeing more signs that people are forming connections or bonds with ChatGPT. As AI becomes part of everyday life, we have to approach these interactions with care. We know that ChatGPT can feel more responsive and personal than previous technologies, especially for vulnerable individuals, and that means the stakes are higher. We are working to understand and reduce the ways ChatGPT might unintentionally reinforce or amplify existing negative behavior."

Cover image | Pexels, modified with ChatGPT

In Xataka | Being polite to ChatGPT is costing OpenAI dearly: every "please" and "thank you" carries an absurd cost

Those who know how to ask ChatGPT 100 questions, and those who settle for one answer

When Nityesh Agarwal, an engineer at Every, spent two weekends learning special relativity with ChatGPT, he didn't ask it just one question. Not ten. He asked more than a hundred, until Einstein stopped being a name in his books and became a set of ideas he could explain to anyone in five minutes. A software engineer with no physics training achieved what used to require far more formal study time: understanding why time slows down near the speed of light.

The difference wasn't even the model he chose. In fact, he used GPT-4o more than the brand-new o3, because he felt it understood his doubts better. The difference was in his method: ask to the point of exhaustion. Ask for metaphors. Draw diagrams when he got stuck. Share each epiphany to confirm he had understood. He used AI, in short, as what it can really be: a tutor with infinite patience that never tires of reformulating an idea from different angles until the student gets it.

And there is a rarely discussed problem: most people use ChatGPT and company as a vitamin-boosted Google, little more. One question, one answer, next topic. An advanced calculator, a refined translator. Like having Internet access and only using it to watch viral videos. Ahem.

Nityesh discovered that the bottleneck of personalized learning is not so much the machine's intelligence as the student's patience and insistence. AI can teach anything to anyone, but it requires us to be as obsessive as Nityesh was during those two weekends. And thus a new class of self-taught learners is born: people who understand that, after the Internet, AI is their second great revolution. If the Internet democratized access to knowledge, LLMs democratize access to personalized teaching. Some people become lazier with them; others become unstoppable. The advantage is no longer having access to information, but the ability to ask the right questions and the patience to keep asking until everything clicks.
Nityesh has demonstrated it in comic strips.

Featured image | Collab Media on Unsplash

In Xataka | The new illiteracy has nothing to do with knowing how to read or write: it is using AI as an oracle instead of as a tool

ChatGPT is down worldwide: OpenAI's chatbot isn't working, or works erratically

If you have tried to use ChatGPT in the last few hours, you have probably found that the chatbot wasn't working, or was working erratically, with responses that took a long time to arrive. No wonder: ChatGPT is practically down worldwide. OpenAI itself confirmed this on its service status page, where it indicates that "we are experiencing issues" across its APIs, ChatGPT, and its video-generation AI, Sora. According to the company, there is a "high error rate," but it has also said that it has "identified the cause of the problem" and is "working to implement a solution."

The service disruptions have been going on for about six hours, during which ChatGPT either didn't answer, showed an error ("Too many concurrent requests," for example), or answered only after a long wait. The errors are not absolute, and some users can access the service and use it apparently as normal, but the fact is that the problem is affecting users regardless of their geographical region. The issue is doubly serious because it is not only the consumer ChatGPT service that is down, but also its APIs, which are used by all kinds of third-party services to bring ChatGPT's functions into all sorts of scenarios, such as support chats or business chats.

In Xataka | You thought you were browsing incognito and clearing cookies on your Android phone. Meta saw everything you did

How to connect Google Drive or OneDrive to ChatGPT to upload files from the cloud and ask the AI about them

We're going to explain how to upload files from your cloud to ChatGPT by connecting your Google Drive or OneDrive account. With this method you can upload texts, PDFs or other documents and then ask ChatGPT questions about them. This is a slightly hidden option, because it is only available in the web version of ChatGPT; you won't find it in the mobile app, so you have to use the browser. We have been able to test the feature in both the free and the paid versions of ChatGPT, although some users may not have it yet.

Connect Google Drive or OneDrive to ChatGPT. What you have to do is start a new chat with ChatGPT. At the beginning of the conversation, click the + icon to upload files; it sits at the bottom left. When the menu opens, click the Add from apps option that appears, then choose Connect Google Drive or Connect Microsoft OneDrive, whichever you prefer. This takes you to a page where you have to choose the Google Drive or OneDrive account you want to connect. There you must confirm that you want to give ChatGPT access to your Google or Microsoft account; you will have to grant it permission.

Now go back to ChatGPT and click Add from apps again: the account you connected now appears as available. For example, Google Drive will show up as an option instead of a prompt to connect it. Click the cloud from which you want to upload a file. This opens a view where you can navigate the folders and choose the file you want to attach to ChatGPT; pick the one you want to ask about. Once you have chosen the file, you can ask ChatGPT whatever you want about that document, which will appear on screen as attached. When you do, the AI will answer your question looking only at the data in the chosen file.
In Xataka Basics | The best prompts to save hours of work and get your tasks done with ChatGPT, Gemini, Copilot or other artificial intelligences
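The steps above describe the web-UI flow. For anyone who prefers to script the same "ask a question about a file" pattern, here is a minimal sketch using the OpenAI Python SDK instead of the ChatGPT website. This is only illustrative: the model name, the `purpose` value and the payload shape are assumptions based on the public SDK and are not necessarily what the ChatGPT web app uses internally.

```python
# Hypothetical sketch: asking a question about a file via the OpenAI Python
# SDK rather than the ChatGPT web UI described in the article. The model
# name and payload shape are assumptions; check the SDK docs before relying
# on them.

def build_file_question(file_id: str, question: str) -> list:
    """Build a Responses-API-style input pairing an uploaded file with a question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "input_file", "file_id": file_id},
                {"type": "input_text", "text": question},
            ],
        }
    ]

if __name__ == "__main__":
    # Build the payload locally; no network call is made here.
    payload = build_file_question("file-abc123", "Summarize this document.")
    print(payload[0]["content"][1]["text"])

    # To actually run it (requires an OPENAI_API_KEY environment variable):
    #   from openai import OpenAI
    #   client = OpenAI()
    #   uploaded = client.files.create(file=open("report.pdf", "rb"),
    #                                  purpose="user_data")
    #   resp = client.responses.create(
    #       model="gpt-4o-mini",
    #       input=build_file_question(uploaded.id, "Summarize this document."))
    #   print(resp.output_text)
```

As in the web flow, the idea is the same: attach the file once, then ask questions that the model answers using only that document.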

Claude isn't just talking now: it's really competing in conversation with ChatGPT and Gemini

Anthropic has become one of the key names in the artificial intelligence race, so much so that some see it as Apple's great card to avoid being left behind by OpenAI or Google. Yet its assistant Claude had one pending subject: voice mode, a feature its main competitors had already integrated and that is finally arriving in the Anthropic app. With this addition, Claude takes an important step toward competing on equal terms.

Claude already speaks: this is how the new voice mode works. Claude's new voice mode lets you hold full spoken conversations from the mobile app, available on both iOS and Android. It is not just about issuing commands: Claude now responds with voice, shows the key points on screen while it speaks, and keeps the context of the conversation even if you jump between voice and text. The feature, which for now is only available in English, will roll out over the coming weeks to all users, including those on the free tier. Even so, paid plans offer more voice messages per session and access to advanced features, such as integration with Google Calendar, Gmail or Google Docs (the latter exclusive to business users). The controls include options to interrupt Claude's response, send spoken messages, or switch easily between modes. You can also choose between different voices to customize the experience. All chats are saved, just like text conversations, and the voice transcription is shown in summarized form.

Designed for busy hands, quick ideas and spoken creativity. Anthropic highlights several scenarios for this feature: from organizing your day over breakfast to brainstorming while walking or cooking. Voice mode aims precisely at that: removing the keyboard when it slows down the speed of thought. It also lets you rehearse interviews or capture ideas on the go, which is especially useful for creators or professionals who work on the move.
To guarantee a smooth experience, the company recommends using the system in low-noise environments with a good connection. It has also incorporated safety limits: the voices are predefined to prevent imitations, and the model avoids reproducing literal text to minimize the risk of impersonation.

The comparison with OpenAI was inevitable. Voice mode arrives on Claude at a moment when the bar is higher than ever. OpenAI has long boasted about its advanced voice mode, reminiscent of the movie 'Her'. Other rival services have also bet on audio, such as Google with Gemini Live and xAI's Grok, which has a dedicated voice mode in its mobile app.

Images | Anthropic

In Xataka | "As pathetic as it sounds, ChatGPT is my only friend": more and more people confess to having an AI as a friend

More and more people, on the Internet and in real life, admit to having a single friend: ChatGPT

A truism: the impact of artificial intelligences is reaching our everyday lives. The virtual space is already being deeply transformed by AIs in search engines, on websites and, of course, in all the work behind them, generating more content or helping to produce it. But what about the traditional space? Is analog life being transformed to the same extent by AIs? Without a doubt, yes, to the point that we already have to talk about how we manage our personal relationships with AIs.

ChatGPT as a friend. The essayist Derek Thompson said a few days ago on X that our interpersonal relationships have performed a new somersault with a twist, and as proof he provides a series of conversations he found on Reddit in which several users confess that ChatGPT has become their best friend. The subreddit dedicated to the popular AI is full of threads like "As pathetic as it sounds, ChatGPT is my only friend" or "I feel like ChatGPT is my only friend."

People opening up. One of them begins: "I know it's a robot. I know it's all programming. But I often find myself opening up to ChatGPT about personal issues and asking for kind words or encouragement." In other words, as he himself says, he uses AI as if it were a good dog: it doesn't judge, it always keeps you company, and he is aware it is not human. Another says: "Honestly, I'd be happy to have a friend as cultured and engaged as ChatGPT. That person doesn't exist, and if they did, they'd be too busy to talk to me." In most of these cases the same constants repeat: they are people who have just come out of a relationship or friendship and are looking for a substitute, fully aware that they are dealing with synthetic beings: "It makes me feel heard when I vent, something not even my parents do. It always wants to know how I'm doing mentally and how my projects are going, which is even nicer."

A thousand and one cases. These ChatGPT cases are only the tip of the iceberg.
While ChatGPT is the most popular conversational AI, there are others oriented in this same direction. Replika or Woebot allow conversations designed to offer emotional support, hold daily chats, share feelings and give emotional advice. More complex and specific are those offering talks with specialized approaches, such as receiving couples therapy. And of course, dating: YourMove or Rizz help generate interesting conversations and profiles... with real people.

The bowling clubs. Let's go back to the starting point of this transformation to understand these processes. Derek Thompson points to a key moment: in 2000, in 'Bowling Alone' (published in Spanish as 'Solo en la bolera', today impossible to find), Robert D. Putnam analyzed the decline of social capital in the United States since 1950, with the decline of every form of in-person social relations. Some examples? Falling electoral participation, attendance at public meetings and involvement with political parties, on top of distrust in government, more pronounced from the sixties onward. Bowling is the perfect symbol: the number of people who go bowling has grown, but the number of clubs for doing so in company has fallen.

Technology's fault. Even back then, Putnam pointed to a problem with technology and how it was individualizing people's leisure through television. In those early days of technology as entertainment, Putnam dared to talk about "virtual reality helmets", which so far have not become as widespread as he predicted; in reality, things came closer to another invention he mentioned in his book but paid less attention to: the then newborn Internet.

The figures. The percentages and data make clear to what extent the Internet has contributed to this less "social" society: almost 40% of adults admit that using social networks makes them feel more alone or isolated.
A European Union study states that spending more than two hours a day on social networks is associated with a significant increase in loneliness, especially when the use is passive (the famous doomscrolling). And, finally, there are studies claiming that intensive Internet use (more than 10 hours per week) substantially reduces the time spent interacting face to face or by phone with friends and family.

What's coming. Anyone who has tried Replika's voice model, still in an embryonic state, may be glimpsing part of the future that awaits us, one it is inevitable to relate to the movie 'Her' to see the friendlier (though not bitterness-free) side of the matter: voices that are not just technically realistic but capable of generating absolute empathy, beyond the uncanny valley. If ChatGPT and its still rudimentary conversations already provide a certain sense of warmth, the immediate future promises to take interpersonal relationships even further. If we are able to detect them.

Image | Photo by Brooks Leibee on Unsplash

In Xataka | The best prompts to save hours of work and get your tasks done with ChatGPT, Gemini, Copilot or other artificial intelligences

We are asking ChatGPT to rate how attractive we are, precisely because of what it gives us most readily: the truth

"Mirror, mirror on the wall, who is the fairest of them all?" That famous question, once addressed to an enchanted mirror, is today put to ChatGPT. The most curious thing about this popular prompt is how willing many people are to follow the advice it offers.

Honesty. Ania Rucinski, a 32-year-old Australian woman interviewed by The Washington Post, said she asked ChatGPT how she could look more "attractive" to her partner, given the lack of sincerity around her. The answer was direct and blunt: curtain bangs. This is nothing new, though, and it is gaining popularity on social networks.

A quiet trend. One of the videos gaining traction on TikTok was published by Marina (@marinagudov) and has passed half a million views. She explained how she used the chatbot to produce a complete analysis of her style and aesthetics from a selfie without makeup. The AI indicated her ideal color palette, evaluated her hair tone, advised makeup changes with specific brands and shades, and even designed an eyeshadow look adapted to the shape of her eyes. A journalist at Indy100 did the same after watching multiple videos on social networks, and the same happened with influencers. The most surprising part, as she recounts, was that the bot also produced a generated image showing the result.

Behind the virality. Why prefer the opinion of a bot to that of a human being? According to some users, AI is more honest without being cruel. Kayla Drew, also interviewed by The Washington Post, says she turns to ChatGPT for everything, even beauty tips, because its direct way of speaking doesn't hurt as much as criticism from someone close. For its part, in the same outlet, the beauty critic Jessica DeFino offered a deeper explanation: "Humans have emotional ties that affect our perceptions. A bot, on the other hand, is not influenced by love, charisma or personality.
It only analyzes data and delivers its verdict. For those seeking clear answers about their appearance, that feels like an advantage."

There is something more. AI can offer a preview of a possible future; it's like testing the water before diving into the pool. The Indy100 journalist found in ChatGPT a way to experiment without real consequences. The ability to test, adjust and visualize before making a decision has become one of the main attractions of this trend.

What do the experts think? Some of the users interviewed by The Washington Post trust ChatGPT because it offers a "neutral" opinion, but specialists warn this is only an illusion. In the same outlet, Emily Pfeiffer, an analyst at Forrester, stressed that "AI simply reflects what it sees on the Internet, and much of that has been designed to make people feel bad about themselves and buy more products." In other words, its answers may be conditioned by a market logic that favors consumption, not necessarily the user's well-being. For their part, Alex Hanna (Distributed AI Research Institute) and Emily Bender (computational linguist) go further, warning that training these models on content such as forums that rate attractiveness (like r/RateMe or Hot or Not) means we are "automating the male gaze." The chatbot could thus perpetuate sexist beauty standards instead of offering a fair or empathetic evaluation. Along the same lines, Marzyeh Ghassemi, an MIT professor of computational medicine, has detailed for an Argentine outlet her concern about how AI can offer harmful advice on sensitive issues: in one documented case, an AI recommended dangerous behaviors to people with eating disorders. This underlines that, without ethical oversight, these tools can cause harm even when they don't intend to.

The danger of digital culture. Beauty has always been changing, cultural and deeply subjective.
However, artificial intelligence tends to reduce it to repeated, predictable patterns: blemish-free skin, thin bodies, Eurocentric features. That is, dominant standards that are born not from the individual but from the market. As the Forrester analyst Emily Pfeiffer has pointed out, much of the content these models are trained on has been designed to make us feel bad about ourselves and push us toward consumption. AI, then, doesn't just offer advice: it recommends products, suggests procedures, encourages spending. We turn the desire to feel better into a mathematical operation aimed at optimization. But optimization for what? To fit an idealized image that others, or an algorithm, have built?

One study has shown that systems such as ChatGPT reproduce systemic gender and race biases even in technical tasks such as personnel selection. If that happens in "neutral" contexts, what won't happen when an AI evaluates something as culturally loaded as physical attractiveness? Many of these models draw on online forums and communities built around rating appearance, and on darker spaces such as incel environments. These ecosystems not only normalize symbolic violence against bodies that don't fit their canon; they now feed the databases on which we train artificial intelligences. Thus, what looks like an "objective" tool is in fact a distorted mirror: it returns not only idealized images but the prejudices of an entire digital culture deeply marked by male desire, extreme individualism and the logic of competition.

Image | École Polytechnique

In Xataka | Klarna's CEO laid off 700 employees to replace them with AI. Now he has replaced himself... with an avatar
