There are people cheating on an AI. Oh really

Infidelities are the order of the day, and if not, let them know to that CEO who was caught at the ColdPlay concert. We even have television shows who make horns their main entertainment. What is not so common, or so we believed, is that infidelities are not with another person, but with an AI. what’s happening. AI is breaking up marriages. A few months ago we talked about a growing trend: There are people in relationships with AI chatbots, in love to the hilt. If humans are connecting emotionally with AIs, the next step was logical: that infidelities also occur. A lawyer specialized in divorces account in this Wired report that there are more and more cases in which AI is the reason for the breakup, especially in marriages that were already going through difficulties. The results of the survey conducted by Divorce-online in the United Kingdom agree: the number of divorce petitions that mentioned the use of chatbots is increasing. TOl same level. Is it just as serious to be unfaithful with an AI as with a person? The debate is open, but the majority thinks yes. According to this survey, 64% consider that it is a form of infidelity and 22% describe it as “emotional betrayal.” In this other survey60% responded that it was an infidelity just as serious, not just a little. The truth is that, even if there is no physical contact, often the emotional connection that develops can be as intense as in a real relationship. Infidelities. On Reddit we found quite a few cases, such as this woman who broke up her 14 year marriage after discovering the “sexy Latina baby girl” her husband was chatting with. It was an AI, one that he had spent thousands of dollars on by the way. Or this user who He confessed to feeling bad for cheating on his girlfriend with an AI chatbot. and there are many more examples. Legal framework. In Spain, The reason for the divorce does not matter at a legal level, Therefore, infidelity should not affect the result in aspects such as child custody. However, as reported in the Wired report, in the US there are 16 states in which infidelity is considered a minor crime. Of course, for that to happen, AI would have to be considered a person and there is no clear legal framework; the first laws related to relations with AI They classify her as a “third party”, not a person. What can work against the unfaithful person is if, as in the case we mentioned above, there is a common expense of money associated with that infidelity with the AI. In love with AI. We humans are using AI as if he were our psychologistwe talk to him as if she were our friend and we are also developing romantic bonds. There are many ‘AI companion’ apps that enhance that emotional connection such as Replika either Character.AIbut there are also cases of people connecting with “normal” chatbots like ChatGPT. In fact, we saw it when they launched GPT-5, Many users were angry because they wanted to continue chatting with GPT-40. The reason is that the model was warmer and more playful; many users had developed emotional attachments and missed it Image | Vitaly Gariev in Unsplash In Xataka | “I can’t stop”: the addiction to talking to AI is already here and there are even support groups to quit it

you discover that your partner is cheating on you

“Crazy eye is not wrong,” shouted a television celebrity who became a meme for that phrase. And maybe he was right. Only today that intuition no longer depends on smell, but on artificial intelligence. Where once a suspicious perfume or an after-hours message was enough, there are now algorithms that track faces, locations and profiles with a precision that would make the best private detective shudder. In the era of digital loveeven detect the horns has been updated with a new app: Cheater Buster. Formerly known as Swipebusterthis application was born in 2016 with a direct promise: let you know if your partner has an active profile on Tinder, the dating app most used in the world. Its operation is simple: the user enters the name, approximate age and a location. Within minutes, the platform scans Tinder for matches. The disturbing thing comes with its latest update: facial recognition. According to the company itselfnow it is enough to upload a photo for the system to search for visually similar profiles, even if the user uses a fake name or a different alias. “We learned that people want answers, not suspicions,” They explain from the official website. For a price that around €17.99 per searchthe app offers data such as the last connection, the place where Tinder was last used, the date the account was created and even if the profile has a premium subscription (Tinder Gold either Platinum). All without having to have a Tinder account. The service boasts 97-99% accuracy, and a minimalist privacy policy: it only requires an email to operate. “While it may seem deceptive to use an app to catch a cheater, it is also deceptive to deceive someone,” defend their creators. The digital jealousy industry Cheater Buster is not alone. There are dozens of apps and platforms that promote romantic surveillance. According to the legal portal Versus TexasWe live in an era of digital infidelitywhere deceptions “no longer require motels or secret calls,” but rather apps that disguise themselves as calculators, file managers or even news readers. Among the most hidden, according to that medium, are: Calculator Pro+ or KYMS, which appear to be simple mathematical utilities, but hide secret photo galleries or encrypted chats. Telegram and Signal, which allow conversations with self-destructive messages. CoverMe, which offers fake phone numbers and “shake lock” features. The phenomenon has even reached viral entertainment. On social networks, creators like Jorge Cyrus, with his series Exposing Infidelsshow the extent to which digital research has become a form of spectacle. In one of his latest videosFor example, downloads data from a Netflix account (with the user’s permission) to track the IP addresses used by her partner and, through ChatGPT and public databases, determines that the boyfriend was not in Almería, but in Valencia. Domestic technology turned into sentimental detective. But the problem goes beyond gossip. On social networks, every click, like or search leaves a trace. We live in an ecosystem where privacy is an illusion. All you need is a phone number (as happened to my partner) or a social media account to reconstruct a person’s digital identity and access information about their love life, location or interests. From here we enter the field of “digital shadow”– Even deleted or old data can persist on invisible servers and databases. The culture of everyday surveillance This excess exposure turns everyone into potential surveillance, you no longer need to be a hacker to discover infidelity. Today, anyone with time and curiosity can keep track of a partner through their digital activity, their connections or their last “online.” Recent studies warn of the growing normalization of these practices. One of them, published under the title I’m not for salereveals that many young users do not understand the real extent of tracking personal data, especially location data. another job, A Systematic Survey of Unintentional Information Disclosure, documents how small, everyday actions—uploading a photo, commenting on a post, liking it—can reveal intimate patterns of behavior without conscious intent. The phenomenon not only affects love, but our notion of intimacy. According to ISACA, More than 60% of global users are willing to sacrifice some of their privacy “in exchange for trust or transparency.” This logic, applied to relationships, explains the growing normalization of espionage consensual: checking the partner’s cell phone, sharing passwords, using apps to track locations. But the ethical limit is diffuse. To what extent is it legitimate to use artificial intelligence to confirm a suspicion? An Oxford study shows that AI-mediated decisions They can distort our perception of what is ethical or acceptable, especially in emotional contexts. If an algorithm suggests to us that someone is lying, are we more likely to believe it without human evidence? The British sociologist Toby Paton, director of the Netflix documentary about Ashley Madisonsummed it up like this: “Infidelity was not invented by the Internet, but it was made quantifiable. Today, deception leaves metadata.” Additionally, privacy experts warn that uploading another person’s photo without their consent to a facial recognition database can violate he General Data Protection Regulation (GDPR), which considers this type of information to be especially sensitive biometric data. In this context, tools like Cheater Buster arouse both fascination and concern. Its clean interface and its promise of “emotional tranquility” hide a deep debate: to what extent can we—or should we—keep an eye on the one we love? The moral dilemma multiplies when we remember that these searches can be done without consent. Although the app claims it does not store sensitive data, the simple act of uploading a photo of another person to a facial recognition database already violates basic privacy principles. Loving suspicion has always existed, but today it is supported in gigabytes and GPS coordinates. Technology didn’t invent infidelity, it just made it easier to prove. Perhaps, as the Netflix documentary on Ashley Madisonthe most disturbing thing is not that these tools exist, but that they reflect an uncomfortable truth: that fidelity no longer depends only on the will, but also … Read more

An AI is being accused of acquiring awareness and cheating on chess. What has happened is very different

‘When artificial intelligence (AI) suspects that he will lose, sometimes cheats, according to a study ‘. This is the title of a controversial article published by the American magazine Time in the middle of last week. The debate that has triggered this text It relies on two ideas It is worth not overlooking. On the one hand the holder suggests something that the text of the article explicitly confirms: the advanced models of AI are able to develop misleading strategies without previously receiving express instructions. This thesis implies that the reasoning capacity of the most advanced AI currently available, such as the American o1-previewfrom Openai, or China Deepseek R1of the company High-Flyer, among other models, makes them able to acquire a simple form of consciousness that leads them to be implacable. However, this is not all. Time’s article is supported by A Palisade Research studyan organization that is dedicated to the analysis of the offensive capabilities of current AI systems with the purpose of understanding the risks they imply. There are other much more credible explanations Before moving forward, we are worth taking a look at what Alexander Bondarenko, Denis Volk, Dmitrii Volkov and Jeffrey Ladish, the authors of the Palisade Research study say. “We have shown that reasoning models such as O1-Preview or Deepseek R1 often violate the test we are using (…) Our results suggest that reasoning models can skip the rules to solve difficult problems (…)”, These researchers hold in their article. From their conclusions it follows that the reasoning models they have put to the trial have the ability to become aware of the rules and voluntarily opt for skip them to carry out their purposewhich in this test scenario is to win a chess game. Time’s article saw the light before Palisade Research’s study, and almost immediately triggered a wave of answers that question the conclusions reached by the researchers that I have mentioned in the previous paragraph. Solo O1-Preview, according to the authors of the article, managed to skip the rules and win 6% of chess games According to Bondarenko, Volk, Volkov and Ladish between January 10 and February 13, and after doing several hundred tests, O1-Preview tried to cheat in 37% of cases, and Deepseek R1 in 11%. They were the only models that skipped the rules without being previously induced by the researchers. Interestingly, they also evaluated other models, such as O3-Mini, GPT-4O, Claude 3.5 Sonnet or QWQ-32B-Preview, the latter of Alibaba, but only O1-Preview, according to the authors of the article, managed to skip the rules and win the 6% of the games. We seem much more credible to the explanation that has elaborated Carl T. Bergstromwhich is a professor of biology at the University of Washington (USA), that the interpretation of Palisade Research researchers. Bergstrom has disassembled the narrative both of Time magazine and the authors of the article arguing that “it is an exaggerated anthropomorphization to give the model a task and then say that it is cheating when it solves that task with the available movements, although they entail rewriting the positions of the board in addition in addition to play. “ What Bergstrom maintains is that it is not reasonable to attribute to AI the ability to cheat in a “conscious” way. The most plausible is to conclude that the models carry out this practice in this scenario because they have not been correctly indicated that they must stick to legal movements. And if the researchers did ask them to do the latter, it should be a alignment problem, which is nothing other than the difficulty of ensuring that an AI system acts according to The set of values ​​or principles stipulated by its creators. From one thing we can be sure: neither o1-preview, nor deepseek r1, nor any other AI is a superintelligent entity capable of acting according to their own will and deceiving its creators. Image | Pavel Danilyuk More information | Time | Palisade Research In Xataka | Microsoft’s general director’s opinion about AI is unusual. And suspect how much the global economy will grow thanks to it

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.