We don't know whether your chat history belongs to you, to OpenAI, or to the government

Everything you write can be used against you. That is what the United States justice system is raising by trying to force OpenAI to preserve indefinitely all records of our conversations with ChatGPT. Not only that: it also demands that OpenAI keep the chats we theoretically believed we had deleted. The situation makes the question inevitable: who do all those data belong to?

The New York Times vs. OpenAI gets complicated. The whole current situation derives from the lawsuit pitting the prestigious newspaper against the artificial intelligence company. That confrontation has been underway for 17 months. The NYT's initial argument was that OpenAI had trained its models on NYT content, which its models also reproduced in responses to users. Now things go further.

Don't even think about deleting the chats. After a recent request from the NYT, Judge Ona Wang, who is handling the case, ordered OpenAI to begin indefinitely preserving records of all potentially relevant content, including temporary chats and even the text output generated through the API by paying users. Until then, OpenAI's data retention policy imposed a 30-day limit on preserving conversations, after which they were theoretically erased for good.

Paywalls. The fear of the NYT and other media outlets is that users who are using ChatGPT to bypass paywalls "could be more prone to erase all their searches to cover their tracks," OpenAI explained in the court proceedings. According to the plaintiffs, evidence proving this is missing because OpenAI has only shared samples of chat records that users had agreed the company could retain.

OpenAI appeals. OpenAI's official response to the NYT's data demands is clear: "We firmly believe this is an overreach. It endangers your privacy without actually helping to resolve the lawsuit. That is why we oppose it." Company officials explain that they have asked the judge to reconsider the order, stressing that "this indefinite retention of user data violates industry standards and our own policies."

Which data are affected by the order. The judge's demand is very broad, but there are important details that clarify who is and isn't covered:

  1. If you use the free version of ChatGPT, a ChatGPT Plus, Pro, or Team subscription, or the OpenAI API without a ZDR (Zero Data Retention) agreement, you are affected and your chats could be preserved indefinitely.
  2. ChatGPT Enterprise and ChatGPT Edu accounts are not affected by this order.
  3. API users who have opted into ZDR are not affected by the order.

What is ZDR. The "ZDR amendment" refers to the Zero Data Retention policy, which guarantees that prompts are not logged and that models are not trained on our data. This kind of data handling is especially important for business use, which is why it is enabled by default in ChatGPT Enterprise. For the rest of the plans, ZDR is not enabled by default, and interested companies and individuals must contact OpenAI to negotiate the terms. There are no published prices for this option, but it is an extra service and as such adds a cost on top of using the OpenAI models via the API.
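To make the distinction concrete: ZDR itself is a contractual, account-level agreement negotiated with OpenAI, not something you toggle in code. Separately, the API does expose a per-request `store` flag that asks OpenAI not to retain a completion for later retrieval. The minimal sketch below builds such a request payload; the model name is just an example, and whether any of this survives a court-ordered preservation mandate is exactly the question the article raises.

```python
import json

# Illustrative sketch only. The "store" field is a documented per-request
# flag in OpenAI's API; true ZDR is an account-level contract and is NOT
# expressed in the request itself. A preservation order may override both.

def build_request(prompt: str) -> dict:
    """Build a chat completion payload that opts out of storage."""
    return {
        "model": "gpt-4o",  # example model name
        "messages": [{"role": "user", "content": prompt}],
        "store": False,     # ask OpenAI not to retain this completion
    }

payload = build_request("Summarize today's headlines.")
print(json.dumps(payload, indent=2))
```

The point of the sketch is the gap it illustrates: a third-party app can send `"store": False` on every request, yet still cannot promise its users anything that an indefinite-preservation order would not undo.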

No evidence. According to OpenAI, there is no evidence that it has intentionally destroyed data, and everything is speculation. There is also no evidence that users who infringe copyright by using ChatGPT to bypass paywalls are more likely to erase their chats. "OpenAI did not destroy data, and it certainly did not delete data in response to the events of the lawsuit. The (court) order appears to have incorrectly assumed otherwise."

Sensitive data. The company defends its duty to protect "the data and privacy of its users" and explains that millions of people use ChatGPT daily for reasons ranging "from the mundane to the deeply personal." That means these users share sensitive data involving not only financial or medical information but also their feelings and private reflections.

Ramifications. The impact of this court order is potentially huge. As a user called kepano stated on X, "if I've understood it correctly, this means that the data retention policies of applications that use the OpenAI API simply cannot be honored." In other words, if a third-party company that uses OpenAI's models to provide its service promises that your data will be kept private, it cannot guarantee that promise. The implications for users of all kinds are clear. Ars Technica cited a LinkedIn user who suggested that this court order creates "a serious breach of contract for every company that uses OpenAI," and also highlighted posts on X from users claiming that "each and every one of the AI services" powered by "OpenAI should be worried" about this situation.

Who do the data belong to? This court order opens a disturbing debate: who controls the data we exchange with ChatGPT and, by extension, with any other chatbot? These companies are theoretically responsible for managing and deleting that data, but is it yours?

  • ChatGPT and Gemini do use chats to train their models by default, although this behavior can be disabled. Neither Claude nor Copilot does so, for example. In those last two cases the data is somewhat "more yours."
  • But they still keep the data for a variable period, usually 30 days.
  • With this court order, the US shows that it can access that data if it needs to, although there must be a judicial investigation behind it, like the one being carried out with The New York Times.
  • And yet, this data is ever more valuable as a source of information, not only for private companies but also for intelligence agencies and services. And we already know how the NSA operates.

How they normally act. Data retention policies are similar in all cases, and all providers delete those inputs and outputs (chats) after 30 days. These companies also offer zero-data-retention options (Anthropic has one too), which can be applied, for example, to business or private uses. However, the doubts about who ultimately controls the data remain, and this trial has only increased them.

Chatbots and confidentiality. Given the increasingly sensitive use of these chatbots, both professional and personal, there should be much stronger guarantees that ChatGPT, Gemini, Claude, and the other chatbots on the market treat these conversations in an absolutely confidential way. That is exactly what Sam Altman raised a few hours ago when he said that "talking to an AI should be like talking to a lawyer or a doctor."

What about encryption? One option would be to encrypt these conversations with chatbots end to end, as instant messaging services do, but in practice that is unfeasible. The servers that receive our request need to "see" it in order to process it and return an answer, and end-to-end encryption would make that impossible. Apple has already proposed a solution with Private Cloud Compute, but for now the major companies take an approach in which these conversations are handled without that kind of encryption. The privacy guarantee is whatever they agree with their users, but of course they can access those conversations if they need to.

Image | Kaitlyn Baker

In Xataka | You thought you were browsing incognito and deleting cookies on your Android phone. Meta saw everything you did.
