WhatsApp's privacy seemed bulletproof, until a state prosecutor tried to erase incriminating messages

The State Attorney General, Álvaro García Ortiz, thought he had eliminated messages that could help incriminate him in a crime of disclosure of secrets. It wasn't really so, because it doesn't matter if you delete your messages on WhatsApp: Google keeps them all the same. We have now learned that those messages appear to have been successfully recovered, and the question is: how were they read?

Metadata to compare against other data. During the investigation, the UCO also searched the electronic devices of the chief provincial prosecutor of Madrid, Pilar Rodríguez, as 20minutos points out. The contents of the Zip folder held by the magistrate could also have been sent by Meta/WhatsApp, which would not have handed over the messages themselves (in theory it has no access to them) but rather the metadata of García Ortiz's conversations. That metadata could be used to compare and contrast against Rodríguez's messages (she did retain those conversations), thus providing evidence for attributing them to the Attorney General.

Nothing was actually deleted. As El Confidencial points out, the Supreme Court magistrate, Ángel Luis Hurtado, has indicated that the effort to recover the messages deleted by the State Attorney General, Álvaro García Ortiz, appears to have been "successful." García Ortiz is charged with an alleged crime of disclosure of secrets.

Google and WhatsApp lend a hand. The Central Operative Unit (UCO) of the Civil Guard will be responsible for producing an expert report on the messages. As El País indicates, in January the Supreme Court made a request to the Irish branches of Google and WhatsApp (Meta) through Eurojust, an agency for judicial cooperation in criminal matters. It is not specified which of them responded, but the magistrate has received documentation in a Zip folder that seems to contain the messages deleted by García Ortiz in October 2024.

How were the messages read?
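The reports do not detail how investigators would cross-reference the two sources, so the following is only a toy Python sketch of the metadata theory described above, with entirely hypothetical data and field names: metadata (participants and timestamps, no content) attributed to one account is matched against the messages the other party retained.

```python
from datetime import datetime, timezone

def match_messages(metadata, retained):
    """Pair metadata records from one account with messages retained on
    another device: same participants, timestamps within a tolerance."""
    matches = []
    for meta in metadata:
        for msg in retained:
            same_parties = (meta["sender"] == msg["sender"]
                            and meta["recipient"] == msg["recipient"])
            close_in_time = abs((meta["ts"] - msg["ts"]).total_seconds()) <= 2
            if same_parties and close_in_time:
                matches.append((meta, msg))
    return matches

def ts(s):
    return datetime.fromisoformat(s).replace(tzinfo=timezone.utc)

# Hypothetical metadata handed over for one account (no message content)...
metadata = [{"sender": "A", "recipient": "B", "ts": ts("2024-10-14T09:00:01")}]
# ...cross-referenced against messages retained on the other party's phone.
retained = [{"sender": "A", "recipient": "B",
             "ts": ts("2024-10-14T09:00:02"), "text": "hola"}]

print(len(match_messages(metadata, retained)))  # 1
```

A match does not prove content, but it supports attribution: it shows that a message with those exact participants and timing existed on the deleted account.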
The essential question is, of course, how it is possible that the courts ended up having access to those messages. The UCO already proved that García Ortiz erased his WhatsApp messages not once but twice on October 16, 2024, when he was charged. He even changed phones and restored the one he had been using until then to factory settings. The presumed goal was to prevent those messages from incriminating him, but it turned out not to be enough. The metadata theory is a strong option, but there are others.

End-to-end encryption is still there. To begin with, we must clarify that WhatsApp has for years used an end-to-end encryption protocol for all conversations. Only the sender and the recipient (or recipients) of a message can read it; no other person or entity can decipher those messages. Not even Meta, through whose servers texts, images, video and any other type of content are sent and forwarded. The key was not there, and there are other ways to access those messages.

Option 1: physical access to the device. The most obvious way to access a user's WhatsApp messages is to have physical access to their mobile device. In that case forensic experts can, with the appropriate tools, obtain the key to decipher the messages from the WhatsApp database, even if they have been deleted. Here García Ortiz erased the messages and restored the terminal to its factory state, which probably made it impossible to recover them from the device even with physical access. But there was another way.

Option 2: backups, surely the key. This is probably the key to the successful recovery of the Attorney General's messages. WhatsApp users can back up their messages to cloud services such as Google Drive or Apple iCloud, but pay attention: by default those backups are not encrypted. It is users who must proactively enable encryption for backups, and perhaps García Ortiz did not.
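Meta does not publicly document its backup encryption at this level of detail, so here is only a generic sketch of the principle using Python's standard library: when the backup key is derived from a password, the backup is only as strong as that password, while a random 64-digit key is effectively unguessable.

```python
import hashlib
import secrets

def derive_key(secret: str, salt: bytes) -> bytes:
    """Derive a 256-bit key from a secret via PBKDF2 (illustrative only)."""
    return hashlib.pbkdf2_hmac("sha256", secret.encode(), salt, 100_000)

salt = secrets.token_bytes(16)

# Weak path: the backup key is only as strong as the chosen password.
weak_key = derive_key("1234", salt)          # ~10^4 guesses to brute-force

# Strong path: a random 64-digit key, a search space of ~10^64.
strong_secret = "".join(str(secrets.randbelow(10)) for _ in range(64))
strong_key = derive_key(strong_secret, salt)

# Both are 32-byte symmetric keys; only the attacker's search space differs.
print(len(weak_key), len(strong_key))  # 32 32
```

This is why a short password on a backup undoes much of what end-to-end encryption protects: the ciphertext is fine, but the secret guarding it can be guessed.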
That would explain how Google, whose help was requested, could access that data and send it to the investigating magistrate.

If you want to delete your messages, watch out for backups. WhatsApp users can do nothing about metadata, which Meta does keep in any case, but they can effectively erase the messages themselves if they want to. As this case teaches us, it is not enough to delete them from our phone: if we make backup copies of our messages, it is important to enable encryption for those backups.

A special warning about backups. Take special care with backup encryption, because it does not work like end-to-end encryption. The copies are encrypted with a password/passphrase that only you know, so it should be strong enough not to be broken by brute-force attacks, for example. WhatsApp in fact offers the option to create a random 64-digit key, but you have to set it up yourself. There are also doubts about how Google/Apple/Meta manage that encryption password, and whether they can decipher it in some way for potential judicial requests. Be that as it may, the other solution, of course, is not to back up your messages at all unless you consider it absolutely essential.

Image | State Council | Brett Jordan

In Xataka | If the question is whether your company can add you to a work WhatsApp group, the law leaves no doubt: it depends on who pays

How to use and configure ChatGPT to maximize your privacy with artificial intelligence

We will explain everything you should configure to maximize your privacy in ChatGPT, OpenAI's artificial intelligence chatbot. One of the methods the company uses to train its AI is learning from conversations, and this may not be good for your privacy, but there are some things you can take into account to improve it.

Starting from the premise that total privacy hardly exists in a tool that collects your conversations, there are some things you can do to improve it. In addition, we will also give you other tips you should take into account beyond the settings. We base these tips on the ChatGPT web version, with screenshots from it. However, all the settings are available in the other versions, including the mobile and desktop applications.

Use temporary chats. The first resource you should take into account is to use temporary chats for sensitive conversations or ones containing data you want to keep private. By pressing the Temporary chat button you will generate something similar to a browser's private window. These chats will not be included in your ChatGPT conversation history. In addition, their content will not be used to train the company's GPT models, nor will it feed ChatGPT's internal memory. In short, they are chats that will be as if they had never happened.

Don't give away your data to improve ChatGPT. By default, what you write in ChatGPT will be used to train its models, as will your voice recordings or videos. However, you can configure the application so that what you write is not used to train the artificial intelligence. To do this, you have to enter the settings of the ChatGPT app or website you use. There you must enter the Data controls section within Account, and deactivate the option Improve the model for everyone. Once deactivated, OpenAI will no longer use your content.
Prevent it from drawing on other chats. ChatGPT stores by default some key data from your past chats, and these can sometimes appear in new conversations. For example, yesterday I asked it to draw me and it put me in the T-shirt of a music band I had asked about a month earlier. If you prefer to avoid using past chats in your new conversations, enter the settings, go to the Personalization section, and disable the option Reference saved memories, which will appear enabled by default.

Manage saved memories. Within the Personalization options where you deactivated the use of memories, you have an option to manage your saved memories. This will let you review all the key data ChatGPT has stored to use later. On this screen for managing ChatGPT memories you can delete specific items you don't want taken into account in other chats, or directly delete all the memories stored so far.

Manage your chat history. ChatGPT has a history column where you can see all the conversations you have had so far with the AI. These can be useful for recovering specific chats on specific topics, especially if you give them identifiable names and only ask about one thing in each. However, if you don't want them to appear there, or you don't want other people with access to your ChatGPT account to see what you have written, you have the option to delete or archive them. Archived chats remain recoverable whenever you want without appearing in the history, but if you delete a conversation it disappears forever.

Delete your whole history. If what you want is to erase the entire history of conversations with ChatGPT, you don't need to go one by one. Go to the settings and enter the Data controls section. There you will have the option to erase the complete history of all the conversations you have had.

Manage your shared links. ChatGPT allows you to create a link to a conversation and share it with other people.
But I advise you to delete your shared links from time to time so that those conversations don't end up reaching people you don't want to have them. To do this, enter the ChatGPT settings. Once inside, go to the Data controls section, and click the Manage button next to Shared links. There you can see the active ones and delete them.

Watch what you say and what you upload. Another advisable practice when using ChatGPT is not to mention personal data in your chats, nor addresses or passwords. OpenAI takes technical precautions so that what you write is only stored temporarily, but it never hurts to maximize your own precautions to prevent more from getting through than should. Also be careful with the photos you upload and the metadata they may contain. I even recommend that if you are going to use ChatGPT for work, you keep separate accounts for separate projects, or separate the personal from the professional, and thus isolate risks.

Careful with extensions and linked accounts. And finally, remember to avoid linking apps or plugins that are not necessary to your ChatGPT account, to prevent them from extracting data from what you do. The same applies in reverse: also be careful about linking other cloud services to ChatGPT, especially those holding work or sensitive data.

Xataka Basics | How to improve ChatGPT's responses: 9 steps to guarantee higher quality and better sources
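To illustrate the warning about photo metadata, here is a minimal, standard-library Python sketch that strips APP1 (Exif) segments, where cameras typically store GPS coordinates, from a JPEG before you upload it. It is a simplified parser for illustration, not a replacement for a proper metadata-removal tool.

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (Exif/XMP) segments removed."""
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF or jpeg_bytes[i + 1] == 0xDA:
            out += jpeg_bytes[i:]  # image data starts: copy the rest verbatim
            break
        seg_len = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        segment = jpeg_bytes[i:i + 2 + seg_len]
        if jpeg_bytes[i + 1] != 0xE1:  # drop APP1 (Exif), keep everything else
            out += segment
        i += 2 + seg_len
    return bytes(out)

# A tiny hand-built JPEG: header, an Exif segment, then image data.
fake = (b"\xff\xd8"                        # SOI marker
        + b"\xff\xe1\x00\x08Exif\x00\x00"  # APP1 segment carrying Exif
        + b"\xff\xda\x00\x04\x00\x00"      # SOS marker: image data begins
        + b"\x12\x34")                     # stand-in compressed data
print(b"Exif" in strip_exif(fake))  # False
```

The pixels are untouched; only the metadata segments where location, device and timestamp usually live are dropped.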

It is a nightmare for privacy

On the Internet everything is shared with incredible ease, and we often do it without stopping to think about the possible consequences. That is why, for years, privacy experts have insisted on the same thing: we should not publish anything we are not willing to see publicly exposed. Although it may seem obvious, the warning also applies to private accounts, because what is "only for friends" today can be exposed without prior notice tomorrow.

That risk has just acquired a new dimension. It no longer depends only on what we show, but also on what technology can deduce on its own. The images we publish can hide valuable information, such as the place where they were taken. And with the arrival of artificial intelligence models capable of analyzing and reasoning over photographs, the exposure is even greater.

OpenAI's AI can tell where you took a photo. The new OpenAI models, known as o3 and o4-mini, have taken visual reasoning to a new level. They are able to analyze images with surprising precision, and combine that ability with tools such as web search and image editing to further refine their answers. This allows them, for example, to give you better explanations than a manual, or to help you understand a complex blueprint. But it also opens the door to uses that should make us reflect.

A new viral trend. One of the latest fads on networks like X has nothing to do with creating Ghibli-style images or Lego-style compositions. Now many users are using these models to identify the exact place where a photograph was taken, even when it includes no metadata. Just telling the model that you are playing GeoGuessr is enough for it to start analyzing the image, cropping details, looking for matches and reaching a conclusion. In one of our tests, the system managed to identify a specific street in Madrid from a simple screenshot. It took about 15 minutes, but it hit the nail on the head. A feature that should make us think.
In an environment as hyperconnected as today's, where photos are constantly shared, we must keep in mind that an image does not need to be explicitly geolocated for others to find out where it was taken. Artificial intelligence has raised the level of exposure without many people realizing it. And although this capability has interesting applications, it also raises important risks. Privacy, more and more, depends not only on what we share, but on what others can deduce from it.

Images | Screen capture

In Xataka | o4-mini is much more than just another AI model. It is the Tesla Model 3 of OpenAI

How to improve ChatGPT's privacy by preventing what you write from being used to train artificial intelligence

Let's explain how to improve ChatGPT's privacy by deactivating the option that allows OpenAI to use all the content you write or create to keep training its artificial intelligence models. It is an option that is enabled by default in your profile, but it is easy to switch off.

When you use ChatGPT, if you don't change anything you are giving the company permission to collect your interactions. The questions you have asked the AI and the answers generated for you will then be used in the future to keep training and improving its models. But if you don't want this information to be used because it is private, we will tell you how to deactivate it.

Disable data sharing with ChatGPT. The first thing you have to do is enter ChatGPT's settings. For that, on mobile tap the side options button and then tap your username. In the web version, click your profile picture and choose the Settings option, which will appear in the window that opens.

If you are on mobile, once you enter the settings, tap the Data controls option that appears in the Account section, the first one at the top. Once inside, deactivate the option Improve the model for everyone, which appears first. With this, your content will no longer be used to keep training OpenAI's models.

In the desktop version, within the settings click the Data controls section. Once inside, click Model improvement, where you will be able to disable the option Improve the model for everyone, which appears first.

The EU wants to corner the privacy of WhatsApp and Signal with backdoors. In France, the play has backfired

The National Assembly of France voted on March 20 on a fundamental issue for the privacy of its citizens, one that put at stake the confidentiality of their conversations in messaging applications such as WhatsApp or Signal.

Backdoors. As the EFF (Electronic Frontier Foundation) points out, the French proposal was disturbing. The objective was to force messaging platforms such as WhatsApp or Signal to create "backdoors" allowing hidden access to private conversations. The proposal was part of a package of measures justified by the fight against child sexual abuse material (CSAM).

A tool for mass surveillance. The bill proposed a spectacular tool for mass surveillance, one the United Kingdom already tried to adopt in 2019, and one that would allow security agencies to join encrypted chats without their participants finding out.

Terrible for privacy, but also for security. The threat affected not only our privacy but also the security of these applications. Experts already criticized the proposal in 2019 and warned that it could serve to introduce systemic vulnerabilities or create tools that many would end up abusing. The French organization La Quadrature du Net (LQDN), which defends fundamental freedoms in the digital world, urged a mobilization against this "drug trafficking law," which among its provisions included this theoretical creation of backdoors.

Of that, nothing. Fortunately for privacy defenders, French legislators voted against the proposal, allowing these platforms to keep operating with the end-to-end encryption that protects the confidentiality of these communications.

A hard-won victory.
As the EFF indicates, this victory was not easy, and only came after popular pressure, expert comments "and the support of civil society." Even so, they warn, the pressure will continue in attempts to activate measures that undermine the privacy of our communications, perhaps repackaged, or pushed through during quieter legislative moments. That is also the danger: that these laws end up being approved deceptively, as part of broader and more generic legislation.

The infamous Chat Control. Last September the Hungarian Presidency of the Council of the European Union revived the proposal known as Chat Control, that is, the elimination of end-to-end encryption. A first attempt was rejected, but during the summer strings began to be pulled to revise it into the so-called Chat Control 2.0, now in development. The new proposal relaxes the terms.

Chat Control 2.0 is still worrying. The proposal published by the Polish Presidency of the Council would make the scanning of these conversations "voluntary" and classify it as preventive. Experts believe this proposal is a step in the right direction for protecting the right of European citizens to keep their digital correspondence private.

The Government of Spain wants backdoors. Fernando Grande-Marlaska, Spain's Interior Minister, is one of the great defenders of backdoors. In a 2023 document he argued that it is "imperative that we have access to the data." Many other European countries favor this type of measure, and as the activist organization ChatControl.EU, created by Patrick Breyer of the German Pirate Party, explains, the risks to privacy are significant.

Image | Nathan Dumlao

In Xataka | A backdoor created 30 years ago has compromised all of US security. Europe wants to make the same mistake

Some Amazon Echo devices sold in the US hid a fantastic privacy option. With Alexa+'s arrival it will disappear

Some time ago Amazon launched three models of its Echo family with an interesting option that, yes, was only available in the US and in English: local processing of voice recordings. That spared users from having to send those recordings to the Amazon cloud and represented an interesting improvement in privacy. That option will soon stop being available.

Less privacy. The products that supported this feature, The Verge explains, were the Echo Dot (4th generation), Echo Show 10 and Echo Show 15. As of March 28, those devices in the Amazon Echo family will no longer be able to process audio locally on the device itself. Thus, all audio clips will end up transferred to Amazon's servers to be processed in its cloud infrastructure.

AI as the excuse. Ars Technica reports that Amazon explained in an email to that publication that the change is due to the new use of AI models in Alexa+. Specifically, they say, as they continue to expand Alexa's capabilities with generative features that depend on the processing power of Amazon's secure cloud, they have decided to stop supporting this feature.

Alexa+ in sight. Behind this decision is the launch of Alexa+, the new version of the voice assistant that will now be powered by AI models. To run these models, however, the power of the Amazon cloud is needed, which according to the company makes it impossible for requests to be processed at home as they have been until now.

Voice recognition. The new AI system governing these devices will also have more advanced options for recognizing the voice of whoever is using it, so as to respond more precisely and contextually, something the company calls Voice ID. This feature forces Amazon to eliminate the privacy function that existed before, albeit a very limited one: it is likely that many owners of these models did not even know such a privacy feature was available.

It doesn't matter if you don't want it on your Echo.
Users of these devices will not be able to avoid this sending of audio clips even if they do not use Alexa+: Amazon will send those audios even for users who do not use the AI-based service. The Alexa+ service based on AI models will be included in the Amazon Prime subscription.

Another blow to privacy. Apart from that special option, the Amazon Echo family devices do allow control over certain privacy options, such as how long recordings are stored once they reach Amazon's servers.

Controversies of the past. In April 2019 it was revealed that Amazon employees listened to recordings with the excuse of improving the service. That sparked controversy, and in fact there had already been problems with suspicions about privacy when Alexa first became available in Spanish.

Amazon promises to protect that data. In that email to Ars Technica, Amazon also assured that "Alexa's voice requests are always sent to Amazon's secure cloud, which was designed with layers of security protection to keep customer information safe." They also point out that various controls will remain available to fine-tune how user data is managed.

In Xataka | The announcement of the new Alexa hides an awkward truth: the silent sunset of the voice-only interface

The United Kingdom's intentions with Apple are a nightmare for privacy: for the British, and for the whole world

Using security as an excuse, the United Kingdom has demanded that Apple open an intentional gap in the privacy of its users, one that could have severe global consequences.

Why it matters. The British government's order would force Apple to create a universal-access backdoor into iCloud. That would not only compromise the security and privacy of British users: it would affect users around the world.

The context. The British government has secretly ordered Apple to create a mechanism that allows access to all the encrypted content any Apple user uploads to the cloud, regardless of their location. The Washington Post revealed it, citing its own sources.

Between the lines. The British demand goes far beyond specific collaboration with the courts: it seeks total and permanent access to users' encrypted data. We had never seen anything like it in a Western democracy.

What happens now? It is an enigma, but there is a scenario that seems very likely: if this goes ahead, Apple will surely stop offering encrypted storage in the United Kingdom rather than compromise global security. The order is based on the controversial Investigatory Powers Act of 2016, known as the "Snoopers' Charter." The law prohibits Apple from revealing the existence of the order or warning its users.

The threat. If the United Kingdom obtains this access, other countries could demand the same privilege, and that would create a domino effect that would destroy privacy as we know it. Every iPhone user in the world would have a hole in their phone.

The battle for digital privacy always pits two irreconcilable visions against each other: the national security that governments invoke, and the data protection that technology companies defend. The next step will be seeing how Apple responds to this request, what options it has left, and how it protects its users' privacy if British legislation ends up imposing such an invasive scenario.
In Xataka | Apple's metamorphosis: from the minimalist catalog to calculated maximalism

Featured image | Pexels, Xataka with Mockuuups Studio

Italy and Ireland act on DeepSeek privacy concerns

It was only a matter of time before DeepSeek entered the radar of European regulators. The Chinese artificial intelligence (AI) company is now under the scrutiny of the data protection authorities of Ireland and Italy. We are not talking about formal investigations, but about information requests that come amid concerns about the privacy of users of the famous chatbot on the old continent.

The Irish Data Protection Commission, responsible for guaranteeing compliance with the GDPR, has requested details from DeepSeek about its processing of user data. The equivalent body in Italy, known as the Garante, has made a similar move: it wants to know what personal data is collected, from what sources, for what purposes, on what legal basis it is processed, and whether it is stored on servers located in China.

DeepSeek, under the scrutiny of European regulators. Earlier this week we noted that DeepSeek was becoming tremendously popular, but it was not clear what the company was doing with our data. However, some of the questions the regulators have asked are answered in the privacy policies of Hangzhou Artificial Intelligence Co., Ltd. and Beijing DeepSeek Artificial Intelligence Co., Ltd., the firms behind DeepSeek. The document indicates that the data of all DeepSeek users, including those residing in Europe, is stored in China. It also describes an enormous amount of data collection, ranging from keystroke patterns to conversations. The opt-out options for model training were not too clear either, and now they should be completely clarified.

However, the Garante has gone a step further. The Italian regulator, which at the time banned ChatGPT and fined OpenAI 15 million euros, wants to know what kind of information DeepSeek uses to train its language models.
In this regard, the focus is on the 'web scraping' technique, and on whether registered and unregistered users of the service have been informed about the processing of the data collected. It remains to be seen how this issue will be resolved in the near future. We know that DeepSeek has 20 days to respond to the Garante, but it is not clear what deadline Ireland has set. Nor can we rule out actions by other members of the European Data Protection Board, composed of around twenty national authorities, among which is the Spanish Data Protection Agency.

It is important to note that the data collection concerns about DeepSeek revolve around its online services. The Chinese company, which has caused an earthquake in the AI industry with the launch of DeepSeek V3 and DeepSeek R1, lets you use its models locally and completely free. That is, it opens the door to accessing its novel technology without using the Internet.

Images | DeepSeek | Guillaume Périgois

In Xataka | We already know the secret of DeepSeek's extreme efficiency: it has dodged Nvidia's CUDA standard

In Xataka | LaLiga wants your biometric data to enter the stadium. Europe has something to say

This is the new design that eliminates distractions

It's only been a few months since Android 15 was officially released, but Google is already preparing the next launch, Android 16. The company has decided to change the system's development cycle, bringing the release forward to coincide with the launch of the Pixel phones. As a result, the development of Android 16 is already very advanced, despite the fact that the first month of the year is not yet over; at this point, the new version of Android is normally still in an early state, with little news to report. This year is going to be different, and some rumors even suggest that the first public beta could launch this January.

That also means the new features being prepared for Android 16 are already in advanced development, which is what has allowed Mishaal Rahman to discover one of these innovations. This time, the novelty is in notifications, specifically in the way they appear on the lock screen. With Android 16, users will be able to activate a new minimalist lock screen, which will mainly display the wallpaper and widgets like the clock.

This mode also affects notifications, which will no longer be displayed in a container in the middle of the screen, but will instead appear as a small pill-shaped notice. In this 'pill', only the icon of the app that triggered the notification will appear. So if, for example, someone sends us a WhatsApp message, only the WhatsApp icon will be displayed in this pill; the content of the message and the name of the app in question will not be shown. Just by tapping the icon, the notification will expand and show its content, such as the message we have been sent. To activate this new minimalist look, the user will be able to find a new option in the lock screen settings, specifically in the section dedicated to notifications.
Image | New Android 16 notification style (Android Authority / El Androide Libre)

Here you will find two options: "Full list" shows all the content of the notifications as before, while "Compact" will show only the app icon. Although it may seem somewhat extreme, this new design may appeal to many people. For example, it is a more private option, since by showing only the app icon it is impossible for someone else to read the messages we have been sent. However, it must be remembered that Android's notification settings already allow you to hide sensitive conversations and notifications. This new option would be more of an aesthetic change than anything else, to prevent our lock screen from filling up with notifications after a while. The icons take up less space, are more aesthetically pleasing, and don't cover up the wallpaper we have chosen as much.
