Researchers extracted photos and statuses from 3.5 billion WhatsApp users. Meta didn't react until they were told

Between December 2024 and April 2025, a team from the University of Vienna enumerated 3.5 billion active phone numbers on WhatsApp (practically its entire user base) from a single server, without meeting much technical resistance. They processed more than a hundred million numbers per hour and extracted not only whether an account existed, but also public keys, profile photos, status texts and device metadata. They did it without bothering to hide: same university IP, same server, five accounts. For four months, nobody at Meta noticed.

Why it matters. This is not the first time the vulnerability has been demonstrated (it was already shown in 2012 and 2021), but it is the first time at this scale and speed. The finding exposes a structural contradiction in WhatsApp: its architecture has to reveal whether a number is registered in order to enable contact discovery, but that functional need collides with users' privacy. Knowing who uses WhatsApp in countries where it is banned, such as China, Burma or North Korea, can have serious consequences. There the researchers detected 2.3 million, 1.6 million and five accounts respectively (not five million, just five). The study, published a few weeks ago at NDSS 2026, shows that the crack not only persists but has widened.

The context. The researchers developed 'libphonegen', a tool that reduces the search space from billions of theoretical combinations of possible mobile numbers to "just" 63 billion realistic candidates across 245 countries. Using unofficial WhatsApp clients that talk directly to the XMPP API, they queried those numbers at a rate of 7,000 per second. Their IP was never blocked and their accounts were never sanctioned. Meta did not respond until the researchers explicitly reported the finding in March of this year, and countermeasures did not arrive until October, just a couple of months ago.
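The paper's 'libphonegen' tool is not reproduced here, but the idea behind shrinking the search space can be sketched. Everything below is illustrative: the prefixes and lengths are made-up stand-ins for real national numbering plans, not the tool's actual rules.

```python
# Sketch of search-space reduction for phone-number enumeration: instead of
# brute-forcing every digit string, generate only numbers that fit a country's
# mobile numbering plan. The rules below are illustrative assumptions.
from itertools import product

# Hypothetical rules: country code -> (valid mobile prefixes, digits after prefix)
NUMBERING_RULES = {
    "34": (["6", "7"], 8),           # Spain: mobiles start with 6 or 7
    "49": (["15", "16", "17"], 8),   # Germany (lengths are invented here)
}

def candidates(country_code: str):
    """Yield plausible E.164 numbers for one country."""
    prefixes, tail_len = NUMBERING_RULES[country_code]
    for prefix in prefixes:
        for tail in product("0123456789", repeat=tail_len):
            yield f"+{country_code}{prefix}{''.join(tail)}"

def search_space(country_code: str) -> int:
    """How many candidates the rules produce, versus a naive brute force."""
    prefixes, tail_len = NUMBERING_RULES[country_code]
    return len(prefixes) * 10 ** tail_len
```

Under these toy rules, Spain shrinks from a billion possible 9-digit strings to 200 million candidates; applied across every country, that is the kind of pruning that turns "billions of theoretical combinations" into a feasible 63 billion.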
The figures. The resulting dataset is five times larger than the one from Facebook's 2021 scraping scandal. India leads with 749 million users (21% of the total), followed by Indonesia and Brazil. Spain accounts for 46.5 million accounts. 81% use Android. More than half have a public profile photo, and 29% have visible status text.

Between the lines. The researchers were able to infer the operating system by analyzing how cryptographic keys are initialized: Android starts certain identifiers at zero, while iOS uses random values. The detail matters because iPhone users are higher-value targets for attackers. They also detected reused public keys: 2.3 million distinct keys in use across 2.9 million different devices. In Burma and Nigeria, tens of thousands of numbers shared the same key, pointing either to faulty implementations or to outright fraud. They even found twenty American numbers using a private key composed entirely of zeros.

In detail. The method is not limited to confirming that accounts exist. For each number, the researchers extracted public keys, timestamps and the list of linked devices, enough to build detailed profiles without ever touching message content. A device's age can be estimated by counting key rotations. A user's "popularity" can be inferred from how quickly their one-time prekeys are depleted, since one is consumed every time someone starts a new conversation with them. The researchers downloaded 77 million profile photos from the +1 prefix (United States and Canada) in a matter of hours; 66% of them contained recognizable faces. They also found disturbing status texts: traffickers listing prices, business accounts advertising drugs, and publicly visible corporate emails of governments and armies.

And now what. Meta has deployed probabilistic cardinality counters to limit how many unique accounts a user can query without blocking legitimate contact discovery.
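Meta has not documented the internals of that countermeasure; what follows is a toy sketch, under my own assumptions, of the general technique: a HyperLogLog-style estimator that tracks roughly how many distinct numbers one account has queried, in constant memory. The 10,000-query cap is invented for illustration.

```python
# Toy probabilistic cardinality counter (HyperLogLog-style). It estimates the
# number of distinct items seen using a fixed array of small registers, so a
# rate limiter can cap unique lookups without storing every queried number.
import hashlib

class CardinalityEstimator:
    def __init__(self, p: int = 10):
        self.p = p
        self.m = 1 << p                 # number of registers
        self.registers = [0] * self.m

    def add(self, item: str) -> None:
        # 64-bit hash: low p bits pick a register, remaining bits give the rank.
        h = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        idx = h & (self.m - 1)
        rest = h >> self.p
        rank = (64 - self.p) - rest.bit_length() + 1   # leading zeros + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)
        return alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)

# Hypothetical per-account cap: flag once ~10,000 distinct numbers were queried.
est = CardinalityEstimator()
for i in range(50_000):
    est.add(f"+34600{i:06d}")
over_limit = est.estimate() > 10_000
```

The appeal of this design for Meta's use case is that legitimate contact discovery (a few thousand contacts) stays well under the cap, while the counter costs only a kilobyte or so per account no matter how many numbers an attacker tries.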
It has also restricted bulk access to profile photos and status texts. The researchers confirmed in follow-up tests that the measures work. But no countermeasure protects the people who were already listed during the months the system sat wide open.

The big question. For four months, from a university server, without even hiding their identity, the researchers harvested practically the entire user base of the most used app on the planet, and nobody at Meta noticed until they were explicitly told. If these researchers could do it under those conditions, who else did it before without telling anyone?

In Xataka | WhatsApp brings the big update of the season: the most important change is not on the mobile, but on the computer

Featured image | Dimitri Karastelev

Spotify is dealing with an avalanche of songs made with AI. So it has decided to react and set limits

You open Spotify, stumble on a song you can't stop listening to, and yet the name of the "artist" rings no bells. You wonder whether there's a band behind it or whether it's a track generated by AI, and the doubt is not trivial: a trained ear may catch it, but for millions of listeners the line has blurred. With generators like Suno or Udio raising the quality of what they produce, catalogs are filling up and context matters. This week, Spotify announced new policies to tackle three fronts: "slop", impersonation, and transparency about the use of AI. The company says it wants to protect artists and keep the public from feeling deceived, without banning responsible use of these tools.

In just a few months, music generators have become accessible tools capable of producing thousands of tracks ready to be uploaded to streaming platforms. We are not talking about masterful compositions, but about songs that meet the bare minimum to slip into mass catalogs. The result is an avalanche that makes it hard to distinguish genuine work from mere algorithmic exercises. For labels and artists, this saturation not only confuses listeners; it also threatens to dilute income in a system where every stream counts toward the distribution of royalties.

Spotify's plan against AI-made music. Spotify frames its new rules around a simple idea: music has always been shaped by technology, from multitrack tape to Auto-Tune. The difference now is that artificial intelligence evolves at a speed that generates uncertainty. In this scenario, the platform says it wants to reinforce transparency and shore up listeners' trust, while respecting artists' freedom to decide how to incorporate these tools into their creative process.

One of the most sensitive fronts for Spotify is identity impersonation.
The company has tightened its rules and clarifies that it will not allow songs that reproduce an artist's voice without explicit authorization. This includes AI-generated voice clones, deepfakes and any unauthorized vocal replica. It is also testing new measures with distributors to prevent music from being uploaded to profiles that are not the uploader's own, an increasingly common attack. The goal is for musicians to be able to report abuse quickly and keep control over their own artistic identity.

Another front the platform wants to close is spam. Spotify explains that some users try to game the system by uploading songs of barely 30 seconds to rack up royalty-eligible streams, or by repeating the same track with minimal changes in its metadata. To combat this, in the coming months it will deploy a filter that identifies these practices and stops recommending them. The company insists the measure is needed to protect the distribution of royalties, and recalls that in the last 12 months it removed 75 million fraudulent tracks.

The third leg of the plan is transparency. Spotify is collaborating with DDEX, the body responsible for setting standards in the music industry, to create a metadata system that reflects the role of AI in each song. The goal is for credits to indicate whether artificial intelligence was used in the vocals, the instruments or the production, so that listeners know clearly. According to the company, 15 labels and distributors have already committed to adopting the standard, although for now there is no release date.

The real impact of the new rules will be measured over time. For artists, the reinforcement against impersonation and spam could mean a fairer environment in which to compete for attention and royalties. For listeners, the promise is a clearer experience, with credits that make it possible to tell which parts of a song were generated by AI.
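Spotify has not described how its spam filter works; as a thought experiment, the two patterns the company mentions (barely-30-second tracks and the same song re-uploaded with trivial metadata changes) map onto simple heuristics. Everything here, including the thresholds, is invented for illustration.

```python
# Hypothetical heuristic sketch of the upload patterns Spotify describes:
# near-minimum-length tracks and repeated re-uploads of the same title.
from dataclasses import dataclass

@dataclass(frozen=True)
class Track:
    uploader: str
    title: str
    duration_s: int

def flag_spam(tracks, min_duration=31, dup_threshold=3):
    """Flag tracks that barely clear the stream-length bar or repeat a title."""
    flagged = []
    uploads_seen = {}
    for t in tracks:
        key = (t.uploader, t.title.lower().strip())
        uploads_seen[key] = uploads_seen.get(key, 0) + 1
        if t.duration_s <= min_duration or uploads_seen[key] >= dup_threshold:
            flagged.append(t)
    return flagged
```

A real system would of course need audio fingerprinting rather than title matching, precisely because the abusive uploads differ only in minimal metadata changes.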
Even so, there is uncertainty about its reach: from the possibility of errors in automatic detection to the difficulty of getting labels and distributors to adapt their processes quickly and uniformly. Spotify will probably have to keep working after this announcement. The effectiveness of the filters and the adoption of the new credits will depend on the industry as a whole moving in the same direction. AI will keep evolving, and new methods will likely appear to evade the control systems. In that scenario, the company will have to show that its measures not only curb abuse, but also help preserve listeners' trust and the value of artists' work.

Images | Xataka with Gemini 2.5 | @felirbe

In Xataka | OpenAI wants to bill as much as Microsoft in five years. For this

Five identical cyberattacks were not enough to make Carrefour react. So it has been fined 3.2 million euros

The Spanish Data Protection Agency (AEPD) has imposed a fine of 3.2 million euros on Carrefour for alleged infringements of several articles of the GDPR. The surprising part is not the fine itself, but the fact that those infringements stemmed not from a single cyberattack, but from five... all exactly alike.

Security breaches. Carrefour notified the AEPD of up to five personal data security breaches, all involving illegitimate access to customer accounts. They occurred on January 13, January 20, January 24, April 18 and April 21, 2023. All used the same technique.

Credential stuffing. Everything indicates that the breaches happened when criminals took advantage of credentials (username, password) belonging to legitimate Carrefour users that had leaked and ended up in the attackers' hands, probably through earlier large-scale data thefts.

Stolen data. The affected data included first name, surname, email, phone number, DNI, physical address and passport number, as well as information about customers' interests, purchasing habits and commercial preferences.

Thousands affected. According to the AEPD, 118,895 unique accounts were affected, from which the attacker could obtain personal information. According to Carrefour, the real impact was much smaller: people's integrity was affected in only 234 cases and the confidentiality of their data in 973.

Several serious infringements. Carrefour acknowledged its responsibility for the alleged violation of Article 34 of the GDPR (notifying the affected individuals of the breaches), although it initially did not consider that notification "mandatory". In addition, the AEPD concluded that Carrefour violated the principle of integrity and confidentiality (Article 5.1(f) of the GDPR) by allowing third parties illegitimate access to personal data.
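Credential stuffing of the kind described above (leaked username/password pairs replayed against many accounts) is typically spotted server-side by counting how many distinct accounts a single source fails to log into. A minimal, hypothetical sketch, with invented thresholds:

```python
# Hypothetical credential-stuffing detector: a normal user fails logins on one
# or two accounts; a stuffing bot fails across hundreds of different ones.
from collections import defaultdict

def stuffing_suspects(login_events, min_distinct_accounts=20):
    """login_events: iterable of (source_ip, account, success) tuples."""
    failed_accounts = defaultdict(set)
    for ip, account, success in login_events:
        if not success:
            failed_accounts[ip].add(account)
    # Flag sources whose failures spread over many distinct accounts.
    return {ip for ip, accts in failed_accounts.items()
            if len(accts) >= min_distinct_accounts}
```

Heuristics like this, combined with the two-factor authentication Carrefour later added, are the standard mitigations for this attack class.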
And a lack of diligence. According to the AEPD, Carrefour had not implemented the technical measures necessary to guarantee a level of security appropriate to the risk, and the agency also accused it of a lack of diligence. Carrefour did end up offering optional two-factor authentication, but only from October 2023, after the five security breaches had already been exploited.

The fine, broken down. The total of 3.2 million euros is actually made up of three parts: violation of the principle of integrity and confidentiality (very serious), two million euros; infringement in data processing (serious), one million euros; and infringement regarding communication to those affected (minor), 200,000 euros.

Not protecting customer data is expensive. Iberdrola received an even bigger fine of 6.5 million euros last year after falling victim to a cyberattack that exposed the data of 850,000 customers. Earlier, in July 2021, the AEPD fined Mercadona 2.5 million euros for a violation of users' privacy: in that case, a facial recognition pilot it had run months before, which set a precedent for this type of system.

With this data, one threat: identity theft. Whenever customer data is stolen, there is a clear danger: that it will be used to impersonate those customers. With this information it is possible to build personalized, targeted scams that are far more credible and dangerous for the victims. The other immediate risk is that cybercriminals use the credentials to try to hijack accounts on all kinds of services, hence the importance of not reusing the same password across platforms.

Image | Xataka

In Xataka | We visited the CNI's National Cryptologic Center: this is the epicenter of Spanish cybersecurity
