Meta’s glasses record everything we see. And some people in Kenya are watching it all to train AI

Meta is competing in two races. On one side, artificial intelligence. On the other, finding the “new smartphone.” And its all-in bet is on AI glasses. Devices like the Ray-Ban Meta 2 have the potential to record everything we see. And within that “everything” is undressing in a fitting room, having sex, or typing our bank password into our phone.

And someone in Kenya is watching all of this with one goal: training artificial intelligence.

In short. Before we dig deeper, some context. The Swedish newspaper Svenska Dagbladet has published a report explaining how Meta’s artificial intelligence is trained, or at least the AI that powers its smart glasses. For this training, Meta collects our data, such as conversations, photos and videos, which is sent in massive batches to companies that break it down and then feed the information into the training software.

One of those companies is Sama. It is based in Kenya, and some of its employees have told the Swedish journalists what kind of information they see every day, recounting cases that are, in the end, everyday things we all do. The problem is that we do them in private. That said, let’s go step by step, because there is a lot to cover.

Ray-Ban Meta. The glasses need no introduction; in fact, we tested the second generation a few weeks ago. In our review of the Ray-Ban Meta 2 we already said they were part of that post-smartphone vision, thanks to a very decent camera and sound, but with a disappointing AI. That is precisely where Meta needed to do more work, and it does so using the images it collects from each user.

What we give up. The Swedish outlet’s investigation, and the terms of use of Meta AI services themselves, describe a situation in which we appear to have significant control over data such as images or voice recordings. The document notes that certain data can be stored and used to improve Meta products if the user gives consent, but there is a flip side: for the AI assistant to work at all, voice, text, images and video must be handed over.

According to these terms, “in some cases, Meta will review interactions with the AI, including the content of conversations or messages to the AI. This review may be automated or manual.” The terms also state that users should not share information they do not want the AI to use or retain, such as “information on sensitive topics.” The problem is that, if you do not accept, you cannot use Meta AI.

Training AI manually. The problem begins when the data review is manual. The article states that one of the analysis centers is in Kenya: a company called Sama, hired by Meta to carry out a task known as “labeling.” The data leaving the device goes through a cleaning process that blurs faces and private details, but workers then perform manual operations on the images.

[Screenshot: an example of labeling]

For example, tracing the outlines of people, naming objects such as “lamp,” “car,” “book” or “computer,” registering traffic signs and, in short, everything we see. Everything that has been correctly labeled is then organized into data batches that are fed to the AI training systems. Because if an AI “knows” that a ‘STOP’ sign is a ‘STOP’ sign, it is because it has been taught beforehand with real images. The goal is to improve precisely what we criticized in our review: the artificial intelligence and its connection to the world.
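To make the idea of “labeling” concrete, here is a minimal sketch of what a labeled frame might look like once a worker has annotated it. This is purely illustrative: the field names and structure are our assumptions, loosely inspired by common object-detection formats such as COCO, not Meta’s or Sama’s actual schema.

```python
# Illustrative only: a hypothetical annotation record in the spirit of
# common object-detection formats (e.g. COCO). It is NOT Meta's or
# Sama's real schema, which has not been published.
from dataclasses import dataclass, field


@dataclass
class BoundingBox:
    # Pixel coordinates of the rectangle a labeler draws around an object.
    x: int
    y: int
    width: int
    height: int


@dataclass
class Annotation:
    label: str        # e.g. "lamp", "car", "book", "stop_sign"
    box: BoundingBox  # where in the frame the object sits


@dataclass
class LabeledFrame:
    frame_id: str        # anonymized identifier, not tied to the user
    faces_blurred: bool  # whether the automated "cleaning" step ran
    annotations: list[Annotation] = field(default_factory=list)


# A single labeled frame as a worker might produce it:
frame = LabeledFrame(
    frame_id="frame_000123",
    faces_blurred=True,
    annotations=[
        Annotation("stop_sign", BoundingBox(412, 96, 64, 64)),
        Annotation("car", BoundingBox(100, 220, 300, 180)),
    ],
)
print(frame)
```

Batches of records like this, thousands of them, are what end up being fed to the training systems so the model learns what a ‘STOP’ sign looks like.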

When the system fails. For the report, the journalists contacted former Meta employees at labeling centers in the United States. They confirm that the system automatically anonymizes faces and sensitive data, but “the algorithms sometimes fail. Especially in difficult lighting conditions, certain faces and bodies are perfectly visible.”
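To see why this can fail silently, here is a minimal sketch of a typical automated face-blurring pass, using OpenCV’s classic Haar-cascade detector purely as a stand-in (Meta’s real pipeline has not been published). If the detector misses a face, in bad lighting, for instance, nothing gets blurred and no error is raised.

```python
# A minimal, illustrative face-blurring pass. OpenCV's Haar cascade is a
# stand-in for whatever Meta actually uses; the point is the failure mode:
# a face the detector does not find is simply never blurred.
import cv2


def blur_faces(image_path: str, output_path: str) -> int:
    """Blur every detected face in an image; return how many were found."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # In poor lighting or at odd angles this list can come back empty,
    # and the frame passes through with faces fully visible.
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        img[y:y + h, x:x + w] = cv2.GaussianBlur(
            img[y:y + h, x:x + w], (51, 51), 0
        )

    cv2.imwrite(output_path, img)
    return len(faces)
```

Note that the function succeeds whether it blurs five faces or zero, which is exactly the behavior the former employees describe.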

And that is where the problem begins. The workers at the labeling center under scrutiny are not watching what I describe below out of pleasure or voyeurism; they are labeling it to train the AI. The problem is what shows up in those images.

Nothing is private. An employee at the Kenyan data center explains that “in some videos you can see someone going to the bathroom or taking off their clothes. I don’t think they know, because if they did, they wouldn’t record it.” But going to the bathroom is not the only thing they have seen at that labeling center. Everyday scenes in a bedroom followed by others in which people are having sex. Accidentally recording another person naked (when your partner steps out of the shower, for example), or glasses left on a surface in the room, recording a wife changing clothes without her knowledge.

They also analyze transcripts about protests, crimes involving “very dark things,” or content such as a man describing a woman and arguing that he would like to have sex with her. “We see everything, and Meta has that type of content in its database. People can record themselves in the wrong way and not know they are doing it,” says one of the workers, who adds that if the clips were ever leaked, it would be a “huge scandal.”

“I think that if they knew the extent of the data collection, no one would dare to wear the glasses”

What if I don’t record? Svenska Dagbladet did not put this report together in a couple of days. They point out that they worked on it for months, meeting with the parties involved and questioning both the opticians that sell the glasses and Meta itself. The retailers claim they have no idea where the data goes. Others state that “everything is kept locally in the application,” which is not true, because Meta’s AI does not run on the device: it needs an Internet connection.

Another issue comes into play here, namely the training that retailers who sell the devices receive, but there is an underlying question you may be asking yourself: okay, the data is filtered, but what if I only record consciously, when I want to? Meta itself details how video and sound recording is triggered:

  • By pressing a physical button on the glasses.
  • When you use the command ‘Hey Meta’ and ask a question.

What does Meta say? “When Meta AI is used, we process that data in accordance with Meta AI’s Terms of Service and Privacy Policy,” says Meta spokesperson Joyce Omope. It is not very revealing, but a Meta executive interviewed by the outlet, who preferred not to be identified, claims that it does not matter where the server storing the data is located, as long as the country complies with European Union requirements.

The problem is that they are talking about the privacy policy, not about what is done with the data for Meta AI training. At Xataka, we have contacted Meta to hear its take on the matter.

And the list goes on. At this point you may be thinking, “Wait, this story sounds familiar.” The truth is that this is not the first time a controversy over the manual review of private information in a company’s applications has come to light. A few years ago, and quite apart from Cambridge Analytica, the company then known as Facebook faced another controversy when it emerged that it scanned all messages, links and images sent through Messenger and Instagram to ensure that “content rules are not violated.”

The same goes for the working conditions of Facebook moderators, exposed to content of all kinds in order to decide whether something can or cannot be shown on the platform. We are talking about sex, but also videos of violent deaths and child abuse. This has been uncovered little by little and has even affected workers in Spain: specifically at the Barcelona moderation center, where employees are demanding millions in compensation after years of witnessing the most explicit violence.

These employees suffer post-traumatic stress, panic attacks, phobias and even suicidal thoughts because of the content they must review. It is no longer just that they saw naked people while labeling everything to feed an insatiable AI: we are talking about beheadings, rapes, live suicides and child pornography. Up to 800 videos a day.

AI = ‘Another Indian’. On top of all these controversies, there is something more fundamental. The data labeling that is so essential for learning models to be able to, well, learn relies largely on precarious work by people in developing countries. Kenya is home to several “human data centers” like the one that labels what the Ray-Ban Meta sees on Meta’s behalf. In fact, a few months ago Coda published a report describing how Kenya, and Sama specifically, was doing the “digital dirty work” of the AI age. OpenAI was involved too.

Such facilities are also concentrated in India, hence the tasteless ‘Another Indian’ joke, and the “trick” behind Waymo’s remote taxis was recently revealed: people in the Philippines “driving” the cars remotely. Or at least assisting them.

As we said, we have contacted Meta and will update this article as soon as we receive a response.

Images | Xataka (Crossover), We-Vibe Toys (Unsplash), BaristaVision+

In Xataka | There are 60 countries that have signed an agreement for “open”, “inclusive” and “safe” AI. And two that don’t: the US and the United Kingdom
