The trauma of thousands of underpaid workers in developing countries

Data labeling is a necessary step for machine learning models to understand data and learn from it. It is the business of Scale AI, Alexandr Wang's company, recently valued at $29 billion. However, not everyone involved in data labeling enjoys that status: much of the work is done by workers in poor countries, is badly paid, and involves very unpleasant tasks.

What's happening. The advancement of artificial intelligence requires an enormous amount of data labeling. AFP reports that this work is usually done by workers in impoverished countries such as Kenya, Colombia or India. In addition to being very poorly paid, the job often requires them to review very unpleasant images. For example, for an AI to write an autopsy report, labelers must view hundreds of images of real crimes.

The work. It consists of reviewing and labeling files, most often images. It does not require a degree, just knowing how to use a computer and demonstrating that you can think analytically. That ease of access means many people in vulnerable situations turn to this type of work. The problem is that, to earn a decent salary, they have to work long hours, up to 16 hours a day in some cases, and the content they must label is often violent and extreme.

AI moderators. It is a situation similar to the one that moderators on different platforms have been denouncing for years. We recently covered the lawsuit that a former Chaturbate moderator filed against the company. There are other cases, such as the Facebook moderators in Barcelona who sued the company over the psychological trauma caused by filtering all that content.

Invisible. The data labeling market generated $3.8 billion in 2024 and is expected to grow to $17 billion over the next five years. However, those who make it possible work in very poor conditions. A Colombian worker told AFP that data labelers are "like ghosts. No one knows we exist even though we are contributing to the technological progress of society."

Better conditions. There is no legislation in Kenya yet, but data labelers have organized to push for regulation and better working conditions. They denounce the lack of psychological support they receive and demand formal employment contracts, a fair salary that reflects their work, and the fundamental right to rest. This mobilization seeks to guarantee a more dignified work environment and protect the rights of these essential workers in the artificial intelligence industry.

The platforms. The most mentioned is Remotasks, a subsidiary of Scale AI that has been the subject of protests in countries such as Kenya, Venezuela and the Philippines over non-payment and problematic practices. The company defends itself, insisting that it offers "fair and competitive remuneration." Last year it closed its doors in Kenya after workers complained publicly. There are others, such as the Australian company Appen, or Sama, a subcontractor for companies like Meta and OpenAI that was sued in Kenya over poor working conditions and also ceased its activity there.

The human cost. There is growing concern about the environmental impact of artificial intelligence, which requires large amounts of energy, especially for training and running complex models. But the cost is not only measured in energy and natural resources: there is also a significant human cost that seems to go largely unnoticed.
Image | Christina Morillo, Pexels

In Xataka | There are 60 countries that have signed an agreement for "open", "inclusive" and "safe" AI. And two that haven't: the US and the United Kingdom

There are people whose job is moderating porn on sites like Chaturbate. And some are suing over psychological trauma

When we talk about adult content sites, Chaturbate is not only one of the best-known names, it is also one of the most popular free webcam platforms in the world. Today it is in the news because one of its moderators has sued the company. The reason: he suffers from post-traumatic stress.

The case. Neal Barber was hired in 2020 by Bayside Support Services and Multi Media LLC, the company that owns Chaturbate, as a "customer service risk supervisor" (a moderator). His job is to review the content that is published and make sure it complies with the law. On July 22, Barber filed a lawsuit in which he claims he suffers psychological trauma from being exposed daily to "extreme, violent, graphic and sexually explicit content." Although for now only Barber appears in the complaint, the case opens the door for other moderators hired in the last four years to join it.

Unprotected. Moderators are the first line of defense against illegal content, which, according to the lawsuit, on Chaturbate often includes "child exploitation, non-consensual, violent, obscene or self-harm content." Barber claims that Chaturbate does not protect the mental health of its employees, something that is standard in the sector. In statements to 404 Media, a Chaturbate representative said the company "deeply values the work of its moderators and is committed to supporting the team responsible for this fundamental work," but did not say how it plans to do so.

Consequences. Without any protection, the plaintiff developed post-traumatic stress disorder and is currently under treatment. Being exposed daily to this content "designed to cause trauma" has resulted, according to his lawyer, in "vivid nightmares, emotional distancing, panic attacks and other symptoms compatible with post-traumatic stress disorder." And he is not the only one.

Years of fighting. Content moderators have been complaining about their working conditions for years, especially about the psychological toll, and not only on adult content platforms: it also happens in the mainstream apps we use every day. One of the first cases happened in 2017, when two employees of Microsoft's online safety team reached their limit after watching a video of a rape and subsequent murder. In 2020 a moderator sued YouTube, and in 2021 the same happened at TikTok. The reasons repeat themselves: the platforms failed to protect the mental health of their employees. Meta currently faces lawsuits in Kenya and also in Barcelona over the psychological consequences and poor working conditions.

Solutions. Protections for moderators already exist, but not all companies apply them. In the lawsuit, Chaturbate is accused of not providing standard practices such as "content filters, wellness breaks, specialized counseling or peer support systems." Nor does it follow other common practices, such as rendering videos in grayscale or muting videos that play automatically. According to Checkstep, a company dedicated to moderation, AI can also be an effective way to reduce exposure. In addition to speeding up the work, by integrating advanced recognition systems, moderators can "focus on more complex cases and minimize exposure to potentially harmful material."

In Xataka | Meta follows in X's footsteps: we no longer just work for it by writing, now we will also work for it by moderating
