Teaching AI to sound human

In recent months, many of us have spoken to an artificial intelligence without thinking much about it. We have asked it questions, asked it for advice, or simply tested how far its ability to hold a natural conversation goes. Tools like the voice modes of ChatGPT or Gemini have brought that experience closer to something that, not so long ago, seemed reserved for science fiction, with inevitable echoes of ‘Her’. But there is one question we rarely ask ourselves while talking to them: how have these machines learned to sound less and less like a system and more like a person?

To understand it, it helps to separate what we see from what we do not see. On one side are the applications we use daily, those assistants that respond with an increasingly natural voice. On the other, the systems that support them: models trained with large volumes of data that need to learn not only what to say, but also how to say it. We do not know which specific products end up using this type of recording, but we do know that it is part of the ecosystem with which increasingly fluid and credible voice systems are trained.

The human hand behind an artificial voice. When we get down to the details, what these workers do bears little resemblance to the classic idea of “training an AI.” In many cases, it involves having conversations with strangers about seemingly trivial topics, from everyday tastes to open questions that require developing an answer. In others, the assignment is more demanding: playing a role, following a script without seeming to, or entering emotional terrain. Bloomberg recounts, for example, the case of a worker who shared painful memories of her life while speaking with a man who introduced himself as a pastor and who, within the exercise, played the role of therapist. All that recorded material serves a very specific purpose: capturing nuances.
We are not just talking about words, but about pauses, breaths, changes in tone, hesitations and emotional reactions that make a conversation sound human. There are also labeling tasks, in which workers have to distinguish whether an audio clip contains a sob, a laugh, or someone talking through laughter. The underlying logic is simple: if a machine is to stop sounding robotic, it first needs to be exposed to how we really speak.

From there, the question is inevitable: how do you access this type of job, and how much do you really earn? Platforms like Babel Audio act as intermediaries that connect these workers with specific projects. After passing an initial voice test, they can opt for tasks that start at around $17 per recorded hour, although the final income depends on the evaluation received and the volume of work available. Income also varies greatly: a worker cited by the aforementioned outlet claims to earn about $600 a week.

As we go deeper, the work begins to show a less visible side. Beyond the rates and the promise of flexibility, the testimonies point to an environment marked by uncertainty and constant monitoring. Platforms can limit access to tasks, interrupt projects or suspend accounts without detailed explanations, leaving many workers in a fragile position. In addition, each conversation is subject to real-time metrics that assess whether someone speaks too much or too little, their expressiveness, language proficiency, the depth of the exchange and even the length of pauses.

When we broaden the focus, the debate stops being purely labor-related and becomes personal as well. Part of the value of these recordings lies precisely in the fact that they capture how we speak and how we relate to one another, which means that workers are providing more than a mechanical task.
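To make the kind of real-time metrics described above more concrete, here is a minimal sketch of how talk-time balance and pause length could be computed from timestamped speech segments. This is purely illustrative: `Segment` and `talk_metrics` are invented names, and nothing here reflects how Babel Audio or any other platform actually implements its scoring.

```python
from dataclasses import dataclass


@dataclass
class Segment:
    """One uninterrupted stretch of speech by a single speaker."""
    speaker: str
    start: float  # seconds from the beginning of the call
    end: float


def talk_metrics(segments):
    """Return each speaker's share of total talk time and the
    average silent gap between consecutive speech segments."""
    ordered = sorted(segments, key=lambda s: s.start)

    # Total speaking time per participant.
    talk = {}
    for s in ordered:
        talk[s.speaker] = talk.get(s.speaker, 0.0) + (s.end - s.start)
    total = sum(talk.values())
    share = {spk: t / total for spk, t in talk.items()}

    # Silent gaps between the end of one segment and the start of the next.
    gaps = [b.start - a.end for a, b in zip(ordered, ordered[1:]) if b.start > a.end]
    avg_pause = sum(gaps) / len(gaps) if gaps else 0.0
    return share, avg_pause
```

A scoring system along these lines could, for instance, flag a conversation where one participant accounts for 90% of the talk time, or where pauses routinely exceed several seconds.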
The terms generally allow those recordings to be used in voice assistants, speech synthesis, and “other audio-related products and services.” When we connect all the pieces, what we see is an industry that works thanks to a complex production chain. The Pulitzer Center describes this ecosystem as a fragmented work network in which workers are usually subject to confidentiality agreements, operate with very little transparency and, in many cases, do not even know what system they are training or what company their work ends up serving. In this context, the conversations that feed voice systems are only one part of a larger machine, where each task contributes to building increasingly sophisticated technologies.

In 1994, a programmer created a “temporary” interface for Windows. Three decades later it is still with us

Windows is one step away from turning 40. The first version of the operating system appeared in November 1985, and it has not stopped evolving since. However, Microsoft tends to take a long time to update some components of its products. With Windows 10, for example, it released a renewed user interface, but it was not until years after launch that it began to retire some icons dating back to the Windows 95 era. Now, in Windows 11, it is renewing programs such as Paint and Notepad.

Regardless of how modern Windows 11 may feel, and all the new features that arrive with its updates, the system still retains some elements we could classify as historical. Among them is the utility for formatting disks. Currently, if you format a storage drive from Windows 11, you will find a pop-up window practically identical to the one you could see decades ago. In fact, we know exactly who created it.

The Format drive dialog in Windows 10

A former Microsoft programmer named Dave Plummer recently shared some interesting facts about this part of the operating system. The now entrepreneur says he created the Format dialog box on a rainy morning in late 1994. At the time, they were migrating millions of lines of user-interface code from Windows 95 to Windows NT, and the formatting section differed greatly between the two systems, so a new user interface was needed. Plummer took on the task. The programmer did not set out to do a definitive job, but to provide a temporary solution with the help of a sheet of paper, a pen, Visual C++ 2.0 and the Resource Editor. “It wasn’t elegant, but it would do until the elegant user interface arrived,” he says. Plummer also set the 32 GB limit for formatting FAT volumes that morning. It is a curious detail, because FAT32 is capable of handling larger volumes, although creating volumes of that size requires resorting to the command line.
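As a back-of-envelope check on why 32 GB was Plummer's chosen cap rather than a limit of the on-disk format: FAT32 addresses clusters with 28-bit numbers, and with the largest standard cluster size of 32 KiB the theoretical ceiling lands in the terabytes. A rough calculation, ignoring the handful of reserved cluster values and any implementation-specific limits:

```python
# FAT32 stores cluster numbers in 32-bit fields but only uses 28 bits,
# so roughly 2**28 clusters are addressable.
max_clusters = 2 ** 28
cluster_size = 32 * 1024           # 32 KiB, the largest standard FAT32 cluster
max_volume_bytes = max_clusters * cluster_size

print(max_volume_bytes // 2 ** 40)  # theoretical ceiling in TiB -> 8
```

In other words, the format itself allows volumes orders of magnitude larger than the 32 GB the dialog enforces.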
The disk formatting utility's interface appeared in Windows NT-based operating systems such as Windows 2000 and Windows XP, and it has been with us ever since. Throughout all this time it has essentially remained a temporary solution created in 1994.

Three months ago, Australia banned social media for those under 16. It is already investigating possible breaches

Just three months ago, Australia launched one of the most ambitious regulations proposed so far on social networks and minors. The measure came into force on December 10, 2025, with a clear message: force platforms to prevent those under 16 from having accounts and give families back part of the control over the digital lives of the youngest. From the first moment it was presented as a pioneering initiative, but something important was also acknowledged from the start: applying it was not going to be easy.

The first doubts. The rule has already entered its most delicate phase: checking whether it is really being applied as planned. The eSafety regulator has opened the first formal review and has put platforms such as Facebook, Instagram, Snapchat, TikTok and YouTube under scrutiny. The agency speaks of “significant concerns” and points to failures in control mechanisms. It also notes that current systems are not effectively preventing those below the age threshold from continuing to open new accounts.

How minors are sneaking in. The report goes beyond a general warning and focuses on very specific failures in the control systems. It found that there are not enough safeguards to prevent users under the permitted age from creating new accounts, but also something more striking: some platforms allow the verification process to be repeated until the user manages to pass it. In certain cases, these profiles are even invited to prove they meet the age requirement after having indicated that they do not, which reveals inconsistencies in how the controls are applied.

A problem that was already anticipated. The difficulties in applying the rule have not arisen now; they were on the table from day one. When the law came into force, the Australian government itself admitted that its implementation would not be perfect, and the first signs pointed in that direction.
According to ABC, some minors managed to bypass the verification systems with basic tricks, such as altering their appearance in facial checks. The outlet also warned that parents and older siblings could help some children get around the restrictions, an early sign that the challenge was not just passing the law, but making it actually work.

What is at stake for the platforms. The investigation opened by eSafety does not stop at a diagnosis; it opens the door to possible sanctions if it is shown that companies have not taken reasonable measures to prevent minors covered by the rule from having an account. Reuters points out that fines can reach 49.5 million Australian dollars and affect the aforementioned services and platforms. The regulator has already begun collecting evidence and hopes to close at least part of its investigations by mid-year, which places technology companies in a scenario where non-compliance is no longer just a reputational risk.

The Spanish mirror. What is happening in Australia helps put into context a debate that has also gained weight in Spain, although here it is at a different stage. Pedro Sánchez announced in February that the Spanish government wants to ban access to social networks for minors under 16 as part of a broader package of measures on age verification, traceability of hate speech and the responsibility of technology executives. The key difference is that this ban has not come into force and is not being enforced. Still, the Australian case offers a useful reference for anticipating the kind of challenges that may appear when such a measure moves from political announcement to actual implementation.
