Motorola phones that will update to Android 16: which models will get the next version

Here are the Motorola phones that will be updated to Android 16, now that the manufacturer has begun rolling out the new version of the operating system. If you have a phone from this brand, check the list below to see whether your model will receive Android 16. Motorola will update almost thirty models to the new version of Android, including both its flagships and other devices launched in the last three years. If your model appears on this list, you will receive the update in the coming weeks or months.

Motorola phones with Android 16

Motorola's policy is to cover two years of operating system updates. If your phone is on this list you will receive Android 16, although the wait will depend on the model: the latest high-end devices will be updated soon, while others may take several months. These are the phones that will be updated between the remainder of 2025 and the beginning of 2026:

- Motorola Edge 60 Pro
- Motorola Edge 60
- Motorola Edge 60 Fusion
- Motorola Edge 60 Stylus
- Motorola Edge 2025
- Motorola Edge 50 Ultra
- Motorola Edge 50 Pro
- Motorola Edge 50 Neo
- Motorola Edge 50 Fusion
- Motorola Edge 50
- Motorola Edge 40 Pro
- Motorola Razr 60 Ultra
- Motorola Razr 60
- Motorola Razr 2025
- Motorola Razr Plus 2025
- Motorola Razr 50 Ultra
- Motorola Razr 50
- Motorola Razr Plus 2024
- Motorola Razr 2023
- Moto G86
- Moto G86 Power
- Moto G56
- Moto G Power 2025
- Moto G 2025
- Motorola G Stylus 2025
- Moto G85
- Moto G75
- Moto G55
- ThinkPhone 25

Cover photo | Ivan Linares

In Xataka Basics | Android 16: 17 functions and some tricks of the new version of Google's mobile operating system

We already know how to retrieve the exact prompts that people use in AI models. It’s terrifying news

A group of researchers has published a study that once again raises alarm bells about privacy when using AI. They have managed to demonstrate that it is possible to know the exact prompt a user typed when asking a chatbot something, and that puts AI companies in a delicate position: they can, more than ever, know everything about us.

A terrifying study. If you are told that "language models are injective and, therefore, invertible," you will probably be shocked. That is the title of the study by European researchers in which they explain that large language models (LLMs) have a major privacy problem. And they have it because the transformer architecture is designed that way: each distinct prompt corresponds to a distinct embedding in the model's latent space.

A sneaky algorithm. To develop their theory, the researchers created an algorithm called SIPIT (Sequential Inverse Prompt via ITerative updates). The algorithm reconstructs the exact input text from the hidden activations (hidden states), with a guarantee that it does so in linear time. In other words: you can make the model "talk," easily and quickly.

What does this mean. It means that the answer an AI model gave you allows someone to find out exactly what you asked it. Strictly speaking, it is not the answer itself that gives you away, but the hidden states or embeddings the model computes on its way to the final answer. That is a problem, because AI companies store these hidden states, which would theoretically allow them to recover the input prompt with absolute accuracy.

But many companies already saved the prompts. True, but this injectivity creates an additional privacy risk. Many embeddings or internal states are stored for caching, for monitoring or diagnosis, and for personalization. If a company deletes only the plain-text conversation but not the stored embeddings, the prompt is still recoverable from them.
The study shows that any system that stores hidden states is effectively handling the input text itself.

Legal impact. There is also a dangerous legal component here. Until now, regulators and companies argued that internal states did not count as "recoverable personal data," but invertibility changes the rules of the game. If an AI company tells you "don't worry, we don't save your prompts" but it does save the hidden states, that theoretical privacy guarantee is worth nothing.

Possible data leaks. A priori it does not seem easy for a potential attacker to pull this off, because they would first need access to those embeddings. But a security breach that leaks a database of those internal hidden states would no longer count as an exposure of "abstract" or "encrypted" data; it would be a plain-text source from which, for example, financial data or passwords that a company or user typed into the AI model could be obtained.

Right to be forgotten. The injectivity of LLMs also complicates regulatory compliance with personal data protection rules such as the GDPR and the "right to be forgotten." If a user requests complete deletion of their data, a company like OpenAI must ensure that it deletes not only the visible chat logs but also all internal representations (embeddings). If any hidden state persists in any log or cache, the original prompt remains potentially recoverable.

Image | Levart Photographer

In Xataka | OpenAI is making the tech industry tie its destiny to its own. For the sake of the global economy, it had better work
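The inversion idea can be illustrated with a toy sketch. This is not the researchers' SIPIT implementation: the hash-based "model" and the tiny vocabulary below are invented for illustration. The point it demonstrates is the one the study relies on: if the hidden state is a deterministic, injective function of the prompt prefix, anyone holding the per-position hidden states can recover the prompt one token at a time by testing every vocabulary candidate and keeping the one that reproduces the observed state.

```python
import hashlib

VOCAB = ["the", "cat", "sat", "password", "is", "hunter2"]

def hidden_state(prefix):
    """Stand-in for a transformer's hidden state: deterministic and
    (for this toy vocabulary) injective in the input prefix."""
    return hashlib.sha256(" ".join(prefix).encode()).hexdigest()

def invert(states):
    """Sequentially recover the prompt from per-position hidden states,
    mirroring SIPIT's idea: at each position, try every candidate token
    and keep the one that reproduces the observed state. Cost is linear
    in sequence length (times vocabulary size)."""
    recovered = []
    for target in states:
        for tok in VOCAB:
            if hidden_state(recovered + [tok]) == target:
                recovered.append(tok)
                break
    return recovered

prompt = ["the", "password", "is", "hunter2"]
states = [hidden_state(prompt[:i + 1]) for i in range(len(prompt))]
assert invert(states) == prompt  # exact prompt recovered from hidden states
```

The same logic is why stored embeddings behave like plain text: nothing about the answer needs to be observed, only the internal states.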

There is a trick to make AI models more accurate: be rude to them

If you greet ChatGPT and thank it when it responds, you're not getting the most out of it. Researchers wanted to check whether the tone we use when asking an AI for things changes the results, and they discovered something interesting: being rude makes the answers more accurate.

Rude. As reported in How to AI, a study by researchers at the University of Pennsylvania analyzed whether the tone of a prompt affects the result, and the conclusions are clear: prompts with a "rude" or "very rude" tone elicited up to 4% more accurate responses than more polite ones.

The study. To test it, they generated a list of 50 questions on topics such as history, science and mathematics. Each question was asked in five different tones: very polite, polite, neutral, rude and very rude. The model they used was GPT-4o.

The results. The researchers ran ten rounds with all the questions in the different tones, and the conclusions are very clear. The difference between the neutral and rude tones is only 0.6%, but at the extremes it becomes more evident: with a "very polite" tone the average accuracy was 80.8%, while with "very rude" it rose to 84.8%.

Kindness by default. We tend to speak kindly to chatbots, as reflected in a survey Future conducted at the end of 2024. At least 70% of respondents admitted to saying "please" and "thank you" to AI chatbots. Many said they do it out of habit, culture or "because it is the right thing to do," although a small percentage admitted to being afraid that the robots might rebel in the future.

It is expensive. Whatever the reasons that lead us to be kind to AI, the reality is that "please" and "thank you" have an absurd cost. Thanking ChatGPT means extra requests to the language model, which increases electricity and water consumption in data centers.
We don't have exact figures, but Sam Altman has said that kindness has cost OpenAI "tens of millions of dollars well spent."

The prompt. Despite the enormous advances in AI, language models are still not 100% reliable. Often, though, the fault for inexact answers lies not with the model but with how we ask. There are tricks to writing a good prompt, and being overly friendly or using filler like "if you can, I would like to…" is one of the things to avoid. It is not about mistreating the model either, because that does not help; but the more direct and clear you are, the better the result will be.

Image | Pexels

In Xataka | AI agents want to take our jobs. First they will have to learn not to fail at 70% of their tasks
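The study's setup (the same questions rephrased in five tones, scored over repeated rounds, accuracies averaged per tone) can be sketched as a small harness. This is not the researchers' code: `ask_model` is a placeholder for a real chatbot call, and its per-tone probabilities simply mimic the reported extremes (80.8% vs 84.8%) so the loop has something to measure.

```python
import random

TONES = ["very polite", "polite", "neutral", "rude", "very rude"]

def ask_model(question, tone):
    """Placeholder for a real chatbot call (e.g. GPT-4o) plus an answer
    check: returns True if the answer was correct. Simulated here with
    per-tone accuracies roughly matching the article's figures."""
    accuracy = {"very polite": 0.808, "polite": 0.815, "neutral": 0.820,
                "rude": 0.826, "very rude": 0.848}[tone]
    return random.random() < accuracy

def run_study(questions, rounds=10):
    """Ask every question in every tone, over several rounds, and
    return the mean accuracy per tone."""
    results = {}
    for tone in TONES:
        correct = sum(ask_model(q, tone) for _ in range(rounds) for q in questions)
        results[tone] = correct / (rounds * len(questions))
    return results

random.seed(0)  # reproducible simulation
scores = run_study([f"q{i}" for i in range(50)])
```

Swapping the placeholder for a real API call (and a real answer checker) is all it would take to reproduce the methodology on another model.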

Apple's new M5 is a chip built for running AI models

It was a foregone conclusion that Apple would present its new Apple M5 sooner rather than later, and now it is here. Judging by its spec sheet, the chip represents a notable qualitative leap, with improvements across the board. The surprise is where the real change shows: the GPU, which is now much more powerful and is clearly prepared to work with AI models in a much more striking way.

Apple M5. Apple's new SoC uses third-generation 3nm photolithography, so there are no major changes in the manufacturing process, which is presumably more efficient and reliable. Even so, the changes in the chip are notable everywhere. To start, the CPU has six efficiency cores and four performance cores; according to Apple, it offers 15% more multicore performance than the M4. Working with local AI models through apps like LM Studio is going to be much more feasible thanks to this chip.

A GPU with a lot of potential for AI. If there is a standout element in this SoC, it is the GPU, which in the M5 has 10 cores, each including a Neural Accelerator. According to Apple, this allows AI workloads to run "dramatically faster," and it promises more than four times the performance of the Apple M4. There are also improvements in unified memory bandwidth, now 30% higher at 153 GB/s, something especially crucial for running AI models locally. The chip supports configurations with up to 32 GB of unified memory: not a particularly high figure, but we will undoubtedly see an M5 Pro and an M5 Max with much more room for maneuver here.

And also for gaming. There is a lot of good news in this GPU for those who plan to use it for local AI models, but there are also improvements for video games: graphics performance is up to 45% higher than the M4's GPU, and there is third-generation ray-tracing technology.

The Neural Engine is strengthened.
The new Neural Engine has 16 cores and, according to Apple, combines efficiency and performance. This means, for example, that the new Vision Pro, which also includes this chip, can transform 2D photos into spatial images in the Photos app quickly and easily.

Prepared for the future of Apple Intelligence. Apple Intelligence may not be a remarkable artificial intelligence platform today, but it is clear that Apple is preparing its devices for a scenario in which running AI models locally (and privately) is the norm. The recent iPhone 17 also made a striking leap in these capabilities with the A19/A19 Pro, and now we see the same move with an M5 that puts all the focus on AI.

Waiting for the M5 Pro/Max/Ultra? These chips can already be found in the first products that use them: the new 14-inch MacBook Pro, the iPad Pro and the Vision Pro (2025). Apple can be expected to present more powerful versions of the recently launched M5 soon, because that is what it has done in previous generations. Last year it launched the Apple M4 in May and the M4 Pro/Max at the end of October, five months later. In the previous generation, the M3, M3 Pro and M3 Max launched in October 2023, and the M3 Ultra appeared much later, in March 2025. The M4 has not seen an Ultra version (at least for now), but Apple may end up doing something similar to what it did with the M3. If it follows the previous scheme, the M5 Pro/Max would be announced in February or early March, and a hypothetical M4 Ultra could appear around the same dates. Rumors suggest, however, that these chips will arrive later, in mid-2026.

Future devices. Meanwhile, we can already enjoy the first devices carrying these chips.
It is also expected that Apple will offer new devices with the M5 chip in the middle of next year: that is when the Mac mini with M5, the iMac with M5 and the Mac Studio with M5 are expected to arrive, along with the MacBook Pro with M5 Pro/Max.

In Xataka | Apple has become a boring company. We wonder who will inherit its throne: Crossover 1×24
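A rough back-of-envelope shows why the 153 GB/s memory bandwidth figure matters so much for local LLMs: generating each token requires streaming essentially all of the model's weights from memory, so bandwidth caps the theoretical tokens per second. The sketch below is our own illustration under stated assumptions (an 8-billion-parameter model quantized to 4 bits); real throughput will be lower.

```python
def max_tokens_per_second(bandwidth_gb_s, params_billion, bytes_per_param):
    """Upper bound on decode speed for a memory-bound LLM:
    every generated token reads roughly all weights once."""
    model_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

# Apple M5 unified memory bandwidth per the article: 153 GB/s.
# Assumed workload: 8B parameters at 4-bit quantization (0.5 bytes/param).
m5_limit = max_tokens_per_second(153, params_billion=8, bytes_per_param=0.5)
assert 38 < m5_limit < 39  # ~38 tokens/s theoretical ceiling
```

The same formula explains why an M5 Pro/Max with higher bandwidth, rather than a faster CPU, is what would most improve local inference.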

These televisions are at outlet prices today (October 13): models from 169 euros

The television is, for the vast majority of users, one of those essential devices in any home. If you are thinking of replacing yours, or want one for a bedroom or even the kitchen, today we have found some TV bargains that may interest you:

- Daewoo 43DM56UV for 169 euros: 43-inch LED with the Android TV operating system.
- Toshiba 43UV3463DG for 229 euros: 43-inch Direct LED with 4K Ultra HD resolution.
- Philips 50PUS8010 for 329.99 euros: 50 inches, with Ambilight.
- Hisense 55E7NQ Pro for 499 euros: 55-inch QLED at 144 Hz.
- Samsung TQ75Q64DAUXXC for 949 euros: 75-inch QLED with Tizen.

Daewoo 43DM56UV

We begin our roundup with one of the cheapest televisions you can find on the Carrefour website, this Daewoo 43DM56UV, which offers a very good price-performance ratio. Its usual price is 219 euros, but it is now reduced to 169 euros. This cheap TV has a 43-inch LED panel with 4K Ultra HD resolution. Its two speakers deliver 16 W and it runs the Android TV operating system. It integrates WiFi, Bluetooth and Ethernet, and comes with three HDMI ports and two USB 2.0 ports. The price could vary. We earn commission from these links.

Toshiba 43UV3463DG

Another of the cheap TVs, available at Powerplanet, is this Toshiba 43UV3463DG. It usually costs 299 euros, but a 34% discount currently brings it down to 229 euros. This smart TV mounts a 43-inch Direct LED panel with 4K Ultra HD resolution. Its speakers deliver 16 W RMS and it runs the VIDAA operating system. It supports HDR10 and Dolby Vision, and is also compatible with Google Assistant and Alexa. Toshiba DLED 43″ 43UV3463DG UltraHD 4K VIDAA. The price could vary. We earn commission from these links.

Philips 50PUS8010

If you want a TV with Ambilight at home, this model on offer at PcComponentes will interest you.
Right now you can get the Philips 50PUS8010, a 50-inch model that usually costs around 350 euros but can be had for 329.99 euros. This TV mounts a 50-inch LED panel with 4K Ultra HD resolution. It runs the Titan OS operating system and features the Ambilight lighting system. Its two speakers deliver 20 W and support Dolby Atmos, and it comes with three HDMI 2.1 ports, two USB ports, and compatibility with Alexa and Google Assistant. Philips Ambilight 50PUS8010 4K LED Smart TV. The price could vary. We earn commission from these links.

Hisense 55E7NQ Pro

Another worthwhile television bargain at PcComponentes is this Hisense 55E7NQ Pro smart TV, now with a 28% discount: its usual price is 569 euros, but you can currently get it for 499 euros. It is a TV with a 55-inch QLED panel, 4K Ultra HD resolution, 178° viewing angles and HDR10+ support. Its speakers deliver 40 W and support Dolby Atmos and DTS Virtual:X. It has a 144 Hz refresh rate and AMD FreeSync Premium, making it ideal for gaming. Finally, it runs the VIDAA operating system. Hisense – QLED TV 139 cm (55″) Hisense 55E7NQ Pro, UHD 4K, Smart TV. The price could vary. We earn commission from these links.

Samsung TQ75Q64DAUXXC

And if what you are looking for is a large TV, this 75-inch Samsung Q64D is another of the bargains you can find right now at MediaMarkt. Its usual RRP is 1,079 euros, but it is now available for 949 euros. It mounts a QLED panel with a 75-inch diagonal and 4K UHD resolution. It supports HDR10+ and comes with Filmmaker Mode. It runs the Tizen operating system and has 20 W speakers. Finally, its connectivity is worth mentioning: WiFi 5, Bluetooth 5.2, Ethernet, three HDMI ports, two USB ports and an optical audio output. Samsung QLED Q64D 75″ 4K UHD Smart TV AirSlim Quantum HDR Q-Symphony. The price could vary.
We earn commission from these links. Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of unavailability, offers may vary.

Images | Webedia, Daewoo, Toshiba, Philips, Hisense and Samsung

In Xataka | Best home theater projectors: which one to buy, and five recommended models from 299 to 18,000 euros

In Xataka | Best soundbars for the money: which one to buy, and seven recommended models from 140 euros

Alibaba has one of the best open source AI models. Its next step: using it in robotics

Alibaba has taken another step in its commitment to artificial intelligence by creating an internal team dedicated to robotics, which will operate within Qwen, its AI model division. The Chinese giant, owner of some of the best open source AI models, now wants the Qwen team to apply its knowledge to robotics, a sector that is beginning to attract interest not only in industry but also, with the arrival of new projects, in the home.

Who leads the project. Justin Lin, technology lead at Qwen and an expert in multimodal models (capable of processing text, sound and images), confirmed the creation of this "small team for robotics and embodied AI" on his social networks. Lin has worked on the development of the Qwen models, which are currently among the most popular open source models globally.

The vision behind the move. According to Lin, multimodal AI models are evolving into "foundational agents" capable of performing complex long-term reasoning tasks thanks to reinforcement learning. "They should definitely make the leap from the virtual world to the physical world," the executive explained, making clear the intention to apply these technologies in tangible devices.

Alibaba's big bet. This announcement is part of Alibaba's broader strategy in the sector. Last month, the company led a 140-million-dollar financing round in X Square Robot, a Chinese robotics startup. In addition, its CEO Eddie Wu estimates that global investment in AI will reach $4 trillion over the next five years, a figure that reflects the sector's expectations.

Global competition. Alibaba is not alone in this race. Nvidia and SoftBank are also making significant moves in smart robotics. SoftBank just announced the acquisition of ABB's industrial robotics business for $5.4 billion, while Nvidia CEO Jensen Huang has described the combination of AI and robotics as a "multi-billion dollar" long-term growth opportunity.
China is also the world's leading power in the robotics sector: in 2024 alone, Chinese factories installed nearly 300,000 industrial robots, more than the rest of the world combined.

The Qwen factor. The choice to place this team within Qwen makes complete sense. Seven models of the Qwen series are currently in the Hugging Face top 10, with the multimodal model Qwen3-Omni in first place. This strength in AI gives the company a solid foundation for developing advanced robotic applications on top of the experience it already has with Qwen.

Cover image | zhang hui and Possessed Photography

In Xataka | AI companies have just encountered an unexpected challenge: insurers have started to turn their backs on them

One of the most downloaded iPhone apps pays you to record your calls to train AI models. It is a security disaster

The sale of personal data is not a hypothesis; it is an expanding reality. Just look at Spotify: a service recently appeared that paid those who handed over their profile and listening summaries so it could resell them to technology companies. The approach was as simple as it was disturbing, because it turned something as innocent as our musical habits into a business. Neon repeats the scheme but moves it to much more sensitive terrain, telephone calls, where intimacy itself becomes the product.

We are talking about an app that decided to turn phone calls into the new digital gold. Its pitch is direct: "talk, record and get paid." It promises users they can earn "hundreds or even thousands of dollars a year" simply by allowing their conversations to become training material for artificial intelligence systems. The hook worked: in a matter of days it went from irrelevance to the top three of the Social Networking category in the United States App Store.

How Neon works. Neon's mechanism is designed so that every call translates into money. It promises to pay 30 cents per minute when two users of the app talk to each other, 15 cents if the call is with someone external, and it sets a daily cap of 30 dollars. On top of this it adds a referral system that pays 30 dollars for each new user. According to its policy, the recording always covers the caller's side and, when both parties use Neon, both sides of the call.

Conditions of use. Beyond the payments, Neon's true reach lies in its terms of service. There, users grant the company a "worldwide, exclusive, irrevocable and transferable" license to their recordings. This license includes the rights to sell, modify, create derivative works from, and distribute the audio in any format, present or future. Added to this is a section on beta features, offered without guarantees or liability in case of failures. The breadth of that assignment makes it difficult to foresee how far the use of the recordings could go.

Where it is available and how popular it is.
Neon's initial success was as fast as it was unexpected. At the time of writing, it is number 2 among the most downloaded social apps in the United States App Store. The application, however, seems restricted to that market: in tests carried out from Spain it does not appear among those available, nor does it allow download.

The security failure. The story took an unexpected turn when a technical analysis revealed that Neon did not protect its own users' information. As TechCrunch discovered, it was enough to create an account and inspect network traffic with a tool like Burp Suite to access other users' data. Shortly after being notified, the founder shut down the servers and sent an email announcing a pause "for security," without mentioning the leak. What was exposed was especially sensitive:

- Telephone numbers associated with accounts
- Public links to audio recordings
- Complete call transcripts
- Metadata with duration, date and payments earned

That telephone numbers, recordings and transcripts were accessible is no minor failure. With this data, private conversations could be reconstructed and tied to specific people. The risks range from identity theft attempts to the creation of synthetic voices.

What Neon says versus what we know. Neon maintains that its processes protect users: anonymization of conversations, removal of personal information, and sales only to vetted companies. However, the flaw showed that these systems are not infallible. The official communication after the temporary shutdown spoke of "adding extra security layers" but omitted any acknowledgment of the leak. Neon's fall does not erase the underlying question: what price does our intimacy have when artificial intelligence demands ever more data? The pay-for-calls model may reappear in other forms and other markets, because the need to train systems will keep growing.
What happened in the United States is an early warning that we are not talking about science fiction, but about real proposals already knocking on users' doors. The decision, ultimately, is personal.

Images | Xataka with Gemini 2.5 | Screen capture | Neon

In Xataka | A new generation of robots promises precision and efficiency. It also opens the door to cybersecurity risks
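TechCrunch's description matches a classic broken-access-control bug, often called an insecure direct object reference (IDOR): the API returned any user's recordings without checking who was asking. Below is a minimal sketch of the flaw and its fix; the handlers and data are hypothetical illustrations, not Neon's actual code.

```python
RECORDINGS = {  # hypothetical server-side store
    "rec1": {"owner": "alice", "transcript": "...", "phone": "+1 555 0100"},
    "rec2": {"owner": "bob", "transcript": "...", "phone": "+1 555 0199"},
}

def get_recording_insecure(recording_id, authenticated_user):
    # BUG: any logged-in user can fetch any recording just by knowing
    # (or enumerating) its ID -- the session is checked, the object is not.
    return RECORDINGS.get(recording_id)

def get_recording_secure(recording_id, authenticated_user):
    # FIX: authorize the object, not just the session.
    rec = RECORDINGS.get(recording_id)
    if rec is None or rec["owner"] != authenticated_user:
        return None  # a real API would return 403/404 here
    return rec

assert get_recording_insecure("rec2", "alice") is not None  # the leak
assert get_recording_secure("rec2", "alice") is None        # blocked
```

The lesson generalizes: every endpoint that takes an object ID needs a per-object ownership check, which is exactly what intercepting traffic with a proxy like Burp Suite makes trivial to test.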

Alibaba is becoming the open source AI standard-bearer. Its Qwen family of models is turning the market upside down

The Chinese giant Alibaba has officially launched Qwen3-Omni, an open source artificial intelligence model that can process text, images, audio and video simultaneously. In fact, it is the first model that unifies these four modalities natively, and it does so completely free of charge, something none of its US competitors offers.

A bet on open code. While OpenAI and Google charge for their most advanced multimodal models, Alibaba releases its own under the Apache 2.0 license. This means any company can download it, modify it and use it commercially at no cost. This open source approach is the trend several Asian giants are adopting to generate global interest in their language models and to get developers around the world contributing to their evolution. It is part of China's strategy to remain relevant in the AI race. Image: Alibaba

What it can do, exactly. As the company notes, Qwen3-Omni processes text in 119 languages, recognizes speech in 19 languages and can speak in 10. Its "thinker-talker" architecture separates reasoning from audio generation, promising real-time responses with latencies of just 234 milliseconds for audio and 547 milliseconds for video.

Benchmarks. Across 36 reference tests, Qwen3-Omni beats other open source models in 32 of them and sets new overall records in 22. In advanced mathematics (AIME25) it scores 65 points versus GPT-4o's 26.7. In writing tasks (WritingBench) it scores 82.6, beating GPT-4o's 75.5. It is true that it is not being compared against OpenAI's most cutting-edge model to date (GPT-5), but what giants like Alibaba are achieving with their free, open source models is a real accomplishment.

Strategy. Alibaba is running a risky but intelligent play: democratize multimodal AI to gain market share. "This could bring some changes to the landscape of omni open source models," the Qwen team explained.
The announcement comes just as Nvidia announces investments of 100 billion dollars in data centers for OpenAI, while Alibaba and the rest of the Asian giants prefer to contest technological leadership in AI from another angle.

What it means. The big American tech companies have opted for proprietary models that generate direct revenue. Alibaba wants to change the rules by giving millions of developers instant access to its technology. Even offered for free, it is building an ecosystem that gives it a long-term competitive advantage.

And now what. China is not the only one releasing open models: OpenAI has gpt-oss and Google has Gemma, two options developers can use to deploy their ideas, modify them and contribute to their evolution, although they are not the main focus of either company. In the case of models from Alibaba, DeepSeek or Tencent, the idea really does revolve around open source, and their hand does not tremble when offering their most powerful models for free (even if some more complete, specialized options are reserved for special agreements). The Qwen models have earned a great reputation in recent years, and this new evolution of the family raises the bar for the rest of the industry, not only in efficiency but in how this business model is deployed.

Cover image | Alibaba and Growika

In Xataka | Eight people. An hour of work. A one-dollar budget. 5,000 new podcasts thanks to AI

Meta presents three new smart glasses: this is how the new models differ

Mark Zuckerberg says that within five years we will use smart glasses more and the smartphone less. We do not know if it will turn out that way, but what we do know is that Meta wants to lead this wearable category, and it has just made that even clearer with new smart glasses models, three to be exact. This is Meta's glasses catalog.

Meta Ray-Ban Display. These are the most advanced model in the catalog, and they address one of the shortcomings smart glasses had been dragging along: adding visual information. The new Meta Ray-Ban Display are the first with an interior screen that only the wearer can see. The screen, set in the right lens, projects information such as maps when you are following a route, video calls, previews of the photos taken with the glasses, and even real-time translation. And that is not all: the Meta Ray-Ban Display also come with what Meta calls a "neural wristband." It works through electromyography and can translate muscle activity into concrete actions. For example, making a pinch gesture with thumb and index finger acts like a mouse click, while thumb and middle finger goes back. For now, the new Meta glasses with a display will only be sold in the United States, where they cost $799 and arrive in very limited quantities. They are expected to reach more countries in early 2026, although for now we do not know whether Spain will be among the chosen countries.

Ray-Ban Meta 2nd generation. These are the successors of the original Ray-Ban Meta. As for design, the first generation was available in the Wayfarer and Skyler styles; the new one adds the Headliner style. The second generation improves some key features such as the camera, which can now record in 3K, and the battery, which lasts twice as long: according to Meta, up to 8 hours on a single charge. They are only two improvements, but their impact on the price is quite noticeable.
While the first generation had a starting price of 329 euros, the new one rises to 419 euros.

Oakley Meta Vanguard. These are the successors of the Oakley Meta HSTN, a model with sporting aspirations but a design too "lifestyle," which left it in no man's land. The new Oakley Meta Vanguard bet on a much more aggressive look and a sporty design that adapts to the face. Different colors are available; the downside is that there is no option for prescription lenses. They have a 12-megapixel wide-angle camera and record 3K video at 30 frames per second or FullHD at 60 frames per second. The speakers housed in the temples have also been improved, and Meta says they have been optimized so we can hear them well even in wind. The battery reaches 9 hours, and there is a charging case that extends autonomy up to 36 hours. Beyond the design, the glasses also have sports-oriented functions such as integration with Strava and Garmin. They can also connect to Apple and Android health platforms to deliver training summaries directly in the Meta AI app. If you also have a Garmin watch, you can ask the glasses for details about your training, such as your heart rate. The price is much higher: specifically 549 euros, compared to the 439 euros the Oakley Meta HSTN cost.

Meta smart glasses specifications. Finally, here is a table with the key specifications of the AI glasses models Meta has in its catalog, including the ones we already knew. For the first- and second-generation Ray-Ban we have taken the Wayfarer style as the reference for dimensions and weight.
| | Ray-Ban Meta 1st gen | Ray-Ban Meta 2nd gen | Meta Ray-Ban Display | Oakley Meta HSTN | Oakley Meta Vanguard |
|---|---|---|---|---|---|
| Weight | Glasses: 48 g standard, 50 g large; Case: 133 g | Glasses: 51 g standard, 53 g large; Case: 133 g | Glasses: 69 g standard, 70 g large; Wristband: 42 g; Case: 169 g | Glasses: 53 g; Case: 213 g | Glasses: 67 g; Case: 258 g |
| Supported lenses | Sun (several tints), Transitions (self-darkening), clear, prescription | Sun (several tints), Transitions (self-darkening), clear, prescription | Sun (several tints), Transitions (self-darkening), clear, prescription | Prizm (mirrored), Transitions (self-darkening), clear, prescription | Prizm (mirrored) |
| Audio | Dual speakers (76.1 dB), five microphones | Dual speakers (76.1 dB), five microphones | Dual speakers (76.1 dB), six microphones | Dual speakers (76.1 dB), five microphones | Dual speakers (82.1 dB), five microphones |
| Camera | 12 MP ultra-wide, FullHD 30 fps video | 12 MP ultra-wide, 3K 30 fps video, FullHD 60 fps | 12 MP ultra-wide, FullHD 30 fps video | 12 MP ultra-wide, 2203×2938 px at 30 fps | 12 MP wide-angle, 3K 30 fps video, FullHD 60 fps |
| Battery | 4 hours; 36 hours with charging case | 8 hours; 48 hours with case | 6 hours; 24 hours with case; Neural Band up to 18 hours | 8 hours; 48 hours with case | 9 hours; 36 hours with case |
| Storage | 32 GB | 32 GB | 32 GB | 32 GB | 32 GB |
| Other | IPX4 resistance | IPX4 resistance | IPX4 resistance; monocular display, 600×600 px, 90 Hz; neural wristband with EMG gestures | IPX4 resistance | IP67 resistance; compatible with Strava and Garmin |
| Price | From 329 euros | From 419 euros | From $799 (US only) | From 439 euros | From 549 euros |

Images | Meta

In Xataka | After mobile phones, cars, robots and AI, the next great technological avalanche from China arrives: glasses

There are users selling their Wrapped data to train AI models. Spotify has not taken it well

Did you ever think you could make money from your Spotify data? What for many is just an annual summary of favorite songs and artists is, for others, a source of real value. A growing group of users has decided to sell their listening history to train artificial intelligence models. Spotify, which has turned Wrapped into one of its most recognizable brands, has been quick to react, and the tension is palpable.

Wrapped was born in 2015 as a tool to visualize listening habits and became a global tradition. Today, its popularity has given rise to Unwrapped, a collective managed by the decentralized platform Vana. More than 18,000 users have pooled their data to sell it to interested companies. According to Ars Technica, in June they closed their first sale: a portion of the data went for $55,000 to an AI company, with an average payout of about five dollars in cryptocurrency per participant.

Wrapped is no longer just marketing: it is the center of a fight over data control

The interest in this data is no accident: Spotify collects a detailed history for each account, including searches, devices, approximate locations and technical records. All of that is available to any user through an official download option, which delivers packages in JSON format with years of history and precise data about how the platform is used. But although access is legal, its use has clear limits under Spotify's terms. The developer policy prohibits using Spotify data or content to train artificial intelligence models, reselling the information, or replicating essential functions of the platform without permission. Spotify confirmed to the aforementioned outlet that it sent a letter to those responsible for Unwrapped warning that the project could infringe its registered trademark and violate these policies. The company insists that users can download their data, but it does not authorize monetizing it through third parties.
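To give a sense of why this export is valuable, here is a minimal Python sketch that aggregates listening time per artist from a JSON history file. The field names (`endTime`, `artistName`, `trackName`, `msPlayed`) and the inline sample are assumptions modeled on the commonly reported shape of Spotify's streaming-history export, not an official schema.

```python
import json
from collections import Counter

# Hypothetical sample in the shape of a Spotify streaming-history export.
# Field names are assumptions; the real export may differ.
sample = json.loads("""
[
  {"endTime": "2024-01-03 10:15", "artistName": "Artist A", "trackName": "Song 1", "msPlayed": 210000},
  {"endTime": "2024-01-03 11:02", "artistName": "Artist B", "trackName": "Song 2", "msPlayed": 95000},
  {"endTime": "2024-01-04 09:40", "artistName": "Artist A", "trackName": "Song 3", "msPlayed": 180000}
]
""")

def listening_minutes_by_artist(entries):
    """Sum total listening time, in minutes, per artist."""
    totals = Counter()
    for entry in entries:
        totals[entry["artistName"]] += entry["msPlayed"] / 60000
    return dict(totals)

print(listening_minutes_by_artist(sample))
```

Aggregations like this, multiplied across 18,000 accounts and years of history, are exactly the kind of broad behavioral dataset the article describes AI companies buying.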
The Unwrapped page, which until recently allowed users to register and sell their data, is no longer available. At the time of publishing this article, it shows a message that reads: "This service is no longer available. We regret the inconvenience. If you have any questions, get in touch with support." It is not clear whether the closure is a response to Spotify's actions or a decision by the developers, who have given no explanation. Unwrapped was born, according to its creators, to give users more control over their data, although its future is uncertain. Vana, the platform behind it, argues that the project makes it possible to pool information as a community and collectively negotiate its use. They explain that a single user is unlikely to sell their data on their own: companies look for broad datasets, such as the ones Vana assembles, and the income is then distributed among the participants.

Wrapped remains, for most users, a fun annual summary to share. But the interest in monetizing this data makes it clear that the debate over who controls personal information is far from settled. Spotify has expressed reservations about projects that use its services outside what is allowed, while decentralized proposals try to gain traction. It remains to be seen what will finally happen to this project and others like it.

Images | Spotify | Xataka with Gemini 2.5  In Xataka | Google has resolved the dilemma of age verification on the Internet in the most disturbing way possible: spying on you to protect you
