“Probably sometime in the 2030s, when you will have your phone with you but it will stay in your pocket longer.” That prediction was made by Mark Zuckerberg in December 2024. His bet was clear: the Ray-Ban Meta family of glasses would be so good that the mobile phone, and above all its screen, would fade into the background. Both OpenAI and Apple seem to agree, and the latest rumors point to wearables with two defining traits: a lot of AI and zero screens.
Apple prepares its AI wearable. New information reported by The Information indicates that Apple is developing a new AI device. Specifically, the report describes a wearable equipped with two cameras and three microphones. What it does not have is a screen, and there is talk of a format similar to that of the current AirTags. The Cupertino company intends to put it on sale in 2027, albeit with a moderate launch of about 20 million units.
OpenAI goes all out with its own “AirPods”. Months ago Sam Altman and Jony Ive joined forces to create AI hardware, and now we know that the firm will present it before the end of the year, although it is not clear whether it will go on sale then. The rumors point to headphones that could compete with AirPods and that would bring us a little closer to the future that the movie ‘Her’ already painted. As we know, the company already stole a controversial idea from that film.
Apple knows a thing or two about wearables. Above all, because it sells them like hotcakes. Its Apple Watch and AirPods alone generate sales close to $40 billion a year (in both 2023 and 2024). AirTags are also an important part of that equation, and Project Atlas, its rumored Ray-Ban Meta-style glasses, enlivens this segment even more.
But screens dominate us. Many are searching for a product that can make us forget (a little) about our phones, but no one has achieved it. Even the wearables that have succeeded in the market are basically accessories for our smartphones. In a world in which we never stop consuming image and video content, as TikTok, Instagram and YouTube demonstrate, it will be difficult for a screenless wearable to displace the phone, laptop or PC.
Remembering the Humane AI Pin and the Rabbit R1. It is true that AI has evolved and improved, but we have already lived through a first wave of promises with two AI wearables that failed miserably. The Humane AI Pin and the Rabbit R1 wanted to ride that fever and expectation to get ahead of the technology giants, but both showed that this kind of hardware is extremely complex. Their products, even with interesting and original ideas, turned out to be immature, with performance well below what was promised.
Hello, ambient computing. Those two products relied on voice as the main way to interact with technology, but AI was not yet ready to meet that expectation. That is changing, and we are already seeing AI agents do more and more things and “connect” to other applications thanks to technologies such as MCP (Model Context Protocol). Ambient computing is the idea of a technology that is present but invisible, one that responds to voice or context without the need for a physical interface. The idea is to go from before to after:
- Before: tapping through screens to reserve a table at a restaurant.
- After: telling the wearable “reserve a table at the restaurant” and letting the AI agent complete the task through conversation.
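The before/after shift above can be sketched in code. The following is a minimal, hypothetical illustration of voice-driven task completion: a spoken request is matched to a registered “tool” and executed with no screen involved. Every name here (the `tool` decorator, `reserve_table`, the keyword matching) is an illustrative assumption, not a real MCP or assistant API.

```python
# Minimal sketch of screenless, voice-driven task completion.
# All tool names and the matching logic are illustrative assumptions.
from typing import Callable, Dict

# Registry mapping intent keywords to tools the agent can invoke.
TOOLS: Dict[str, Callable[[str], str]] = {}

def tool(keyword: str):
    """Register a function as a tool triggered by a keyword."""
    def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[keyword] = fn
        return fn
    return decorator

@tool("reserve")
def reserve_table(request: str) -> str:
    # In a real agent this would call a booking service on the user's behalf.
    return f"Done: completed the request '{request}'"

def handle_voice_command(utterance: str) -> str:
    """Dispatch a transcribed spoken request to the first matching tool."""
    for keyword, fn in TOOLS.items():
        if keyword in utterance.lower():
            return fn(utterance)
    return "Sorry, I can't do that yet."

print(handle_voice_command("Reserve a table at the restaurant"))
```

The point of the sketch is the interaction model: the user never touches an interface; the agent maps intent to an action and reports back conversationally.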
Glasses, headphones, pendants, pins? What seems clear is that no single format currently looks superior to the others, and each has its pros and cons. Glasses seem especially striking a priori because they open the door to projecting information on small displays, but headphones, pendants, and even Humane AI Pin-style devices also look very promising for that theoretical future in which voice interaction will handle many more tasks than it does now.
