He doesn’t always get it right, but Ming-Chi Kuo has just made a particularly striking claim. According to his data, OpenAI is preparing its first “mobile AI agent”: a smartphone that will differ from current ones not so much in form as in substance. If his predictions come true, we could be looking at a device that shakes the pillars of the current mobile segment.
Hello, “OpenAI phone”. Kuo states that mass production of this OpenAI-designed smartphone will begin in the first half of 2027. He also says that the SoC powering the device will be a customized version of MediaTek’s upcoming Dimensity 9600, manufactured on TSMC’s N2P process, which should theoretically arrive in the second half of the year.
The mobile that wants to see the world. The chip will have some special features, such as an ISP (Image Signal Processor) with an HDR system designed to improve how the device perceives the visual world. It makes sense: this phone aims to become an integral part of our interaction with the world, and that visual capacity is critical.
Two NPUs are better than one. It will also have a dual-NPU architecture to increase its AI computing capacity. It will theoretically integrate LPDDR6 memory and UFS 5.0 storage to avoid memory bottlenecks. If all goes well, Kuo says, 30 million units will ship between 2027 and 2028. No small number, and the plan seems incredibly ambitious.
Paradigm shift. This type of device, Kuo points out, will doom the UI as we know it. Navigating a patchwork of icons to perform independent tasks would become obsolete. OpenAI’s concept assumes the user does not want to use an “application stack,” but rather to achieve objectives through a centralized agent. This implies a radical redesign of the smartphone in which the screen stops being a menu of options and becomes a kind of mirror of what the user wants: their “intentions.” We would go from manual interaction to proactive inference, with the AI responsible for detecting what needs to be done to complete the action the user needs, without touching the screen. Task resolution takes priority over navigation.
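The shift from “open apps” to “state a goal” can be sketched as an agent that decomposes an intention into a chain of service calls. This is purely illustrative: the services, actions, and the lookup-table “planner” are invented (a real agent would use a language model to plan), but it shows how the user touches none of the underlying apps.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    service: str   # a backend capability, not an app the user opens
    action: str
    params: dict = field(default_factory=dict)

def plan(intent: str) -> list[Step]:
    """Toy planner: map a stated goal to a chain of service calls.
    A real agent would plan with an LLM; this table is illustrative."""
    plans = {
        "dinner with Ana on Friday": [
            Step("calendar", "find_free_slot", {"day": "Friday", "with": "Ana"}),
            Step("restaurants", "book_table", {"people": 2}),
            Step("messages", "send_confirmation", {"to": "Ana"}),
        ],
    }
    return plans.get(intent, [])

steps = plan("dinner with Ana on Friday")
# The user stated one intention; three "apps" ran invisibly behind it.
print([s.service for s in steps])  # → ['calendar', 'restaurants', 'messages']
```

The screen, in this model, would only surface the result (a booked table, a sent message), not the navigation in between.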
OpenAI being Apple. To achieve this, OpenAI needs to control everything on this device, much like Apple does with the iPhone. For an AI agent to function seamlessly, it needs real-time access to sensors and device state, something current operating systems restrict by design. OpenAI wants to control both the hardware and the software so it can capture all the relevant information at all times. The technical barrier is not the AI model, but that total control, which also requires flawless management of memory and energy consumption. Apple, by the way, is fighting the same battle, although in a different way.
The energy challenge. It seems logical to think that this device will base a good part of its capabilities on AI models in the cloud, but also that it will be able to execute some tasks with small local models; hence the two NPUs, which allow at least certain tasks to run on the phone itself. That will be crucial precisely because of energy consumption: an AI that automates tasks by chaining them together consumes far more compute than today’s typical interaction with an app.
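A hybrid local/cloud design usually comes down to a routing decision per task. The sketch below is a toy version of that trade-off, with made-up thresholds: short or privacy-sensitive work stays on the device’s NPU (cheaper in energy and latency), while heavy generation gets offloaded to a larger cloud model.

```python
def route(task: str, tokens: int, sensitive: bool, local_budget: int = 2048) -> str:
    """Toy router for a hybrid on-device/cloud agent.
    Thresholds are invented; real systems also weigh battery and network state."""
    if sensitive:
        return "local-npu"   # screen content, messages: keep on device
    if tokens <= local_budget:
        return "local-npu"   # small enough for the local model; saves energy
    return "cloud"           # big model needed, at the cost of network + power

print(route("summarize notification", 200, sensitive=True))   # → local-npu
print(route("draft a long report", 8000, sensitive=False))    # → cloud
```

Chained agent tasks multiply this decision: a single “intention” may trigger dozens of such calls, which is why the energy budget matters so much more than in today’s tap-an-app model.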
App Store in danger of extinction. There is a particularly striking idea here: app store economics face existential disruption. The current model relies on friction: you need to open a specific app for each task, which justifies the 30% “tax” and the walled garden. If an AI agent can book a flight or order food by directly accessing background APIs, the icon on the home screen disappears. The “app” stops being a destination and becomes an invisible tool. This not only threatens Apple’s revenue; it redefines mobile development toward an “API-first” ecosystem in which the graphical interface is irrelevant and competition is decided by agent efficiency, not UI design.
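One way to picture “competition decided by agent efficiency” is that the agent, not the user, chooses the provider, and it chooses on API quality rather than app design. The provider names and scoring formula below are invented to illustrate the point.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    api_latency_ms: float
    success_rate: float   # fraction of agent calls that complete correctly

def pick(providers: list[Provider]) -> Provider:
    """In an agent-mediated market the 'storefront' is the API: the agent
    favors whoever completes the task fastest and most reliably. The user
    never sees a logo or a home-screen icon."""
    return max(providers, key=lambda p: p.success_rate / p.api_latency_ms)

winner = pick([
    Provider("FlyFast", api_latency_ms=120, success_rate=0.99),
    Provider("SlickUI Air", api_latency_ms=900, success_rate=0.97),  # beautiful app, slow API
])
print(winner.name)  # → FlyFast
```

Under this logic, a company with a gorgeous app but a slow, unreliable API simply stops being selected, which is exactly the inversion of today’s incentives.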
Goodbye, privacy? In this context, privacy could once again become the price of the sheer convenience of a device like this. For an AI agent to be useful and function truly autonomously, it needs to know everything, or almost everything, about us: our location, health, messages and, of course, the screen content at all times, among other things. The opacity of proprietary models means we will never know what data leaks to the cloud to “improve the service,” turning privacy into a variable controlled (once again) by the manufacturer.
In Xataka | Microsoft has insisted on making Windows “agentic.” Its users have reminded it that they never asked for it