Gemini is fine, but the local AI Google has just launched for mobile phones is amazing

At the end of last week, Google launched Gemma 4. Gemma is a family of small-footprint generative AI models, with effective parameters between 2B and 4B, created primarily for deployment on mobile devices. Despite their size, they are dense models, and over the weekend they have been the main topic of conversation.

How to install Gemma 4. You can install Gemma 4 so that it runs offline on your phone, no internet connection required. Installation needs an additional app signed by Google: Google Edge Gallery.

This open-source app lets you interact with AI models downloaded to your phone, without any internet connection. And since the launch of Gemma 4, the model can run on mobile phones. Gemma 4 models are available in four parameter sizes: E2B, E4B, 31B, and 26B A4B. The more parameters, the greater the capability, but also the more energy and memory consumed.

What Gemma 4 does. Gemma 4 is, to date, one of the best local smartphone models. According to Google, it surpasses the latest versions of DeepSeek, Qwen, and Kimi. We can use it as a chatbot (bearing in mind its limitations, since it is not connected to the internet), ask it questions about any image in our gallery, and have it transcribe and translate audio. Because yes: Google's local models now support audio and even real-time vision (if we grant camera permissions).

In addition to these uses, it has its own skills: specialized functions for creating interactive maps, performing local searches within tools such as Wikipedia, doing calculations, and so on. For the average user, these models are a gigantic pocket encyclopedia that requires no connection of any kind.

What advantages does it have? The first advantage of using local models like Gemma 4 is processing speed: there is no lag, the response is immediate, which is striking if you are used to connected tools like ChatGPT, Gemini, or Claude. The second is security: the model has no internet connection, so your data never leaves your device. You can use them in airplane mode or anywhere without coverage.

These models are not currently a replacement for large connected AIs; rather, they are a perfect complement for situations where we have no connection and still want a model for very specific tasks.

Why it matters. Google's redoubled effort in local AI responds to several current and future demands.

  • Running AI on servers costs a fortune and is fueling crises like the current RAM shortage. Winning in local alternatives is increasingly important.
  • It does not want to be left behind in the war over open models: Llama, Mistral, DeepSeek.
  • Companies, governments, and a small portion of users do not want to (or cannot) send their data to external servers. Local models solve that problem.
  • Google is doing its homework with Gemini, but without a connection the phone is left without AI.

Google’s commitment to Gemma, and its rollout through a dedicated app, hints at possible offline Gemini features in the future.

In Xataka | Having an AI on my phone that works without an internet connection is more useful than I thought: here’s how to get started
