Internet access democratized knowledge, whether you were rich or poor. AI is dismantling that achievement step by step

$250 per month to use the most advanced AI. That is the figure Google has set. Six months ago, OpenAI set its own at $200. A few days ago, Anthropic expanded Claude's usage limits with another $200 tier. In any case, this is not just about paying for technology. It's about buying power, and therefore about marking distances.

Until recently, talking about AI meant talking about universal access. The free versions of ChatGPT or Gemini were far behind their older siblings, yes, but they let anyone try, learn, and benefit from their capabilities, even if only a little. Today that has changed. The most powerful version is no longer available to everyone. It has been locked behind a three-digit monthly subscription that doesn't even start with a '1'. It is the beginning of a new gap: not between those who use AI and those who don't, but between those who can automate their tasks, think with help, and execute complex workflows... and those who can't.

What Google or OpenAI offers is not just a better chatbot. It is an operating system for intellectual work. An assistant that not only responds, but understands context, remembers, acts, generates, automates. Tools like Deep Research (OpenAI) or Project Mariner (Google) represent the decisive step toward autonomous agents. They execute tasks that previously took days of human work. And in many cases they do them better.

With that, productivity is redefined. But so is inequality. The question is not only what this technology can do, but who can afford it. Because we are not talking about a luxury. We are talking about a tool that multiplies the performance of those who use it. An advantage that is invisible yet affects everything: from the quality of work to the speed with which goals are reached. Those who have access to these models live on a different learning curve, in a different economy of results. Those who cannot pay for them are trapped in a slower, less capable, more limited version of themselves.
This has a clear echo in history: the machinery of the Industrial Revolution also multiplied productivity... but at first only for those who could afford it. The same is happening now. Advanced AI is beginning to consolidate as elite infrastructure. As if, at the dawn of the Internet era, it had only been available to those who paid $3,000 a year.

And that changes the map of knowledge. Because these tools don't just inform: they shape how you learn, how you decide, how you compete. Their effect is not immediate or visible, but cumulative. Day after day, those with access will work with less friction, make better decisions, delegate more tasks, generate more and better content. The rest can only watch, perhaps with access to what's good, but not to what's best. And they will fall behind. The future already has a price tag.

In Xataka | Google has become the leading company overnight. And the main winner is Android

Featured image | Xataka with Mockuuuups Studio, OpenAI

Gemini's conquest began on Android phones. Now it will flood your TV, your watch, and even your car

Google wants to win the AI battle, and it has a fundamental advantage over almost all of its competitors: a perfect ecosystem in which to do it. Gemini's deployment started on Android-based smartphones, but now it will go further. The firm has announced that Gemini's features will soon reach various types of products based on one of the many Android variants. It is yet more proof of how Google's traditional Assistant is giving way to its AI chatbot.

Smartwatches based on Wear OS are a good example. It will be possible to talk to the assistant directly on those devices to set alarms and reminders, for example. And thanks to Gemini's integration with Google's applications, any user will be able to ask about information from those services ("What was the name of the restaurant Miguel mentioned to me recently in Gmail?") and get the answer synthesized on the watch screen. The option will be available, for example, on the Pixel Watch and the Samsung Galaxy Watch.

Gemini will also land in Android Auto. The commands that were already possible with Google Assistant will go further thanks to Gemini's natural-language understanding. That, Google says, will keep us from having to concentrate on how we phrase things and let us stay focused on the road. This natural conversational interaction with Gemini in the car will also unlock striking options. For example, asking it to find EV charging stations along our route that are also near a park, in case we want to take a walk. That connection with Google services will also make it possible that, if we receive a message or an email on our phone, Gemini can translate it and offer an easy way to reply by voice. Or produce a summary of the news (though perhaps leaving out the sports), or of a document we need to review before the meeting we are driving to.

If we have a television with Google TV, Gemini will also reach that product segment.
With the chatbot we can, for example, ask the AI for action films suitable for children and get recommendations directly through a voice conversation. The TV can also become a large screen for finding answers to our questions. As with the computer or the phone, we will be able to talk to Gemini about any topic, and here there will be a special focus on surfacing results as YouTube videos.

Finally, Google talked about how Gemini will also arrive on Android XR, the operating system it is developing in collaboration with Samsung for connected glasses. Here the interaction is especially interesting, because it will offer a striking way to obtain information, answers, and content directly by voice. In the example Google showed, Gemini can plan a vacation, surrounding us with videos, maps, and tips on local culture, in addition to offering us an itinerary for the trip.

We are thus looking at a gradual but total deployment of Gemini, which incidentally will also reach headphones from manufacturers such as Sony and Samsung, the latter having already announced it for its Galaxy Buds. There are no concrete dates for that rollout, but Google's ambition here is clear: that we can talk naturally to all our devices whenever we need to.

In Xataka | Mark Zuckerberg believes that by 2030 we won't take our smartphones out of our pockets as much: we will do almost everything from our glasses
