Gemini and Siri were monopolizing modern cars. So Musk has brought Grok to European Teslas

Tesla is starting to roll out Grok in Europe for free. Elon Musk's company has bet on its own software for its electric cars from the beginning, leaving hardly any room for third parties: no trace of Android Auto, CarPlay or the best-known assistants. Grok arrives as an intelligent "co-pilot" aboard the Tesla. The problem is that it is still very much Musk's creature.

The arrival. Grok arrives as a free update on European Teslas. We can choose its voice and personality, as in the smartphone app. To start it, all you have to do is open it from the application launcher or press the voice button on the steering wheel. If we have logged in to Grok, from that moment on it becomes the car's default voice assistant.

What it can do. Grok's list of possibilities is extensive, from guiding us to a destination to locating a nearby Supercharger, or simply holding an informal conversation with us and recommending options from our Tesla's digital manual. It also has some rather curious functions:

- It can be our language teacher.
- It has special modes for kids, like "Story Time" and trivia games.
- It has a controversial adult (18+) mode with "sexy" and "extravagant" personalities.

Which Teslas it will be available in. The requirements for a Tesla to be compatible with Grok are that the car has an AMD processor, that the software is updated to version 2025.26 or later, and that we have a WiFi connection or the Premium Connectivity pack. To find out if your Tesla has an AMD processor, go to 'Controls' > 'Software' > 'Additional vehicle information'.

Careful. Grok, despite its potential as an AI model, is involved in recent controversies. The app has become a focus of misuse, an endless well of content depicting naked women. Countries like France and India have already reported it, and the Spanish government has asked the prosecutor's office to investigate X for the possible dissemination of child pornography through the app.
In this context, perhaps it is worth debating whether bringing Grok with an "adult mode" to Tesla vehicles is the most appropriate move.

In Xataka | Elon Musk thought that Tesla would live outside politics. Germany has shown him the hard way that he was wrong

Apple announced with great fanfare that the new Siri would be different from the rest of the AIs. It turned out that without Google there was no Siri

I won't hide it: I'm one of those who believed Apple when it announced with great fanfare that Apple Intelligence would be different from the rest. It had reasons to believe so: Apple's financial muscle, its obsession with polishing software, and its philosophy of arriving late to the game only to score at the last minute. But here I was wrong. The only way Apple has found to play this game has been using someone else's deck.

From waiting almost two years to having it now. Apple announced Apple Intelligence in its 2024 keynote, one in which it did not give too many details but showed us a different approach to AI than Google's or OpenAI's: an AI with real interaction with the operating system and integration with both native and third-party apps. A real "co-pilot" completely integrated into iOS, not a souped-up app isolated from the rest. From that keynote until now, the only thing we have is Siri being able to open ChatGPT when a question gets a little complicated. And, just a few weeks after the announcement of the agreement between Apple and Google, Gurman affirms that we will see the new Siri in a matter of weeks. If the prediction comes true, it was never a matter of time. It was a matter of not having the resources.

What's coming in February. Gurman tells Power On that the Siri 2.0 we have been waiting for since 2024 could become a reality in the second half of February. In fact, he points out that one of the reasons Apple made the collaboration with Google official was that it was close to having sufficient demonstrations of its functionality. Although there are no details about how the rollout will go, Apple's modus operandi is easy to predict: we will have to update our iPhone to the version of iOS 26 that includes these new features, since Apple introduces improvements to its native apps through system updates.

Not so fast.
Although there are no details on how long Apple and Google have actually been working together, what we do know is that the new Siri is not ready yet. Gurman points out that it will arrive in beta starting in February, and that the objective is not to delay the final version beyond April. Again, evidence that Apple did not have the Siri it boasted so much about ready, and is now accelerating and shifting up two gears with Google's support.

It can turn out well. My colleague Javier Pastor explained, very accurately, how the parasite's strategy can work for Apple. The company is not going to enter the investment battle for new models: it is going to spend millions of dollars to take advantage of an existing infrastructure and lean on an already proven pillar. The new Siri will be a premium wrapper for Gemini and, out in the real world, few beyond those of you reading these lines will even be aware that Google's AI is what is powering your iPhone's AI.

Image | Xataka

In Xataka | The Apple Intelligence and Siri disaster has caused something unusual: Apple gives the keys to its kingdom to Google

Apple resisted turning Siri into a chatbot for years. Until it surrendered to the evidence

2026 will be the year of Siri, but not because of an internal turn at Apple or because of the maturity of Apple Intelligence. It will be because the pact with Google will allow Apple to use Gemini's technology as its assistant's base. The details about what Apple will do with its assistant have not taken long to come to light and there is good news: they will arrive in the next version of iOS.

The new Siri. Apple has been announcing the benefits of the new Siri since before having it ready. With the Apple Intelligence announcement, it put on the table a Siri completely integrated into the system, capable of functioning as a complete assistant and working mostly locally. The reality? Everything remains practically the same as before and, when Siri doesn't know how to respond to something, it ends up opening ChatGPT.

What is going to change. Bloomberg explains that Apple, as of iOS 27, will surrender to the chatbot model that has worked so well for companies like OpenAI and Google. The mere assistant model has expired, and Siri will become a chatbot at the service of any of our requests. This new chatbot will be integrated into all Apple apps (an API open to developers is expected so they can integrate it into theirs), allowing us, for example, to find specific photos in the Photos app, use it as a programming assistant in Xcode, and so on.

What won't change. The only certainty with the new chatbot model is that Apple will keep its obsession with privacy and with maintaining its AI ecosystem as its own, even if it is based on Gemini. Apple's intention is to integrate this experience into iPhone, iPad, Mac and Apple Watch, maintaining activation through the "Siri" voice command or by holding down the iPhone's power button.

The difference. Today, Siri is an assistant, a command system.
1. You tell it something.
2. Siri classifies the intent (set an alarm, call a certain person, send a message).
3. It executes the order.

Moving to the chatbot model means having a generative model capable of interpreting natural language, maintaining conversations and offering a more "human" interaction with the phone. This is what its rivals have been doing for a few years now.

Adapting to the inevitable. That Siri will evolve in 2026 is proof that the classic assistant model is exhausted. Apple will have to adopt the chatbot model as an inevitable transition, one previously led by OpenAI and in which Gemini now seems to be setting the pace.

It doesn't end here. The destination of the new Siri is not only current Apple devices. As my colleague Javier Pastor says, the company plans to launch a screenless device, its first AI-focused wearable. According to the leaked information, it will have a format similar to that of the AirTag, a microphone system and a launch scheduled for 2027.

New assistant, new devices, and an alliance with Google. Apple's new stage of artificial intelligence is finally arriving. The question is whether it will manage to offer something new.

Image | Xataka

In Xataka | Hey Siri: 134 voice commands to get the most out of Apple's assistant
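The contrast between the two models can be sketched with a toy example. The following Python snippet mimics the classic "classify the intent, then execute" pipeline described above; the intent names and regular expressions are invented for illustration and have nothing to do with Siri's real implementation, but they show why such a system stumbles as soon as a request falls outside its predefined patterns.

```python
# Toy illustration of the classic assistant pipeline: the utterance is
# matched against a fixed set of intents, and anything else fails.
# Intent names and patterns are invented for this example; this is NOT
# how Siri is actually implemented.
import re

INTENT_PATTERNS = {
    "set_alarm": re.compile(r"\b(alarm|wake me)\b", re.IGNORECASE),
    "call_contact": re.compile(r"\bcall\b", re.IGNORECASE),
    "send_message": re.compile(r"\b(text|message)\b", re.IGNORECASE),
}

def classify_intent(utterance: str) -> str:
    """Step 2 of the pipeline: map free text to one known intent."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(utterance):
            return intent
    # The classic model has no generative fallback: anything outside
    # the predefined patterns is simply not understood.
    return "unknown"

print(classify_intent("Wake me up at 7"))            # set_alarm
print(classify_intent("What should I cook today?"))  # unknown
```

A generative chatbot would answer the second request conversationally instead of giving up, which is exactly the gap that, according to Bloomberg, the new Siri aims to close.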

Siri is just the Trojan horse for Google to infiltrate the entire Apple ecosystem

Apple doesn't have its own AI, so it has chosen a girlfriend. That girlfriend is none other than Google, which has just signed an agreement with Cupertino to make Gemini the center of future developments. Not just one development. Many.

We thought this was just about Siri. The initial official announcement was brief. Apple would use Gemini, but it seemed it was going to do so basically to finally launch its long-awaited personalized Siri governed by a generative AI model. It turns out that the agreement is broader. Gemini is Google's Trojan horse to conquer Apple.

Google's statement revealed that this alliance goes beyond Siri. In a post on X, Google stated the following: "Apple and Google have signed a multi-year collaboration agreement that will see the next generation of Apple's foundation models based on Gemini models and Google's cloud technology. These models will help power future Apple Intelligence features, including a more personalized Siri coming this year." This makes it clear that although the initial protagonist will be Siri, the scope of the agreement could be much broader and affect the entire Apple hardware and software ecosystem. Considering that the AI options from Apple Intelligence and Siri will likely reach many of its products and services, Gemini, which will power all of those models, would end up being an integral part of said ecosystem.

Collaboration above all. As The Information indicates, the agreement allows Apple to ask Google to modify some aspects of how Gemini works but, above all, it will allow Apple to adjust Gemini itself so that it responds to requests in the way Apple prefers.

No Gemini branding. Sources close to the negotiations add an interesting fact: it will not be noticeable that the AI we use in Siri and other Apple products is actually based on Gemini.
Google's branding will be blurred, and users will not know what is underneath or what engine powers those interactions with AI.

A more emotional Siri. The source cited in The Information also reveals that Siri will be more approachable and emotional. "Historically Siri has always had difficulty with emotional support," the source explains, but in the Gemini-based version, "Siri will give more complete conversational responses, just like ChatGPT and Gemini." That, of course, cuts both ways: the AI becomes more "human", but for vulnerable people that can end up being dangerous.

Apple Intelligence will still be there. Although Gemini thus infiltrates the Apple ecosystem, both companies clarified that Apple Intelligence will continue to be available on Apple devices and on the servers of its Private Cloud Compute platform. What is not so clear is whether the Apple Intelligence models will not also end up being based on Gemini. Especially since Apple's "foundational models will be based on Google's Gemini models." A priori, that should mean internal changes at Apple Intelligence as it adopts Google technology.

The new Siri at WWDC? The new version of Siri is expected to be presented in March or April, after the controversial delay announced almost a year ago. The new voice assistant should theoretically debut in iOS 26.4, the update expected in those months, but Apple could take the opportunity to announce it at WWDC 2026, two years after that initial announcement that ended up becoming a fiasco: Apple promised things it has not delivered until now, but Gemini may finally become the solution to that problem.

In Xataka | Apple has decided not to enter the AI war because it believes it has something more important: the entry "door"

The new Siri will be based on Gemini AI models

In the midst of the rise of artificial intelligence, with increasingly sophisticated voice assistants like those of ChatGPT or Perplexity, Siri is beginning to show its age all too clearly. It doesn't always understand what we ask of it and often stumbles as soon as we stray from a few predefined patterns. Between promises that have fallen by the wayside, internal tensions and leadership changes, Apple seemed to be losing its footing in one of the most decisive technological races of the decade. And, although it is still too early to know whether it will be able to reverse this dynamic, the company has just made a move with a major decision: allying with one of its great rivals.

Agreement with Google. The Cupertino company has signed a multi-year collaboration agreement with the search giant by which the next generation of the so-called Apple Foundation Models will be based on Gemini models and on the search giant's cloud technology. These will power the next Apple Intelligence functions, including a more personalized Siri whose arrival is expected "this year."

With privacy at the center. The statement adds that, despite this change, the system will continue to run on the devices and on its Private Cloud Compute platform, following its privacy standards. Apple insists that the operational heart of Apple Intelligence does not leave home.

The starting point of everything is WWDC 2024. There, Apple presented Apple Intelligence as its great response to the rise of generative AI and placed Siri at the center of that strategy, promising a much deeper understanding of personal context, the ability to "see" what appears on the screen and to chain actions between applications. In practice, this meant the assistant had to be able to interpret emails, messages, appointments or files and act on them without the user having to jump from one app to another. It was a leap in ambition much greater than that of traditional Siri.

From promises to reality.
At the end of 2024, Apple publicly kept up the pace. In a December press release, it reiterated that Siri's most advanced capabilities would arrive "in the coming months," while launching other Apple Intelligence pieces such as Image Playground or Genmoji. In that same context, Apple once again spoke of awareness of personal context, vision of what is on the screen, and "hundreds of new actions" within and between its own and third-party apps.

Three months later, in March 2025, the tone changed. In an official statement to Daring Fireball, the company admitted that some of those features would require more time than expected and went on to talk about a "more personalized" Siri that would be released "over the next year." June 2025 arrived and, at that year's WWDC, Siri did not show a leap equivalent to the one hinted at twelve months earlier.

This lack of news ended up pushing Apple to give public explanations. Craig Federighi, chief of software, and Greg Joswiak, head of marketing, addressed the issue in interviews after the event. Federighi explained that Apple had had a "version 1" of the new Siri ready to arrive between December 2024 and spring 2025, but that they decided to stop it after concluding that it would not meet customer expectations or the company's internal standards in that period.

In the end, everything comes back to the same point. The company now places a more personalized version on its immediate roadmap, after months of back-and-forth with the calendar. The announced alliance changes the technical basis for getting there, but it does not eliminate the acid test. It will be actual use, when users start asking complex things of their iPhone or Mac, that determines whether Apple has managed to catch up in a race that never lets up.

Images | Apple | Google

In Xataka | Google has found a way to monetize its AI: adding advertising while you shop without leaving it

The man who failed to transform Siri and was the brain behind Apple's AI strategy ends his tenure

Apple has announced that John Giannandrea, one of the most influential executives in its AI strategy in recent years, will begin a retirement process that will culminate in 2026. The company explains that the executive will leave his position as senior vice president of Machine Learning and AI Strategy, although he will continue to collaborate as an advisor in the coming months. The announcement comes months after a realignment of responsibilities related to Apple Intelligence and Siri.

Giannandrea landed at Apple in 2018 as one of its most notable signings, tasked with strengthening the AI strategy and giving Siri a new direction. His team was in charge of areas such as Apple Foundation Models, the internal search engine and machine learning research, technical pieces on which Apple has built much of its recent strategy. He also took on responsibility for guiding the evolution of Siri and coordinating AI projects that spanned multiple teams in the company.

A project that began with ambition and ended in postponements. Apple Intelligence was born as a profound renewal of the user experience, but progress did not come at the expected pace. The Information detailed that the demo shown at WWDC 2024 did not fully reflect the advanced capabilities Apple had suggested, and that many of those features were not implemented at the time of the presentation. The pressure increased when the company confirmed that the new Siri with personalized functions would be delayed until 2026. What was supposed to be the new turning point ended up becoming a chain of postponements.

Internal war in Cupertino over the direction of AI. Tensions between the AI/ML group and the software team were long-standing, according to The Information. While the area led by Giannandrea opted for a more cautious, privacy-focused advance, Craig Federighi defended a more pragmatic approach aimed at tangible results.
The clash of priorities became evident when some engineers began referring to the AI/ML team as "AIMLess," a sign of the accumulated unrest. The situation led to a March 2025 shake-up that placed Federighi and Mike Rockwell at the forefront of Siri's new direction.

A loss of influence that had been brewing. According to Bloomberg, Tim Cook's trust in Giannandrea suffered after the numerous delays in the development of the Apple Intelligence functions promised at WWDC 2024. In a meeting with his team, the executive admitted that the delays were "ugly" and acknowledged the embarrassment and anger the situation had generated among the staff. After the change in leadership in 2025, a good part of his functions began to be handed over to other managers, while he kept other tasks in AI and robotics research. This shift in operational focus serves as the backdrop to the announcement that he will become an advisor before retiring in 2026.

The landing of Amar Subramanya and the new architecture of power. Apple has hired Amar Subramanya as vice president of AI after his time as corporate vice president of AI at Microsoft and 16 years at Google, where he was head of engineering for the Gemini assistant. According to the official note, Subramanya will take charge of key areas such as Apple Foundation Models, machine learning research and the AI Safety and Evaluation teams. He will report directly to Craig Federighi, thus reinforcing Federighi's weight in the artificial intelligence strategy. The rest of the organization linked to this area will be under the supervision of Sabih Khan and Eddy Cue, a distribution that seeks to align responsibilities with their respective departments.

Giannandrea's retirement and the arrival of new managers mark a turning point for Apple in its artificial intelligence strategy.
The company now relies on a more defined structure, with Craig Federighi at the center of the project and Amar Subramanya leading key research areas and foundation models. The challenge will be to convert this reorganization into visible improvements for users and to regain competitiveness in a market that evolves at high speed.

Images | Apple

In Xataka | Huawei has a patent with which to manufacture 2nm chips. The only problem is that it's just a patent.

Apple has a plan to fix Siri. One that aims to make Google even richer, according to Bloomberg

Apple Intelligence was introduced in 2024 with great promises. The main one: a completely renewed Siri, much more capable and versatile. As it turned out, what they showed was a fictitious demo and the new Siri was delayed until 2026. Apple has lost the AI race, at least in the first round, but it already has a plan to recover. One that involves handing Google $1 billion a year, all while it continues developing its own model.

The agreement. Mark Gurman reports at Bloomberg that Apple is about to close a deal with Google worth $1 billion a year. This will allow it to use Gemini's AI model to power its Siri assistant, especially in the planning and summary functions, which are what allow the assistant to execute more complex tasks. Apple had been evaluating other competitors such as OpenAI and Anthropic, but has finally settled on Google's Gemini. The new Siri is expected to arrive in spring of next year, although nothing is confirmed.

Conditions. The agreement does not involve integrating Gemini as an assistant on iPhones; rather, it will be integrated into Siri and will run from Apple's private servers. This will keep user data separate from Google's infrastructure. Furthermore, Gurman says they are not going to publicize the agreement as they did when Google became Safari's default search engine; in this case it will be a "behind the scenes" agreement.

Temporary solution. Apple does not plan to use Google's model forever, as it is developing its own language model in parallel. We don't know much about what it will be like, just that it will have 1 trillion parameters and they hope to have it ready next year. Apple sources believe it will have a level of quality similar to Gemini's, but for now there is nothing to prove it. Taking into account Apple's AI stumbles, we would not be surprised if the promise of its own model ends up being diluted.
Additionally, the company has lost at least three key AI executives to Zuckerberg, who signed them for his superintelligence team.

China. The agreement has a problem: Google services are banned in China, so the new Siri would arrive there with modifications to comply with this restriction. It is said that the Chinese version could have its own models and a local filter developed by Alibaba. China is a key market for Apple, and the latest results do not leave it in a good position. A new Siri arriving "clipped" in China could have further negative consequences.

Images | Wikipedia

In Xataka | Apple has lost the throne it held for a decade. And the Chinese brands no longer even let it be second

Apple believes that its rivals are not doing AI well either. It is the perfect excuse to delay Siri one more year

We already have news about the arrival of the long-awaited new Siri. It is not good. Apple is targeting spring 2026 as the launch window for its new assistant, with the aim of shipping it as part of the iOS 26.4 update, according to Bloomberg sources. The delay is completely in line with the recent Wall Street Journal interview with Craig Federighi and Greg Joswiak: Apple knows it is losing the AI race, and its only hope is that its proposal ends up working in the long term.

They promised us the moon. It has been just one year since that Apple presentation centered on Apple Intelligence. One in which they told us about an integration of Siri into the system like nothing we had seen: an assistant with the ability to understand every corner of iOS, with advanced contextual understanding and integration with every native app in the system. In principle, the final deployment was planned for autumn 2024, but the delays accumulated and accumulated until they reached a practically unsustainable point.

iOS 26 as the first real step. Apple took advantage of WWDC 25 to present iOS 26, one of the biggest changes in visual identity ever seen in Apple's operating systems, now much more unified. Alongside it, artificial intelligence functions applied to everyday use finally arrived, although for practical purposes they do not go far beyond text translation, call filters and some improvements to Genmoji generation in Image Playground. After almost a year of delays, Apple finally made its AI philosophy clear: work to make it local, but with behavior similar to that of its rivals.

More delays. After the presentation of iOS 26, with AI novelties but not a single word about the new Siri, the question was inevitable: what is happening with it? The responses from a Craig Federighi who seemed anything but comfortable perfectly revealed the moment Apple is in.
(Joanna Stern) "Siri is not better than its competition." (Craig Federighi) "Right, but it will be; it is our mission." (Greg Joswiak) "It would be disappointing to launch something that does not meet our quality standards." (Craig Federighi) "This is a new technology; nobody is doing it very well right now."

Joswiak justified Apple's road map around the key point: the company wants its AI to be discreet, so that the user can perform tasks with the phone without even realizing they are using Apple Intelligence. In fact, they do not want a dedicated app, as there is for Gemini or ChatGPT. The problem? Google achieved this goal a long time ago.

Apple's rivals are doing well. Although Apple points to the immaturity of AI on mobile, the truth is that mobile AI is living one of its best moments, and phones like the Google Pixel or the Samsung Galaxy S25 Ultra are the best proof of this. Here you have to distinguish between Gemini as an app integrated into the system and Gemini Nano as a language model. A phone like the Galaxy S25 Ultra can:

- Translate calls in real time without you noticing that AI is doing it.
- Remove the background noise from a video automatically, without Gemini telling you what it is doing.
- Transcribe a voice recording with a single tap, without opening additional applications.
- Automatically detect whether an incoming call is spam, without any notices about AI.

Yes, Google (although its approach is not as local) is able to integrate Gemini silently into the system. So much so that its functions feel native to the system, and the user does not have to know what is done with AI and what is not. Apple needs to beat Gemini at AI. It will not be easy. Apple, for now, has not shown it can rise to the occasion. During Apple's presentation, its stock fell 1.5%.
Expectations were high, but the presentation was a clear message that Apple is not above any of its direct rivals, delegating the most advanced AI functions to third parties (OpenAI). With eyes set on 2026, the pressure Apple is under is even greater. The maturity of its main rival, Google Gemini, is very high, well above rival proposals in benchmarks against Grok 3, o3-mini, DeepSeek R1, Claude 3 and Llama 4. To recover confidence, Apple needs results, not promises.

Image | Xataka

In Xataka | We have discovered something worrying in AI models: if the problem is too difficult, they give up immediately

How to activate ChatGPT in Siri and ask the AI things through Apple's assistant

Let's explain how to activate ChatGPT in Siri, so that you can use OpenAI's AI through the iOS assistant. Siri has gained some new powers with Apple Intelligence, although several are still to come. One of its most interesting options is being able to use ChatGPT: you can ask ChatGPT things directly through Siri without needing the official app, and you can even link your OpenAI account if you have a paid version of its AI. Doing it is quite easy, although the first time you will have to activate it.

These Apple Intelligence functions are available on iPhone 15 Pro and iPhone 16 models with iOS 18.4 or later, on iPads with an M1 chip or later (or an A17) with iPadOS 18.4 or later, and on Macs with an M1 chip or later running macOS 15.4 or later.

Activate ChatGPT in Siri. The first time you want to use ChatGPT, ask Siri to tell ChatGPT something, a command that asks it to invoke OpenAI's AI. For example, ask it to draw something. When you do, Siri will tell you that you have to activate ChatGPT, and you must tap the Start using button. This triggers a necessary data download that can take a few minutes, followed by a configuration process that explains the features, where you tap Next on a couple of screens.

Next, go to the Apple Intelligence & Siri section of the iOS Settings and tap the ChatGPT option within the Extensions section. Here you will find a place to link your ChatGPT account. I recommend disabling Confirm ChatGPT Requests: if this option is enabled, every time you ask Siri for something meant for OpenAI's AI, it will ask you to confirm the action by tapping a button. Everything will be faster with it disabled. Now you can use ChatGPT whenever you want in Siri.
You just have to give Siri a voice command mentioning that you want ChatGPT to do it. In addition, if you ask for something Siri cannot resolve on its own, such as finding certain information or performing some actions, it will also hand the request over to ChatGPT without saying anything.

In Xataka Basics | 18 style ideas to edit your photos with ChatGPT

Siri is not the only broken toy in the world of voice assistants. Its rivals are still very green too

Apple is still choking on the implementation of artificial intelligence. The company confirmed at the end of the week that Siri's advanced functions would take "longer than expected" to arrive, without giving a specific date but advancing that we will not have news until 2026. The company is late, but it is not the only one in trouble with its intelligent assistant. The great alternative to Siri is Gemini, a solution that most Android manufacturers are beginning to implement in collaboration with Google, and one that is still very, very green.

Don't expect the new Siri soon. One of Apple Intelligence's selling points was the new Siri: native integration with ChatGPT, natural language understanding, analysis of the content of our phone to know us in detail... We recently tested the Apple Intelligence beta, and the conclusion was clear: everything was half built or simply wasn't there. Weeks later, Apple confirmed that Siri's smartest version "will take longer than expected." Its artificial intelligence is still in beta, and the arrival of all the new features had been expected this spring. It won't happen. Apple decided not to board the AI train when its main rivals were at a point of relative maturity, and these delays have taken it to the current situation.

Its rival rubs its hands. Meanwhile, the answer on Android is becoming clear. This operating system is owned by Google, and Google has Gemini as its AI-boosted assistant. Thus, most phones (Oppo, Samsung, Xiaomi, etc.) sold in Europe arrive with Gemini. This is the agent that replaces the classic Google Assistant ("OK, Google") that we have been using on our phones for so many years, with the main difference of being Google's response to tools such as ChatGPT.

Not all that glitters. Gemini has improved, a lot, since we tried it in February 2024.
Gemini Live is now completely free and has no problem executing simple actions (alarms, searches, etc.), but it is still very far from being a natural assistant. One of its main problems is precisely that the distinction between Gemini and Gemini Live dilutes the use we want to give it as an assistant. If, for example, I ask Gemini what I can do today, it will give me an especially long answer. If I want it to stop talking (besides, Gemini's tone is quite robotic and unnatural), I cannot do it comfortably, since the only mode that allows interruptions is Gemini Live. In other words, in a standalone app (such as Gemini or ChatGPT) this distinction between conversational modes makes sense. In a fast, native assistant, everything should be available in the most accessible way. And no, if you ask Gemini whether you can talk to it using Gemini Live, it does not activate that mode; it just starts talking non-stop about what the mode is.

Gemini also does not have access to native applications (it only works through extensions and, today, there are very few). It is not even able to make adjustments as simple as lowering or raising the phone's brightness, and the same happens with the volume. Much less can it change basic system settings if we ask.

There are no more rivals in sight (yet). The only Android manufacturer that bet on a conversational assistant was Samsung with Bixby. This assistant is still alive in One UI 7, but it is so secondary that Samsung itself preinstalls Gemini on its phones, and its extensions are key to the operation of Galaxy AI. In China, the big manufacturers are beginning to integrate DeepSeek as the native model but, for the moment, there is no advanced voice mode or native integration. Honor wants to change everything with its AI agent, one capable of performing all kinds of requests, including the most important ones: native settings.

Image | Apple

In Xataka | The new Siri forgets the devices where it is most important: the HomePod and the Apple Watch
