The best applications for running local artificial intelligence on your phone or PC, with no connection required and greater privacy

Here are the best programs for installing a local AI on your phone or your computer. This way, you can run artificial intelligence directly on the device itself, without requiring an Internet connection and while keeping your private data on the phone or PC. These programs let you install open source LLM language models as alternatives to ChatGPT and company, and you can even choose how distilled (compressed) you want them to be. Some of the desktop applications will also let you connect to commercial cloud models such as ChatGPT, Gemini or Claude, although that sacrifices privacy. The goal, in the end, is to have your own private ChatGPT without the restrictions of online models, so you can ask anything you want. Local models will be less powerful and have fewer capabilities, since there is not enough space on a phone or PC for complete models like the commercial ones, but they are good tools for quick questions and tasks.

Apps for local AI on your mobile

Let's start with a small compilation of the best applications for running artificial intelligence models on your phone. Remember that models can take up a lot of space, so check your device's storage first.

PocketPal AI

This is the go-to mobile app if you want to install a local AI on your device. It is open source and free, and is available for both Android and iOS. It stands out above all for its ease of use and for the many models available. PocketPal AI integrates directly with Hugging Face, the world's leading repository for AI models. Thanks to that integration, you can download any model without leaving the application and without complications.

MNN Chat

This is an Android-only application, and it stands out as one of the fastest for that operating system.
But its biggest differentiator is its complete multimodal support: you can write prompts combining text, images or audio. You will find all kinds of models, from image generators to text-based assistants. It has an integrated model catalog, so downloading and installing them is very easy. It is a free, open source application.

Private LLM

If you are looking for a premium app for iOS, this is one of the best alternatives. It does have a price: about $5 as a one-time payment. It includes more than 60 curated models and uses advanced quantization to improve performance. It also integrates with Siri and Apple Shortcuts, and in addition to your iPhone it works on your Mac or iPad. It offers customizable interactions and supports Family Sharing, so you can share the purchase with your family.

Google AI Edge Gallery

Google AI Edge Gallery is a Google application for Android that lets you install different artificial intelligence models on your phone. The app supports many uses, from image queries to audio transcription or AI chats, all locally. It is an open source project; on the downside, it is still a tool in development, which means there may be quite a few bugs yet to be fixed.

Locally AI

Another application for the Apple ecosystem, optimized for Apple Silicon processors. It stands out for its beautiful, careful interface, adapted to Apple's design language, and for supporting the main open source models. What this app seeks is to offer a ChatGPT-like experience with free, offline models, all local. It has local voice, language and vision models, a customizable prompt system, and integration with both Siri and Apple Shortcuts.

AnythingLLM

An Android-exclusive application, although its developers aspire to ship an iOS version. It supports several small, fast and powerful models.
They do not seek to offer a giant catalog; instead they have hand-picked the models they consider best and most optimized for mobile. The app also offers a default agent mode, so you can use its models to search websites, read pages, interact with other applications or use your location. And if you're looking for more power, you can sacrifice privacy by connecting to one of the cloud models it offers.

SmolChat

We finish with another app for downloading and running popular AI models on Android, locally and offline. All this with an interface adapted to Android and many customization settings. You can also pin your favorite chats to the home screen with shortcuts.

Apps for local AI on your computer

We now continue with another small collection of applications for downloading artificial intelligence language models directly onto your computer, whether it runs Windows, GNU/Linux or macOS.

Ollama

Ollama is possibly one of the most popular applications for downloading AI models to your computer. Best of all, it is multi-platform: you can install it on Windows, macOS and GNU/Linux. It is also open source and completely free, and has a clean, minimalist interface. Its interface is a chat, just like any commercial model, with a history of conversations. There is also support for dragging in files, such as PDFs or images. It includes a search engine for AI models, so you can find the one you want in several versions.

Jan

With over five million downloads, this is an amazing tool. You can add open source models or connect private ones like ChatGPT, Claude and company. That versatility makes it an excellent all-in-one, available for Windows, macOS and GNU/Linux. But there's more: Jan also offers connectors, so you can work with your AI in Gmail, Amazon, Google, YouTube, Google Drive and more. The developers are also working on a memory system, with all data stored locally.
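Beyond its chat interface, Ollama also exposes a local HTTP API (by default on localhost:11434), which makes it easy to script against your local models. A minimal sketch in Python, assuming Ollama is running and that a model called "llama3.2" (an example name; use whatever model you have pulled) is installed:

```python
# Sketch: querying a locally running Ollama server over its HTTP API.
# Assumes Ollama is listening on the default http://localhost:11434.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local server and return the model's reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a live server): ask("llama3.2", "Why is the sky blue?")
```

The `build_generate_request` helper only assembles the request body; `ask` will of course only work against a running Ollama instance with the named model pulled.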
LM Studio

A popular application with a unified graphical interface that lets you search for and download AI models from within the program …

An AI network publishes 11,000 podcast episodes a day by copying local journalists. And for now there is no way to stop the avalanche

An automated podcast network publishes more episodes in 24 hours than many broadcasters do in a year, using AI to convert news articles into audio in minutes. One specific case, that of the channel 'The Daily News Now!', helps illustrate how far content scraping has gone in the era of generative AI.

The looting. The case was put on the table by Indicator: on January 31, at 2:57 in the afternoon, the newspaper 'The Chronicle' (a modest publication: despite being 120 years old, it is published by Duke University, in Durham, and is run and produced entirely by students) published an article about Gemma Tutton, a student and pole vaulter who had won a university competition. Seventeen minutes later, a podcast called 'Durham News Today' uploaded an episode titled 'Gemma Tutton's Triumphant Return to Pole Vault' to Spotify. The podcast, of course, had no connection with the newspaper. But it reproduced almost all the data from the original article in the same order, including practically identical phrases. And it is not an isolated case: 'Durham News Today' is one of at least 433 programs that make up 'The Daily News Now!', a podcast network created by Corey Cambridge. As of January 23, 'DNN' had published more than 350,000 episodes (approximately 11,000 per day).

How they do it. With AI, obviously: a scraping system (software automation that extracts large volumes of content) monitors media websites, extracts text from published articles, processes it using natural language synthesis tools, converts it into audio and distributes it on platforms such as Spotify. All in a matter of minutes. And they don't bother to disguise it: according to Indicator, the episodes reproduce the structure, data and writing of pieces published by outlets such as local Fox and NBC affiliates, 'TechCrunch', 'Toronto Star', 'The Verge' or the radio station 'WRAL'.

The tools.
To understand why an operation like this is technically possible today, we must look at the ecosystem of tools that has been democratizing synthetic audio production for two years. In September 2024, Google rolled out NotebookLM's Audio Overview feature globally. The tool converts any document the user uploads into an audio summary. The impact was immediate: NotebookLM went from 652,000 monthly visits in August of that year to 10.5 million in September, an increase of 371% in thirty days. In the three months following the global launch, users generated audio with a total duration of more than 350 years of continuous playback. NotebookLM normalized the idea of the synthetic podcast, and it was all downhill from there. ElevenLabs, specialized in speech synthesis and valued at more than a billion dollars, launched its GenFM feature in December 2024, which generates complete episodes from text. Wondercraft, funded in part by ElevenLabs, introduced support for editing podcasts generated with NotebookLM. Podcastle, aimed at podcast creators, incorporated text-to-speech generation to complete or replace fragments of speech.

The secret: the price. In an analysis of a similar network (Inception Point AI, which generates around 3,000 episodes per week with more than fifty AI announcers), producing an episode costs approximately one dollar, and with just 20 listeners an episode becomes profitable thanks to programmatic advertising. The model does not seek loyal audiences but search positioning: by publishing hyper-specific episodes on cities or niche topics minutes after local media publish their articles, these networks outpace humans' capacity for informative immediacy. In other words: 'The Daily News Now!' appears in the top Spotify results for local news searches in dozens of American cities. It directly competes with (and in many cases surpasses) the media from which it steals content.

Legal issues.
Cambridge defends himself by saying that his network only accesses "publicly available information" and merely summarizes it. But Indicator found almost thirty episodes of 'Durham News Today' that reproduced the structure, order and specific sentences of articles from 'The Duke Chronicle': it is not an isolated pattern. Cambridge may still be legally protected, but the problem is more about information ethics than legal details. In any case, in May 2025 the United States Copyright Office concluded that "publicly accessible" material is not necessarily free to use. There are legal precedents in that direction: in November 2025, a federal judge in New York declined to dismiss the lawsuit brought by fourteen major publishers (including Forbes, The Atlantic and the Los Angeles Times) against the AI company Cohere, considering that its summaries could constitute direct infringement if they reproduced the "structure, sequencing, tone and expressive choices" of the original articles. In contrast, in April of the same year, the NYT vs. Microsoft case saw claims related to Copilot-generated summaries dismissed on the grounds that they were not "substantially similar" to the source articles. Meanwhile, and still awaiting trial, there is the case of the New York Times against OpenAI and Microsoft, accused of using journalistic content to train their models.

Very clever. There is another detail: we are not talking about the 'New York Times' here. 'DNN' concentrates its production on local niche news (university athletics, student councils, cats stuck in trees), first because this content generates specific searches with little competition on Spotify, and second because it is legally safer. They target the most fragile journalism models.
Meanwhile, distributors like Spotify are developing tools to detect artificial music (the platform has removed more than 75 million tracks), but the next step is to make big brands aware that they do not benefit from the exploitation of newsrooms that cannot defend themselves.

In Xataka | AI is already a battlefield: Anthropic has just accused DeepSeek and other Chinese companies of "distilling" Claude

When asked what sense it makes to compete with Google, OpenAI or Anthropic in AI, Mistral has an answer: small, local models

French startup Mistral AI has launched Mistral 3, a family of 10 open source artificial intelligence models that represents its most ambitious effort to date. The Parisian company, often considered Europe's main hope in AI development, seeks to differentiate itself from the large American technology companies by betting on flexibility and deployment across all types of devices instead of raw power. Below, we cover the key points.

What Mistral has presented. The Mistral 3 family includes a flagship model called Mistral Large 3, with 675 billion parameters, and nine compact models grouped under the name Ministral 3 (in three sizes: 14 billion, 8 billion and 3 billion parameters). All models are released under the Apache 2.0 license, allowing unrestricted commercial use. The large model is also multimodal, able to process text and images, and multilingual, with special emphasis on European languages. The small models, meanwhile, can run on devices with just 4 GB of memory, making them a good fit for modest laptops, mobile phones and embedded systems, with no Internet connection required.

Why the strategy matters. While OpenAI, Google and Anthropic focus on increasingly powerful, closed systems with agentic capabilities, Mistral has focused on the breadth and reach of its models, on efficiency, and on what its co-founder Guillaume Lample calls "distributed intelligence." As Lample told VentureBeat, the company believes the future of AI will be defined not by scale but by ubiquity: models small enough to run in drones, vehicles, robots and consumer devices.

The economic and practical argument. Lample explained that in more than 90% of cases, a small, specifically tuned model can get the job done, especially if it is trained with synthetic data for specific tasks. According to Lample, this is not only cheaper and faster; it also eliminates concerns about privacy, latency and reliability.
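The "4 GB of memory" claim is easy to sanity-check: the memory needed for a model's weights is roughly parameters × bits per weight ÷ 8 bytes. A minimal sketch (illustrative arithmetic only; real runtimes add overhead for activations and the context cache):

```python
# Back-of-the-envelope memory estimate for quantized model weights.
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in decimal gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 3-billion-parameter model at 4-bit quantization:
print(weight_memory_gb(3, 4))   # 1.5 GB -> fits comfortably in 4 GB of RAM
# The 14-billion-parameter size at 4-bit:
print(weight_memory_gb(14, 4))  # 7.0 GB -> needs a beefier machine
```

This rough math explains why the 3B Ministral size is the one aimed at phones and embedded systems, while the larger sizes target laptops and workstations.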
The company also has teams that work directly with customers to analyze specific problems and fine-tune small models for specific tasks. This, above all, can attract companies that get frustrated trying to choose the best possible model for a given task and give up when it does not perform adequately.

Europe is lagging behind. When it comes to innovation and technology around AI, it is fair to say that Europe is leagues behind what companies in the United States and China are offering. This is why Mistral AI advocates a different approach, prioritizing massive deployment on devices and the flexibility of its smaller models. The capacity offered by open models can be a great asset for continuing to bet on these technologies. In China, for example, the open models from DeepSeek, Alibaba or Kimi are thriving, even surpassing competitors as large as ChatGPT in certain tasks. Lample explained that most leading Chinese models are exclusively text-based, with separate image processing systems. That is why Mistral also wants to pursue a multimodal approach.

A complete ecosystem. Mistral no longer offers only language models. The company has built an entire ecosystem that includes the Mistral Agents API, with connectors for code execution, web search and image generation; Magistral, its reasoning model; Mistral Code, for programming assistance; and AI Studio, an application deployment platform that also offers analytics and logging capabilities. Furthermore, its assistant Le Chat has incorporated a deep research mode, voice capabilities and more than 20 enterprise integrations. Thus, in addition to its model lineup, the company can provide other companies with a whole layer of personalized products and services, with the aim of making them its main source of revenue.

Digital sovereignty. Although Mistral is often characterized as Europe's answer to OpenAI, the company prefers to describe itself as "a transatlantic collaboration."
Its CEO, in fact, is based in the United States; the company has teams on both continents and trains its models in collaboration with American teams and infrastructure. However, its positioning as a defender of European digital sovereignty has earned it strategic partnerships with the French army, the country's employment agency, the Luxembourg government and various European public organizations. In October, the European Commission presented a strategy to promote European AI tools that provide security and resilience while boosting the continent's industrial competitiveness.

Offline capabilities for democratization. The use cases Mistral has designed its small models for are, above all, local applications: factory robots that use sensor data in real time without relying on the cloud, drones that operate offline during natural disasters or rescues, and smart cars with functional AI assistants in remote areas. Lample pointed out that there are billions of people without Internet access but with laptops or cell phones capable of running these small models, which he considers potentially revolutionary. Additionally, by running on the device, these applications preserve the privacy of user data.

The real "open source" debate. Not everyone celebrates Mistral's approach. Some critics question its decision to release 'open weight' models, that is, models free to access but providing less information than truly "open source" models, which publish the code and training data needed to train a model from scratch. Andreas Liesenfeld, assistant professor at Radboud University and co-founder of the European Open Source AI Index, told the Financial Times that data at scale is the missing key in the European AI innovation ecosystem and that Mistral does not contribute to it at all.

The long-term strategic bet.
Lample acknowledges that Mistral's models are "a little behind" the most advanced closed systems, but argues that the important thing is that "they are catching up quickly." Time will tell whether Mistral's bet on low-cost, versatile models with local applications works out and ends up positioning it as one of Europe's great AI hopes.

Cover image | Mistral AI

In Xataka | China already has an army of 5.8 million engineers. Its new plan involves accelerating doctorates

Spain is a country extremely loyal to its local supermarkets. A chain wants to change that: Action

The already competitive and highly contested Spanish retail sector has become more complicated with the emergence of a new actor, one that some already present as a direct competitor of Mercadona or Aldi, although its approach is slightly different. Its name: Action, a Dutch chain that is expanding strongly throughout Europe. So much so, in fact, that it boasts more than 3,000 stores spread across 13 countries, serving 20.2 million customers every week. And Spain is among those countries.

What exactly is Action? A chain of stores; so far, nothing exceptional or out of the ordinary. What makes it stand out is its expansion rate, something it has achieved largely thanks to its approach: an aggressive commitment to promotions, low prices and a continuously refreshed catalog. To start with (as you can check on its website), the company offers a wide range of items, from household goods to stationery, electronics, toys, tools, parapharmacy, clothing and sports gear. Where it differs from, for example, Mercadona (and most supermarkets) is in its food aisle. While Juan Roig's firm pays ever more attention to cooked, ready-to-go food, Action limits itself to snacks, cookies, candy, soft drinks and some packaged foods, such as instant noodles or protein bars. Nothing fresh; no butcher or greengrocer sections.

Is that its only difference? Its main bet is prices: a discount policy that leads it to launch weekly promotions with products under €15. The company gives this so much importance that it presents itself as "a chain of discount stores for non-food products" and claims that the majority of its products (two thirds) can be purchased for less than two euros. It is nothing exceptional, but it is an effective formula that has allowed other companies, like Temu, to grow before it. Action says it always has 1,500 products priced at one euro and renews its catalog with 150 new items every week.

And does it work? It seems so.
At least if we look at its history and figures. Although the company is young (it opened its first store in Enkhuizen, Netherlands, in 1993), it has managed to spread throughout Europe, adding up to more than 3,000 stores in 13 countries. Its latest results show that its net sales in the first half of the year reached 7.3 billion euros, 17.9% more than in 2024. As for commercial expansion, during the same period it opened 125 new stores, and its network now receives, on average, around 20.2 million customers every week. Its main markets are France and Germany, where this year it opened its 600th store. Its presence is also notable in Poland, with around 400 locations. In general, its progression over the last 20 years has been more than remarkable: in 2003 the chain had 100 stores, by 2008 it had double that, in 2014 it reached five hundred, and in 2022 it passed the 2,000 barrier. This year it has already celebrated a new milestone (3,000 stores), with its entry into the Romanian and Swiss markets.

And in Spain? The chain debuted in Spain in 2022, and two years later it advanced its Iberian expansion with its first store in Portugal. Here the pioneer was an establishment in Girona, although during its inauguration company executives already announced that they would keep pushing into the rest of Spain. In fact, at the Girona opening, Monique Groeneveld, a director of the firm, clarified that more stores would open in the rest of Catalonia within a matter of "weeks." The passing years have confirmed she was not just talking. Today Action has almost 90 stores spread across much of Spain, with a notable footprint in the Community of Madrid, Catalonia, Murcia and the Valencian Community. At the beginning of summer, when it had 74 stores, its workforce already exceeded 1,400 people. Recently its expansion continued with new stores in Ciudad Real, Gijón, Baena and Tárrega.
Since June, this vast commercial network has also been completed with its first distribution center in the country, the sixteenth in Europe: a facility of around 59,000 square meters located in Illescas, in the province of Toledo, designed to supply 210 stores across Spain and Portugal.

Is it all advantages? No. Although the Dutch chain shares part of the strategy of other firms that have achieved a wide presence in Spain (a commitment to low cost, an aggressive pricing policy, promotions and own brands), it will not have an easy time beating the other large chains. Its offer is not comparable to that of Mercadona, Aldi or Lidl (especially because of the differences in food), but Spanish retail is already highly contested and features giants such as Roig's firm, which holds a share of almost 30%. Spanish customers have also demonstrated notable loyalty toward regional firms.

Images | Action and Google Maps

In Xataka | For Juan Roig, the key to Mercadona's future is very simple: "Salaries above the sector average"

I had no idea how to run a local AI, and I just did it in five minutes on my Android. It has been a disaster

The race to build the most capable smartphone is no longer decided by components alone: AI has become absolutely essential to winning this battle. Gemini is the undisputed protagonist, both on Android and on an iOS where AI's presence is insignificant. It is a complete and capable assistant, with one requirement: a permanent Internet connection. Google, Gemini's parent, has decided to give 100% local models a chance on Android, including those of its competitors. I, who had never run a local AI even on a PC, managed to do it on my phone in five minutes.

Installing Google AI Edge Gallery

Google has released an open source app so that any user can interact with multimodal AI models. It is completely free, has no advertising, and its only limitation is that it is not published in the Google Play Store: you have to download it from GitHub. It is as easy as clicking the link and downloading the file, which weighs 115 MB. Once downloaded, tap it and install it like any other app. Depending on your phone's customization layer and the browser you download it with, the system may ask for the occasional permission. Grant it without fear, since it is a clean Google app. Once installed, the app will appear in your phone's app drawer or home screen, depending on your launcher. Just open it and download the available models to start using it.

Using the local AI on my Android

The application offers four preset local AI models:

- Gemma-3n-E2B-it-int4
- Gemma-3n-E4B-it-int4
- Gemma-3 1B-it-q4
- Qwen2.5-1.5B-Instruct-q8

These names probably sound like gibberish, but there is a simple summary. Gemma models are Google's, and the token after the name ("E2B", "E4B", "1B") refers to each model's parameter count (2 billion, 4 billion, 1 billion).
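The parameter-count token can even be pulled out of these names mechanically. A small, hypothetical helper (the regex and function are ours, not part of the app):

```python
# Hypothetical helper decoding the parameter-count token in model names
# like "Gemma-3n-E4B-it-int4" or "Qwen2.5-1.5B-Instruct" ("B" = billions).
import re

def param_count(name: str) -> float:
    """Return the parameter count, in billions, encoded in a model name."""
    match = re.search(r"E?(\d+(?:\.\d+)?)B", name)
    if not match:
        raise ValueError(f"no parameter token found in {name!r}")
    return float(match.group(1))

print(param_count("Gemma-3n-E4B-it-int4"))   # 4.0
print(param_count("Qwen2.5-1.5B-Instruct"))  # 1.5
```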
This means that, in order, Gemma 3 1B is the fastest but most basic model, followed by E2B, with E4B at the head as the most complete. Qwen, for its part, is a Chinese model developed by Alibaba, with 1.5 billion parameters and quite reasonable accuracy. These are the models the app ships with, but we can also install models on our own.

What I gain and what I lose going local

The Google app is designed around three usage scenarios:

- Questions about images: problem solving, identifying objects, etc.
- Prompt Lab: summaries, rewriting texts and code generation.
- AI Chat: multi-turn conversations with the AI.

The local AI is somewhat more limited in these scenarios, but in return it is more private. It runs 100% on your phone, with no connection to servers. This minimizes response latency, something I wanted to verify with a fairly simple test: translating a not-too-long English passage about the early chip industry (Fairchild, the "McNamara Depression" and selling off-the-shelf integrated circuits below manufacturing cost). The times:

- ChatGPT (4o): 5 seconds
- DeepSeek: 19 seconds
- Local (Gemma 3 1B): did not understand the instruction
- Local (Gemma 3n E2B): 26 seconds
- Local (Gemma 3n E4B): 34 seconds
- Local (Qwen 2.5): 16 seconds

The results are quite irregular, and the app is still in its initial stage.
The main problem I found is that it sometimes struggles to run the model at all, even with light models such as Qwen. On a Samsung Galaxy S25 Ultra I had to restart the app on occasion, since it would not execute the prompt. I also wanted to test how it solves simple problems through visual understanding of images, using the heavy Gemma E4B model. That was also quite a disaster. On the first attempt I asked it to solve all the problems, and it succeeded. On the second, I asked it to solve only the first one. It got it wrong and, after I told it so, it erred again. It does a good job recognizing elements in images, but it still has a hard time going further (for these tasks Google only lets you choose its Gemma models). The conversational assistant works well, within its limitations. It is useful for chatting and helping with any task that does not require a real-time Internet search. If you are especially protective of your privacy and want to run an AI without any server connection, it is a relatively valid alternative. Google treats this app as a test laboratory. For now it is not a real alternative to Gemini, far from it.

Image | Xataka

In Xataka | Artificial intelligence on your PC: the best free tools to install AI models such as DeepSeek, Llama, Mistral, Gemma and more

What Google AI Edge Gallery is and how to install this free, open source app to use local AI models

Let's explain what Google AI Edge Gallery is and how to use it: an open source application in which you can use artificial intelligence bots to hold conversations, ask about images or get help designing prompts, all totally free. This application uses local AI models, which means that everything happens inside your phone and no data is sent to any server; everything is private. The downside is that this also means the models are much less powerful than ChatGPT, Copilot, DeepSeek, Gemini or others.

What Google AI Edge Gallery is

Google AI Edge Gallery is an open source Android application with which you can interact with multimodal artificial intelligence models. Google already has Gemini as its main AI bot, but this gallery gives you the option of using other models. It is a totally free application that, for now, is only available for Android. You are not going to download it from the application store; you have to download its APK file for Android and then install the APK manually yourself. There is still no version for iOS, although one is in the works. The idea is to have an application where you can install open artificial intelligence models and perform several tasks with them. At the moment there are three sections: one for asking about images, another for designing prompts and another for chatting with the AI. The latter is the closest thing to ChatGPT. This application is aimed at experienced users, since it may require downloading models manually and linking to their APIs on the source website. But it is a fairly promising tool.

How to install the application

The first thing you need to do is go to the application's website, hosted on GitHub under the google-ai-edge project. There, scroll down to the Get Started in Minutes section and click on the latest APK link that appears. This will automatically start the file download. Once the APK has downloaded, you can start the installation directly.
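Since the APK arrives outside the Play Store, a quick local check can catch a corrupted download before you sideload it: an APK is just a ZIP archive that must contain an AndroidManifest.xml. A minimal, hypothetical sketch (this helper is ours, not part of the Edge Gallery project):

```python
# Cheap sanity check before sideloading an APK: valid ZIP archive that
# contains AndroidManifest.xml. Does not verify the package signature.
import os
import tempfile
import zipfile

def looks_like_apk(path: str) -> bool:
    """Return True if the file is a ZIP containing AndroidManifest.xml."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "AndroidManifest.xml" in zf.namelist()

# Demonstrate with a tiny stand-in archive (a real download would be the
# ~115 MB file from GitHub).
demo = os.path.join(tempfile.mkdtemp(), "demo.apk")
with zipfile.ZipFile(demo, "w") as zf:
    zf.writestr("AndroidManifest.xml", "<manifest/>")
print(looks_like_apk(demo))  # True
```

This only checks the container, not the signature; for full verification you would still rely on Android's own installer.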
Remember that if you have never done this before, you will have to give your browser permission to install applications from unknown sources. You can then enter the main screen, where you will find the three content generation sections, each showing the number of models available. At the bottom right there is a + button, which opens a sheet from which you can import an AI model file stored on your phone and install it manually. When you enter one of the sections, you will have the option to download any of the available models. The app shows how much space each one takes up, and when you tap one you get a button to start the download. You may have to log in to the website the model comes from. Once the model has downloaded, tap its name and a chat will open so you can use it. From there it works like ChatGPT: you write the prompt you want and the AI will answer.

In Xataka Basics | The best prompts to save working hours and do your homework with ChatGPT, Gemini, Copilot or other artificial intelligences

Telefónica has a plan to become a giant. It has set off alarms among local operators

The Spanish telecommunications market is entering a new phase of concentration. With MasOrange already underway and a possible move between Telefónica and Vodafone, the president of the former, Marc Murtra, defends consolidating the sector to gain scale. But Aotec, the association that groups more than 150 local telecommunications operators, has been more than reluctant about Telefónica's declared objective of creating a "European champion."

What has happened. During the presentation of the AOTEC 2025 congress, to be held in June in Madrid, the association's main representatives raised their tone, as reported by Economía Digital. Its executive director, Gonzalo Elguezábal, was clear: "We are not against consolidation, but against it being forced by legal or regulatory means."

Between the lines. AOTEC does not oppose concentration per se. What it rejects is the political and regulatory push to facilitate big mergers, to the detriment of an alternative model that is already working: small operators with local roots, direct customer service without subcontractors, and physical stores that open where the big players close theirs. For her part, María Jesús Cauhé, vice president of AOTEC, noted that "local operators are generating more and more business, compared to the destruction of employment taking place at the large operators." Growth that, she remarks, occurs especially in rural areas.

The context. The warning does not come out of nowhere. In recent months, the CNMC has approved an average rise of 20% in the wholesale prices of the regulated framework model, through which alternative operators pay Telefónica for using its infrastructure. This measure, according to AOTEC, lacks technical and economic justification and "can strangle the competitiveness of the sector." The dossier has already reached the European Commission, and the association is confident that Brussels will force a review.

The standoff. Beyond prices, what is at stake is the future of the operator ecosystem.
AOTEC defends a decentralized, competitive model rooted in the territory, against a vision that prioritizes European concentration and scale to compete better worldwide. Antonio García Vidal, president of the association, sums it up like this: "Where others see fear, we see opportunity. The bigger they are, the worse they serve their customers."

The contrast. While the big players seek efficiency through mergers, local operators focus on closeness, proximity service and capillarity in areas where no one else wants to be. According to AOTEC, the consolidation Telefónica proposes does not guarantee better service and could translate into less real competition and more customer disconnection. Murtra's grand project for his teleco has just become more complicated. Of course, there are still many pages to be written in that book.

Featured image | Telefónica

In Xataka | 100 years after its birth, Telefónica faces the greatest existential dilemma in its history: what it wants to be when it grows up
