What it is, what you can do with it, and how to use it to create your own bots within Telegram

Let's explain what BotFather is and how it works. This "father of bots" is a bot created by Telegram for creating other bots within the messaging app. Whether you want to control an AI agent from Telegram or simply want your own information and news bot, this is always the first step. We'll start by explaining exactly what this bot is, then summarize what you can do with it, and finish by telling you how to use it to create your own bot.

What is BotFather

BotFather is the official Telegram bot for creating and managing bots of your own. The idea is that it literally acts as the father of all the bots on the platform. Being official, it was created and is maintained by the Telegram team itself, not by third parties.

With this bot you can create a bot, give it a name, add a description and an avatar, and manage it as you wish. It will also give you a bot token with which to connect it to an external program that controls it and makes it work. In other words, BotFather lets you create the bot's shell, but you will then need to connect it to some program to give it functionality. It also lets you edit the welcome message, configure the visible commands, activate or deactivate privacy mode, and manage its permissions. The actual programming of the bot, however, has to be done elsewhere.

In summary, this is a bot for creating bots, and it works in such a simple way that it democratizes the process so anyone can do it. Just remember that you only create the framework; making it work depends on your skill at connecting it to the tools where you program it.

What you can do with BotFather

BotFather's main function is to create new bots, which remain registered in your name so you can use them as you wish. You give each one a name, and you must also give it a username so it can be found. When you create a bot, you receive the access token to connect it to external tools.

You can also customize the bots you create to suit your needs. You can change their description, their "about" text, and their profile photo. You can choose the list of commands they accept, and activate or deactivate advanced functions such as inline mode (to use the bot from the text bar of any chat), its payment system, or its privacy settings in groups. You can also edit a bot later to change any of these aspects, and regenerate its access token if you need a new one.

How to use BotFather

Creating a bot is as easy as opening a chat with BotFather and typing /newbot. This starts a process in which you first type the bot's name and then its username. The username, the @handle used to find and write to the bot, must be available (not already in use) and must end in "bot". Once you have it, you will be given the API token and the address to access the bot.

In BotFather you will always have a button that opens a list of your bots. By pressing one of them you can enter its settings and change anything you want, from its commands to its information, internal games, and all its other settings. In these options you can also configure its payment system, and transfer or delete it. If you delete it, the bot disappears forever and its username becomes free again.
If you transfer it, you make a different user its owner, able to control and configure it; you will no longer be able to do so yourself.

In Xataka Basics | How to create a Telegram bot that sends you a summary made by Gemini of each email you receive in Gmail and other emails
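To give a sense of what "connecting the bot to an external program" looks like once BotFather has handed you the token, here is a minimal sketch in Python against the Telegram Bot API (its getMe and sendMessage methods). The token and chat ID shown are placeholders you would replace with your own values.

```python
import requests

# Token handed out by BotFather after /newbot (placeholder value).
BOT_TOKEN = "123456789:AAExampleTokenFromBotFather"
API_BASE = f"https://api.telegram.org/bot{BOT_TOKEN}"

# getMe verifies the token works and returns the bot's own profile.
me = requests.get(f"{API_BASE}/getMe", timeout=10).json()
print("Connected as:", me["result"]["username"])

# sendMessage makes the bot write to a chat; chat_id is the numeric ID of a
# user or group that has already started a conversation with the bot.
requests.post(
    f"{API_BASE}/sendMessage",
    data={"chat_id": 123456789, "text": "Hello from my new bot!"},
    timeout=10,
)
```

Everything BotFather configures afterwards (commands, description, privacy mode) only changes the shell; logic like this is what actually gives the bot its behavior.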

How to create songs in Google Gemini using its Lyria model

Let's tell you how to create music with Gemini, Google's artificial intelligence. Gemini has just added the Lyria model to its AI assistant, which can generate songs from a text prompt. With this, Gemini begins to compete with Suno and other tools for creating songs with artificial intelligence.

It is true that Lyria in Gemini is still some way behind what the competition offers, but it can generate amazing results. It creates the music, the lyrics, and the voice of a song. You just have to describe what you want, and the AI will sing in your language without problems. The songs it generates are only 30 seconds long: small musical clips that you can share.

How to create music in Gemini

There are two methods for creating music with Gemini. In one, the AI helps you step by step to configure your musical style, which makes things faster; the other is simply invoking the creator with a prompt.

Make music with Gemini from its tools

The first method is to choose the Create music option from the tools menu. Simply click on tools and choose Create music. The option may also appear among the suggestions below the writing field when you start a new chat.

This takes you to a screen where you can choose the musical style you want for your song. Each of these styles, or pre-generated songs to work from, has a button to listen to it, and you only have to click on the style you want to continue. Now you simply write a prompt describing the song you want to make. When you do, you'll see Gemini start thinking and summon Lyria, and it will then generate a song that you can play and even share. You will also have the option to regenerate the result or write a new prompt requesting the changes you want.

In this prompt you can give all kinds of details, such as musical style, subgenre, language, rhythm, and theme, and you can even add the lyrics or tell it specific words or phrases you want the lyrics to include. You can specify structures, tempos, whatever you want.

Create a song in Gemini with a single prompt

The second method is simply to write a single prompt with everything in it. Here, the only important thing is that the prompt states that you want a song, and then describes how you want it to be. When you do this, Gemini will realize while processing your request that you have asked for music or a song, and will directly run the Lyria tool to generate it. In just a few seconds you will have your song. You can then write more prompts to request changes to the created song, or to compose a new one. As with the first method, you can give all kinds of details: musical style, subgenre, language, rhythm, theme, and even the exact words or phrases you want the lyrics to include.

In Xataka Basics | How to Improve Gemini Answers: 14 Steps to Ensure Higher Quality and Better Sources

What they are and how to use them to create web applications within this artificial intelligence

Let's explain what Claude's Artifacts are and how they work: one of the unique and most differentiating functions of this artificial intelligence chat. With it, you will be able to create web applications in Claude, which you can then run and use directly on your website or application. Anthropic's AI is one of the leaders in the industry, and also the most important one for programmers, with other functions such as Claude Code. We'll start the article by explaining in a simple way what exactly artifacts are, and then we'll tell you step by step how to create them.

What are Claude's artifacts?

Claude's Artifacts are a feature that allows this artificial intelligence model to generate structured content, such as code, long documents, and complete interfaces. The result is then shown to you in a separate panel, but within the same conversation. If you have ever used Claude, you will have noticed that sometimes, instead of responding with plain text, it builds a small functional application. This happens because the AI has a kind of internal mechanism for generating this type of content.

There are many types of artifact. They can be web pages written in HTML, CSS and JavaScript, or simple games made with those languages. There can also be interactive React components, charts and data visualizations, Markdown documents, diagrams, or vector images. This way, you have the code in the text window, but you also have the possibility of executing it.

Imagine, for example, that you want to build an application. Other AIs may only generate the code for you, but with Claude an artifact is generated that you can launch and test. Then, if there are things you want to change or that don't work, you just have to tell the AI so it makes the modifications and regenerates the artifact.

The same thing happens with the other formats supported by artifacts. When you ask it to write an article with a structure containing titles, subtitles, different font sizes, and so on, in addition to showing you the code you will also have access to the artifact, in this case a docx document, and you can download it to your computer. This means that instead of generating purely textual responses that you then have to transform into something useful, you directly receive the final product you want, or at least a functional version of it, and you can even download the file.

How to use Claude's artifacts

To use Claude's artifacts, enter the AI's website or application and click on the Artifacts section in the left sidebar. This takes you to a section with various examples and templates. When you click on one of them, you go to a screen showing a screenshot or the prompt you would write to create it, along with an option to customize it. This will help you test your first artifacts and explore ways to modify them.

In the artifact index you will also have a tab to see the ones you have created, and a New artifact button to create a new one step by step. This takes you to a screen where you can choose an artifact category, and clicking on one starts a guided process that helps you create your artifact. For example, if you click on the Games category, Claude will ask what you want to do with it, and you can create a game as an artifact.
You then go through a step-by-step process that guides you through everything, choosing the type of game and the other characteristics you want it to have, so the AI has enough context to create it. This helps when you are not sure what type of code you want to use or how to describe to Claude the artifact you want to generate. It is very useful for beginners, and the best way to start.

You can also create artifacts with a prompt in which you describe exactly what you want. You can mention that you specifically want to create an artifact, and then describe the language of the application and exactly what you want it to do.

In Xataka Basics | Claude: 23 functions and some tricks to get the most out of this artificial intelligence
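Artifacts themselves are a feature of the claude.ai interface, not of the API, but the same kind of self-contained, runnable output can be requested programmatically. Here is a minimal sketch with the Anthropic Python SDK, assuming an API key is set in the ANTHROPIC_API_KEY environment variable; the model name, prompt, and output file are illustrative choices, not something defined in the article.

```python
import anthropic

# Reads the API key from the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

# Ask Claude for a small self-contained web app, the same kind of structured
# output that the claude.ai interface would render as an artifact.
message = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": (
                "Create a single-file HTML page with inline CSS and JavaScript "
                "that implements a simple pomodoro timer. Return only the code."
            ),
        }
    ],
)

# The generated HTML arrives as plain text; save it and open it in a browser.
with open("pomodoro.html", "w", encoding="utf-8") as f:
    f.write(message.content[0].text)
```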

China has managed to create an AI that has made Hollywood tremble. Disney has not been amused at all

The phenomenon of the month in AI is Seedance 2.0: to date the most amazing text-to-video creation model, and a dart aimed straight at the Hollywood industry. So much so that Disney itself has legally warned ByteDance, the Chinese giant behind the model.

The notice. Reuters sources claim that Disney has sent a cease and desist letter to ByteDance, accusing the Chinese company of having used its characters to train the Seedance 2.0 model. According to these accounts, ByteDance allegedly built a package of copyrighted characters to feed the AI, the main reason it is so accurate at recreating them.

ByteDance's response. The Chinese company has not acknowledged using copyrighted characters to train its model, but it has reacted to Disney's notice: "We are taking steps to strengthen current safeguards as we work to prevent unauthorized use of intellectual property and likeness by users." Beyond the statement, the company has not detailed what measures it is taking to stop users from distributing copyrighted content, such as the videos that have been flooding the network for the past two weeks.

They are not the first. Disney has already taken similar measures against Character.AI, an AI specialized in creating animated characters capable of perfectly emulating Disney characters. The company only has an alliance with OpenAI, with which it signed an agreement so that Sora could generate more than 200 characters under a three-year license. The operation included a $1 billion investment by Disney in OpenAI.

Like putting doors on an open field. "Creative prompt engineering" and code modifications that make an AI bypass the very limitations it is programmed with are inevitable, on top of all the derived open-source models that can be trained outside any jurisdiction. The key here is not the dispute between Disney and ByteDance; it is that China has created the first model that directly threatens the creation of cinematographic content.

Join the enemy. For some time now, the film industry has been clear that the coming years will bring cuts and an embrace of AI. CEOs such as Sony's have already spoken out and positioned themselves as "very focused on AI", making it clear that the big problem for movies right now is cost. Models like Seedance can now generate in minutes what previously required entire teams and million-dollar budgets. In the coming years, video generation models will force the industry to rethink its cost structure.

In Xataka | We are entering a new era of robotics driven by AI and Disney is its perfect showcase

In 1968 a man had the idea to create the first tablet in history. The problem is that he was decades ahead of his time.

If I tell you to think of the oldest tablet you can remember, you may go back to the first iPad, released in 2010 (which, by the way, turned seven last week). Or, if you have been following the world of technology since before the turn of the century, you might remember the Microsoft Tablet PC from HP Compaq that was announced in 2001. In reality, someone had already tried to create one much earlier, in 1968, before the term "tablet" had even been coined.

At that time, Alan Kay was a young worker at the Xerox Palo Alto Research Center who had been mulling over the concept of a personal computer for some time (in contrast to the military, business, and professional use that reigned among manufacturers back then). After speaking with colleagues who were beginning to research how the Logo programming language could help young children advance in math, Kay came up with an idea:

"This encounter finally made me see what the real destiny of personal computing was going to be. Not a personal dynamic 'vehicle', as Engelbart's metaphors had it as opposed to IBM's 'railway tracks', but something much deeper: a dynamic personal 'medium'. With a vehicle, one could wait until high school to take 'driving lessons'. But if it was a medium, it had to extend into the world of childhood."

In 1968, Kay created the Dynabook concept, which he would spend several years refining. The book "Tracing the Dynabook: A Study of Technocultural Transformations" defines it like this: "Kay called it the Dynabook, and the name suggests what it was going to be: a dynamic book. That is, a medium like a book, but one that was interactive and controlled by the reader. It would provide cognitive scaffolding in the same way that books and print media had done in recent centuries but, as Papert's work with children and Logo had begun to demonstrate, it would take the advantages of the new computing medium and provide the means for new kinds of exploration and expression."

"A personal computer for children of all ages"

With the idea of its function clear, Kay began to shape it into cardboard prototypes (as can be seen in the image at the top of the article). In 1972, the researcher presented his paper "A personal computer for children of all ages", in which he offered more details not only about his motivation and his vision of personal computing at the time, but also about the device itself that he had in mind.

His idea was a kind of tablet-shaped personal computer aimed at education. It would be thin, with a liquid crystal touch screen and a keyboard. It would be roughly the size of a regular notebook, with a graphical interface (a revolution for the time) that allowed the reproduction of graphics, music, and text, and with internal storage for 500 pages. The keyboard would not be the only way to enter information: it could also be done by voice. In the drawing Kay made, the word "stylus" can also be seen, although he did not comment on it in his paper.

Kay's idea was that the Dynabook could connect to other systems to "copy" information onto it (among them, the ARPA Network), and he even predicted the existence of content "vending machines" that could not be accessed until payment had been made. "The books can be installed instead of being bought or loaned," he said.
Regarding digital "ownership", Kay said the following: "The ability to easily make copies and own the information yourself is not likely to weaken existing markets, as has happened with xerography, which has strengthened publishing; and just as tapes have not hurt the music industry but have provided a way to organize one's own music. Most people are not interested in being a source or a smuggler, but rather like to trade and play with what they have."

According to Kay's calculations, the components to manufacture it would cost $294, so it was not unreasonable to think it could be sold for $500, expensive for the time. "The average annual amount spent per child on education is only $850," he said, and that is why he even proposed a different financing model: "perhaps the device should be given away as if it were a notebook, and only sell the content (cassettes, files, etc.). This would be quite similar to the way TV packages or music are now distributed." "Let's do it!" he wrote to close his paper.

Unfortunately for Kay, the Dynabook never materialized. Despite his enthusiasm, it was never manufactured, for lack of support at Xerox and because of the technological limitations of the time. Do you remember what computers were like then? Well, imagine what it would have taken to build a tablet.

Two Xerox PARC engineers, Chuck Thacker and Butler Lampson, asked for permission to try to build a similar machine on their own, and that is how the Alto came to light, also known as the "Interim Dynabook". It was not a tablet, far from it, but it kept some of the ideas Kay had raised in his publication. The Xerox Alto was one of the first personal computers in history, and Steve Jobs and Apple's engineers were inspired by some of its innovations and concepts, such as the use of a graphical interface, for their own computers.

Starting at minute 2:27, the Xerox Alto graphical interface in action.

Kay is remembered not only for the Dynabook itself, but for the educational vision he gave the project, for his particular view of the personal computing paradigm, and for how he anticipated some of the problems (and even technologies) that would come later. Not only that: in 2001, Microsoft presented its Microsoft Tablet PC, a project led by Chuck Thacker and Butler Lampson. Yes, the same pair who had once tried to implement …

How to create a Telegram bot that sends you a summary made by Gemini of each email you receive in Gmail and other emails

Let's explain how to create a Telegram bot that sends you a summary of your emails, such as those from Gmail. That way, when you receive a new email, whether from anyone or from specific senders or topics, an artificial intelligence will write a summary and send it to you. All of this without knowing how to program or having technical knowledge.

This is not something you can do simply by asking an artificial intelligence; we are going to need a program that creates workflows. We will use Make.com, because it is very complete and easy to use. Besides, Make.com has a free version that is perfect for taking the first steps, although with some limitations.

In Make we will have to link an artificial intelligence. We have opted for Gemini because it is easy to obtain a free API key for it, and we have chosen Telegram because creating bots there is easy and only takes a few minutes. In the end, what you will need is an API key from an AI, the token of a Telegram bot, and the workflow chain created on Make.com. In the examples we have used Gmail because it is also easy to link.

Get your Gemini API key first

The first thing we are going to do is get a Google API key so we can use Gemini in our project. For that, go to the website aistudio.google.com and sign in with your Google account. Once there, in the bar on the left, at the bottom, click on Get API Key.

Now click on the Create API key option that appears at the top of the screen. This opens a window where you have to create the project you are going to use it for, so you can identify it, for example Gmail Gemini. When you create the project, you can create the API key. Once created, it will appear in the list of API keys. You just have to click on the left, below where it says Key, and a window with the API key will open, starting with "AIza…".

Set up the Telegram bot

The first thing you have to do is create a bot on Telegram. For that, look for the "@BotFather" tool and write to it as if it were a new contact. Use the /newbot command to create a new bot, giving it a name to identify it and a unique username to access the bot whenever you want. When you do, it will give you two things: first, the username and address of your bot to access it, and second, an access token made up of various figures and letters. You have to save this token to use later.

Start creating your project

Now go to Make.com and click on the Create new scenario option to create a new project. In the options, choose Build from scratch to create an automation from scratch. For what we are going to do, you need to understand that we will create an automation made up of several modules, each of them different. These modules form a chain, so the action of the first leads to the second, and that of the second to the third. In short, the order in which we place them matters.

Add your email module as a trigger

You will land on a blank screen with a button with a plus symbol. Here, click on the + button and from the drop-down menu choose Gmail. Inside, click on the Watch emails option to configure the action of reading your emails. This will make your automation activate every time you receive an email in Gmail. It is a trigger, the element that will start the automation.
Now click on the Create a connection button, which opens a screen where you have to name the connection at the top and, at the bottom, log in with your Gmail account to link it. You will have to log in and give the website permission to access your email.

Once the action has been added, you can filter the type of emails that trigger this automation. You can choose emails from a specific folder or label, as well as other criteria, so that only those are picked up and read by the AI. You can also set some limits. This screen lets you fully customize the experience depending on what type of emails you want the AI to summarize for you. It is an important step, especially because you can make it perform this action only with certain types of emails, for example those from senders related to your work or a specific project.

If you open the advanced settings (Advanced settings), you can be even more specific. For example, you can configure it so that it only runs with emails from a certain sender, with a certain subject, and many other characteristics. You can also configure from what moment you want the data to be processed. For example, you can choose From now on so that only the emails you receive from this point onward are processed.

You can also link other email services. For this, instead of the Gmail module you can use the Email module, which allows you to connect with Google, with Microsoft for Outlook, or with others through IMAP. Outlook also has its own module.

Now add the Gemini module

Now it's time to add the second module. To do this, click on the + button to the right of Gmail, and on the screen that opens choose Gemini. In the options that appear in the module, choose Generate a response. This opens a key module, where you simply have to enter the Gemini API key that we generated at …
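For readers who prefer code to a visual builder, here is a rough sketch, in Python, of what this Make.com chain does under the hood: it takes the text of an email, asks Gemini for a summary through the Generative Language REST API, and pushes the result to your chat through the Telegram bot token. The API key, bot token, chat ID, model name, and sample email are placeholders; fetching the email itself (for example over IMAP) is left out.

```python
import requests

GEMINI_API_KEY = "AIza-your-key-here"       # from aistudio.google.com (placeholder)
BOT_TOKEN = "123456789:AAExampleBotToken"   # from @BotFather (placeholder)
CHAT_ID = 123456789                         # your own Telegram chat ID (placeholder)


def summarize_with_gemini(email_text: str) -> str:
    """Ask Gemini for a short summary of an email body."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        f"gemini-1.5-flash:generateContent?key={GEMINI_API_KEY}"
    )
    payload = {
        "contents": [
            {"parts": [{"text": "Summarize this email in three sentences:\n\n" + email_text}]}
        ]
    }
    response = requests.post(url, json=payload, timeout=30)
    response.raise_for_status()
    data = response.json()
    return data["candidates"][0]["content"]["parts"][0]["text"]


def send_to_telegram(text: str) -> None:
    """Send the summary to your chat through the bot created with BotFather."""
    requests.post(
        f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage",
        data={"chat_id": CHAT_ID, "text": text},
        timeout=10,
    )


# Example run with a hard-coded email body; in Make.com the Gmail trigger
# supplies this text automatically every time a matching email arrives.
if __name__ == "__main__":
    email_body = "Hi team, the quarterly report is attached. Please review it before Friday."
    send_to_telegram(summarize_with_gemini(email_body))
```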

one that leads you to create your own AI chip

ByteDance is developing its own artificial intelligence chip and is already negotiating with Samsung Electronics to manufacture it. At least that is what two sources close to the project indicate, something that would make the Chinese company an even fiercer competitor in the segment that wants to revolutionize our world.

TikTok doesn't matter anymore (that much). TikTok has turned ByteDance into an empire within the social media segment, but the Chinese company has not stopped there. In fact, it has fully immersed itself in the world of generative AI and already has truly exceptional models like Doubao (a competitor to GPT-5 or Gemini) or Seedream. The only thing it was missing was its own AI chip, and pay attention, because that may be solved in the short term.

What has happened. Sources close to the company's plans indicate that ByteDance is working on the design and development of its own AI chip, which it has named Seedchip, in line with its Seedream generative image and video AI models. The chip could be manufactured by Samsung Electronics, with which the Chinese company is holding talks. A ByteDance spokesperson says these reports about its own chip are inaccurate, but does not detail why.

They are going at full speed. The project appears to be moving forward quickly, and ByteDance aims to receive the first samples of the chip by the end of March. The company intends to manufacture at least 100,000 units, which would also be focused on the inference of its AI models rather than on training. One of the sources consulted indicates that ByteDance hopes to later increase production to 350,000 units, but the time frame for that objective is not specified.

Inference is increasingly important. Focusing on inference chips makes a lot of economic sense. Training models like Doubao requires the brute force of NVIDIA chips, and that part is well covered. However, making its AI work for millions of simultaneous users is an area where ByteDance can save billions by having its own chip optimized for its code.

Why partner with Samsung. Considering that China tries to avoid dependence on foreign companies, this alliance with Samsung is striking. However, there may be a compelling reason for the decision: the negotiations with Samsung apparently include access to the supply of memory chips that are currently practically out of stock. And above all, to some very special chips: HBM.

A delicate alliance. The clear alternative to this partnership with Samsung would be to manufacture with SMIC, for example, or even to opt for Huawei, which is becoming the "Chinese NVIDIA" and already has truly remarkable AI chips. Choosing Samsung seems to send a compromising message: that Chinese-made technology still lags behind that of manufacturers like Samsung.

ByteDance already tried it. In June 2024, reports already suggested that ByteDance had allied with Broadcom to develop an advanced AI chip. At that time, the partner chosen to manufacture those chips was TSMC, but that project appears to have faded away.

Everyone wants their own chip. ByteDance's ambition follows the general market trend: almost all major technology companies have decided to create their own advanced AI chips. Google has its TPUs, Microsoft its Maia, and Amazon its Trainium, all to reduce their dependence on NVIDIA. And of course ByteDance's main Chinese rivals, Alibaba (with its Zhenwu) and Baidu (which has its Kunlunxin division working on it), have their own designs. But they continue to bet on NVIDIA.

This effort aims to transform its short-video businesses and cloud infrastructure services, but even if it is confirmed and successful, it will take time to become a true reference. ByteDance plans to invest $22 billion in the AI space, with the majority of that budget going toward purchasing NVIDIA chips, including the H200.

Image | Xataka with FreePik

The Emirates just blew 60 million on creating a Noah's Ark. And it has handed the money to the people who want to resurrect the mammoth

We've really been into playing God for a few years now. On the one hand, we have Bryan Johnson, a millionaire who lives to rejuvenate himself (and to sell you oil). On the other hand, there is Colossal, a company doing more serious and interesting things. How serious? To the point of chasing the resurrection of the mammoth. For now there are more promises than realities, but they have managed to get the United Arab Emirates to hand them a check for 60 million dollars. The aim? To create a modern Noah's Ark.

Colossal. This biotechnology company has become popular for its goal of bringing back to life not only the mammoth, but also the dodo, the moa, and the Tasmanian tiger. It does so from well-preserved DNA samples, thanks to the interest of personalities such as Peter Jackson (director of 'The Lord of the Rings' and a great collector of moa bones) and, evidently, thanks to tremendously generous sums of money. Colossal Biosciences has reached a valuation of more than 10 billion dollars and, with its latest round, has accumulated 600 million in funding. Peter Jackson himself contributed 25 million for the company to put the moa on its goal list.

BioVault. Although there are those who think that what Colossal does is sell hype, it has achieved some results, like resurrecting the dire wolf. The theory is simple: they take the DNA of the extinct animal, combine it with samples from living relatives, and the difficult part comes when they have to filter the variants to polish the genes and get the animal they want. When they have it ready, they use the belly of a living animal to gestate the extinct creature.

The UAE does not want them to resurrect anything. At least, no such objective has been made public. But thanks to Colossal's activity, they have obtained thousands of DNA samples, and that is what they want to preserve in BioVault. The goal is a capsule storing the DNA of more than 10,000 species, with a special initial focus on the 100 most endangered species today. Which ones? They are still 'coming soon'.

Museum of the Future. For this, the United Arab Emirates will spend 60 million dollars, and once completed in 2027, this modern Noah's Ark will be stored in the World Preservation Laboratory, which will be part of Dubai's Museum of the Future. Inaugurated in 2022, it is a tremendous building, on par with the pharaonic works being built in the Middle East amid the particular architectural war in which the United Arab Emirates and Saudi Arabia are engaged. If it is spectacular on the outside, it is even more so on the inside, and its name is due precisely to the fact that it is a museum that does not show antiquity, but rather presents a journey to the future. To 2071, specifically.

Backup. In the end, this is one of the largest and most important biotech deals. Ben Lamm, co-founder of Colossal, says we are losing species at an alarming rate and that the world "urgently needs a network of global BioVaults, a backup plan for life on Earth." He threw a dart at the financing of other biobanks, claiming that they are fragmented, underfunded, and lack the collaborative spirit that would allow their data to be used in the event of a crisis. In fact, it is estimated that half of the species on Earth will face extinction by 2050, and BioVault will be there to remedy it. The big question is whether it will be worth bringing animals back just because we can, when their ecosystems are being destroyed.
Images | Colossal, روتانا

In Xataka | Face transplants always seemed like something out of science fiction. A hospital in Barcelona has made it a reality

Vodafone negotiates with Telefónica and Orange to create a common front: a RANco

Eamonn O'Hare, CEO of Zegona (the owner of Vodafone Spain), has confirmed to Expansión that it is in talks with Orange and Telefónica to create a RANco, a mobile network joint venture in the style of the fibercos it launched in 2025.

Why it is important. Spain has three large operators managing three national mobile networks with identical fixed costs, but Orange and Telefónica have twice as many customers as Vodafone. This asymmetry makes Vodafone's mobile network comparatively inefficient. A RANco would allow them to share infrastructure, reducing expenses and improving quality without eroding profitability.

The context. Vodafone has multiplied its share price by 12 in 20 months after reducing costs and closing two fibercos that generated 2.2 billion in value. The share went from 345 pence (it is listed in London) when Zegona bought Vodafone Spain to more than 1,565 pence now, and the company has returned 1.4 billion in dividends to its shareholders. It now trades at 9 times its cash flow while its competitors trade at 13 times. The RANco is the missing piece to close that gap.

How a RANco works. A RANco is a wholesale mobile network company shared between operators that provides services to its owners. It is similar to the fibercos: the network is unified, synergies are captured, and a minority stake is sold to an international investor. Vodafone pays 150 million annually to Vantage Towers for towers at double the market price. With the RANco, those costs are split. There are two possible scenarios: with Orange, it would be easier to execute and to attract investors, but there would be fewer synergies because they already share a network in some areas; with Telefónica, there would be more synergies because nothing is shared yet, but it would be more complex to bring in a financial partner.

The calendar. O'Hare expects the RANco to be closed within a year and a half. And in November 2028 the window opens to abandon the contract with Vantage Towers. Vodafone has already made a decision: either Vantage reduces its rates by 50% or the agreement is terminated.

Yes, but. Mergers between operators are not on the table. O'Hare rules out short-term purchases or sales because the regulatory risk is "too great" and would distract the group from its three priorities: aligning its stock valuation with the competition, reaching 1 billion in cash flow, and developing the RANco.

The figures. Vodafone Spain generated 400 million in cash flow when Zegona bought it. Last year it reached 600 million. This year it will be close to 800 million. The goal is to reach 1 billion in the coming years.

At stake. The RANco is not just a financial move. Turning off the cable network will take three or four years of migrating customers to fiber. Small operators will disappear, devoured by Digi and Finetwork. And Vodafone keeps open the possibility of an IPO in Spain within three or four years, when its transformation would be complete.

The shadow of Telefónica. As Populi Voice published a few days ago, Telefónica has begun talks to buy Vodafone Spain and close the operation in the first half of 2026. But a RANco with Orange, or with Telefónica itself, on top of O'Hare's own interview, would change the equation: Vodafone would enter that negotiation with shared infrastructure and long-term contracts that would make the purchase more expensive or outright unviable. Zegona is also negotiating the RANco as an insurance policy.

Featured image | Orange, Movistar, Vodafone

In Xataka | Any telecom operator would be worried about making less money with each client. Digi is aiming for exactly that

How to create an image of yourself holding a Pixar-style character with your face using artificial intelligence, with Gemini or ChatGPT

We are going to explain how to create an image in which you appear holding a miniature 3D character of yourself, in the style of a Pixar character, using artificial intelligence. We are going to use a prompt created for use with Gemini, although it will also work in ChatGPT without problems. It is a fairly simple composition: you only need to add a photo of yourself and write the prompt, which is quite long and complex. The result is quite striking, although you may need several tries to get it completely to your liking.

An image of you with a 3D cartoon

What you have to do is open a new chat with Gemini, which is the AI with which you will get the best results. Once you have it, upload a photo of yourself in which your face looks good, and then add the following text as a request or prompt:

"Use the uploaded photo as the ONLY facial and identity reference. The main subject must look exactly like the person in the uploaded image, preserving identical facial structure, proportions, skin tone, hairstyle, eye shape, nose, lips, jawline and overall identity. Do not embellish, alter or replace facial features. Create a cinematic, ultra-detailed scene of your subject smiling naturally. The subject delicately holds a tiny, cartoon-style miniature version of the same person by the hair between his fingers, like a playful puppet suspended in the air. The miniature character is a Pixar/Disney-style 3D version of the same person, with cute, exaggerated proportions, big, expressive eyes, mouth open with joy, arms raised, and a lively, playful stance. The miniature must clearly resemble the same person and be wearing a matching outfit. The main subject looks at the little character with surprise, delight and affection, creating a whimsical and touching interaction. Lighting is warm professional studio lighting with soft rim light, shallow depth of field, and soft golden bokeh background. The real person’s skin texture is photorealistic, while the miniature character has clean Pixar-style materials, smooth shading, and polished 3D surfaces. Cinematic color grading, high contrast, sharp focus, premium portrait composition, 50mm lens look, f/1.8 aperture, ultra-realism mixed with stylized animation, 4:5 aspect ratio, 8K quality, cinematic finish. Anime, 2D illustration, comic style, flat shading, low poly, plastic skin, wax face, face swap, different identity, facial morphing, beauty filters, excessive smoothing, blur, low resolution, grain, noise, distortion, deformed face, incorrect facial proportions, extra fingers, missing fingers, duplicate hands, floating objects, bad anatomy, inconsistent lighting, harsh shadows, neon colors, cold blue tones, washed out colors, excessive saturation, watermark, text, logo, severed head, face out of frame."

Yes, it is a very long text, but each of the sentences that make it up contributes to the effect. When you send it, you will receive a composition showing an image of you holding a Pixar-style character with your face between your fingers. You will also be able to do this with ChatGPT, which occasionally produces good results. However, the faces are sometimes somewhat deformed, and for now Gemini almost always seems to do better.

In Xataka Basics | How to create a character in ChatGPT and Gemini to use it in all the images you make with artificial intelligence
