The Pope is asking priests not to use ChatGPT to write their sermons

Artificial intelligence may hallucinate from time to time and make things up, but there is one thing it does quite well: drafting texts from source material. Although the results depend heavily on what you ask for in your prompt, it is great for writing to the authorities about a fine they have given you or for summarizing photosynthesis. And why not: also for explaining a parable from the Bible and grounding it in everyday reality. A sermon from the old parish priest, in short. Well, no. I'm not the one saying it; the current Pope of the Roman Catholic Apostolic Church, Leo XIV, is. A few days ago the Augustinian attended a meeting with the clergy of the diocese of Rome, where he brought up the technology and issued a warning to anyone tempted to entrust their homilies to AI, because "to make a true homily, which is sharing the faith, the AI will never be able to share the faith." That is to say, although language models undoubtedly have the capacity to smooth out readings of the Bible and bring them closer to everyday life, explaining the earthly is one thing and providence quite another. In short, spirituality is an exclusive quality of humans, not machines. Perhaps AI could help church staff select readings from the long list offered by Christianity's book par excellence, and summarize what matters, so that they themselves can then write the sermon in their own hand (so to speak), the old-fashioned way.
What the Pope says goes to mass. In any case, Robert Francis Prevost continued with statements that align with science: "like all the muscles in the body, if we don't use them, if we don't move them, they die; the brain needs to be used, so our intelligence, your intelligence, must also be exercised a little so as not to lose this capacity." The exercise of searching the Bible, reading it thoroughly and keeping what matters is undoubtedly mental exercise which, if not done, is lost. Another part of his speech addressed the use of mobile phones and the current paradox of being more connected and more alone than ever, asserting that there is no human contact and that another kind of friendship experience must be sought to establish bonds. In Xataka | Pope Francis made his opinion clear on end-of-life medical ethics. The one we don't know is the Vatican's In Xataka | The Vatican, a holy and renewable city: the Pope's plans to make the small Catholic state more sustainable Cover | Flickr

Anthropic's safety lead leaves the company to write poetry

In a move more typical of the "nihilistic penguin", the head of safety at one of the main players in AI development has stepped down: Mrinank Sharma, head of artificial intelligence safety at Anthropic, has announced his resignation in a public letter on his X profile and will devote his life to writing poetry. In his statement, Sharma not only explained why he is leaving the company behind the Claude models, but also described the current state of AI development, in language that mixes alarm with personal reflection. "The world is in danger," wrote the former Anthropic lead. The context: who he is and what he did at Anthropic. Mrinank Sharma headed Anthropic's Safeguards Research Team, a research group focused on studying the risks associated with AI systems. Within Anthropic, Sharma's work included developing defenses against risks such as AI-assisted bioterrorism and studying phenomena such as sycophancy (the tendency of AI models to flatter the user), as well as investigating how AI can influence human perception and shift cultural behaviors. He leaves, but leaves a message. The almost cryptic letter Sharma published on X quickly went viral for the messages it contained. In it he expressed his concerns in a tone that transcends the technical. One of the most-quoted lines: "The world is in danger. And not only because of AI, or biological weapons, but because of a series of interconnected crises that are developing at this very moment." Beyond the near-apocalyptic literalness, Sharma warned that humanity is approaching a critical point in which AI development poses ethical dilemmas for those building it: "our wisdom must grow at the same rate as our ability to affect the world, otherwise we will face the consequences." Working to put themselves out of work. Sharma is not the only one facing this ethical dilemma.
According to The Telegraph's sources, other Anthropic employees have expressed concern about the huge evolutionary leap in the latest AI models. "I feel like I come to work every day to put myself out of work," one employee admitted to the British outlet. In a way this is true, since these employees are working on a technology that will, in all likelihood, change the nature of their work, and that of millions of people, within a few years. Is that good or bad? A first reading of the letter leaves the feeling that these workers are developing the weapon that will destroy humanity. A reading between the lines, however, leaves Anthropic looking like a pioneer compared to its rivals OpenAI, Microsoft and xAI: they are advancing at a pace that overwhelms even their own developers. That sensation does not seem to occur among the staff of other companies. Could it be that their models are not at that point of evolution? "Throughout my time here, I have seen repeatedly how difficult it is to allow our values to guide our actions. We constantly face pressure to let go of what matters most," Sharma wrote. The poetic turn. Besides reflecting on the global risks he perceives, Sharma announced that his next professional step will be very different from his current one. In his letter he mentioned his intention to devote time to what he called "the practice of courageous speech" through poetry. This turn to poetry has been interpreted as a sign of dissatisfaction with the pace and priorities prevailing in the AI industry. Like Sharma, other key figures in Anthropic's AI development have announced their resignations in recent weeks. Harsh Mehta and Behnam Neyshabur also announced a few days ago that they were leaving the company. In those cases, however, the exit announcement was immediately followed by the announcement of a new AI project.
That is to say, far from the ethical stance Sharma champions, their intention seems more like digging their own gold mine rather than someone else's. In Xataka | Daniela Amodei, co-founder of Anthropic: "studying humanities will be more important than ever" Image | Mrinank Sharma, Anthropic

We have tried to write this article from an AVE. It has been an ordeal

In case anyone is confused: today is the Xataka NordVPN Awards 2025, which means the editorial team travels from our respective cities to Madrid, mostly by train. We are very hard-working people and always take advantage of the trip to write an article, or at least we try to, when Renfe's WiFi allows it. We are Amparo Babiloni and Jose García; join us in this sad story. These lines are written by me, Amparo Babiloni, on the Valencia-Madrid AVE on Thursday, November 20, connected to the PlayRenfe network. I like risk. The simple act of connecting and being able to start working (halfway) has been an ordeal. To give you an idea: I boarded the train at 8:30 and wasn't able to start writing this until almost an hour later. Just logging into the admin panel took me about ten minutes, and opening the draft at least five more. Slack simply does not work, neither in the app nor in the browser. Jose here. I left Córdoba at 8:33 intending to get some work done during the trip. The departure from Córdoba was terrible, since the line passes through areas with many tunnels, then mountains, and then we enter a network wasteland like Castilla-La Mancha. I don't know what happens in Castilla-La Mancha, but that stretch is dreadful. Not only does the WiFi not work, the mobile coverage is terrible too. Good. Connecting to the VPN is mission impossible. Besides having to confirm that I trust the network's certificates, it is impossible to use the WiFi with the VPN active. In fact, I am writing this with the VPN disabled, something that makes me a little uneasy on a public network. Ah yes, delighted to accept all this. During the first hour of the trip I depended entirely on the mobile network to write an article and answer some important emails. Thank goodness I uploaded the images from home yesterday, because if I had had to upload 30 six-megabyte JPEGs I might as well have started crying.
Slack was only half loading (I couldn't see my colleagues' profile photos) and Amparo and I are coordinating this article as best we can. Amparo is offline; I hope she's okay. It's 10:13. They just announced over the PA that there is an incident at the entrance to Madrid, so here I am, half an hour from Madrid, at a complete standstill 🤷‍♂️ Dizzying speeds. Amparo again. On the first leg of the trip I suffered quite a few outages, but now the network seems to have more or less stabilized and I have been able to write all of this in one go. But let's see what a speed test says. The image weighs 13.9 KB; it took more than a minute to upload. That is the download and upload speed while crossing Castilla-La Mancha. One thing both Jose and I have noticed is that the network improves as we approach Madrid, probably because there are more antennas. This contrasts with our experience in 2016, when we tried Renfe's WiFi for the first time. Back then we found "a very good connection speed, with peaks of 53 Mb/s for both upload and download, and minimums of 9 Mb/s down and 13 Mb/s up in an area with little coverage." (My connection dropped here.) It's back, but it took me a while to continue writing, because every new tab takes two to three minutes on average to load, if it doesn't freeze outright. The speed entering Chamartín: I repeated the test entering the station and the download speed still doesn't even reach 2 Mbps. In fact, it's worse than when I was farther from the city. I have to leave you now; we've just arrived. At least this time it wasn't a network outage. Hello, Jose here. It's 10:29 and I'm still stopped half an hour from Madrid. The driver is keeping us well informed of the situation. The incident seems resolved, but now the entrance is congested.
ADIF has not yet given an estimate of how long the stoppage will last, so until further notice, here we remain. Right now, half an hour from Madrid, the network is stable, although the speed barely exceeds 1 Mbps. I tried to liven up the wait by watching a video about the new Bambu Lab 3D printer, but it was not a good idea. All videos load at 240p by default; if I increase the resolution, the video stops and gets stuck in an infinite loading loop. I could fall back on a PlayRenfe movie, but since November 1 they are no longer available. The thing is, I do have 5G on my phone (at 15 Mbps, let's not get carried away), so this really does look like a problem with the train's own WiFi. The phone shows it is not a WiFi 6 network (which would help with congestion), but the underlying problem could be any number of things. A possible cause. One possible origin of the problem is that several factors pile up at once. First, you have a low-speed network that is not prepared to support the huge number of devices on a train, all consuming bandwidth. We are writing this text, but there may be people watching TikTok, YouTube or doing more demanding things. (It's 10:32; the train is moving again.) Second, trains cannot escape the laws of physics. The Córdoba-Madrid AVE is currently moving at 248 km/h and the Doppler effect does its thing: as we move, the signal strength changes constantly and the systems must compensate for these variations. The faster …
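As a rough sanity check of the Doppler argument above, the maximum frequency shift a moving receiver sees is the carrier frequency times v/c. Here is a minimal sketch; the 2.6 GHz LTE carrier is our own illustrative assumption, not a figure reported from the train:

```python
# Rough estimate of the Doppler shift a train-mounted modem sees.
# The 2.6 GHz carrier is an assumed value for illustration only.

C = 299_792_458.0  # speed of light, m/s

def doppler_shift_hz(carrier_hz: float, speed_kmh: float) -> float:
    """Maximum Doppler shift f * v / c for a receiver moving at v."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return carrier_hz * v / C

# An AVE at 248 km/h on an assumed 2.6 GHz carrier:
shift = doppler_shift_hz(2.6e9, 248)
print(f"{shift:.0f} Hz")  # a shift of a few hundred Hz
```

A few hundred Hz sounds small against gigahertz carriers, but it is enough that the radio must continuously re-estimate the channel, which is part of why fast trains are a hard environment for cellular backhaul.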

George RR Martin's lawyers asked ChatGPT to write 'Game of Thrones'. It did it so well that it is going to end up before a judge

The debate about the limits of AI use and how it will actually affect creators is very complex, and it has only just begun: from discerning how far AI's ability to create works apart from humans will keep growing, to the inevitable ethical and legal concerns around a tool that, by its very definition, operates in completely unexplored territory. For now, George RR Martin and other authors are taking steps toward more demanding regulation. What has happened? A federal judge in Manhattan has given the green light to the lawsuit filed by George RR Martin and other authors against OpenAI and Microsoft for alleged copyright infringement. The creator of 'Game of Thrones' and his colleagues accuse these companies of using their works without authorization to train ChatGPT. According to the ruling issued on October 27, 2025, there are grounds for the case to move forward, since ChatGPT's proposal for a sequel to the saga was substantially similar to Martin's copyrighted work. The decisive test. It came when lawyers asked ChatGPT to create a fictional sequel to 'A Clash of Kings'. The chatbot immediately produced a novel called 'Dance of Shadows', a sequel that included a new Targaryen heir named Lady Elara, a rebellious sect of the Children of the Forest, and a mysterious form of ancient dragon-related magic. This ability to recreate elements of Martin's universe made the question obvious: how could the AI know his work in such detail without having been fed it? The precedents. The origins of this legal conflict date back to September 2023, when Martin, together with 17 other authors (including Michael Chabon, Ta-Nehisi Coates, Jia Tolentino, John Grisham, Jonathan Franzen and Sarah Silverman), raised his voice against what he considered a systematic exploitation of his work.
The case was brought by the Authors Guild, in a lawsuit that spoke of "systematic theft on a massive scale", arguing that the tool makes use of their works without paying royalties and without the writers' consent. The letter. Months before the lawsuit, these authors and many others, such as Margaret Atwood and Nora Roberts, had sent a letter to the big technology companies conveying their concerns about generative AI technologies. In that document they warned of "the injustice inherent in exploiting our works as part of your AI systems without our consent, credit or compensation." The accusation was clear: ChatGPT had not only learned from their books; it could now replicate them. Other fronts. We are at a key moment in determining the legal implications of generative AI. In early 2025, for example, a similar dispute against Anthropic was resolved, concluding with an out-of-court settlement: the company paid $1.5 billion to authors whose works had been used without permission. This precedent shows that technology companies are willing to negotiate to avoid court rulings that could set binding jurisprudence. In England, by contrast, the High Court ruled that Stability AI did not infringe copyright by training its model with Getty images — a decision in literally the opposite direction, which has alarmed European creators. Running through all these cases is the debate about "fair use": the technology companies argue that training their models constitutes a transformative use of the works, similar to search engines indexing content. The creators reply that it is a massive appropriation that replaces, rather than complements, the original work. And in the background, a clash that has only just begun. Header | Gage Skidmore

There’s a reason you forget to write things down on your shopping list: prospective memory.

Did I turn off the gas? Where did I leave the keys? What did I come into the kitchen for? These are some of the questions we often ask ourselves. Of the three, the last is perhaps the most interesting, since it involves a form of memory we are not very familiar with: prospective memory. What exactly is prospective memory? It refers to our ability to remember planned or future actions — to remember intentions. It could be remembering what we went to the refrigerator for, or Thursday's dentist appointment. Prospective memory is something we deal with in our daily lives, yet few people are familiar with the concept. Neither were the experts: research on this form of memory was virtually nonexistent until the beginning of this century. But in recent years we have managed to learn some key aspects of it. For example, we now have an idea of which brain regions are involved in its correct functioning. A 2010 study found three brain regions whose activity was linked to prospective memory performance: the parahippocampal gyrus, the left inferior parietal lobe, and the left anterior cingulate. However, much remains to be investigated. Other studies, for example, have given greater weight to activation of the right lobe in relation to this memory; others emphasize the role of the anterior prefrontal cortex and the medial temporal lobe. But it's not all neurobiology. Things we forget. Why do we forget what we were going to write on the shopping list? In this respect, prospective memory is not very different from other forms of memory: attention is key. In an interview for RAC1 radio, the neuropsychologist Saúl Martínez-Horta, starting from "what did I come into the kitchen for?", explained why we forget these things so easily. Distractions are one of the main factors affecting this memory.
If we go to the kitchen for the salt, but on the way remember that we left the oven on, this second fact will interfere and make it more likely that we forget the salt. In Martínez-Horta's own words: "Normally what makes us forget what we should do is the saturation of the system and distraction mediated by another event. Brain capacity is limited and sensitive to distraction, so it is relatively easy for our attention to be drawn to something other than what we are doing." Concentration is therefore key if we want our prospective memory (or our memory in general) to go further. Memory can be trained, but the exercises that allow us to do so are generally not useful beyond the specific memory function they target. That is to say, there is no evidence that solving crossword puzzles will make us remember to buy popcorn for when we have guests. That does not mean we are helpless. Some healthy habits affect our brain's ability to perform its tasks, and although studies focused on prospective memory are scarce, it may be a good idea to adopt them. A varied diet, exercise and proper sleep can all help our memory. Perhaps they can also help us remember what we were looking for in the closet before that WhatsApp from our brother-in-law arrived. In Xataka | How much information can our brain store? Image | Cottonbro Studio

How to write a claim using artificial intelligence

Let's explain how to write a claim using artificial intelligence, something you can do directly with ChatGPT, Copilot, DeepSeek, Gemini or any other assistant. This will save you time when drafting these documents. In addition to giving you a prompt with which you can make claims, we will also give you some important tips for writing this request to the AI, because there are details worth taking into account to maximize the result you will obtain. Write claims with AI. Writing a claim with AI is relatively easy, but doing it well, so that you don't have to complete it with more data afterwards, depends largely on your skill in creating the prompt with which you ask the AI to generate it. It is important to include plenty of detail, so that ChatGPT, or whichever assistant you choose, can take it into account. The base prompt, onto which you can then add details, would be something like this: "Write a formal claim for an online store, which I will send by email. I bought a speaker on June 15, it arrived broken and they have not answered me. I want my money back. Be polite but firm." Here, the first thing you should do is specify the context of your claim. The prompt says it is an online store, but you can change it to suit the establishment where you want to make the claim. It is also important to indicate the purpose, such as having your money returned or the product exchanged. It is also important to explain in detail what happened. In the example I used a speaker that arrived broken; here you should state specifically what happened to you, including prices, how you received the product, or what went wrong with the service you paid for. Beyond this, here is a list of everything to include and take into account in the prompt you are going to write. Type of recipient: specify whether it is a company or a public administration, and include its name if you think it necessary.
What exactly happened: as we said above, clearly describe what happened and why you want to make the claim. What happened? What rights do you believe have been violated? Those are two questions you need to answer. Specify how you will send the claim: say whether it will go by email, web form, letter, or anything else, so the AI can use that context to set the format. Specify the purpose: we also said this before; state the goal of the claim, such as getting your money back. Provide all the information and context: in addition to describing what happened, include all the necessary background. Imagine you are telling another person, and leave nothing out. Write clearly and concisely: present all the information clearly, and try to avoid mistakes or badly written passages that could lead the AI to misinterpret it. Specify the tone: you can specify the tone in which the claim should be written. In the example we put "be polite but firm", but you can change it to make it more formal, or whatever you want. Don't forget key data: the date of the purchase or incident you are claiming about, the order, invoice or file number, and any documentation to attach if necessary. Add evidence: you can ask for a paragraph to be added mentioning the attached invoice or a photo of the damaged product. Don't forget your personal details: your full name, and your email or phone number so they can contact you. With all this information, compose your prompt for ChatGPT or whichever AI you choose, and a claim will be generated. It is important that you read it carefully and make sure everything is correct. If necessary, change whatever you want. If you leave out some key data, the template will mark the places for you to fill them in. In Xataka Basics | The best prompts to save working hours and do your homework with ChatGPT, Gemini, Copilot or other artificial intelligence
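The checklist above is essentially a template with slots. As a minimal sketch, here is one way to assemble those slots into a single prompt programmatically; the function and field names are our own invention, not part of any AI product:

```python
# A hypothetical prompt builder that assembles the claim-writing checklist
# (recipient, facts, channel, goal, tone, key data) into one prompt.

def build_claim_prompt(recipient, what_happened, channel, goal,
                       tone="polite but firm", key_data=None):
    """Return a single prompt string ready to paste into any chatbot."""
    parts = [
        f"Write a formal claim addressed to {recipient}, to be sent by {channel}.",
        f"What happened: {what_happened}.",
        f"Goal of the claim: {goal}.",
        f"Tone: {tone}.",
    ]
    if key_data:  # dates, order numbers, attached evidence, contact details
        parts.append("Key data to include: " + "; ".join(key_data))
    return "\n".join(parts)

prompt = build_claim_prompt(
    recipient="an online store",
    what_happened="I bought a speaker on June 15, it arrived broken and they have not replied",
    channel="email",
    goal="a full refund",
    key_data=["order number 12345", "invoice attached"],
)
print(prompt)
```

The point is not the code itself but the structure: every field the article lists maps to one line of the prompt, which makes it easy to check that nothing has been left out before sending it to the AI.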

The new macOS 26 Tahoe changes the interface, yes. But the surprise is being able to write an email or launch a shortcut from Spotlight

More than 40 years of history stand behind the Mac. It may not be the star of the Apple ecosystem, but macOS always gets meaningful attention in the updates the company announces every year. This time was no exception, and this year we have a new version that not only changes its name (Tahoe) but also makes the rumored jump in numbering. The traditional sequence after macOS 15 Sequoia should have brought us macOS 16, but Apple has instead decided to adopt a version number corresponding to the launch year plus one, which in this case is 26. A good facelift was expected in this new version of macOS, and that is what we get with macOS 26 Tahoe. The interface changes are the absolute focus, although the most striking novelty is the one arriving in Spotlight, the launcher/search engine built into macOS. The interface changes, though not radically. Liquid Glass brings an interface in which, for example, the menu bar is transparent, and which is customizable, as is Control Center. The latter can be configured to our liking, adding plenty of icons and direct shortcuts, which can even correspond to iPad apps. These customization improvements also reach folders, where we can change the color or add a small badge to distinguish them from the rest. The new version of Continuity allows us, for example, to have a "twin" of our iPhone on screen, so we can do things like take calls directly on the Mac's display or use options such as real-time translation. However, the most curious improvements arrive in Shortcuts and Spotlight. In Shortcuts, a powerful automation tool, we can now create more powerful shortcuts thanks to the so-called "smart actions", which take advantage of Apple Intelligence features. But it is in Spotlight where things change most: Apple states that this is the most important update in the history of this component.
This launcher lets you search for files or launch applications — even iPhone apps paired via Continuity — by starting to type their name in the text field. But now we also have a way to access very striking functions without leaving Spotlight, using small quick keys. For example, if we type "SM" (Send Message) we can start writing an email message without opening the Mail application. There are several of these personalized commands, which turn Spotlight into a kind of custom "terminal" from which to run commands, launch applications, and even access the clipboard history. The last big novelty of macOS 26 Tahoe comes from video games. Apple, which has never paid much attention to this area on the Mac, now offers a hub called Apple Games that is reminiscent of the Xbox app on Windows. As in that case, Apple Games gives us a single place from which to access our library and launch the games we have bought, or buy new ones. There is a new Game Overlay that lets us chat with friends, invite them to play, or adjust our settings while playing. Developers and users will also benefit from Metal 4, the new rendering technology designed specifically for the GPUs in Apple's chips, which allows games that take advantage of it to improve their visual quality remarkably, the company promises. In Xataka | Apple stakes more than its AI future in today's event: it stakes its credibility as a company

How to improve ChatGPT's privacy by preventing what you write from being used to train artificial intelligence

Let's explain how to improve ChatGPT's privacy by deactivating the option that allows OpenAI to use all the content you write or create to continue training its artificial intelligence models. It is an option that is enabled by default in your profile, but it is easy to switch off. When you use ChatGPT without changing anything, you are giving the company permission to collect your interactions. The questions you have asked the AI, and the answers it generated for you, will then be used to continue training and improving the models. If you don't want this information used because it is private, here is how to turn it off. Disable data sharing in ChatGPT. The first thing to do is open ChatGPT's settings. On mobile, tap the side options button and then tap your username. In the web version, click your profile image and choose the Settings option in the window that opens. On mobile, once in the settings, tap the Data controls option in the Account section, the first one you will see at the top. Once inside, turn off the option Improve the model for everyone, which appears first. With this, your content will no longer be used to continue training OpenAI's models. In the desktop version, within the settings, click the Data controls section. Once inside, click Model improvement, where you can turn off the option Improve the model for everyone, which appears first.

Write 1,000 words per second

Mistral competes against giants, but it is not giving up. Yesterday it released a new version of its chatbot, Le Chat, in addition to offering it on iOS and Android and launching a paid version with advanced AI options. Its platform, available at chat.mistral.ai, gains a lot and becomes more versatile, but it also has an advantage over its competitors: speed. A chatbot at 1,000 words per second. Mistral's people do not claim to offer more precise or better answers than their competitors, but they are clear about one thing: "Le Chat reasons, reacts and responds faster than any other chat assistant, at approximately 1,000 words per second." Flash Answers. That is the name of this Le Chat feature, which according to its creators is powered by the highest-performance, lowest-latency models, in addition to "the fastest inference engines on the planet." Flash Answers has just debuted in a preview version for all users, who can deactivate it if they wish. First test. To gauge the speed of Le Chat's responses, we wanted to compare it with ChatGPT and Claude, two of the most reputable chatbots of the moment. First we asked "what is Xataka" to see how these engines responded. Claude's answer is the most extensive, but also full of errors. Le Chat answers very fast, of course, and is more concise, but it also makes a mistake (Weblogs SL was acquired by Webedia years ago). ChatGPT is the most accurate in its description. Second test. But if what we want is to see how quickly these chatbots generate text, it is best to explicitly ask them to write a lot. Here we asked all three to write 10 paragraphs about the tariff situation in the US. Le Chat proved the fastest at generating the text, and its answer was also very well structured, up to date, and included citations. Claude's answer, although relevant, was more focused on earlier measures from the Joe Biden administration and did not include citations.
ChatGPT, although it took longer, also offered a very solid answer, with references. Third test. Finally, we wanted to test the quality of translation from Spanish to English, to see whether that affected speed. We passed a link to one of our latest articles about Google and asked the three chatbots to translate it. Here Claude apologized first, indicating that it could not access the Internet, but that we could paste the text in. Le Chat was the fastest again, although in translation that speed was noticeably lower. Both this model and ChatGPT produced a fairly decent translation, albeit too faithful to the text. True, these models can always be asked for a freer translation, but the quality in any case is remarkable. Le Chat wins by a wide margin on speed (and is not at all bad on accuracy). The Mistral model has shown in these tests that it is competitive in the precision and quality of its answers, which is certainly promising for the French startup's aspirations. Best of all, it really does prove to be much faster at inference and text generation (this does not apply to other areas such as image generation), something its rivals will no doubt strive to match in the future. Image | Cerebras. Why is Le Chat so fast? The answer is simple: to generate those very quick responses, Mistral has allied itself with Cerebras, a company that bills itself as "the fastest inference provider in the world." They are applying their chips and technology to the Mistral Large 2 123B model on which Le Chat is based, and thanks to that they achieve up to 1,100 tokens per second on text requests. It also searches the web. Le Chat's answers are also backed by queries to media outlets and news agencies such as AFP — with which Mistral has a collaboration agreement — and by the ability to search the web quickly to collect information with which to build its answers.
In these answers, the sources from which the information is drawn are cited (although not always). And it even generates images. The new Le Chat adds options such as advanced document uploading to process files with OCR, a Canvas for using the chat in a conversational/collaborative way, and even a code interpreter that lets you run code in a sandbox. It can also be used to generate images thanks to Flux Ultra, the generative model from Black Forest Labs, one of the most fashionable lately. These options can be enjoyed in the free version, but if we want to use them with more daily queries, we can pay the 14.99-euro-per-month subscription for Le Chat's Pro version (students pay only 4.99 euros per month). In Xataka | Amazon missed the AI train, but wants to catch up. The new Alexa with AI will arrive this month in preview
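The "1,000 words per second" figure is easy to check yourself: time how long a response takes to stream and divide the word count by the elapsed seconds. A minimal sketch of that measurement, where the iterable of chunks stands in for a real streaming API client (the injectable `clock` parameter is our own device to make the function testable):

```python
# Measure a chatbot's generation speed from a stream of text chunks.
import time

def words_per_second(stream, clock=time.perf_counter):
    """Consume a stream of text chunks and return words generated per second."""
    start = clock()
    count = 0
    for chunk in stream:
        count += len(chunk.split())  # crude word count per chunk
    elapsed = clock() - start
    return count / elapsed if elapsed > 0 else float("inf")
```

In practice you would pass the chunk iterator returned by the chatbot's streaming API; note that vendors usually quote tokens per second (Cerebras's 1,100 figure), which is not the same unit, since a word is typically more than one token.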
