ChatGPT is not working, and OpenAI says it is investigating the issue

If you are a ChatGPT user and tried to ask the chatbot something this afternoon, you have probably been left without an answer: time to use your own brain again. The famous artificial intelligence (AI) service has been returning errors for several minutes now, and OpenAI, for its part, has launched an investigation to find the origin of the problem.

The outage began around 4:00 p.m., preventing users around the world from using ChatGPT normally. As the screenshot shows, the chatbot refused to respond, returning error messages such as "Hmm… something seems to have gone wrong".

In development.

Images | Solen Feyissa

In Xataka | What Cloudflare is, how it works, and why an outage or block makes half the Internet fail

How to create a Claude AI chatbot that responds based solely on your own documents

We are going to explain how to create a chatbot with Claude AI that responds based solely on your own documents. If you have reliable sources of information and want to work with them or make requests about them, you will be able to get answers based only on those sources. This is something we have already shown you how to do with NotebookLM, a service much more specialized in this. But if Claude is your main artificial intelligence, you should know that you can do it there too: you just need a project and the proper instructions.

First you have to prepare the sources

The first step is also the most tedious: choosing the sources you want the AI to rely on when responding to you. These sources must be text files or PDFs, and they can include images. The documents can be in any language. Ideally, save the sources you find in a folder and then upload them all at once. You will also have an option to write texts by hand to use as a source.

You can also ask Claude to help you search for PDF files on specific topics on the Internet, although it is best that you personally choose the sources you consider most reliable. Keep in mind that we are going to ask the AI not to look for information beyond the sources we give it, although there will always be the option of doing so. In any case, it is best to build as complete a knowledge base as possible. You will also be able to upload new files whenever you want.

Create your bot with custom sources

The first thing you have to do is open Claude and go to the Projects section. Projects let you create a separate workspace where you can organize related conversations, upload reference documents, and give personalized instructions. Here, create a new project. This project is the one we are going to use to upload documents and make the AI respond only with the data in them.
To create it, give it a name and a description. These do not affect how the project works; they only help you distinguish it from the others you have created. Now upload the documents you want to use as sources in the Files section. You can upload documents in PDF, DOC and many other formats, bring them in from Google Drive or other services you have linked, or even write the text manually. Take some time to upload all the documents you need.

Next, edit the Instructions section to explain to Claude that in this project you only want it to use the documents as reference. For that, we have used this prompt: "All answers to questions and requests made in the chats of this project will be sought only in the content of the files. ONLY the files can be used. If something is asked that does not appear in the files, Claude must say that the answer is not in the files, and ask whether I want to look it up on the Internet. By default, the Internet is not used, and it can only be used with explicit consent each time a search is about to happen."

As you can see, in this prompt we tell Claude several times that we only want it to use the information in the documents when we ask it a question. We have also specified that if it does not find the answer in the documents it should tell us so, and that before searching for information on the Internet beyond what we have given it, it must ask us for permission.

Now all that is left is to ask Claude any questions and make any requests you want, and it will compose the answers using the information you have given it as a source. You can ask it specific questions, request travel itineraries, and even ask it to quiz you with trivia games. For the latter, remember that we already told you how to create a quiz game from PDF files, also with Claude. And as we said, if we ask it something that does not appear in the files we have uploaded, Claude will tell us.
After all, we have written its instructions for exactly that. In addition, it will ask us for permission before searching for the information on the Internet. It will not do this proactively, so we can be sure the information comes from within the documents.

In Xataka Basics | What is Claude Cowork, how it works, and what you can do with this AI assistant on your computer
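The same "answer only from the files" restriction described above can also be expressed programmatically rather than through the Projects UI. Here is a minimal sketch of building a Messages API payload with that restriction; the helper function, document names and the model identifier are our own illustrative assumptions, not something the article configures in code:

```python
# Sketch: restrict answers to supplied documents via a system prompt.
# Helper name, document names and model string are illustrative assumptions.

def build_grounded_request(documents: dict[str, str], question: str) -> dict:
    """Build an API payload whose system prompt restricts answers
    to the supplied documents, mirroring the project instructions."""
    sources = "\n\n".join(
        f'<document name="{name}">\n{text}\n</document>'
        for name, text in documents.items()
    )
    system = (
        "Answer ONLY from the documents below. If the answer is not in "
        "them, say so and ask whether to search the Internet. Never "
        "search without explicit permission.\n\n" + sources
    )
    return {
        "model": "claude-sonnet-4-5",  # assumed model name
        "max_tokens": 1024,
        "system": system,
        "messages": [{"role": "user", "content": question}],
    }

payload = build_grounded_request(
    {"itinerary.pdf": "Day 1: Lisbon. Day 2: Porto."},
    "What city do we visit on day 2?",
)
# This dict could then be passed to a Messages API client call.
```

Note the design choice: the restriction lives in the system prompt, just as in the project's Instructions, so every user message in the conversation inherits it.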

Apple resisted turning Siri into a chatbot for years. Until it surrendered to the evidence

2026 will be the year of Siri, but not because of an internal shift at Apple or because Apple Intelligence has matured. It will be because the pact with Google will let Apple use Gemini technology as the base of its assistant. The details of what Apple will do with its assistant have not taken long to come to light and, good news: they will arrive in the next version of iOS.

The new Siri. Apple has been announcing the benefits of the new Siri since before its features were ready. With the Apple Intelligence announcement, it put on the table a Siri completely integrated into the system, capable of functioning as a full assistant and of working mostly locally. The reality? Everything remains practically the same as before and, when Siri doesn't know how to respond to something, it ends up opening ChatGPT.

What is going to change. Bloomberg explains that Apple, as of iOS 27, will surrender to the chatbot model that has worked so well for companies like OpenAI and Google. The mere-assistant model has expired, and Siri will become a chatbot at the service of any of our requests. This new chatbot will be integrated into all Apple apps (an API open to developers to integrate it into their own apps is expected), allowing us, for example, to find specific photos in the Photos app, or use it as a programming assistant in Xcode, and so on.

What won't change. The only certainty with the new chatbot model is that Apple will keep its obsession with privacy and with keeping its AI ecosystem its own, even if it is based on Gemini. Apple's intention is to integrate this experience into iPhone, iPad, Mac and Apple Watch, maintaining activation through the "Siri" voice command or by holding down the power button on the iPhone.

The difference. Today, Siri is an assistant, a command system.
You tell it something; Siri classifies the intent (set an alarm, call a certain person, send a message); it executes the order. Moving to the chatbot model means having a generative model capable of interpreting natural language, maintaining conversations and offering a more "human" interaction with the phone. This is what its rivals have been doing for a few years now.

Adapting to the inevitable. That Siri will evolve in 2026 is proof that the classic assistant model is exhausted. Apple will have to adopt the chatbot model as an inevitable transition, pioneered by OpenAI and in which Gemini now seems to be leading the way.

It doesn't end here. The destination of the new Siri is not only current Apple devices. As my colleague Javier Pastor explains, the company plans to launch a device without a screen, its first AI-focused wearable. According to the leaked information, it will have a format similar to that of the AirTag, a microphone system and a launch scheduled for 2027. New assistant, new devices, and an alliance with Google. Apple's new artificial intelligence era is finally arriving. The question is whether it will manage to offer something new.

Image | Xataka

In Xataka | Hey Siri: 134 voice commands to get the most out of Apple's assistant
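The "classify the intent, then execute" pipeline described above can be sketched with a toy keyword-based classifier. The intent names and keyword rules here are entirely our own invention, purely for illustration; a real assistant uses trained models, not keyword matching:

```python
# Toy illustration of the classic "classify intent, then execute" model.
# All intent names and keyword rules are invented for this sketch.

INTENT_KEYWORDS = {
    "set_alarm": ["alarm", "wake me"],
    "send_message": ["message", "text"],
    "make_call": ["call", "phone"],
}

def classify_intent(utterance: str) -> str:
    """Return the first intent whose keywords appear in the utterance,
    or 'unknown' (the case where today's Siri hands off to ChatGPT)."""
    lowered = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in lowered for word in keywords):
            return intent
    return "unknown"

print(classify_intent("Wake me up at 7"))       # set_alarm
print(classify_intent("Call mom"))              # make_call
print(classify_intent("Why is the sky blue?"))  # unknown -> generative model
```

The limitation is visible immediately: anything outside the fixed intent list falls through to "unknown", which is exactly the gap the chatbot model fills with open-ended generation.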

When Meta forced us to use its AI chatbot on WhatsApp, it overlooked one detail: the European Commission

The European Commission has been fiercely fighting monopolies in the technology sector for years. The pursuit of Microsoft in the early 2000s was just the beginning. In 2018 the EU imposed a historic fine on Google for abuse of a dominant position with Android, and last year it fined Facebook for the same reason. According to the Financial Times, Meta is going to sit in the dock again accused of monopoly, this time over the forced integration of Meta AI into WhatsApp.

In the spotlight. The European Commission has not commented on the matter, but according to sources consulted by the Financial Times, Brussels is already investigating Meta over the integration of Meta AI into WhatsApp, and the announcement will take place imminently. The case will be conducted under traditional monopoly laws and not under the Digital Markets Act (DMA).

The accusation. The investigation has not yet been confirmed by the European Commission, but internal sources have revealed that the main reason is the deployment of Meta AI, its AI chatbot, within WhatsApp. As we saw at the time, there is no way to prevent it from being activated and no option to hide it either. Remember that WhatsApp is the most used messaging app in the world, with 3 billion active users. Meta is already being investigated for this same reason by the competition authority in Italy, which considers that the integration of Meta AI "could limit production, market access or technical development" in the AI chatbot sector.

Meta returns to the dock. Just a year ago, Meta entered the select club of companies fined by the European Commission for violating the antitrust rules of the European Union. On that occasion, the product at the center of the accusation was Facebook, specifically for forcing the use of Facebook Marketplace, which, like Meta AI in WhatsApp, was activated without users' permission.
After several years of investigation, the Commission concluded that the company had violated the law and made it pay a fine of 800 million euros. In April of this year it also had to pay 200 million over the case of the consent required for data transfers.

Historic fines. Facebook got off cheap if we compare it with other sanctions, such as the more than 4.3 billion that Google had to pay for abuse of a dominant position with Android, and that has not been the only fine Mountain View has had to pay. In September of this year the EU fined Google 2.95 billion euros for abusing its position in the digital advertising market, and Brussels is currently preparing another case over how it ranks media outlets in its search results.

The US pushes back. The Trump administration has lashed out against the DMA and EU fines, which it described as unfair and discriminatory, threatening to start a tariff war. Europe's response was forceful: technological regulation "is a sovereign right of the EU". Obviously the heads of the technology companies have also positioned themselves against it, and earlier this year Mark Zuckerberg called on the US government to protect technology companies from "European censorship", so we can assume this new investigation will not have amused them.

Judges in the US also see monopolies. At the same time as the criticism, there are also antitrust cases against big technology companies in the United States, such as the one Google lost in 2024, which threatened to force it to sell Chrome, although in the end it dodged that bullet. Meta also recently faced a similar case in which it was accused of holding a monopoly through WhatsApp and Instagram, but in that case it won. We will see what happens if Europe makes its case against WhatsApp official.

Images | European Commission, Xataka Android

In Xataka | The United States seems determined to break up its monopolies. And it has an obvious victim in its sights: Google

We believed ChatGPT was just a very capable chatbot. OpenAI has just turned it into something very different: a real agent

We have been talking about artificial intelligence agents for a long time, but OpenAI has just turned that conversation into something much more tangible. The company has presented ChatGPT Agent, a function that turns its popular assistant into something more autonomous: it is now able to execute complex tasks using a virtual computer, with tools that allow it to browse, program or even make decisions.

From Operator to Agent. At the beginning of the year the company presented Operator, a tool that allowed ChatGPT to interact with web pages. Then came Deep Research, focused on writing long reports from multiple sources. The underlying idea was clear: go beyond conversation and tackle real tasks. What has been presented today is a tool that unifies all these previous advances.

During the demonstration, those responsible for the project posed an everyday situation: organizing a trip to attend a wedding. The agent was able to understand the context, find hotels, propose gifts, and take into account the weather and the dress code, even remembering that a suit had to be bought. It did so by analyzing the message, accessing the web and acting step by step, as a person would. The difference is that everything happened within ChatGPT, without the need to switch tabs or give instructions one by one.

A virtual computer for the AI. The key is that the agent is not limited to responding with text: it operates within a kind of virtual computer to which OpenAI has given it access. It can use a text browser to read pages quickly, a visual browser to interact with buttons and forms, and even a terminal to run commands, generate code and manipulate files. It can also work with spreadsheets and presentations, and access services such as Google Drive, Calendar or GitHub if the user authorizes it.

What is under the hood?
The model that drives ChatGPT Agent (developed specifically for this function, although without an official name) was trained on complex tasks that required combining multiple tools. OpenAI used reinforcement learning, the same approach it already uses in its reasoning models, to teach it when to use the browser, the terminal or an API. The idea was to develop a model capable of accurately deciding how to act based on each objective.

In development.

Images | OpenAI

In Xataka | Meta is in such a hurry to lead in AI that it has done something unusual: it is building a data center in tents
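The tool-selection idea this article describes can be illustrated with a minimal dispatch loop. Everything here (the tool registry, the keyword-based chooser) is our own invention for the sketch: the real ChatGPT Agent learns this routing via reinforcement learning rather than hard-coded rules:

```python
# Hypothetical sketch of an agent choosing among its tools per step.
# The real system learns this routing; here it is hard-coded keywords.

def text_browser(task): return f"read page for: {task}"
def visual_browser(task): return f"clicked through form for: {task}"
def terminal(task): return f"ran command for: {task}"

TOOLS = {"read": text_browser, "form": visual_browser, "run": terminal}

def choose_tool(task: str):
    """Pick a tool based on a keyword in the task description."""
    for keyword, tool in TOOLS.items():
        if keyword in task.lower():
            return tool
    return text_browser  # default: cheap text browsing

def run_agent(steps: list[str]) -> list[str]:
    """Execute each step with whichever tool the chooser picks."""
    return [choose_tool(step)(step) for step in steps]

log = run_agent([
    "Read hotel reviews near the venue",
    "Fill the booking form with the chosen dates",
    "Run a script to total the trip budget",
])
```

The point of the sketch is the shape of the loop, not the rules: the agent's value comes from replacing the brittle `choose_tool` heuristic with a learned policy that decides per objective.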

McDonald's used an AI chatbot to recruit new employees. Someone thought '123456' was a safe password

No one disputes that AI will change the labor market; to begin with, it is already very present in personnel recruitment processes. McDonald's franchisees in the US use an AI-based recruitment chatbot that collects and manages the data of the millions of new candidates who want to work in one of the hamburger chain's restaurants. However, as Wired reports, whoever configured it forgot something as basic as changing the default password of the administrator of the entire platform.

The selection chatbot. McDonald's uses a platform called McHire, developed by Paradox.ai, to manage the personnel selection process through a chatbot known as Olivia. When a candidate shows interest in a job offer, the chatbot comes into play, requests personal data and shift preferences from the candidate, and directs them to take a personality test to process their application. The use of artificial intelligence is meant to work without human intervention.

However, as Ian Carroll and Sam Curry, the researchers who unintentionally discovered the flaw, recount, two things caught their attention. The first was a Reddit thread claiming that the McDonald's hiring AI was producing some comical failures, driving crazy the candidates trying to submit their job applications. The second thing that led them to investigate the McDonald's hiring chatbot a little further was that it seemed very strange that résumés had been replaced by a personality test. "It seemed quite dystopian compared to a normal hiring process, right? And that was what encouraged me to investigate it more thoroughly," Carroll said.

The security failure: "123456". Researchers Ian Carroll and Sam Curry have a lot of experience in cybersecurity, so it is no surprise that they managed to breach the security of a platform. However, as they report on their blog, they did not need any of their great technical knowledge to take control of the platform as administrators.
They simply went to the McHire portal, the platform behind the employee-hiring chatbot for McDonald's franchises, and entered "123456" in both the username and password fields. "That allowed us, and anyone else, access to every inbox and to retrieve the personal data of more than 64 million applicants," said the cybersecurity experts. This access not only let them see the candidates' data, but also intervene in ongoing conversations and selection processes. "It turned out that we had become administrators of a test restaurant within the McHire system. We could see that all the restaurant's employees were simply employees of Paradox.ai, the company behind McHire."

The data were not exposed. After confirming that it really was a genuine security vulnerability, the researchers immediately contacted Paradox.ai, which published a statement explaining that "only a small part of the records accessed by the researchers contained personal information" and that "the '123456' account that exposed this data had not been accessed by anyone but the researchers". It also explained that the compromised credential was a test account that "had not been used since 2019 and, frankly, should have been deactivated".

McDonald's blamed its supplier, saying it was "disappointed by this unacceptable vulnerability of an external supplier, Paradox.ai" and that as soon as it learned of the problem it ordered Paradox.ai to fix it.

AI without surveillance. The employment context makes the exposed data especially attractive to cybercriminals, which shows the importance of adding extra security layers to AI-based chatbots that manage such sensitive data. "If someone had exploited this, the phishing risk would have been really huge. It is not just personally identifiable information and résumés. It is information from people looking for work at McDonald's, people who are anxiously awaiting response emails," the researchers said.
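The basic lesson, never shipping with default or trivially guessable admin credentials, can be enforced with even a minimal check at account-creation time. A sketch follows; the blocklist here is a tiny illustrative sample, and any real deployment would check against a large breached-password list and force rotation of vendor defaults:

```python
# Minimal default-credential guard (illustrative sample blocklist only).

COMMON_PASSWORDS = {"123456", "password", "admin", "12345678", "qwerty"}

def is_acceptable_password(password: str, username: str) -> bool:
    """Reject known-common passwords, too-short ones, and passwords
    equal to the username (both were true in the McHire case)."""
    if password.lower() in COMMON_PASSWORDS:
        return False
    if len(password) < 12:
        return False
    if password.lower() == username.lower():
        return False
    return True

print(is_acceptable_password("123456", "123456"))  # False: default credential
print(is_acceptable_password("correct-horse-battery", "admin"))  # True
```

A companion measure that costs nothing: disable or expire unused accounts, since here the exposed account had reportedly been dormant since 2019.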
In Xataka | Builder.ai promised to revolutionize programming with its AI. There were actually 700 Indian engineers behind it, writing code

Image | Wikimedia Commons (Dirk Tussing)

ChatGPT is down worldwide. OpenAI's chatbot does not work, or works erratically

If in the last few hours you have tried to use ChatGPT, you have probably found that the chatbot did not work, or worked erratically, with answers that took a long time to arrive. It is no surprise: ChatGPT is down practically worldwide.

OpenAI itself confirmed this on its service status page, where it indicates that "we are experiencing problems" both in its APIs and in ChatGPT and in its video-generation AI, Sora. According to the company, there is an "elevated error rate", but it has also indicated that it has "identified the cause of the problem" and is "working to implement a solution".

The service outages have been occurring for about six hours, during which ChatGPT either did not answer, showed an error ("Too many concurrent requests", for example), or answered only after a long wait. It is true that the errors are not absolute and some users can access the service and use it apparently normally, but the problem is affecting users regardless of the geographical region they are in.

The problem is doubly serious because it is not only the conventional ChatGPT service for users that is down, but also its APIs, which are used by all kinds of third-party services to bring ChatGPT's functions to all kinds of scenarios, such as support chats or company chatbots.

In Xataka | You thought you were browsing incognito and erasing cookies on your Android phone. Meta saw everything you did
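For developers whose integrations hit errors like "Too many concurrent requests" during incidents like this, the usual mitigation is retrying with exponential backoff and jitter. A generic sketch, where the flaky call is simulated (in real code it would be your API request, and you would retry only on transient error types):

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.5):
    """Retry `call` on failure, doubling the wait each attempt and adding
    jitter so that many clients do not retry in lockstep."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)

# Simulated flaky endpoint: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("Too many concurrent requests")
    return "ok"

print(retry_with_backoff(flaky_request, base_delay=0.01))
```

The jitter term matters during a global outage: without it, every client that failed at the same moment retries at the same moment too, prolonging the overload.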

Right now there are 4,500 American students being psychologically counseled by a chatbot. It's just the beginning

Right now, while I write this, there are about 4,500 American students being "psychologically counseled" through an application called Sonny. That in itself is not surprising. In the US, around 17% of secondary schools have no counselor or school psychologist. Most of them are in rural areas or economically depressed zones, and applications of this type bring psychosocial assistance to everyone. That is not the problem. The problem is that Sonny is an artificial intelligence chatbot.

A chatbot? Let's take it step by step. The idea is not new: in fact, it is one of the first ideas that occurs to anyone worried about mental health. Over the last 50 years we have concluded that psychotherapy is tremendously effective. What we have not managed to do is scale it: that is the promise of the chatbots.

Sonny's example helps to understand it: students have access to the chatbot 18 hours a day (from 8:00 a.m. to 2:00 a.m.) and the service costs each district between $20,000 and $30,000. Much less than a conventional school counselor. It is true that it is not as effective as a counselor, but for many settings it is the best they can afford. And, in fact, according to some of its users, beyond its ultimate therapeutic effectiveness, these kinds of approaches allow schools to identify problems among students almost in real time. In Berryville (Arkansas), they discovered that more than half of the users sent messages just before exams, which allowed them to develop emotional support interventions.

Is this the future? A couple of years ago, Zara Abrams published an extensive analysis for the American Psychological Association which concluded that "artificial intelligence (AI) chatbots can make therapy more accessible and less expensive". Just what Sonny does.
However, as Abrams also explained, "despite the potential of AI, there are still reasons to worry: tools used in the health field have discriminated against people based on their race and disability status, and there are malicious chatbots that have spread erroneous information, professed their love to users, or sexually harassed minors". That is precisely what Sonny tries to avoid.

How is that possible? In principle, as the company explains, Sonny has a team of people with "experience in psychology, social work and online support" who supervise and even rewrite the bot's responses. Each technician supervises between 15 and 25 chats at the same time. This is how Sonar Mental Health, the company behind the app, tries to avoid the great original sin of LLMs: their tendency to hallucinate, fantasize and give advice that may not be correct. In addition, the hybrid chatbot is designed to notify parents and teachers at the slightest possibility of danger (whether to oneself or to others).

Have they solved the problem? The truth is that we do not know (because there are no studies on it), but it is honestly unlikely: we are at too early a stage to trust that all the associated problems have been resolved. But it is a radical step. As Abrams said, it is possible that "psychologists and their abilities are irreplaceable", but since the arrival of AI is inevitable, we have to bet on a "thoughtful and strategic implementation". Something which, indeed, is very similar to saying nothing at all.

There is much we still do not know and, therefore, making concrete predictions is dangerous. What is clear is that the question is not whether we will have GPT-therapists; we already have them. The question is how we can use them so as not to worsen the care being given today (an ever-present temptation) and turn them into a key tool for reducing human suffering. Let's hope we answer it soon.
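The human-in-the-loop design the article describes — software flags risky messages so a person reviews them — can be illustrated with a toy triage filter. The keyword list and routing labels below are entirely our own; a real system like Sonny's would rely on trained risk models and human supervisors, not keyword matching:

```python
# Toy triage: route each message either to the bot or to a human.
RISK_TERMS = {"hurt", "hopeless", "self-harm"}  # illustrative sample only

def triage(message: str) -> str:
    """Return 'escalate' if any risk term appears, else 'bot'."""
    lowered = message.lower()
    if any(term in lowered for term in RISK_TERMS):
        return "escalate"  # notify human supervisor, parents, teachers
    return "bot"           # supervised chatbot may answer

print(triage("I'm stressed about tomorrow's exam"))  # bot
print(triage("I feel hopeless lately"))              # escalate
```

Even this naive version shows the architectural point: the escalation decision sits in front of the generative model, so a hallucinating LLM never gets the last word on a high-risk message.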
Image | Dream / Sigmund

In Xataka | 50 years of research on depression psychotherapy leaves a surprising fact: we have not improved at all

I have tried Le Chat, Mistral's French chatbot, and it has arguments to compete with ChatGPT. Not just its speed

Mistral recently launched the mobile app and the new web version of Le Chat, its AI assistant that seeks a European approach (with a French accent) to the chatbots that are already part of our daily lives. That launch has been accompanied by an important update of its web platform. As a regular user of ChatGPT and Claude, I wanted to test it in several scenarios, also comparing it with its rivals. The first impression is striking: it is devilishly fast, much more so than any model we have seen until now. But there is more to it than that.

Speed changes the rules

The most striking thing is still its speed in responding. It reaches 1,000 words per second thanks to its integration with Cerebras processors. In practice, this means virtually instantaneous long responses. Not that GPT-4o or the Claude 3.5 Sonnet of the moment are too slow — it is a first-world problem — but I think we would all rather wait one second than fifteen.

In my tests comparing Le Chat with ChatGPT and Claude I noticed that difference. For example, when asking them to write a long and complete analysis of the tariff policy of the United States, Le Chat did it in a couple of seconds, with a well-structured text and even with citations. Its rivals offered responses of quite similar quality, but took longer. And if we want to split hairs, Claude was the one above ChatGPT: its answers are usually better in pure writing (leaving aside code, for example), with a somewhat more refined, natural and professional style, all at once. In fact, that is where Le Chat loses against Claude: on the most convincing style, the one most similar to that of a competent human, the Anthropic chatbot wins. ChatGPT maintains a good general balance, but stands out here neither in speed nor in style.

It is also worth mentioning that its answers are backed by consultations of reputable media and agencies such as AFP, its compatriot, with which it maintains a collaboration agreement.
In my translation and content analysis tests, the model showed itself to be at the level of ChatGPT and Claude in quality, although with a more concise style. A somewhat trivial example gave me a glimpse that Le Chat seems somewhat more assertive than ChatGPT or Claude. The first especially (and sometimes the second too) suffers from being too influenced by the human talking to it.

The example in question: I asked it which Spanish soccer club Xataka would be, if it were one. Its answer was Barcelona. It argued its case reasonably well, but I pressed back incisively: "Why not Real Madrid? Xataka is the most read technology site in Spain, far ahead of the second, and Real Madrid is the Spanish club with the most titles." This was its answer. Club colors aside, I liked that it stood firm and was not complacent with my follow-up:

The choice of FC Barcelona to compare with Xataka is based on several factors that go beyond the number of titles. Here are some reasons why I opted for Barcelona instead of Real Madrid:

Style and philosophy: FC Barcelona is known for its distinctive playing style, often associated with "tiki-taka", which is innovative and different. Xataka, being a technology blog, also stands out for its innovative approach and its ability to adapt to new technological trends.

Identity and community: FC Barcelona has a strong Catalan identity and a very united community of followers. Xataka also has a community of readers very committed to and passionate about technology, which reinforces that sense of community.

Adaptability and avant-garde: Although Real Madrid is a huge club with many titles, FC Barcelona has historically been known for its ability to adapt and be at the forefront in terms of playing style and talent development. This aligns with how Xataka remains at the forefront of technology.

Global recognition: Both Xataka and FC Barcelona have significant global recognition in their respective fields.
Although Real Madrid is also globally recognized, the way FC Barcelona has built its unique brand and style resonates more with Xataka's identity. These reasons do not imply that Real Madrid is less relevant; they simply highlight how certain aspects of FC Barcelona resonate more with Xataka's identity and approach as a technology blog.

By the way, when I asked which media outlet Real Madrid would be, it gave an answer argued around prestige, global reach, awards and so on: El País. That sounded at least reasonable, but when, already intrigued, I asked which one Valencia would be, it told me El Mundo. Frowning, I then asked it which one Atlético de Madrid was. Answer: "Público". End of the test.

Personalized agents and automation

One of Le Chat's most interesting features is its agents system, which lets you create specialized assistants and invoke them with "@". This functionality, similar to OpenAI's GPTs or Claude's Projects but with a different approach, lets you automate specific tasks and create personalized workflows. That said: as with GPTs and Projects, this is a feature that requires a Pro subscription; it is not in the free plans.

Image: Xataka.

Agents can be configured in two ways: through the platform's visual interface or through the API for developers. The interesting thing is that you can customize aspects such as: the base model (Mistral Large 2, Mistral Nemo or Codestral); the "temperature", or tone, of the answers (to make them more creative or more precise); specific behavioral instructions; and usage examples to improve performance.

Unlike ChatGPT's GPTs, which are more oriented to the end user, Le Chat's agents seem designed with business integrations and automated workflows in mind as well. While Claude does not yet offer similar functionality (although Claude Pro does offer extended context), Mistral's approach seems to sit halfway between the ease of use of ChatGPT and the flexibility developers are looking for.
In my experience, creating agents is more technical than with GPTs, but also more powerful in terms of customization and … Read more
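As an illustration of the configurable aspects listed above (base model, temperature, instructions, usage examples), here is a minimal sketch. This is not Mistral's actual agents API: the class, field names and model identifier are our own, chosen only to mirror the options the article describes:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Hypothetical agent definition mirroring the options described:
    base model, temperature, instructions, and usage examples."""
    name: str
    model: str = "mistral-large-2"  # assumed identifier
    temperature: float = 0.3        # lower = more precise, higher = creative
    instructions: str = ""
    examples: list[tuple[str, str]] = field(default_factory=list)

    def to_payload(self) -> dict:
        """Serialize to the kind of JSON body an agents API would accept."""
        return {
            "name": self.name,
            "model": self.model,
            "temperature": self.temperature,
            "instructions": self.instructions,
            "examples": [{"input": i, "output": o} for i, o in self.examples],
        }

summarizer = AgentConfig(
    name="press-summarizer",
    instructions="Summarize articles in three bullet points.",
    examples=[("Long article text…", "- point 1\n- point 2\n- point 3")],
)
payload = summarizer.to_payload()
```

The sketch also shows why such a system suits automated workflows: an agent reduces to a plain configuration object, so it can be versioned, templated and created programmatically rather than only through a visual interface.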

The Chinese chatbot faces its first great challenge in Europe

Not everything is positive for DeepSeek. Although the AI company is receiving an avalanche of praise for its technological advances, it is also generating unease about privacy and security. In fact, this Thursday we are witnessing the first tangible consequence of that second scenario: Italy wants to block the web version of the Chinese chatbot.

The Italian Data Protection Authority has ordered a limitation on the processing of Italian users' data by Hangzhou DeepSeek Artificial Intelligence Co., Ltd. and Beijing DeepSeek Artificial Intelligence Co., Ltd., the two Asian companies that control the application that has not stopped gaining popularity since its launch.

DeepSeek, under the scrutiny of European regulators. The press release from the aforementioned agency, entitled "Artificial intelligence: the Italian Data Protection Authority blocks DeepSeek", acknowledges that the AI chatbot has been downloaded by millions of people in recent days, and points out that the measure seeks to protect Italian users from its data collection.

In development.

Images | Michele Bitetto | Screenshot
