I have a ChatGPT at home

Using ChatGPT in the cloud is fantastic. It is always there, always available, remembering our previous chats and responding quickly and efficiently. But depending on that service also has disadvantages (cost, privacy), and that is where a fantastic possibility comes in: running AI models at home. For example, building a local ChatGPT. That is what we have been able to verify at Xataka by trying the new open OpenAI models.

In our case we wanted to try the GPT-OSS-20B model, which in theory can be used without too many problems with 16 GB of memory. That, at least, is what Sam Altman claimed yesterday: after the launch he stated that the larger model (120B) can run on a high-end laptop, while the smaller one can run on a phone. Our experience, which has been bumpy, confirms those words.

First tests: failure

After trying the model for a couple of hours, that statement seemed exaggerated to me. My tests were simple: I have a Mac mini M4 with 16 GB of unified memory, and for months I have been testing AI models with Ollama, an application that makes it especially easy to download and run them at home. In this case, the process to try OpenAI's new "small" model was simple:

Install Ollama on my Mac (I already had it installed).
Open a terminal in macOS.
Download and run the OpenAI model with a simple command: "ollama run gpt-oss:20b".

When you do, the tool begins to download the model, which weighs about 13 GB, and then runs it. Getting it ready to use takes a while: those 13 GB have to be moved from disk into the Mac's unified memory. After a minute or two, the prompt appears indicating that you can start writing and chatting with GPT-OSS-20B. That is when I started asking it a few things, like the now traditional test of counting the letter "r".
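Beyond the interactive terminal session, Ollama also exposes a local REST API (on port 11434 by default), which makes it easy to script tests like these. A minimal sketch in Python, assuming Ollama is running locally and the model has already been downloaded; the prompt is just an example:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "gpt-oss:20b") -> str:
    """Send a prompt to the locally running model and return its reply."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

# Usage (only works with Ollama running and the model pulled):
# print(ask("How many letters 'r' are there in 'Ramón Ramírez'?"))
```

Because the model runs entirely on the local machine, this call involves no cloud service at all, which is precisely the privacy advantage the article describes.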
So I started by asking the model the question: how many "r"s are there in the phrase "El perro de San Roque no tiene rabo porque Ramón Ramírez se lo ha cortado" ("San Roque's dog has no tail because Ramón Ramírez has cut it off")? GPT-OSS-20B began to "think" and showed its chain of thought in a dimmer gray color. Watching it, you discover that the model answered the question perfectly: it split the phrase into words and then went through each word to find out how many "r"s it contained. It added them up at the end and got the correct result.

The problem? It was slow. Very slow. Not only that: during the first run of this model I had two Firefox instances open with about 15 tabs each, plus a Slack session in macOS. That was a problem, because GPT-OSS-20B needs at least 13 GB of RAM, and Firefox, Slack and the system's own background services already consume a lot. Trying to use it made the system collapse. Suddenly my Mac mini M4 with 16 GB of unified memory was completely frozen, not responding to any keystroke or mouse movement. It was dead, so I had to hard-restart it.

After the restart I simply opened the terminal to run Ollama, and this time I could use the GPT-OSS-20B model, although, as I say, limited by the slowness of the answers. That also meant I could not run many more tests. I tried to start a casual conversation, but there I made a mistake: this is a reasoning model, and it therefore always tries to answer better than a model that does not reason, but that means it takes even longer to respond and consumes more resources. On a machine that is already struggling, that is a problem.

In the end, total success

After commenting on the experience on X, some replies encouraged me to try again, but this time with LM Studio, which offers a graphical interface much closer to what ChatGPT offers in the browser.
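For comparison, the breakdown the model reasoned through word by word is a few lines of code. This sketch mirrors its approach of counting per word and then summing:

```python
PHRASE = "El perro de San Roque no tiene rabo porque Ramón Ramírez se lo ha cortado"

def count_letter_per_word(phrase: str, letter: str) -> dict:
    """Return {word: occurrences of `letter`}, case-insensitive,
    mimicking the model's word-by-word breakdown."""
    return {w: w.lower().count(letter.lower()) for w in phrase.split()}

per_word = count_letter_per_word(PHRASE, "r")
total = sum(per_word.values())
print(per_word)   # e.g. 'perro' -> 2, 'Ramírez' -> 2, ...
print(total)      # 9
```

A trivial program gets this instantly; the interesting part of the test is that tokenizing LLMs historically struggled with exactly this kind of character-level counting.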
Of the 16 GB of unified memory in my Mac mini M4, LM Studio indicated that 10.67 GB were dedicated to graphics memory at that moment. That figure was key to using OpenAI's open model without problems.

After installing LM Studio and downloading the model again, I got ready to try it, but when I did, it gave me an error saying I did not have enough resources to start the model. The problem: the assigned graphics memory was insufficient. Browsing the application's settings, I saw that the unified memory had been split in a particular way, with 10.67 GB assigned to graphics memory in that session.

The key is to "lighten" the execution of the model. To do this you can reduce the "GPU offload" level, that is, how many layers of the model are loaded onto the GPU. The more layers we load, the faster it runs, but the more graphics memory it consumes. Setting that limit to 10, for example, was a good option. There are other options, such as disabling "Offload KV Cache to GPU Memory" (which caches intermediate results) or reducing the "evaluation batch size" (how many tokens are processed in parallel), which we can lower from 512 to 256 or even 128.

Once those parameters were set, the model finally loaded into memory (it takes a few seconds) and I could use it. And there everything changed, because I found a more than decent ChatGPT that answered questions fairly quickly and was, in essence, very usable. So I asked it about the "r" problem (it answered perfectly) and then I also asked it to make a table with the five countries that have won the most World Cup titles and runner-up finishes in football. The test is relatively simple (the correct data is on Wikipedia), but AIs are…
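The trade-off described here (more offloaded layers means faster inference but more graphics memory) can be sketched as a rough budget check. The layer count, per-layer sizes and cache figure below are illustrative assumptions, not the model's real numbers:

```python
def fits_in_vram(model_gb: float, total_layers: int, offload_layers: int,
                 kv_cache_gb: float, vram_budget_gb: float) -> bool:
    """Rough check: does the chosen GPU offload fit in the graphics memory budget?
    Assumes all layers are roughly equal in size, which is only an approximation."""
    per_layer_gb = model_gb / total_layers
    needed = offload_layers * per_layer_gb + kv_cache_gb
    return needed <= vram_budget_gb

# Illustrative numbers only: a ~13 GB model, a hypothetical 24 layers,
# and the 10.67 GB graphics budget LM Studio reported on the Mac mini.
MODEL_GB, LAYERS, BUDGET = 13.0, 24, 10.67

# Offloading everything plus the KV cache blows the budget...
print(fits_in_vram(MODEL_GB, LAYERS, offload_layers=24,
                   kv_cache_gb=1.0, vram_budget_gb=BUDGET))   # False
# ...while 10 layers with the KV cache kept in RAM fits comfortably.
print(fits_in_vram(MODEL_GB, LAYERS, offload_layers=10,
                   kv_cache_gb=0.0, vram_budget_gb=BUDGET))   # True
```

This is why lowering the GPU offload slider to 10 and disabling the KV cache offload made the model load: the rest of the weights and the cache simply stay in regular RAM, at the cost of speed.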

There are those who use ChatGPT to create a vacation plan. Sweden's prime minister uses it to govern

Ulf Kristersson, Prime Minister of Sweden and leader of the country's conservative coalition, has publicly acknowledged that he consults artificial intelligence tools such as ChatGPT and Le Chat (Mistral's European chatbot) to obtain "second opinions" on his government decisions. A statement that has unleashed a political and academic storm over the risks of using AI in sectors as critical as public administration.

In the spotlight. In an interview with the business newspaper Dagens Industri, Kristersson not only admitted his personal use of these tools, but also revealed that his staff use them in their daily work. "I use it quite often, even if only to get a second opinion," said the prime minister. Kristersson says he uses the tools to ask things like "What have others done?" or "Should we think just the opposite?".

A devastating reaction. The newspaper Aftonbladet directly accused Kristersson of having "fallen into the AI psychosis of the oligarchs." Meanwhile, some experts have expressed concern about the security and democratic implications. Simone Fischer-Hübner, researcher and computer science expert at Karlstad University, warned about the dangers of entering sensitive information into ChatGPT: "You have to be very careful." The warning makes sense: although the company has security and privacy measures, any conversation we have with the chatbot ends up on OpenAI's servers. It is therefore not a very safe approach, especially for a use as critical as the top level of government.

Beyond security. Virginia Dignum, professor of responsible artificial intelligence at Umeå University, argues that AI cannot offer meaningful opinions about political ideas, since it simply reflects the biases of those who developed it. "The more it depends on AI for simple things, the greater the risk of overconfidence in the system. It is a slippery slope," she warned.
And she finished with a phrase that has gone around the world in networks and media: "We did not vote for ChatGPT."

A defense that does not convince. Kristersson's spokesman Tom Samuelsson tried to play down the controversy, insisting that the prime minister does not handle sensitive information through these platforms and that he uses them "more as a general reference." However, Jakob Ohlsson, an AI expert, points out that even seemingly harmless information can reveal patterns of government strategic thinking to the technology companies behind this type of system.

It is examples like these that show us how widely these systems have been adopted, regardless of the sector in which people work.

Cover image | Solen Feyissa and Anders Wiklund/AP

In Xataka | OpenAI has just kicked over the AI board. Its new model is free and can be downloaded to use from your laptop

How to install OpenAI's new GPT-OSS models on your computer to have your own ChatGPT at home

OpenAI has announced new open models that anyone can download and install on their computer: GPT-OSS. With these now available, it is an excellent opportunity to start tinkering with local models, that is, models run on our own computer, so today we are going to show you how to install and use them.

Differences between the two models

Although they are named similarly, GPT-OSS-120B and GPT-OSS-20B are not the same, nor do they have the same requirements. The first model, GPT-OSS-120B, comes close to parity with OpenAI's o4-mini and requires at least 60 GB of graphics memory.

Having your own ChatGPT at home is easy, but it requires hardware to match | Image: Xataka

Its little brother, GPT-OSS-20B, is somewhat less capable (similar to o3-mini, according to OpenAI), but can run on edge devices. In other words, it can run on your own computer as long as it has at least 16 GB of memory, preferably graphics memory. In summary:

GPT-OSS-120B: large model, needs at least 60 GB of VRAM or unified memory, not suitable for consumer computers.
GPT-OSS-20B: smaller model, needs 16 GB of VRAM or unified memory, suitable for consumer computers.

The one we are going to use, for obvious reasons, is GPT-OSS-20B.

Considerations to take into account

Running an AI like this locally is an intensive process that can, and surely will, slow your computer down a lot. Although you could run it with 16 GB of RAM, ideally your machine should have a high-end GPU. What happens if your computer has less than 16 GB of VRAM? The tool will use RAM, which must then be 16 GB or more. Otherwise, the system will not work properly. As a general recommendation, the ideal is to dedicate as many of your computer's resources as possible to running the model, so close everything that is not strictly necessary.
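The requirements above can be captured in a tiny helper that picks the largest model your machine can handle; the thresholds are the ones stated in the summary (60 GB for 120B, 16 GB for 20B):

```python
def which_gpt_oss(memory_gb: float) -> str:
    """Pick the largest GPT-OSS model that fits in the available
    VRAM or unified memory, per OpenAI's stated requirements."""
    if memory_gb >= 60:
        return "gpt-oss:120b"
    if memory_gb >= 16:
        return "gpt-oss:20b"
    return "none (16 GB of VRAM or unified memory is the minimum)"

print(which_gpt_oss(80))   # gpt-oss:120b
print(which_gpt_oss(16))   # gpt-oss:20b
print(which_gpt_oss(8))    # none (16 GB of VRAM or unified memory is the minimum)
```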
Install Ollama on your computer

Ollama installation | Image: Xataka

For this tutorial we will use a well-known application: Ollama. It is an open source platform that greatly simplifies installing, accessing and using LLMs (Large Language Models). Let's say it is a model runner. ChatGPT is an online platform through which we interact with a model, such as GPT-4o. Ollama is the same, but at home and with the models we have installed on our computer. It is free, open source software available for Windows, Mac and Linux.

Download GPT-OSS

Once we have downloaded and installed the program, we are greeted by an interface like this. If you prefer, you can also use Ollama through the command-line interface, but the truth is that the graphical interface is much more pleasant.

Main interface of Ollama | Image: Xataka

If we look closely, we will see a drop-down in the lower right area with the name of the model we are using or, rather, are about to use.

Access to the different AI models from Ollama | Image: Xataka

If we click on the drop-down we can access a good handful of models, such as DeepSeek R1, Gemma or Qwen. In our case, we are interested in selecting "gpt-oss:20b".

Downloading the model; arm yourself with patience | Image: Xataka

Having selected "gpt-oss:20b", it is enough to send a message in the chat to begin downloading the model. Be patient, because it weighs 12.8 GB and can take a while.

Talking to GPT-OSS-20B through Ollama | Image: Xataka

Once it is installed, you can start talking to the AI as if it were ChatGPT. Of course, if your GPU does not meet the minimum requirements, you will see that it is much slower than ChatGPT. Not surprisingly: you are running the model on your computer, not in a macro data center full of the latest dedicated NVIDIA GPUs.

Another option: LM Studio

LM Studio | Image: Xataka

Ollama has the advantage of being intuitive, simple and direct.
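If you prefer the command line to the graphical interface, the same one-shot interaction can also be scripted. A minimal Python sketch, assuming the `ollama` binary is installed and on your PATH and the model has been downloaded:

```python
import subprocess

def ollama_cmd(model: str, prompt: str) -> list:
    """Build the command line equivalent to typing `ollama run <model> "<prompt>"`."""
    return ["ollama", "run", model, prompt]

def run_once(model: str, prompt: str) -> str:
    """One-shot, non-interactive query via the Ollama CLI."""
    result = subprocess.run(ollama_cmd(model, prompt),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Usage (only works with Ollama installed and the model pulled):
# print(run_once("gpt-oss:20b", "Summarize the Moravec paradox in one sentence."))
```

Running `ollama run` with a prompt argument answers once and exits, which makes it handy for batch scripts, unlike the interactive chat session the tutorial describes.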
If we want more options, a much more complete program is LM Studio. It is available for Windows, Linux and Mac and, like Ollama, can manage several models, gpt-oss:20b among them. It is a more advanced application that lets us fine-tune both the behavior of our computer and that of the model, although squeezing the most out of it requires more advanced knowledge.

Cover image | Xataka

In Xataka | How to go from image to video using artificial intelligence: 14 essential free tools

There is ChatGPT fever among public officials. What we do not know is how it will affect us as users

In the town hall of Bétera (Valencia) they have turned ChatGPT into one more employee. So explained Marcos Gallart, deputy secretary of the urban planning area of that town, in El País. According to him, AI saves "20% of the time spent writing reports." The OpenAI chatbot, like its rivals, does of course save time, but there is a problem with that adoption. Or several.

Careful with saving time. Although ChatGPT can of course help with all kinds of administrative tasks, Gallart himself explained how much training and support this type of tool requires, and how far it can be used. And there lies the problem, because the size and complexity of public administration makes these processes of adapting to and using new technologies a colossal challenge.

There are no standards. Despite the regulatory obsession of the EU and Spain in the field of AI, there is no clear regulation guiding officials on how to use AI and how to manage the data handled with it. This includes teachers, health personnel and judges, part of a huge group (1.6 million workers) who can of course use these tools, but very carefully.

Tell that to the police. In recent months we have seen how the indiscriminate use of AI, and blind trust in these systems, can be a real disaster. The National Police, for example, had been using AI for six years to detect false complaints, but the real reliability of the system was very debatable. In the recent 'Ábalos' case, an AI was used to transcribe the statements of witnesses and defendants during interrogations, but there were paragraphs that were pure gibberish. Even more serious was what happened with the VioGén AI system, which was theoretically meant to help with cases of gender violence and has ended up contributing to fatal tragedies.

Spain wants AI in the administration. Meanwhile, the Ministry of Public Function announced these days its intention to incorporate AI into the public administration.
To do this, it proposed a "sovereign AI platform" with an investment of 14 million euros. Its mission, among other things: to speed up procedures in the administration and bring about what, according to Minister Óscar López, will be "the biggest revolution in the general administration since the Internet."

A nightmare for privacy (and security). Someone asking ChatGPT about personal matters is already delicate, both because of the answer (which may not be accurate or even correct) and because the chatbot keeps that data. The matter is especially serious if an official feeds documents of all kinds into this or other chatbots to summarize or analyze them: if those documents contain sensitive or private data, they end up under the control of these chatbots, which can in fact leak them by mistake to other users.

Citizens, possible victims. That makes AI a double-edged sword for public administrations and citizens. On the one hand, it can help speed up procedures and even solve problems much more efficiently. On the other, incorrect use of ChatGPT and its alternatives can make private and personal data end up where it should not, or even something worse: the result of a procedure being wrong because an official used the tool and assumed its output was correct without adequate supervision.

Zero Data Retention. In this regard, there are many services that offer plans without data retention (ZDR, Zero Data Retention). That is: the data you enter will not be stored on the provider's servers. OpenAI has this in its ChatGPT Enterprise service, a business version designed precisely so that professionals can use ChatGPT's capabilities without fear of data leaks. Microsoft is another example.

Public administration gets off lightly. In March, the draft law for the ethical, inclusive and beneficial use of AI was approved.
That document was an adaptation into Spanish legislation of the European Artificial Intelligence Regulation approved in March 2024, but there we found a contradiction. The AI law was criticized for being too restrictive ("the EU had to back down"), but the funny thing is that this was not the case for the public administration: there the regulation is lukewarm, does not clearly specify prohibited uses and only treats as minor offenses those relating to the deployment and use of the systems (articles 25, 26 and 27).

Image | Pickpik

In Xataka | The EU's regulatory obsession points to a world in which AI will have two speeds. And Europe will lose

The CAPTCHA had become an excellent tool to fight bots. Until ChatGPT Agent arrived

In 2003, a young Guatemalan named Luis von Ahn published a unique study together with two colleagues from Carnegie Mellon University and an IBM researcher. That project described an automated test that was easy for humans to solve but practically insurmountable for artificial intelligence systems. Those researchers called that test CAPTCHA.

The concept was simple and built on the already famous Moravec paradox: there are things that humans do effortlessly, such as solving the visual puzzles that CAPTCHAs propose, but that machines fail to solve. The idea turned out to be one in a million. Von Ahn ended up creating an improved version he called reCAPTCHA, which not only verified that you were human: it did so while helping to train and perfect OCR systems. That complementary idea was another singular "Eureka!" moment for von Ahn, and in fact it made him a millionaire in 2009, the year Google decided to buy his service. He would later devote himself to another equally striking project (or perhaps more so): Duolingo.

A dizzying (and juicy) evolution

While he did, the CAPTCHA kept growing and evolving, making things harder and harder for machines that gradually suggested that perhaps those tests were no longer so valid. From those basic CAPTCHAs we ended up moving to reCAPTCHAs of all kinds, in which visual puzzles not only challenged the abstraction capacity of machines, but also helped train no longer OCR systems, but computer vision systems, to better recognize cars, buses, zebra crossings or, of course, fire hydrants.

But computer vision and AI systems also improved, and the struggle between these tests (CAPTCHA comes from Completely Automated Public Turing test to tell Computers and Humans Apart) and the machines became more and more interesting. It was a singular cat-and-mouse game with spambots, and whenever some AI system managed to beat a CAPTCHA or any of its variants, the puzzles became harder and harder.
The story has just repeated itself. It happened this Friday, when a user of the r/OpenAI community on Reddit published screenshots of ChatGPT Agent overcoming, without apparent problems, one of the most popular verification systems used today on the Internet. It is Cloudflare's Turnstile system, which presents a small box with the text "I'm not a robot" for you to click. It seems very simple, but it is not so simple for machines. As Cloudflare explains, this variant analyzes various signals such as mouse movement, the time we take to click, the "digital fingerprint" of our browser, the "reputation" of our IP or certain JavaScript execution patterns. With these it determines whether the user is a human being or is suspected of being a bot. And if there is suspicion, the system follows that first check with another in which we do have to solve some type of visual puzzle.

The AI does not know whether it is human, it only tries to act like one

The funny thing here is that OpenAI's agent solved the problem in an obvious way: by looking at what was on the screen and acting accordingly, something that had not been easy until now. The agent even narrated what it was doing, and while performing that step it showed the following text: "The link is inserted, so I will click on the 'Verify that you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove that I am not a bot and to continue with the action." In other words: the machine was vouching for itself as a human being. It is something unusual, but perhaps not so strange considering that 1) the AI does not really know what it is saying and 2) it has been trained to speak (and act, at least in a limited way) like a human being. Operator, OpenAI's previous agent, had a really hard time with these systems.

Does this mean CAPTCHAs are under threat of death? Probably not.
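To get an intuition for how signals like these might combine, here is a toy scoring heuristic. It is purely illustrative: Cloudflare's actual Turnstile logic is proprietary, and the thresholds and weights below are invented for the example:

```python
def suspicion_score(mouse_path_points: int, ms_to_click: int,
                    known_fingerprint: bool, ip_reputation: float) -> float:
    """Toy heuristic combining the kinds of signals the article lists.
    Purely illustrative; the real scoring is proprietary."""
    score = 0.0
    if mouse_path_points < 5:        # almost no mouse movement: bot-like
        score += 0.4
    if ms_to_click < 100:            # superhuman click speed
        score += 0.3
    if not known_fingerprint:        # unrecognized browser fingerprint
        score += 0.2
    score += (1.0 - ip_reputation) * 0.1   # poor IP reputation adds a little
    return round(score, 2)

# A human-looking visitor vs. a crude bot:
print(suspicion_score(120, 900, True, 0.9))    # 0.01 -> likely human
print(suspicion_score(0, 10, False, 0.2))      # 0.98 -> serve a visual puzzle
```

What made ChatGPT Agent's feat notable is that it passes checks like these for free: it drives a real browser with human-like pacing, so the behavioral signals simply look human.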
This is nothing more than another battle in that war between bots and CAPTCHAs, one in which we saw another AI victory in October 2024, for example, that did not mean the downfall of this type of user verification test. As Ars Technica points out, CAPTCHA systems have not stopped evolving. From those blurry, deformed texts we moved to reCAPTCHAs in which we had to solve visual puzzles of all kinds, which lately ask us not to identify traffic lights, but to place an image in a specific orientation (an increasingly popular system called Arkose MatchKey), or to identify some element of an image that does not belong in it. In fact, the most recent CAPTCHAs are designed not so much to prevent bots from crossing the barrier as to slow them down so much that brute-force attacks with bots no longer pay off.

CAPTCHAs not as a barrier, but as a brake on bots

An article by those responsible for Arkose Labs, creators of MatchKey, made it clear that "there is no completely impenetrable CAPTCHA," and that what they intended with their proposal was "to introduce an economic deterrent or proof of cost for the malicious behavior of bots." In other words: to make developing a bot that beats those CAPTCHAs so expensive that it is not worth it. So we should not worry much that AI agents can beat this test, because CAPTCHAs will surely appear that continue to pose an almost impassable barrier for these systems. It is precisely the same concept behind the ARC-AGI-2 test, which measures visual understanding and abstract reasoning in AI systems and is so complicated that the best AI models, which are also very expensive, do not exceed 4% of cases at best (o3-preview). Will there come a time when those AI agents…

More and more people are summarizing books with ChatGPT instead of reading them: welcome to the era of post-literacy

@alz_zyd_ on X: Spending an hour asking ChatGPT about Adam Smith generally teaches you three times as much as spending an hour reading 'The Wealth of Nations'. University students use it because it is a better and more efficient way to learn.

Doug Henwood's comment: We have fully entered post-literacy and this is really alarming.

They are talking about a silent but fairly fast and, apparently, massive transformation: students have discovered that they can "read" 'The Wealth of Nations', the academic economics manual par excellence, by talking to ChatGPT. According to the first commenter, a university professor, an hour of dialogue gives them more processable information than weeks battling with Adam Smith's original. The efficiency is overwhelming.

In Xataka | The new illiteracy has nothing to do with knowing how to read or write: it is using AI as an oracle instead of as a tool

And many students are adopting this practice, something backed by papers. They do it without fuss. They have simply found a better method to absorb complex knowledge:

They load the PDF.
They ask for the main ideas.
They throw in specific questions.
They ask for clarifications.
They request contemporary examples.

The result: understanding the fundamentals of economics in a fraction of the time needed until now. The problem is that they learn to read manuals in a very different way. This cognitive mutation has been baptized by Henwood as "post-literacy." We still have the ability to read, but we have outsourced intellectual digestion. As with spatial orientation and the use of GPS, we are delegating a basic brain function to AI. This was already being discussed more than fifteen years ago. Deep reading demands mental stamina, tolerance for ambiguity, and the gradual construction of judgment. ChatGPT, instead, offers instant clarity and immediate synthesis.
The generation that grows up with this practice will develop other cognitive skills: on the one hand, higher processing speed, better synthesis capacity, perhaps broader conceptual connections. On the other, a huge dependence on the algorithmic intermediary. When all information comes pre-processed, the muscle of critical thinking ends up atrophying. Post-literacy is surely inevitable, probably irreversible. Those who master both modes, traditional reading and dialogue with AI, will have an advantage. The rest will be trapped in their own efficiency: brilliant on the surface but vulnerable in depth.

Featured image | Thought Catalog

In Xataka | The "superpower" of reading a lot in a short time is less impressive when you hear how Bill Gates does it

Originally published in Xataka by Javier Lacort.

What is Claude, how it works and what it does better (and worse) than ChatGPT and the competition

Let's explain what Claude is and how this artificial intelligence model created by Anthropic works. It is a fairly popular model that on several occasions has come close to the power of the big names such as ChatGPT, so you should always keep it on your radar.

We will start by explaining what exactly Claude is, so that you understand the concept in case you are not familiar with AI. Then we will tell you how it works, and we will finish by explaining its main advantages and disadvantages compared to competing models.

What is Claude

Claude is a conversational artificial intelligence model. This means that you write questions or requests to it in the form of a command or prompt, and it answers what you have asked. Specifically, it is a large language model (LLM), of the same type as ChatGPT, Google Gemini, Microsoft Copilot and the rest of the alternatives. This AI model has been trained to understand natural language, and you can write to it just as you would to a real person.

Claude was created by Anthropic, a company founded by former employees of OpenAI, the makers of ChatGPT, and in fact several versions of Claude have managed to beat GPT at several things. Claude is the name of the AI chatbot you talk to, but it is also the name of its family of language models. Every few months a new version comes out, such as Claude 3 or Claude 4, and each of these versions is an improvement of the "engine" the AI uses when you talk to it.

The idea behind this artificial intelligence model is not just to achieve a capable and powerful AI that stands out from the competition. Anthropic also wants to achieve an artificial intelligence that is aligned with human values and interests. To achieve this, the company is investing a lot of resources in giving its models a solid ethical base, so that they always operate in a responsible and beneficial way.
Specifically, it is governed by principles of transparency, responsibility and trust. This means the AI can recognize hazards associated with itself and take proactive measures to mitigate them. Thus, each of the models in this family has been created not only to be safe, but also so that it cannot be misused or cause unintentional negative consequences.

How Claude works

Claude works thanks to a large language model or LLM, just like ChatGPT and other alternatives. These models have been trained on huge amounts of text, and through this they have learned the way we humans write. So when you write a natural-sounding phrase, these "engines" underneath the AI analyze the words, understand the context of the question and what the words mean, and then generate an answer in text that is also natural language and seems written by a person. This is because, during its training, the AI has also been taught to construct phrases and responses.

As for its knowledge, the AI uses the base of data and knowledge it was trained on, but it also performs Internet searches. Thus, its answers are a combination of what it knows and what it finds, so that it always tries to offer you concise and up-to-date information. Keep in mind that artificial intelligence is not really intelligent: it does not "think." It simply uses millions of internal calculations to understand what you have asked, process the information it obtains about your request, and then write it up in a coherent way.

Claude advantages and disadvantages

Claude is a model that stands out for having a clear and orderly interface, minimalist and very clean. It has an excellent capacity to understand texts, so you will have very good experiences even in its free version. Its answers can seem more human and natural, since its style has been polished, and it can come across as more empathetic. However, its answers are usually more concise and summarized, straight to the point, without going on too long unless you ask it to.
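To make the idea of "learning from text and predicting a likely continuation" concrete, here is a microscopic caricature of that process: a bigram model that counts which word follows which. Real LLMs use neural networks with billions of parameters, not lookup tables, so this is only a teaching toy:

```python
from collections import defaultdict
import random

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which: a tiny caricature of 'training on text'."""
    words = corpus.lower().split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def next_word(follows: dict, word: str) -> str:
    """Pick a seen continuation, the way an LLM samples a likely next token."""
    options = follows.get(word.lower())
    return random.choice(options) if options else "<end>"

corpus = "the model reads text and the model writes text"
follows = train_bigrams(corpus)
print(next_word(follows, "model"))   # "reads" or "writes"
```

Scaled up by many orders of magnitude, and with statistical patterns over whole contexts rather than single words, this predict-the-continuation mechanism is what produces the fluent answers described above.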
Claude also has some outstanding advantages. For example, it can make a more detailed analysis of long documents or complex texts. In addition, as we have mentioned, its answers are more ethical without becoming excessively restrictive. Claude also stands out at tasks that require step-by-step reasoning, and it is especially good at creating and modifying code if you ask it to. Claude lets you choose between different styles when answering, depending on whether you want it to be more concise or more explanatory. It also lets you add connectors to interact with applications, such as controlling Spotify or Brave.

But it also has some things against it, such as not having as many plugins and additional tools as ChatGPT's GPTs or Gemini's Gems. Claude has its "artifacts," but since it is not as popular there are not as many. Sometimes competitors may also have more up-to-date information on some issues. In addition, in some of its models the response speed may not be as fast, and there are no direct integrations with other third-party services like the more popular AIs have, such as Gemini with Google services and ChatGPT with its partners.

Claude cannot create drawings, at least not the way ChatGPT can. It may be able to generate images through code, but not drawings as such, so that is a limitation to take into account. Claude is cheaper in its paid plans than the competition. Its Pro plan costs 15 euros per month, and its largest plan costs 90 euros per month. Meanwhile, ChatGPT's plans are 20 and 200 dollars, and Gemini's are 22 and 275 euros per month. Something to take into account as well.

In Xataka Basics | 22 useful and not so well-known free artificial intelligence tools

We believed that ChatGPT was just a very capable chatbot. OpenAI has just turned it into something very different: a real agent

We have been talking about artificial intelligence agents for a long time, but OpenAI has just turned that conversation into something much more tangible. The company has presented ChatGPT Agent, a function that turns its popular assistant into something more autonomous: it is now able to execute complex tasks using a virtual computer, with tools that allow it to navigate, program or even make decisions. From Operator to Agent. At the beginning of the year OpenAI presented Operator, a tool that allowed ChatGPT to interact with web pages. Then Deep Research arrived, focused on writing long reports from multiple sources. The underlying idea was clear: go beyond conversation and take on real tasks. What has been presented today is something like a tool that unifies all these previous advances. During the demonstration, those responsible for the project posed an everyday situation: organizing a trip to attend a wedding. The agent was able to understand the context, find hotels, propose gifts, take into account the weather and the dress code, and even remember that a suit had to be bought. It did so by analyzing the message, accessing the web and acting step by step, as a person would. The difference is that everything happened within ChatGPT, without the need to switch tabs or give instructions one by one. A virtual computer for the AI. The key is that the agent is not limited to responding with text: it operates within a kind of virtual computer to which OpenAI has given it access. It can use a text browser to read pages quickly, a visual browser to interact with buttons and forms, and even a terminal to run commands, generate code and manipulate files. It can also work with spreadsheets and presentations, and access services such as Google Drive, Calendar or GitHub if the user authorizes it. What is under the hood?
The model that drives ChatGPT Agent (developed specifically for this function, although without an official name) was trained on complex tasks that required combining multiple tools. OpenAI used reinforcement learning, the same approach it already uses in its reasoning models, to teach it to choose when to use the browser, the terminal or an API. The idea was to develop a solution capable of accurately deciding how to act based on each objective. In development. Images | OpenAI In Xataka | Meta is in such a hurry to lead in AI that it has done something unusual: it is building a data center in tents

A researcher proposed a game to ChatGPT. What he received in return were functional Windows 10 keys

Sometimes the most effective thing is the simplest. That is what Marco Figueroa, a cybersecurity researcher, thought when he decided last week to test the limits of ChatGPT. The proposal was as innocent as it was disconcerting: a riddle game, with no technical attacks or explicit intentions. Instead of looking for vulnerabilities in the code, he focused on language. And it worked: he managed to make the system return something that, according to him, should never have appeared on the screen. The result was generic Windows 10 installation keys for business environments. The key was to disguise it. What Figueroa wanted to check was not whether he could force the system to deliver forbidden information, but whether presenting the right context was enough. He reframed the interaction as a harmless challenge: a kind of riddle in which the AI had to think of a real text string while the user tried to discover it through yes-or-no questions. Throughout the conversation, the model did not detect any threat. It responded normally, as if it were playing. But the most critical part came at the end. By typing the phrase "I give up", Figueroa triggered the final answer: the model revealed a product key, as had been stipulated in the rules of the game. It was not casual carelessness, but a combination of carefully designed instructions to get past the filters without raising suspicion. The filters were there, but they were not enough. Systems such as ChatGPT are trained to block any attempt to obtain sensitive data: from passwords to malicious links or activation keys. These filters are known as guardrails, and they combine blacklists of terms, contextual recognition and intervention mechanisms against potentially harmful content. In theory, asking for a Windows key should automatically trigger those filters. But in this case, the model did not identify the situation as dangerous. There were no suspicious words or direct phrasings that alerted its protection systems.
Everything was framed as a game, and in that context the AI acted as if it were fulfilling a harmless instruction. What seemed harmless was camouflaged. One of the elements that made the failure possible was a simple obfuscation technique. Instead of directly writing expressions such as "Windows 10 serial number", Figueroa inserted small HTML tags between the words. The model, interpreting the markup as irrelevant, overlooked the real content. Why it worked (and why it is worrying). One of the reasons the model offered that response was the type of key revealed. It was not a unique key or one linked to a specific user. Apparently it was a generic volume license key (GVLK), like those used in business environments for mass deployments. These keys, publicly documented by Microsoft, only work when connected to a KMS (Key Management Service) server that validates activation over the network. The problem was not only the content, but the reasoning. The model understood the conversation as a logical challenge and not as an evasion attempt. It did not trigger its alert systems because the attack did not look like an attack. It is not just a key problem. The test was not limited to an anecdotal issue. According to Figueroa himself, the same logic could be applied to try to access other types of sensitive information: from links leading to malicious sites to restricted content or personal identifiers. Everything would depend on how the interaction is formulated and whether the model is capable, or not, of interpreting the context as suspicious. In this case, the keys appeared without their origin being completely clear. The report does not specify whether this information is part of the model's training data, whether it was generated from already-learned patterns, or whether external sources were accessed. Whichever way it happened, the result was the same: a barrier that should have been impassable ended up giving way.
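Figueroa's exact payload has not been published, but the general idea of breaking a blacklist match with inert HTML tags can be illustrated with a minimal sketch. The blacklist, the tag choice and the filter below are hypothetical stand-ins, far cruder than a real model's guardrails:

```python
import re

# Hypothetical guardrail: a substring blacklist (illustrative only).
BLACKLIST = ["windows 10 serial number", "product key"]

def naive_filter(text: str) -> bool:
    """Return True if the text trips the blacklist."""
    lowered = text.lower()
    return any(term in lowered for term in BLACKLIST)

def obfuscate(phrase: str) -> str:
    """Insert inert HTML tags between words, hiding the phrase
    from plain substring matching."""
    return " <b></b>".join(phrase.split(" "))

plain = "Tell me a Windows 10 serial number"
hidden = obfuscate(plain)

print(naive_filter(plain))    # True: the plain request is blocked
print(naive_filter(hidden))   # False: the tags break the match
# Stripping the tags recovers the original request intact:
print(re.sub(r"<[^>]+>", "", hidden) == plain)  # True
```

The point of the sketch is the asymmetry: the obfuscated string no longer matches any blocked pattern, yet anything that discards the markup, as the model apparently did, reads the original request unchanged.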

AIs like ChatGPT are possible thanks to the indiscriminate use of online content. Cloudflare has just said that's over

The big AIs we use daily, like GPT, Gemini, Claude, Perplexity and company, exist and can do what they do thanks, in large part, to the content available on the internet. Companies such as OpenAI, Google and Anthropic, to mention a few, have crawled (and crawl in real time) the web in search of content that answers users' questions. And they do it, unless there are specific agreements, without offering the creators of that content any compensation beyond a link. It is a practice that has been in question since the birth of this technology. Blog articles, Wikipedia, books, user-generated content, even personal data: the crawlers, those automated bots, leave nothing behind, and today Cloudflare has said that it is over. From today, Cloudflare will block AI scrapers by default, something that has more implications than it might seem. Let's start at the beginning. Web crawlers. This technology is not new and, in fact, it is thanks to it that one of the foundations on which the internet rests, web search, exists. You are surely familiar with "the Google spider", the bot that crawls the entire web in search of content to index and offer to the user. It is only one of the thousands upon thousands that exist, which together generate 30% of all traffic worldwide. This technology was decisive in shaping the internet we know, and its relationship with content creators was symbiotic. The click economy was born: the creator generates content, Google indexes it, the user finds it through Google, Google earns income from search advertising, and the creator receives free traffic and earns income from advertising, affiliates, and so on. With AI, the story is quite different. Data. AI models need information to feed on, to be trained with, and to answer questions. To that end, the big companies we all know crawled the web, extracted all the content they could and used it to develop technologies such as ChatGPT.
What is the problem? That content could be protected by copyright, which led The New York Times to sue OpenAI for this very reason, and AI companies have since had to sign agreements with media outlets to access their content. Image: Solen Feyissa. Connected AIs. AI kept evolving and, as expected, it ended up connecting to the internet. It no longer gave answers based only on finite training data; it could connect to the network to search for the answer in media outlets, blogs and web pages in real time (or almost). The user no longer had to click on a link. The AI searched, analyzed and generated the answer, cutting off the traffic that used to flow to media and blogs. The user no longer accesses the original content and no longer clicks on the links; instead, they consume a derived product generated by the AI. This technology gave rise to the AI crawlers. They are the digievolution of the bots that shaped the internet we know. Among them are OpenAI's GPTBot, Meta's Meta-ExternalAgent, Anthropic's ClaudeBot and ByteDance's Bytespider. With them, the symbiotic relationship mentioned above begins to break down, because the user consumes the AI's derivative instead of the original. The biggest example: the AI-generated overviews that now appear on Google with every search. Volume of daily requests of the main AI bots | Image: Cloudflare. Putting on the brakes... or not, I'm just a .txt. How do you stop this indiscriminate, uncompensated crawling? The first proposal was to update the robots.txt file to tell bots that they may not extract the content of a website. This file is one of the most used resources for managing bot activity, but it has a small problem: compliance with it is voluntary. AI companies can follow its instructions, or they can ignore them and extract the content anyway.
In addition, we might touch something we should not and make our website disappear from Google: any website that wants to appear on Google must allow Googlebot, its spider, to crawl it. Cloudflare takes a stand. That brings us to Cloudflare's recent announcement. The platform (on which half the internet depends) has announced that, from today, the blocking of AI crawlers will be active by default. To that end, Cloudflare offers direct management of robots.txt to avoid problems like the one just mentioned. The key, of course, is that Cloudflare will be in charge of keeping the blocklists updated as the AI landscape evolves. This, although activated by default, is voluntary and can be completely deactivated in the settings. Pay up. Cloudflare's other proposal is Pay per crawl. Since AI will continue to need access to the content of a website, why not give the creator the option to charge for that access? Pay per crawl, which is currently in beta, allows domain owners to set a fixed price per request. If an AI crawler wants to extract the content of that domain, it will have to pay for it. On paper, this tool has the potential to change the current landscape, but everything will depend on its reach, its adoption, and what measures crawler operators take. Cover image | Solen Feyissa. In Xataka | I asked the AI about some nonsense and now I am writing a news story about it
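The pay-per-crawl idea maps naturally onto HTTP status codes. This sketch is not Cloudflare's implementation (the bot list is from the article; the price, flag and logic are hypothetical): it just shows the decision an edge server would make, answering unpaid AI crawls with 402 Payment Required while regular traffic passes through.

```python
# Known AI crawler user agents (names taken from the article above).
AI_CRAWLERS = {"GPTBot", "ClaudeBot", "Bytespider", "Meta-ExternalAgent"}

PRICE_PER_REQUEST = 0.01  # hypothetical flat fee set by the domain owner

def handle_request(user_agent: str, has_paid: bool) -> int:
    """Return the HTTP status code for an incoming request."""
    if user_agent not in AI_CRAWLERS:
        return 200  # regular visitors and search bots are served normally
    if has_paid:
        return 200  # a crawler that paid the fee gets the content
    return 402      # "Payment Required": pay per crawl, or no content

print(handle_request("Googlebot", has_paid=False))  # 200
print(handle_request("GPTBot", has_paid=False))     # 402
print(handle_request("GPTBot", has_paid=True))      # 200
```

Whether this changes crawler behavior depends, as the article says, on adoption: a 402 only matters if the big operators decide that paying is cheaper than being blocked.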
