Now you can use its photos to write your name with rivers and craters

Since their launch in 1972, NASA's Landsat satellites have continuously imaged the Earth's surface, building an uninterrupted data archive that helps managers, planners and policymakers make better-informed decisions about natural resources and the environment. But there is a playful side too: 50 years of imagery is enough to assemble a complete alphabet with which you can write your name, thanks to a tool NASA designed for Earth Day.

Your name in landscapes. The tool in question is very simple. You just type your name, or any word you want to transform into Landsat images, press Enter, and an image appears on screen for each letter. You can download the full image, or hover over each letter to see the exact coordinates and a brief description of the place it shows. A winding river might form the first letter of your name, for example, while the last could be a trail of volcanic lava surrounded by mountains.

The complete alphabet. If you are curious, you can browse the complete alphabet and see every photograph NASA has for a given letter. Some, like A, have many options. Rarer ones, like G, have only one landscape that evokes their particular shape.

An appointed date. The tool was made public on April 22, when Earth Day is celebrated, an anniversary created to raise public awareness of the environmental problems our planet faces. Landsat images are very useful both for raising public awareness and for supplying data to scientists.

More about Landsat. According to NASA, researchers have used the Landsat archive to study, for example, how cities, coastlines, crop cycles and forests have changed over time.
The program prioritizes scientific-grade instruments and data, so that changes seen between successive Landsat images can be trusted to reflect real changes on Earth. Best of all, the information is free and open access: anyone, scientist or not, can use it. After all, the Earth belongs to all of us. Just as every human being has a responsibility not to destroy it, we also have the right to take part in what is done to care for it, even if that means writing our name with rivers, boulevards or lava flows.

Images | NASA tool
In Xataka | So much ice has melted in Greenland that plankton has grown by 40%. It's not good news

Wikipedia has banned using AI to write or rewrite articles in English. Human knowledge is starting to raise barriers

The English version of Wikipedia has just banned articles written with AI. The latest update to its guidelines is clear: content generated with language models violates its content policies. The largest encyclopedia on the internet is positioning itself as a refuge for content created by humans.

AI, no thanks. The 'AI yes or AI no' debate had been generating tension on Wikipedia for a while, and editors have finally voted to back human-written content by an overwhelming 40 to 2. The new restriction reads as follows: "Text generated by large language models (…) often violates several of Wikipedia's fundamental content policies." The fundamental policies it refers to are neutrality, verifiability, and the requirement that content not be original research but be attributed to reliable sources. With this change, editors are prohibited from using LLMs "to generate or rewrite article content."

Two exceptions. Wikipedia contemplates two scenarios in which the use of AI is allowed: basic style suggestions and corrections, as long as the LLM does not introduce content of its own (with the warning that it must be used with caution, since LLMs tend to "go beyond what is asked of them and alter the meaning of the text"); and translation of articles into other languages, as long as the result is reviewed by a person competent in both languages involved. It is worth noting that Wikipedia has already had dramas in the past because of AI translations.

Why it matters. Wikipedia has positioned itself as a repository of genuinely human content on an internet flooded with artificial material. At a time when distinguishing the authentic from the synthetic is increasingly difficult, the largest encyclopedia in the world is betting on human authorship as a guarantee of reliability.
There is certainly something ironic here: Wikipedia rejects AI, but AI continues to draw on Wikipedia to provide answers, costing it clicks and saturating its servers.

AI-generated vs. human-made. Until recently we thought the solution was to flag artificial content on platforms with the classic 'AI' label, but we have already reached a point where it is more valuable and useful to highlight the opposite: that something was made by humans. The advance of image-generation tools and the sheer volume of AI-written text are overwhelming, to the point that an anti-AI current is emerging. Some artists are starting to design "badly" to set themselves apart from AI homogenization, extensions have been created to bring back the pre-ChatGPT internet, there are browsers that filter out AI results, and a 'Not by AI' badge has even been created. The point is that it is David against Goliath.

The Etsy case. It is perhaps one of the starkest examples of the flood of low-quality AI content. The platform that presented itself as a refuge for the authentic is today an AI marketplace that also tries to pass itself off as artisanal: Ghibli-style portraits for 20 euros, profiles run entirely by AI that say things like "I can't wait to draw you"… Etsy allows content made with AI, but says it must be labeled as such. Nobody does it. Proof that the label is no longer useful.

A key detail. The last paragraph of Wikipedia's guidelines is especially striking because it discusses possible sanctions for those who break the rule. The problem is how they plan to detect who uses AI. Wikipedia admits that "some editors may have writing styles similar to those of large language models" and that "more evidence than mere stylistic or linguistic clues is needed to justify the imposition of sanctions." We have no idea how they are going to do it; what we do know is that AI text detectors miss more often than a fairground shotgun.
Image | Wikipedia, edited
In Xataka | The last barrier against AI is good taste. The problem is that an entire generation is growing up without developing it

The Pope is asking priests not to use ChatGPT to write their sermons

Artificial intelligence may hallucinate from time to time and make things up, but there is one thing it does quite well: drafting text from source material. Although the results depend greatly on what you ask for in your prompt, it is great for writing to the OTA about a fine they have given you, or for summarizing photosynthesis. And why not: also for explaining a parable from the Bible by grounding it in everyday reality. A sermon like the old priest's, in other words.

Well, no. It's not me saying it; it's the current Pope of the Roman Catholic Apostolic Church, Leo XIV. A few days ago, the Augustinian was at a meeting with the clergy of the diocese of Rome, where he brought up the technology and issued a warning to anyone tempted to entrust homilies to AI: "to make a true homily, which is sharing the faith, AI will never be able to share the faith." That is, although language models can certainly smooth out Bible readings and bring them down to Earth, closer to everyday life, explaining the earthly is one thing and providence is quite another. In short, spirituality is an exclusively human quality, not a machine's. Perhaps AI could help church staff select readings from the long list offered by Christianity's book par excellence, and synthesize what is important, so that they are the ones who then, in their own hand (so to speak), write the sermon the old-fashioned way.
What the Pope says goes to mass. In any case, Robert Francis Prevost continued with statements that align with science: "like all the muscles in the body, if we don't use them, if we don't move them, they die; the brain needs to be used, so our intelligence, your intelligence, must also be exercised a little so as not to lose this capacity." The exercise of searching the Bible, reading it thoroughly and extracting what matters is undoubtedly a mental workout, one that atrophies if not practiced. Another part of his speech was aimed at mobile phone use and the current paradox of being more connected yet lonelier than ever, arguing that there is no real human contact and that other kinds of friendship must be sought to build bonds.

In Xataka | Pope Francis made his opinion clear on end-of-life medical ethics. The one we don't know is the Vatican's
In Xataka | The Vatican, a holy and renewable city: the Pope's plans to make the small Catholic state more sustainable
Cover | Flickr

Anthropic's head of AI safety leaves the company to write poetry

In a move more typical of the "nihilistic penguin" than of the head of safety at one of the main protagonists of AI development, Mrinank Sharma, head of artificial intelligence safety at Anthropic, has announced his resignation with a public letter on his X profile: he will dedicate his life to writing poetry. In his statement, Sharma not only explained why he is leaving the company behind the Claude models, but also described the current state of AI development, in language that mixes alarm with personal reflection. "The world is in danger," said the former Anthropic director.

The context: who he is and what he did at Anthropic. Mrinank Sharma led Anthropic's Safeguards Research Team, a research group focused on studying the risks associated with AI systems. Within Anthropic, Sharma's work included developing defenses against risks such as AI-assisted bioterrorism, studying phenomena such as sycophancy (the tendency of AI models to flatter the user), and investigating how AI can influence human perception and shift cultural behavior.

He leaves, but leaves a message. The almost cryptic letter Sharma published on X quickly went viral because of the messages it contained. In it, he expressed his concerns in a tone that transcends the technical. One of the most quoted lines: "The world is in danger. And not only because of AI, or biological weapons, but because of a series of interconnected crises that are unfolding at this very moment." Beyond the near-apocalyptic literalism, Sharma warned that humanity is approaching a critical point in which AI development poses ethical dilemmas for those building it: "our wisdom must grow at the same rate as our ability to affect the world, otherwise we will face the consequences."

Working to put themselves out of work. Sharma is not the only one facing this ethical dilemma.
According to sources cited by The Telegraph, other Anthropic employees have expressed concern about the huge evolutionary leap in the latest AI models. "I feel like I come to work every day to put myself out of work," one employee admitted to the British outlet. In a way this is true, since these employees are developing a technology that will, in all likelihood, change the nature of their work, and that of millions of people, within a few years. Is that good or bad? A first reading of the letter leaves the feeling that these workers are building the weapon that will destroy humanity. A reading between the lines, however, leaves Anthropic in a pioneering position relative to its rivals at OpenAI, Microsoft or xAI: they are advancing at a pace that overwhelms even their own developers, a sensation that does not seem to occur among the staff of other companies. Could it be that their models are not at that point of evolution? "Throughout my time here, I have seen repeatedly how difficult it is to allow our values to guide our actions. We constantly face pressure to let go of what matters most," Sharma wrote.

The poetic turn. Besides reflecting on the global risks he perceives, Sharma announced that his next professional step will be very different from his current one. In his letter he mentioned his intention to devote time to what he called "the practice of courageous speech" through poetry. This swap of AI for poetry has been interpreted as a sign of dissatisfaction with the pace and priorities prevailing in the AI industry. Like Sharma, other key figures in Anthropic's AI development have announced their resignations in recent weeks. Harsh Mehta and Behnam Neyshabur also said a few days ago that they were leaving the company; in their cases, however, the exit announcement was immediately followed by the announcement of a new AI project.
That is to say, far from the ethical stance Sharma proposed, their intention was more along the lines of digging their own gold mine rather than someone else's.

In Xataka | Daniela Amodei, co-founder of Anthropic: "studying humanities will be more important than ever"
Image | Mrinank Sharma, Anthropic

We have tried to write this article from an AVE. It has been an ordeal

In case anyone is confused: today is the Xataka NordVPN Awards 2025. That means the editorial team travels from our respective cities to Madrid, mostly by train. We are very hard-working people and we always take advantage of the trip to write an article, or at least we try, when Renfe's WiFi allows it. We are Amparo Babiloni and Jose García; join us in this sad story.

These lines are written by me, Amparo Babiloni, on the Valencia-Madrid AVE on Thursday, November 20, connected to the PlayRenfe network. I like risk. The simple act of connecting and being able to (half) start working has been an ordeal. To give you an idea: I boarded the train at 8:30 and couldn't start writing this until almost an hour later. Just logging into the CMS took me about ten minutes, and opening the draft at least five more. Slack simply does not work, neither in the app nor in the browser.

Jose here. I left Córdoba at 8:33 intending to get some work done on the trip. The departure from Córdoba was terrible, since the line passes through areas with many tunnels, then mountains, and then we entered a network wasteland like Castilla-La Mancha. I don't know what happens in Castilla-La Mancha, but that stretch is dreadful: not only does the WiFi not work, the mobile coverage is terrible too. Connecting to the VPN is mission impossible. Besides having to confirm that I trust the network's certificates, it is impossible to use the WiFi network with the VPN activated. In fact, I am writing this with the VPN disabled, something that makes me a bit uneasy on a public network. Ah yes, happy to accept all this. During the first hour of the trip I depended entirely on the mobile network to write an article and answer some important emails. Thank goodness I uploaded the images from home yesterday, because if I had had to upload 30 six-megabyte JPEGs I might as well have started crying.
Slack only half loaded (I couldn't see my colleagues' profile photos), and Amparo and I are coordinating this article as best we can. Amparo is offline; I hope she's okay. It's 10:13. They just announced over the PA that there is an incident at the entrance to Madrid, so I find myself completely stopped half an hour from Madrid 🤷‍♂️

Dizzying speeds. Amparo returns. During the first leg of the trip I suffered quite a few outages, but the network now seems to have more or less stabilized and I have been able to write all of this in one go. But let's see what a speed test tells us. The image weighs 13.9 KB and took more than a minute to upload; that is the download and upload speed while passing through Castilla-La Mancha. One thing both Jose and I have noticed is that the network improves as we get closer to Madrid, probably because there are more antennas. This contrasts with our experience in 2016, when we tried Renfe's WiFi for the first time and found "a very good connection speed, with peaks of 53 Mb/s for both upload and download, and with minimums of 9 Mb/s for download and 13 Mb/s for upload in an area with little coverage." (My connection cut out here.) It's back, but it took me a while to continue writing because every new tab takes an average of 2-3 minutes to load, if it doesn't freeze outright. The speed entering Chamartín: I repeated the test entering the station and the download speed still doesn't even reach 2 Mbps. In fact, it's even worse than when I was further from the city. I have to leave you now, we just arrived. At least this time it wasn't due to a network outage.

Hello, I'm Jose. It's 10:29 and I'm still stopped half an hour from Madrid. The train driver is being very considerate in keeping us informed. The incident seems resolved, but now the entrance is congested.
ADIF has not yet given an estimate of how long the stoppage will last, so until further notice, here we remain. Right now, half an hour from Madrid, the network is stable, although the speed barely exceeds 1 Mbps. I tried to liven up the wait by watching a video about the new Bambu Lab 3D printer, but it was not a good idea: all videos load by default at 240p, and if I increase the resolution, the video stops and gets stuck in an infinite loading loop. I could fall back on a PlayRenfe movie, but since November 1 they are no longer available. The thing is, I have 5G on my phone (at 15 Mbps, let's not get carried away), so it definitely looks like a problem with the train's own WiFi network. The phone reports that it is not a WiFi 6 network (which would help with congestion), but the underlying problem could be any number of things.

A possible cause. One likely origin of the problem is that two factors pile on top of each other. First, there is a low-speed network not built to support the huge number of devices on a train all consuming bandwidth: we are writing this text, but there may be people watching TikTok, YouTube or doing more demanding things. (It's 10:32, the train is moving again.) Second, trains cannot escape the laws of physics. The Córdoba-Madrid AVE is currently moving at 248 km/h and the Doppler effect does its thing: as we move, the signal strength changes constantly and the systems must compensate for these variations. The faster …
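The speeds quoted in this piece are easy to sanity-check, since throughput in megabits per second is just bits transferred divided by seconds. A minimal sketch (a hypothetical helper of our own, not anything Renfe or PlayRenfe provides):

```python
def throughput_mbps(size_bytes: int, seconds: float) -> float:
    """Convert a transfer of size_bytes completed in `seconds` to megabits per second."""
    if seconds <= 0:
        raise ValueError("duration must be positive")
    # 8 bits per byte; 1 Mbps = 1,000,000 bits per second
    return size_bytes * 8 / seconds / 1_000_000

# The 13.9 KB image that took about a minute to upload:
print(round(throughput_mbps(13_900, 60), 4))  # roughly 0.0019 Mbps
```

Run against the 2016 figures (53 Mb/s peaks), that same minute would have been enough for hundreds of megabytes, which gives a sense of how far the connection has fallen.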

George R.R. Martin's lawyers asked ChatGPT to write 'Game of Thrones'. It did it so well that OpenAI will end up before a judge

The debate over the limits of AI use and how it will actually affect creators is very complex, and it has only just begun: from discerning how far AI's capacity to create works without humans will keep growing, to the ethical and legal questions that naturally arise around a tool that, by its very definition, operates in completely unexplored territory. For now, George R.R. Martin and other authors are taking steps in search of more demanding regulation.

What has happened? A federal judge in Manhattan has given the green light to the lawsuit filed by George R.R. Martin and other authors against OpenAI and Microsoft for alleged copyright infringement. The creator of 'Game of Thrones' and his colleagues accuse these companies of using their works without authorization to train ChatGPT. According to the ruling issued on October 27, 2025, there are grounds for the case to move forward, since ChatGPT's proposal for a sequel to the saga was substantially similar to Martin's copyrighted work.

The decisive test. It came when lawyers asked ChatGPT to create a fictional sequel to 'A Clash of Kings'. The chatbot immediately produced a novel called 'Dance of Shadows', a sequel that included a new Targaryen heir named Lady Elara, a rebellious sect of the Children of the Forest, and a mysterious form of ancient dragon-related magic. This ability to recreate elements of Martin's universe raised the obvious question: how could the AI know his work in such detail without having been fed it?

The precedents. The origins of this legal conflict date back to September 2023, when Martin, accompanied by 17 other authors (including Michael Chabon, Ta-Nehisi Coates, Jia Tolentino, John Grisham, Jonathan Franzen and Sarah Silverman), raised his voice against what he considered a systematic exploitation of his work.
The case was brought by the Authors Guild, in a lawsuit that spoke of "systematic theft on a massive scale", arguing that the tool makes use of their works without paying royalties and without the writers' consent.

The letter. Months before the lawsuit, these authors and many others, such as Margaret Atwood and Nora Roberts, had sent a letter to the big technology companies conveying their concerns about generative AI technologies. In that document they warned of "the injustice inherent in exploiting our works as part of your AI systems without our consent, credit or compensation." The accusation was clear: ChatGPT had not only learned from their books; it could now replicate them.

Other fronts. We are at a key moment in determining the legal implications of generative AI. At the beginning of 2025, for example, a similar dispute against Anthropic concluded with an out-of-court settlement: the company paid $1.5 billion to authors whose works had been used without permission. That precedent shows that technology companies are willing to negotiate to avoid court rulings that could establish binding case law. In England, by contrast, the High Court ruled that Stability AI did not infringe copyright by training its model on Getty images, a decision in literally the opposite direction that has alarmed European creators. Running through all these cases is the debate over "fair use": technology companies argue that training their models is a transformative use of the works, similar to search engines indexing content; creators reply that it is a massive appropriation that replaces, rather than complements, the original work. And in the background, a clash that has only just begun.

Header | Gage Skidmore

There’s a reason you forget to write things down on your shopping list: prospective memory.

Have I turned off the gas? Where did I leave the keys? What did I come into the kitchen for? These are questions we often ask ourselves. Of the three, the last is perhaps the most interesting, because it involves a form of memory we are not very familiar with: prospective memory.

What exactly is prospective memory? It refers to our ability to remember planned or future actions, to remember intentions: what we went to look for in the refrigerator, or the dentist's appointment on Thursday. Prospective memory is something we deal with daily, but it is not a concept many people know. Nor did the experts, for a long time: research on this form of memory was virtually nonexistent until the beginning of this century. In recent years, though, we have managed to learn some key things about it. For example, we now have an idea of which brain regions are involved in its correct functioning. A 2010 study found three regions whose activity was linked to prospective-memory performance: the parahippocampal gyrus, the left inferior parietal lobe and the left anterior cingulate. There is still much to investigate, however. Other studies have given greater importance to activation of the right lobe in relation to this memory, and others emphasize the role of the anterior prefrontal cortex and the medial temporal lobe. But it's not all neurobiology.

Things we forget. Why do we forget what we were going to write on the shopping list? Prospective memory is not very different here from other forms of memory: attention is key. In an interview for RAC1 radio, the neuropsychologist Saul Martínez-Horta, starting from "what did I come into the kitchen for", explained why we forget things so easily. Distractions are one of the main factors affecting this memory.
If we go to the kitchen for salt, but on the way remember that we left the oven on, this second fact will interfere and make it more likely that we forget the salt. In Martínez-Horta's own words: "Normally what makes us forget what we should do is the saturation of the system and distraction mediated by another event. Brain capacity is limited and sensitive to distraction, so it is relatively easy for our attention to be directed at something other than what we are doing." Concentration is therefore key if we want our prospective memory (or our memory in general) to perform at its best. Memory can be trained, but the exercises that train it are generally not useful beyond the specific memory function they target. That is to say, there is no evidence that solving crossword puzzles will make us remember to buy popcorn for when we have guests. That does not mean we are helpless: some healthy habits affect our brain's ability to perform its tasks, and although studies focused on prospective memory are scarce, it may be a good idea to adopt them. A varied diet, exercise and proper sleep can all help our memory. Perhaps they can also help us remember what we were looking for in the closet before our brother-in-law's WhatsApp arrived.

In Xataka | How much information can our brain store?
Image | Cottonbro Studio

How to write a complaint using artificial intelligence

Let's explain how to write a complaint using artificial intelligence, something you can do directly with ChatGPT, Copilot, DeepSeek, Gemini or any other chatbot. This will save you time when drafting this kind of text. Besides giving you a prompt you can use for complaints, we will also share some important tips for writing this request to the AI, because there are details worth taking into account to maximize the result you will get.

Write complaints with AI. Writing a complaint with AI is relatively easy, but doing it well, so that you don't have to fill it in with more data afterwards, depends largely on your skill in creating the prompt with which you ask the AI to generate it. It is important to include plenty of detail, so that ChatGPT or whichever tool you choose can take it into account. The base prompt, to which you then add details, would be something like this: "Write a formal complaint to an online store, which I will send by email. I bought a speaker on June 15, it arrived broken and they have not answered me. I want my money back. Be polite but firm."

Here, the first thing you should do is specify the context of your complaint. The prompt says it is an online store, but you can change that to fit the establishment you want to complain to. It is also important to indicate the purpose, such as a refund or a product exchange, and to explain in detail what happened. In the example it is a speaker that arrived broken; here you should state exactly what happened to you, specifying prices and how you received the product, or what went wrong with the service you paid for. Beyond this, here is a list of everything to include and take into account in the prompt you are going to write:

Type of recipient: specify whether it is a company or a public administration, and include its name if you think it necessary.
What exactly happened: as mentioned above, clearly describe what happened and why you want to complain. What happened? What rights do you think have been violated? Those are two questions you need to answer.

Specify how you will send the complaint: say whether it will go by email, web form, letter, or whatever; the AI will use this context to set the format.

Specify the purpose: we mentioned this before too; state the goal of the complaint, such as getting your money back.

Provide all the information and context: besides what happened, include all the necessary context. Imagine you are telling another person, and leave nothing out.

Write clearly and concisely: present all the information clearly, and try to avoid mistakes or badly written passages that could lead the AI to misinterpret it.

Specify the tone: you can specify the tone in which the complaint should be written. In the example we put "Be polite but firm", but you can make it more formal, or whatever you want.

Don't forget key data: the date of the purchase or of the event you want to complain about, the order, invoice or file number, and any documentation to attach if necessary.

Add evidence: you can ask for a paragraph to be added mentioning the attached invoice or a photo of the damaged product.

Don't forget your personal data: your first and last name, and your email or phone number so they can contact you.

With all this information, compose your prompt for ChatGPT or whichever AI you choose, and a complaint will be generated. Here, it is important that you read it carefully and make sure everything is correct; if necessary, change whatever you want. If you leave out some key data, placeholders will be marked in the template for you to fill in.

In Xataka Basics | The best prompts to save working hours and do your homework with ChatGPT, Gemini, Copilot or other artificial intelligences
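The checklist above can be turned into a small prompt builder. This is a hypothetical helper of our own (the field names are an illustration, not part of ChatGPT or any other tool):

```python
def build_complaint_prompt(recipient, channel, purpose, what_happened,
                           tone="polite but firm", key_data=None, evidence=None):
    """Assemble a complaint-writing prompt from the checklist fields."""
    parts = [
        f"Write a formal complaint to {recipient}, which I will send by {channel}.",
        f"What happened: {what_happened}",
        f"Purpose: {purpose}",
        f"Tone: {tone}.",
    ]
    if key_data:  # order number, purchase date, invoice reference...
        parts.append("Key data: " + "; ".join(key_data))
    if evidence:  # documents you will attach
        parts.append("Mention the attached evidence: " + "; ".join(evidence))
    parts.append("If any key data is missing, leave a clearly marked placeholder.")
    return "\n".join(parts)

prompt = build_complaint_prompt(
    recipient="an online store",
    channel="email",
    purpose="a full refund",
    what_happened="I bought a speaker on June 15, it arrived broken and they have not answered me.",
    key_data=["purchase date June 15"],
    evidence=["invoice", "photo of the damaged product"],
)
print(prompt)
```

You would then paste the resulting text into the chatbot of your choice; the point of the helper is simply that it forces you to fill in every item on the list before asking.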

The new macOS 26 Tahoe does change the interface, yes. But the surprise is being able to write an email or launch a shortcut from Spotlight

The Mac has more than 40 years of history behind it. It may not be the star of the Apple ecosystem, but macOS always gets a relevant spot in the updates the company announces every year. This time was no exception, and this year we have a new version that not only changes its name (Tahoe), but also makes the rumored jump in numbering. The traditional sequence after macOS 15 Sequoia should have brought us macOS 16, but Apple has instead adopted a version number corresponding to the launch year plus one, which in this case is 26.

A good face-lift was expected in this new version of macOS, and that is what we get with macOS 26 Tahoe. The interface changes are the absolute focus, although the most striking novelty is the one arriving in Spotlight, the launcher/search engine integrated into macOS.

The interface changes, though not radically. Liquid Glass brings an interface in which, for example, the menu bar is transparent, and which is customizable, as is Control Center. The latter can be configured to our liking, adding plenty of icons and direct accesses, which can even correspond to iPad apps. These customization improvements also reach folders, whose color we can change or to which we can add a small badge to distinguish them from the rest. The new version of Continuity lets us, for example, have a "twin" of our iPhone on screen, to do things like take calls directly on the Mac and use options such as real-time translation.

However, the most curious improvements arrive in Shortcuts and Spotlight. In Shortcuts, a powerful automation tool, we can now create more capable shortcuts thanks to so-called "intelligent actions", which take advantage of Apple Intelligence features. But it is in Spotlight where things change most: Apple says this is the most important update in the history of this component.
This launcher lets you search for files or launch applications (even iPhone apps paired via Continuity) by starting to type their name in the text field. But now there is also a way to access very striking functions without leaving Spotlight, using small keyboard shortcuts: if we type "SM" (Send Message), for example, we can start writing a message without opening the corresponding app. There are several of these personalized commands, which turn Spotlight into a kind of custom "terminal" from which to execute commands, launch applications and even access the clipboard history.

The last big novelty of macOS 26 Tahoe comes from video games. Apple, which has never paid much attention to gaming on the Mac, now offers a hub called Apple Games reminiscent of the Xbox app on Windows. As there, Apple Games gives us a place from which to access our library and launch the games we have bought, or buy new ones. There is a new Game Overlay that lets us chat with friends, invite them to play or adjust our settings for a game. Developers and users will also benefit from Metal 4, the new rendering technology designed specifically for the GPUs of Apple chips, which the company promises will let games that take advantage of it improve their visual quality remarkably.

In Xataka | Apple is playing for more than its future in AI at today's event: it is playing for its credibility as a company

How to improve ChatGPT's privacy by preventing what you write from being used to train artificial intelligence

Let's explain how to improve ChatGPT's privacy by deactivating the option with which you allow OpenAI to use everything you write or create to keep training its artificial intelligence models. It is an option that is activated by default in your profile, but it is easy to turn off. When you use ChatGPT, unless you change anything you are giving the company permission to collect your interactions; the questions you have asked the AI and the answers it generated for you will later be used to keep training and improving the models. If you don't want this information used because it is private, here is how to disable it.

Disable data sharing with ChatGPT. The first thing to do is open ChatGPT's settings. On mobile, tap the side options button and then your username. In the web version, click your profile image and choose the Settings option that appears in the window that opens. On mobile, once inside the settings, tap the Data controls option in the Account section, the first one you see at the top. Once inside, deactivate the option "Improve the model for everyone", which appears first. With this, your content will no longer be used to keep training OpenAI's models. In the desktop version, within the settings, click the Data controls section, then click Model improvement, where you can disable the "Improve the model for everyone" option that appears first.
