Science has named what you feel when a Pixar movie makes you cry

Watching the end of a Pixar movie or witnessing an unexpected reunion at an airport can trigger something in some people: a lump in the throat, a warmth in the chest, even watering eyes. It is not sadness, nor euphoric happiness, but a sensation that has only recently received a name. A problem. For years, psychology had trouble categorizing this specific sensation. We call it "being moved", "striking a chord" or having "mixed feelings". For about a decade, however, a group of scientists from UCLA and the University of Oslo has given it a technical name, a theoretical framework and an evolutionary explanation. It is called 'kama muta', and it is the scientific label for one of the most powerful tools of our survival: sudden connection. It is something we can also feel on social networks when we see, for example, a video of a grandmother with her grandson in an idyllic situation. Kama muta. The term comes from Sanskrit and literally means "moved by love" (or "filled with love"). And although the name sounds mystical, there is science behind it: the phenomenon has been systematically studied by the Kama Muta Lab, led by anthropologists and psychologists. According to its founding article from 2016 and later reviews in Annual Review of Psychology, kama muta is not a "new" emotion in the sense that we have just discovered it; rather, we have just classified it. We had these feelings, but we didn't know what to call them. Its definition. A positive emotional response triggered by a sudden intensification of communal relationships. In other words: it is what your body feels when you perceive that a social bond is suddenly created, repaired or strengthened. A physical triad. Unlike other, more abstract emotions, kama muta has a very clear physiological signature that researchers have validated in cross-cultural studies.
According to research by Zickfeld published in Emotion, which spanned 19 countries and 15 languages, the universal symptoms are clear: wet eyes, goosebumps, and a feeling of warmth; a warmth that, curiously, centers right in the middle of the chest. That alone says a lot about this emotion. Why we are moved. Why did evolution design us to cry and tremble when we see others hugging? The answer lies in group survival. Science suggests that this emotion acts as a social glue: by feeling physically rewarded by connection (our own or someone else's), we are more predisposed to take care of others and sacrifice ourselves for the group. It even has the power to humanize "others": in one experiment, showing moving videos that induced kama muta significantly increased the perceived humanity of outgroups, reducing prejudice. It is not just "feeling good"; it is a biological mechanism for expanding our circle of empathy. Climate action. The most interesting finding from recent research is that kama muta is not a passive experience; it predicts behavior as well. A 2023 study published in Frontiers in Psychology found that climate-change messages that evoked kama muta (focused on connection to the planet and shared responsibility) were more effective at predicting pro-environmental intentions than those based on fear or guilt. Images | Nik Shuliahin. In Xataka | If the question is "where is the secret to happiness", an expert believes it is hidden in these 15 statements

You feel like going to Sri Lanka because you saw it on Instagram. The problem is that the person who recommended it to you was an AI

The image is familiar. A young woman smiles from a beach with turquoise waters. In the next post, she appears walking along a cobblestone street in Marrakech. Below, she poses at a luxury hotel in the Maldives. The skin is perfect, the body fits the prevailing canons, and the captions offer inspirational phrases about traveling, discovering cultures and "living in the moment". Nothing seems out of place. Until you discover the reality: that traveler has not flown, has not walked those streets, has not tried the food she recommends. She does not exist. She is an influencer generated by artificial intelligence, part of a phenomenon that is growing quietly: the normalization of artificial profiles that influence the real decisions of millions of people. A silent but massive boom. In the last two years, Instagram and other social networks have filled with virtual influencers: characters created with generative AI who pretend to be real people and publish travel, lifestyle or fashion content. The best-known case in Spain is Aitana López. Some indicate their artificial nature more or less clearly in their bio; others do so ambiguously or almost invisibly. What is interesting here is how the examples multiply in the tourism sector. Sena Z has been presented as "the first travel and hospitality influencer created with AI", a collaboration between the luxury group Cenizaro Hotels & Resorts and the technology firm Bracai. Sena publishes cultural recommendations, messages about sustainability and photographs from exotic destinations. Another notable case is Emma, the official influencer and chatbot of the German National Tourism Office. Emma not only publishes content on Instagram but also answers questions in more than 20 languages from the organization's official website.
As the organization explained to the Washington Post, her creation is part of a strategy to "stay at the forefront of digital innovation". Other profiles join these, such as Radhika or Emily Pellegrini, or corporate avatars like Sama, the Qatar Airways virtual flight attendant who appears both on the airline's website and on social networks, posting as if she were living real experiences. These are not isolated experiments. As detailed by The New York Times, airlines, tourist offices and brands are increasingly turning to these avatars because they are cheaper, faster and completely controllable. An AI influencer does not get sick, does not get tired, does not age and does not generate personal controversies. Inexperienced influencers. The question is inevitable: what happens when the experience is not real? Just browse these profiles to see it: they recommend destinations, restaurants and cultures they have not experienced. Even so, they generate engagement, accumulate thousands of likes and comments, and influence travel decisions. From the brands' point of view, the appeal is evident. According to data collected by the New York paper, creating an advanced avatar can cost between $5,000 and $15,000, compared with traditional campaigns that easily exceed six figures. In addition, content can be produced without travel, without film crews and without negotiating with human talent. For real creators, however, the impact is already being felt. Human influencers cited by the same outlet explain that brands are reducing payments, eliminating extras and offering less advantageous collaborations. AI thus becomes direct new competition within the creator economy, a sector valued at more than 200 billion dollars globally. Is anyone regulating it? While the technology advances quickly, regulation tries to catch up. In Europe, the clearest answer comes through the Artificial Intelligence Act (AI Act).
Article 50, which will come into force in August 2026, establishes transparency obligations for providers and users of AI systems. Among them: informing people when they interact with an AI system; marking content generated or manipulated by AI (text, image, audio or video) in a detectable format; and requiring that deepfakes and AI-generated texts reporting on matters of public interest be declared, unless there is human editorial review. The European Commission has already begun preparing a Code of Good Practice for marking and labeling AI-generated content, with the participation of experts, platforms and civil society. The goal is to facilitate compliance before the law is fully applicable. However, many virtual profiles clearly indicate neither their artificial nature nor their commercial ties, leaving the user in a field of ambiguity. Unreal bodies, algorithmic authority. Beyond destination promotion, most AI influencers share common traits: eternal youth, slim bodies, perfect skin and a total absence of imperfections. This phenomenon coincides with the return of Y2K aesthetics and extreme thinness on social networks, a trend that has been linked to a decline in body diversity. The most notable case came from advertising campaigns with AI-generated models, like Guess's in Vogue. Mental health experts warned that constant exposure to unreal bodies can aggravate self-esteem problems and increase the risk of eating disorders. The difference, they point out, is key: while traditional retouching started from a real body, AI creates bodies that have never existed, impossible to achieve even in theory. This logic has been taken to the extreme with phenomena such as the Miss AI pageant, where artificially generated models compete showing bodies without pores, without age and without history.
According to plastic surgeons, more and more patients come to consultations with AI-created images, asking for impossible interventions; the surgeons point out the risk of frustration, obsession and psychological damage. The underlying problem: we no longer know what is real. All of this occurs in a broader context: a crisis of visual trust. As my colleague at Xataka has analyzed, the massive generation of hyperrealistic images has broken a chain that for centuries seemed solid: if something was seen, it had probably existed. Today, that presumption has disappeared. Seeing is no longer equivalent to knowing. In this new scenario, we not only doubt whether an influencer has really traveled, but also whether the image itself corresponds to something that happened. The consequence is a permanent suspicion that affects memory, attention and the way we relate to digital reality. The technical solution (seals, metadata, …)

With AI, Microsoft has once again insisted that we talk to our computer: experience says that we don’t feel like it

You get up in the morning, go to work and sit in front of the computer, but the first thing you do is not pick up the mouse and keyboard; you say "Hey, Copilot". Can you imagine it? Me neither, frankly, but that is Microsoft's clear obsession: to get us to talk to our PC instead of using the usual peripherals. That futuristic vision is striking, but it faces several enormous challenges. What memories. Microsoft's push, along with other technology companies, to make us talk to machines goes back a long way. The first generation of voice assistants pursued precisely that goal: Alexa, Google Assistant and of course Cortana all tried to get us talking much more with our devices. We were not prepared to talk to machines. Their success was rather limited, and even Nadella himself admitted in 2023 that those "smart" speakers were "dumb as a rock". In Xataka | Voice assistants and the fight to gain our trust. Cortana tried. The Redmond company certainly tried to make Cortana a success. It offered it on Windows 10, on Android and iOS… and even on the sadly defunct Windows Phone. Over time the company realized that the assistant was not a good fit, and killed it off little by little. Microsoft used the launch of ChatGPT to push its new AI-powered assistant and definitively bury the first one: Copilot wants to be what Cortana never could. Who asked for this? With "Hey, Copilot" the same thing is happening as with Cortana: did anyone ask Microsoft to integrate a voice assistant into Windows? The voice assistants of that first generation were relegated to residual use, and Amazon suffered this problem firsthand. It bet billions of dollars that Echos would become devices we wouldn't stop talking to, but most people just used them to set timers and play music. AI promises to go much further. But spring 2024 brought a hopeful moment for this type of technology.
OpenAI launched GPT-4o and demonstrated that natural conversations with a mobile phone were not only possible but very powerful. AI could be our confidant and companion (controversy included) or our private teacher, and as others later set out to demonstrate, it could also do things for us just by talking to it. Just ask the vibe coders. But we still have a hard time talking to the PC. Since then we certainly seem to have become a little more accustomed to talking with our smartphone, but things look different on the PC. Statistics show that 77% of young people use their voice on their smartphone, while only 38% of them do so on the PC. "But everyone at the PC can hear me". There is also a sociological component to using voice on the PC. The mobile phone is more intimate and personal, while the PC is often used in a static setting with people around who can hear what we say. Furthermore, in that physical context, the unspoken rules of coexistence (do not disturb, do not invade others' acoustic space) outweigh the promise of convenience. And then there is distrust. Microsoft's recent history does not help, especially with Recall, that option that seemed really striking and ingenious but ended up being delayed after generating great controversy over privacy. The launch of the new Windows 11 options, with "Hey, Copilot" as the main protagonist, does not seem to have been received with much enthusiasm, and the tone of the comments in this long thread, for example, is skepticism. Rivals focus on mobile phones and speakers, not the PC. The truth is that the adoption of voice as a way to interact with our devices does not seem to be going viral. The erratic launch of Alexa+ does not seem to be delivering great advantages, Apple keeps making us wait for its renewed version of Siri, and only Google has taken a step forward with Gemini, although not clearly on the desktop.
Talking to machines works, but not as much on the PC as on the mobile. Video | Project Astra: Exploring the Capabilities of a Universal AI Assistant. A triumph for accessibility. Where there is a clear use scenario for this technology is accessibility. For users with reduced mobility, the ability to dictate or control the device by voice can be transformative. That need is concrete and well defined, however: it does not justify a general redesign of interaction or a marketing campaign that tries to get us all talking to the computer. Voice should solve things, not be a fairground trick. Microsoft's real challenge is not technical (the technology is there) but human. The company must convince people that talking to the PC makes sense. To do so, it must address three fronts: privacy, the social context (that you don't mind talking to your PC) and, of course, that the interaction is practically useful and works. That is where Copilot Actions comes in, which will have to demonstrate, like everything else, that Microsoft is on the right path here. Otherwise, "Hey, Copilot" could become the new Cortana. In Xataka | Sundar Pichai (CEO of Google) believes that 'Her' is inevitable: "there will be people who fall in love with an AI and we should prepare ourselves". Originally published in Xataka by Javier Pastor.

Not for money, but to feel useful again at work

In recent years, something surprising has been happening among those in management positions: many leaders no longer want to keep getting promoted or change companies, and prefer to find motivation again in their current role. The latest '2025 Workplace Engagement Report' by Kahoot! points out that 46% of the managers consulted would be willing to give up their position simply to feel engaged with their daily work again. This trend coincides with an environment in which motivation and a sense of purpose are becoming a priority for employees. Juggling a thousand things, present in none. One effect of "hustle culture" is that excess workload and responsibilities overshadow the real motivation for the work being done, creating a kind of detachment among those who lead teams. The Kahoot! data shows that only 47% of the leaders surveyed consider themselves "completely involved" in their work, although 79% believe their team sees them as having sufficient energy. As Inc.com points out, this contrast shows that disconnection begins with the managers themselves and can filter down to the rest of the employees. Furthermore, more than a quarter of leaders have thought about resigning during the last year. Burnout and demotivation at record levels. Burnout (emotional exhaustion from work) is especially common among those who manage teams: 34% of those in these positions acknowledge feeling exhausted daily or frequently. The 'State of the Global Workplace 2025' report by the consulting firm Gallup confirms this trend, with manager engagement dropping to 27%. In this context, it is striking that only 17% of companies offer their leaders tools they consider useful for keeping their team motivated.
57% have never received adequate leadership training for re-engaging their colleagues when the first symptoms of demotivation appear or tension rises, and only 38% say they have received even partial training. Given this, 40% of managers say they would resign from their role as team lead if it guaranteed that employees would be engaged again. Feeling useful and valued. In recent months, a good part of the layoffs at large companies have targeted middle management, who have seen their work undervalued within companies. Accordingly, most managers are not asking for a raise or more power, but for something much more important to them: 69% indicate that what they need to feel more involved is recognition of their work. In fact, lack of recognition appears as the main thing 21% of these professionals miss. On a personal level, the managers surveyed for the Kahoot! report say they would regain engagement if their days had more energy, creativity or fun (58%), more opportunities to learn and grow (52%) or better technology to connect with the team (48%). What all this data reflects is that managers no longer aspire merely to be promoted, but to more real and tangible work that lets them be more creative and develop their skills. Bosses looking for a new role. Faced with these challenges, more and more organizations are questioning rigid hierarchy models, placing more value on those who facilitate work and encourage creativity from any position, regardless of title. "If leaders are willing to trade their title for the opportunity to feel engaged, this is a sign of something deeper," said Eilert Hanoa, CEO of Kahoot!, in the report. According to Inc.com, today's leaders prefer to act as companions to their teams, more attentive to the real work than to the office or the corporate hierarchy.
Flexible structures are starting to gain strength, possibly driven by the arrival of Generation Z, encouraging the exchange of ideas and the active participation of the entire team in decision-making. In Xataka | At the end of this year, one in three young people will have changed jobs: it's nothing personal, it's just salary. Image | Unsplash (Vitaly Gariev)

If you feel "on a cloud" after running, you are not alone, and it has a name: the "runner's high"

Running is a healthy exercise that can help improve the health of our heart, our lungs and our general condition. But running has other perks, and one of them is the pleasant sensation that takes hold of us at the end of the exercise: a feeling of being "on a cloud" that we could describe as similar to a "high". Except it is exactly that: a high. The detail is not trivial. For a while we thought the "runner's high" was caused by endorphins, but the focus of experts has now shifted to another group of compounds, endocannabinoids (eCBs), as responsible for this sensation. The parallel between the names endorphins and endocannabinoids goes beyond the shared prefix. That prefix refers to the fact that these are compounds our own body synthesizes internally, which distinguishes them from similar compounds we obtain externally. So, endocannabinoids are molecules produced by our body that can activate the cannabinoid receptors of our cells, the same receptors that the THC and CBD present in marijuana bind to. If eCBs are internal cannabinoids, endorphins take their name from morphine: their function is to serve as internal analgesics that our body secretes under certain conditions. In Xataka | At 34ºC, ten more beats per minute: the effect of heat on our heart rate when playing sport in summer. One of those conditions is exercise. According to neuroscientist David Linden, in a publication on the blog of the Johns Hopkins University School of Medicine, when we run our body responds with a series of changes: our breathing is altered, our pulse accelerates… We need more oxygen for this activity and our body works to provide it. But there is more: exercise also makes our body begin to secrete endorphins.
For a long time we assumed these compounds were behind the "runner's high", but something does not fit: endorphins are transported by the bloodstream to relieve muscle pain, yet they are unable to reach the brain because of the blood-brain barrier, a retaining wall that prevents certain compounds from entering the brain. Endorphins cannot get past this wall, but endocannabinoids can. Confirming the suspicion. A study published in 2021 in the journal Psychoneuroendocrinology suggested that endocannabinoids were "better candidates than endorphins to explain the 'runner's high' in humans". The team used opioid receptor blockers to rule out the effect of endorphins, then had a group of participants run, verifying that blocking these receptors did not reduce the participants' sensations of euphoria and reduced anxiety. For now, we have discovered little about the relationship between endocannabinoids and exercise. A literature review published in 2022 in the journal The Neuroscientist points the same way: it found 14 studies in which eCBs increased in the bloodstream after an intense exercise session, while four other studies found indications that, after long-term exercise, the presence of these compounds fell. The conclusion: we need to investigate this relationship further. Video | Postural exercises for those who spend many hours at the computer: avoid back pain! With or without endocannabinoids, going for a run is usually a good idea. Our cardiorespiratory health is the great beneficiary of this type of physical activity, and the data reflects it: running can extend our life expectancy. Running matters not only for physical health but also for mental health.
This exercise has been linked to advantages such as improved concentration, less irritability and better sleep, among others. Understanding whether there is a relationship between the "runner's high" and these benefits is also a subject of study for those who investigate the neuroscience of this popular sport. In Xataka | To the big question of whether you can lose weight, science has an answer. And it has everything to do with genetics. Image | Bohle tremble. Originally published in Xataka by Pablo Martínez-Juárez.

How to create a podcast from a text to study, do research, or simply listen to if you don't feel like reading

Let's explain how to create an audio from a text using artificial intelligence, so that you can have a kind of podcast to study or review something when you don't feel like reading. Instead of having to read a text, you can simply relax while listening. We will start the article with a series of tips and recommendations to take into account before getting down to the task. Then we will tell you how to do it with one of the best tools available for the purpose, and we will continue with two ways to do it with AI assistants such as ChatGPT, plus another interesting alternative for generating voice with artificial intelligence. Before starting, things to take into account. Before getting to work on this, there are some things you should keep in mind. First, define whether the audio or "podcast" you want to create is educational, informative or pure entertainment. Also be clear about whether only you will listen to it, or someone else as well, so you know how polished it needs to be. A written text does not always sound natural when you listen to it, especially with long and somewhat intricate sentences. That is why it is advisable to keep phrases short and language plain, nothing that makes listening heavy. Where you have the option, decide on the tone and voice style, choosing between more formal or informal. This may depend on the content you want to consume as audio, and also on whether only you or other people will listen to it. It is also advisable to watch the audio's duration. The recommended length may vary: if it is something dense or for studying, it may start to tire you from 20 minutes on, but if it is for leisure or something narrative you should have no problem with longer durations. If it is something dense, try to segment the content well.
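That segmentation advice can be sketched in code. The following is a minimal illustration (plain Python; the 280-character limit and the `segment_script` helper are assumptions for the example, not part of any specific TTS tool) of splitting a narration script into short, sentence-aligned chunks before handing them to whatever text-to-speech service you use:

```python
import re

def segment_script(text: str, max_chars: int = 280) -> list[str]:
    """Split a narration script into short chunks, cutting only at
    sentence boundaries so each piece sounds natural when read aloud."""
    # Naive sentence split: break after ., ! or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Start a new chunk if adding this sentence would exceed the limit.
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

script = (
    "Welcome to this audio summary. Today we review the key ideas. "
    "Short sentences sound more natural. Long ones tire the listener."
)
for i, chunk in enumerate(segment_script(script, max_chars=60), 1):
    print(f"Segment {i}: {chunk}")
```

Each chunk can then be sent to a TTS tool one at a time, which also makes it easy to re-generate a single segment if one of them sounds off.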
And finally, always check the result and do not be afraid to try again if it does not convince you. AI voices can sometimes sound unnatural and there may be errors, which is why it is important to review the output even if it takes a little time. In the end, the most important part is the script that will be narrated, and that is where you should spend most of the effort. Try to make it natural, well punctuated and well structured. Audios from your notes with NotebookLM. The first tool you can use is NotebookLM, from Google, a service that uses artificial intelligence to organize your sources and can even create an audio summary, as if it were a podcast. It is a kind of ChatGPT, but one in which everything it does is based on the sources you have added by hand. You can use NotebookLM through its official website or its mobile applications: the website is notebooklm.google.com, and there are apps on Google Play for Android and the App Store for the iPhone. You can use it for free, but the paid version gives you more audio summaries. The first thing you should do with this tool is create a notebook or workspace. Inside, in the left column you can add sources, which are the files the AI will use to obtain information. They can be text documents, slides, PDFs, YouTube videos or links to web pages or online articles. Once you have added the sources, go to the Studio section, where the tool will create an audio summary of the content of all the files. With this you will have your personal podcast for learning about anything you want. How to do it with ChatGPT and other AIs. Another option is to use a generative AI such as ChatGPT, Copilot, Gemini or DeepSeek. These tools do not let you create a downloadable audio file, although you can do that with other third-party tools; what you can do is listen directly in the AI's app.
What you have to do is create a summary script of an article. Then this script can be listened to directly in ChatGPT, Gemini or another platform, or taken to another AI that generates a downloadable audio. Let's explain it step by step. To start, you have to ask ChatGPT to summarize an article or a web page. For that, you must attach it and include the prompt with the instructions to generate your script. The article you want summarized can be included by uploading a text file or PDF, or by pasting the web address directly. The prompt we used is as follows: "I want to create an audio to listen to a summary of the content of this website. I want you to generate the script to then copy and paste it into a program that passes from text to audio. The script has to be narrative, without structure, you simply have to write it to read it from there. (Link)" As you can see, in the prompt we reiterated that the text generated by ChatGPT must be natural and readable as-is, because if you do not mention it, it can tend to produce a script outline with blanks to fill in, when what you want is a text to copy and paste. Here, you decide how and from what you want the summary made; it can be one or several files that you upload or add. Just remember that everything you upload or paste will be saved on the servers of the company that owns the AI, so be careful if you are adding sensitive data that you …

I don't know anything about photography. But since I discovered this free app, I feel my mobile photos play in another league

I am not a photographer. But I enjoy taking photos with my mobile. And, as with many people, my way of doing so is quite simple: when what I see on the screen convinces me, I shoot. I do not usually touch settings such as exposure or format. Sometimes I activate portrait mode, sometimes night mode, I adjust the focus if necessary… and little else. The photos I get with my iPhone 15 Pro Max seem very good to me, more than enough for what I need. But even to an inexperienced eye, it is easy to notice that images taken with a traditional camera have something very different. And deep down I took that for granted: without entering the field of advanced photography, I thought I couldn't get much more out of my mobile. Project Indigo, the application that made me change my mind. The application that changed my mind is called Project Indigo, and it is an experimental Adobe Labs tool developed by two recognized figures in this area: Florian Kainz and Marc Levoy, the latter known for his role in developing the computational photography we saw in the first Google Pixels. Indigo is designed to get the most out of iPhones with a Pro sensor, from the 12 Pro to the 15 Pro Max. What it proposes is a different way of understanding mobile photography, closer to what traditional cameras offer, thanks to a combination of AI algorithms and intelligent processing. How exactly does it work? Indigo does not take a single photo but several, which it later combines automatically to reduce noise, improve dynamic range and offer a cleaner, more realistic image. All this happens after pressing the shutter button. Nothing else. One of its virtues is that it adapts well both to those who just want to point and shoot and to those who prefer to control each aspect of the process.
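The multi-frame idea behind this kind of processing is easy to sketch. This is not Indigo's actual algorithm (Adobe has not published it, and real pipelines also align frames and handle motion), just a minimal plain-Python illustration of why averaging several noisy captures of the same scene reduces random sensor noise:

```python
import random
import statistics

def average_burst(frames: list[list[float]]) -> list[float]:
    """Average a burst of aligned frames pixel by pixel.
    Averaging N frames shrinks random noise by roughly a factor of sqrt(N)."""
    return [statistics.fmean(pixels) for pixels in zip(*frames)]

def noisy_capture(scene: list[float], sigma: float = 10.0) -> list[float]:
    """Simulate one capture: the true scene plus Gaussian sensor noise."""
    return [p + random.gauss(0, sigma) for p in scene]

def rms_error(estimate: list[float], truth: list[float]) -> float:
    """Root-mean-square deviation of an estimate from the true scene."""
    return (sum((e - t) ** 2 for e, t in zip(estimate, truth)) / len(truth)) ** 0.5

random.seed(42)
true_scene = [100.0, 150.0, 200.0, 50.0]  # toy "ground truth" pixel values

single = noisy_capture(true_scene)
burst = average_burst([noisy_capture(true_scene) for _ in range(16)])

print("single-frame RMS error:", round(rms_error(single, true_scene), 2))
print("16-frame   RMS error:", round(rms_error(burst, true_scene), 2))
```

With 16 simulated frames, the averaged result typically lands several times closer to the true pixel values than any single capture, which is the same intuition behind burst photography's cleaner shadows and smoother skies.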
The app allows you to configure manual focus, exposure time, ISO, exposure compensation and white balance, with tools such as a precision magnifying glass or the possibility of calibrating color by pointing at a neutral gray object. Comparison: photo captured with the iPhone camera app (left) and with Indigo (right). In this article I show some comparisons. On the left, images taken with the iPhone camera app. On the right, the same scenes captured with Indigo. At first glance, the difference is remarkable: less artificial overexposure, less noise, warmer and more natural colors, skies that look like real ones. Comparison with crops: photo captured with the iPhone camera (left) and with Indigo (right). Of course, a couple of details should be taken into account. The images take a few seconds to process and, during that time, the phone warms up. There is also an impact on battery consumption if many photos are taken in a row. It's nothing serious, but it's there. Since I started using Indigo, the usual camera app has faded into the background. Not because I have become a photographer, but because I finally feel that the photos I take with my phone get closer to what I wanted to capture from the beginning. Images | Xataka In Xataka | Google Photos was a place where we kept photos. Google now wants it to be a place where our photos are "invented"

Until recently they boasted about how many people they hired. Now the more they lay off, the prouder they feel

What times those were, when technology companies hired as if there were no tomorrow. The talent war reached even countries like Spain, where Amazon fought to attract thousands of professionals. The CEOs of all these companies celebrated the phenomenon and boasted that the pandemic had changed the rules of the game. They were proud to hire; now they are proud of the opposite. Checking. Charlie Scharf, CEO of Wells Fargo, has a dubious record: having managed to shrink his workforce for 20 quarters in a row. This executive has described the reduction of workers at this bank "as our ally." In the last five years the total reduction amounts to 23%, they point out in The Wall Street Journal. And there are more cases. Other companies also celebrate these workforce reductions, they indicate in the WSJ. Loomis, a Swedish financial company, indicated that it has managed to grow despite recent layoffs, and Union Pacific, the railroad company, stressed that despite firing 3% of its workforce, it has achieved a record quarter. Verizon's CEO, Hans Vestberg, indicated in a recent conference with investors that in the area of workforce "we are being very, very good, and it is constantly going down." Or in other words: he is very happy to fire people because, as he highlighted, "we are very efficient when managing our resources." Lip-Bu Tan, CEO of Intel, announced that they would lay off 15% of the workforce, and his argument was that they needed to become a "faster, agile and vibrant" company. If you lay people off, it's because you are a great CEO. That seems to be the current perception in the tech industry: while not long ago layoffs were a symptom of problems and a strategic retreat, they are now seen as a commitment to efficiency and to the promising role of artificial intelligence. Elon Musk showed the way. CEOs did not use to boast about firing people, but things changed when he bought Twitter.
His first and controversial decision was to dismiss approximately 75% of the workforce. The measure set off alarms, and many wondered whether the platform would survive that labor debacle. It has, and Musk, a maximalist of efficiency, has always boasted when talking about that measure. The efficiency argument. It is as if the industry had suddenly realized that it could make cuts by punishing the employees with the lowest productivity or seeking maximum efficiency in the workforce. Revenue per employee is talked about more and more, and there Nvidia is one of the big standouts, clearly benefited by the AI boom. Now the argument defended among bosses is that no one is irreplaceable. Rewards for cutting headcount. Zack Mikewa, of the firm Sloane & Co., explained in the WSJ how "being honest about costs and the workforce is not only allowed, but is rewarded" by investors. The argument for layoffs in the past was the search for pure economic profitability, but now the excuse is the search for change and renewal, an adaptation to the complex future ahead of us. Hiring slowdown. There are not only massive layoffs: there are companies that have also temporarily frozen hiring or slowed it for long periods. This is the case of Bank of America: its CEO, Brian Moynihan, explained to investors that during his tenure the workforce reduction has been remarkable. Since he was named CEO in 2010, the workforce has gone from about 300,000 people to about 212,000, and as he stressed, "we have to continue working to reduce (that figure)." In Xataka | Microsoft has discovered that getting rich and laying off thousands of employees are compatible. The problem is that it is not alone

There are people who feel that the best AIs become dumber and lazier over time. It is more than a feeling

When a new artificial intelligence (AI) model hits the market, social networks and specialized communities rave about its new capabilities, and at the same time a cycle begins in which users who know the models start to feel disappointed. They begin to see how what until yesterday the model achieved without problems via chatbot or API today ends in a half-hearted attempt. There is a part that is just perception, and a part that is real. "Super broken" models. When it launched, Gemini 2.5 Pro reaped huge praise on social networks. The model was very fast, among the cheapest, had a huge context window and was a beast at programming. However, for a few weeks comments have been emerging in communities such as Reddit that describe an "unusable" model. A model which, as described, worked incredibly well between March and June, but which by the end of July was producing "absolute nonsense". One user showed a conversation with Gemini, summarized by the assistant itself, in which it kept acknowledging errors. Other users also show examples of annoying behaviors such as not finishing answers. These are only recent examples from Google's AI, but even models as praised as Claude have received similar criticism at different times, even recently with Claude Code. Suspicion. Many of the users who have criticized the different models speak of trimmed-down models: "My assumption is that they reduced the size of the model," said a Claude 3.5 user on Hacker News. The suspicion is that, over time, and at moments of peak demand, companies begin to use distilled versions of AI models that are not as intelligent, because they have fewer resources dedicated to responding to prompts. Developer Ian Nuttal also observed Claude Code degradation, and claimed he would pay to have a good version that would never be reduced or degraded at peak hours. Alex Finn, also a developer, expressed equal frustration: "This happened to me with all the AI programming tools that I have used." It's not just a feeling.
In 2023, many users felt that GPT-4, OpenAI's most advanced model at the time, was getting dumber. The company claimed that, contrary to what the community denounced, they made each new version "smarter than the previous one." However, an academic paper ended the speculation: experts from Berkeley and Stanford verified a spectacular drop in GPT-4's accuracy between its March and June 2023 variants. In programming, for example, "the percentage of generated responses that are directly executable dropped from 52.0% in March to 10.0% in June". Other statistical studies at the end of 2023 also showed a significant loss of quality between the December and May versions of the model. OpenAI and Anthropic confirmed problems. In December 2023, OpenAI acknowledged that they had received feedback about the assistant becoming lazier. They claimed that they had not updated the model since a month earlier and that it was not intentional, recognizing the problem and explaining that "the behavior of the model can be unpredictable." Some users even devised (and succeeded with, according to their experience) methods to encourage the model to do better, like the surprising promise of a tip, or explaining to the chatbot that they had no fingers to write the code. More recently, Anthropic acknowledged to TechCrunch problems in Claude Code, such as slower response times, amid complaints from users about usage limits that had not been announced. Users who previously performed tasks normally now could not make progress. In Xataka | I have tried Dia, the browser that replaces Arc and bets everything on AI. It hasn't turned out as expected
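The "directly executable" metric quoted from the Berkeley/Stanford paper is easy to approximate. The sketch below only checks that each generated answer compiles as Python; the paper also executed the code, so treat this as a simplified stand-in, with made-up sample answers.

```python
def executable_rate(snippets):
    # Fraction (as a percentage) of generated answers that compile as-is.
    # Answers wrapped in chatty prose ("Sure! Here is the code:") fail,
    # which is one common way models get "lazier" in practice.
    ok = 0
    for code in snippets:
        try:
            compile(code, "<llm-answer>", "exec")
            ok += 1
        except SyntaxError:
            pass
    return 100.0 * ok / len(snippets)

samples = [
    "def add(a, b):\n    return a + b",   # clean, runnable answer
    "Sure! Here is the code:\ndef f(:",   # chatty, broken answer
]
rate = executable_rate(samples)            # 50.0 for these two samples
```

Tracking a number like this over months, against a fixed benchmark, is what lets a degradation claim graduate from "a feeling" to data.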

If you also feel that you do not have as much time to read as you would like, there is a solution: "speed reading"

I confess that Bill Gates makes me somewhat healthily envious. Not just for having a bank account with as many digits as a phone number, but for his ability to read more than 50 books a year. I have tried and it has been impossible for me. But I am not throwing in the towel. Scientific evidence reveals that there is a balance point between reading speed and the ability to understand and retain the content being read. However, with adequate practice, that balance can be improved, accelerating reading speed without compromising reading comprehension. Research like that of the University of Guayaquil concludes that reading speed can increase significantly with practice and the use of appropriate techniques. An average reader reads between 200 and 400 words per minute, while through speed-reading training that rate can reach 1,000 or even 1,700 words per minute. Strike a balance. Speed reading is especially useful for processing long texts superficially, which makes it handy for extracting general ideas or specific information. It is not recommended in contexts that require deep understanding, detailed analysis or memorization of the content, but rather when looking for specific information or in texts covering subjects of which one already has prior knowledge. The main culprits of slow reading speed are: Subvocalization: the habit of mentally pronouncing the words being read. It limits reading speed because the visual image of the word that the brain interprets is vocalized as if you were reading aloud, subjecting you to the need to vocalize every word instead of limiting yourself to comprehension. Word-by-word reading: the global understanding of the text decreases. Regression: rereading passages several times, or having to hunt for the next line of text, which breaks the reading flow. Low concentration: it affects both speed and retention.
Speed Reading Techniques. Science reveals that this speed increase is not a magical process, but the result of consciously training eye movements, expanding vocabulary and using strategies to improve global understanding, not just the number of words read per minute. 1. Fragmentation. This technique consists of avoiding word-by-word reading and assimilating words in groups. For example, to start training, you can group the words of phrases two by two and increase the number progressively with practice. At first it may feel a bit strange, but the brain recognizes words by their morphology, which explains, for example, that you can sitll raed tihs srcambled txet. Reading it word by word would cost you more than reading it in blocks, because each word contextualizes its neighbors. This principle makes your gaze jump between groups of words, not word by word, during reading. The usual thing is to start reading like this: In-a-place-of-La-Mancha-of-whose-name-I-do-not-want-to-remember. But with a little practice, we soon start reading like this: In a place-of La Mancha-of whose name-I do not want-to remember. With proper training, the number of "pauses" is reduced: In a place of La Mancha-of whose name I do not want to remember. 2. Use a visual guide. Breaking the reading flow is one of the most frequent causes of slow reading. This usually happens when you have difficulty finding the next line of text. The solution is as simple as using your finger to mark the start of the next line, or an equivalent visual guide. On screens you can use the mouse pointer as a guide. If you prefer to read on phones or tablets, the edge of the screen or any other reference can help your eyes not hesitate when looking for the next line, so the reading inertia is maintained. 3. Be faster than your voice. If when you read you hear the words in your head, you are not reading fast enough. As we said before, this phenomenon is known as subvocalization.
To avoid this, you will need to train your reading speed by trying to read faster than that inner voice can vocalize. In doing so, you force the brain to prioritize visual processing over the "auditory" processing of your inner voice, thus improving reading speed. 4. Infinity gaze. Like any other muscle, the eye muscles must also be exercised to improve reading speed. One way to improve their performance is to practice the infinity exercise to achieve a more efficient visual sweep of the text. This technique consists of tracing the infinity symbol (∞) over the text block with your gaze, so that the eyes run over it in a more global way, not line by line, improving the capture of words in blocks. In Xataka | The productivity books that have helped me the most: the recommendations of the Xataka editors. Image | Unsplash (Thought Catalog, Eliott Reyna)
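The fragmentation technique described above is essentially a chunking operation, and a tiny sketch makes it concrete (the `chunk_words` helper is invented for illustration):

```python
def chunk_words(text, size):
    # Group words so the eye jumps chunk to chunk instead of word to word,
    # mirroring the two-by-two (then larger) grouping the technique trains.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

line = "In a place of La Mancha of whose name I do not want to remember"
for chunk in chunk_words(line, 3):
    print(chunk)   # each printed group corresponds to one eye "fixation"
```

Rapid serial visual presentation (RSVP) reading apps use exactly this idea, flashing one chunk at a time so the eyes never have to hunt for the next line.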
