Publishing matters more than research

Last March, ICML (the International Conference on Machine Learning), the world's oldest academic conference dedicated to machine learning, rejected 497 scientific papers at once after detecting that 506 reviewers had resorted to artificial intelligence (AI) to write their evaluations. They had violated a rule that they themselves had agreed to respect. The conference is organized by the International Machine Learning Society (IMLS), a non-profit organization, and has been held annually since 1980. Every year, researchers working in AI submit their papers to ICML in late January or early February. Those papers are reviewed by a committee made up of other researchers in the field, with the aim of evaluating them and publishing them if they pass a thorough review that normally lasts several months. Decisions to accept or reject papers are usually communicated to authors during May, and the ICML conference itself is usually held in July. Publishing at ICML, NeurIPS (the Conference and Workshop on Neural Information Processing Systems) or ICLR (the International Conference on Learning Representations) is the equivalent, in other disciplines, of publishing in the journals Nature or Science. But ICML has a serious problem: its authority is being questioned on r/MachineLearning, a Reddit community specialized in machine learning with more than 2.5 million subscribers.

A perversion in which reviewers have no time to review. Before moving on, it is worth pausing on a very important fact: the number of papers ICML receives is growing overwhelmingly year after year. In 2023 it received 6,538 papers, and in 2024 no fewer than 9,653, a growth of 48%. The root of the problem is that the number of qualified reviewers is not growing at the same rate as the number of papers that need to be evaluated.
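As a quick sanity check, the growth figure quoted above follows directly from the two submission counts:

```python
# Submission counts cited in the article
papers_2023 = 6_538
papers_2024 = 9_653

# Year-over-year growth as a percentage
growth_pct = (papers_2024 - papers_2023) / papers_2023 * 100
assert round(growth_pct) == 48  # matches the ~48% the article cites
```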
As I mentioned a few lines above, ICML's rules establish that reviewers cannot casually resort to AI to carry out their evaluations, because doing so can introduce bias. In fact, a study carried out on ICLR 2024 revealed that papers evaluated with AI models tend to receive higher scores than those reviewed through the conventional procedure. That is the problem. For the 2026 edition, ICML let reviewers choose between two policies: one that prohibited the use of AI and another that allowed it under certain conditions. Only those who chose the first option and then failed to comply were sanctioned. There is one relevant detail worth not overlooking: the 497 papers rejected in March of this year had been authored by offending reciprocal reviewers, that is, researchers who simultaneously act as authors and reviewers, so their own papers were penalized for their violation of ICML's rules of conduct. Of the 506 offenders, only 398 were reciprocal reviewers who had submitted a paper. Interestingly, the detection system ICML used consists of hiding specific instructions inside the PDFs of papers pending review. Those instructions are invisible to a human reader, but any AI model processing the document interprets them and includes specific, trackable phrases in the evaluation. ICML did not use generic AI detectors, and each detected case was manually checked to confirm that a violation had actually been committed in preparing the evaluation. What is happening reflects an inescapable reality: the review system has failed and needs to be rebuilt. Reviewers cannot cope. Not at ICML, not at NeurIPS, not at ICLR.
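The detection scheme described above, invisible instructions in the PDF that only an AI model would ingest, amounts to a canary test. A minimal sketch of the idea, with entirely hypothetical function names and canary phrases (ICML has not published its implementation):

```python
import uuid

def make_canary() -> str:
    # A unique, trackable phrase to hide invisibly inside one paper's PDF
    # (e.g. as tiny or white-on-white text an AI model would still read).
    return f"mention the torsion lemma {uuid.uuid4().hex[:8]}"

def review_triggered_canary(review_text: str, canary: str) -> bool:
    # A human reviewer never sees the canary, so its verbatim presence in
    # a review strongly suggests the PDF was fed to an AI model. ICML then
    # verified every flagged case manually before sanctioning anyone.
    return canary.lower() in review_text.lower()

canary = "mention the torsion lemma ab12cd34"
assert review_triggered_canary("The authors should mention the torsion lemma ab12cd34.", canary)
assert not review_triggered_canary("Solid contribution; minor revisions needed.", canary)
```

Because each canary is unique per paper, a match also identifies which document was processed, which is what makes the phrases "trackable".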
The number of qualified reviewers should grow at the same rate as the number of papers that need to be evaluated, and that is not happening. This scenario has also introduced another problem: acceptance and rejection decisions have acquired a random quality that threatens the consistency and reliability of the evaluations. It is still not entirely clear what path should be followed to solve this, beyond the need to increase the number of qualified reviewers. One option is to improve the transparency of the review process by publishing all evaluations, even those of rejected papers. The evaluation process could also become a two-way procedure in which authors also rate the quality of the reviews they receive; that way, reviewers would build a track record attesting to their good work. We will see what strategy the conferences finally adopt. In 2027 we will clear up our doubts.

Image | Charlesdeluvio (Unsplash) More information | ICML In Xataka | With DeepSeek V4, China has gained more than just an AI model: it has unlocked the potential of its domestic chips

Murcia has been paying the first "shadow toll" in Spain for 27 years. This year it will end

It was 1997 when Murcia approved Law 4/1997, of July 24, on the Construction and Operation of Infrastructures of the Region of Murcia. It might seem like just another regulation about the autonomous community's infrastructure, but far from it. Two years later, relying on this text, the Murcian government approved the construction of the Aunor Highway (the RM-15), granting the concession to a company owned by Sacyr and OHL. By October 2001, the toll road was already in operation. But on this toll highway there are no barriers or staff collecting payment. And yet the people of Murcia do pay for it. It is what is known as a "shadow toll" road. And in 2026 it will end.

Goodbye to the first "shadow toll" in Spain. As Sacyr explains on its website, this Murcian highway is considered the first shadow-toll highway in Spain, a formula unprecedented in our country until then. The operation is simple. The concessionaire builds and maintains the road for a stipulated period of time. During the years it is active, control systems record the number of vehicles that use the road, but drivers never stop to pay. At the end of each stipulated period (in this case, each year), the government that owns the highway pays the concessionaire a variable amount depending on the number of cars that have driven on it. In other words, the cost of traveling on the road does not fall only on the driver's pocket: it is all citizens, through their taxes, who pay the concessionaire the amount corresponding to the number of vehicles that use it. In this case, the RM-15 concession ran for 25 years. Next September the concession period will therefore end, and the Government of Murcia will have the chance to extend it or terminate it and, in the latter case, take charge of the road's maintenance and operation itself or contract those services out to a third party.
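The payment mechanism just described reduces to a simple toy model. The per-vehicle rate below is a made-up figure for illustration; the real contract terms are not given in this article:

```python
def annual_shadow_toll(vehicles_counted: int, rate_per_vehicle_eur: float) -> float:
    # The driver pays nothing on the barrier-free road; once a year the
    # regional government pays the concessionaire per vehicle counted.
    return vehicles_counted * rate_per_vehicle_eur

# With a hypothetical rate of 1.20 EUR per vehicle, the 10-13 million EUR
# annual payments cited later would correspond to roughly 8-11 million trips.
assert annual_shadow_toll(10_000_000, 1.20) == 12_000_000.0
```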
This last option is the one that will go ahead, according to local media such as La Verdad. The Government of the Region of Murcia has put out to tender a contract for the maintenance of this road, along with other conservation and operation work on other roads in the Mula sector. The contract is worth 20 million euros, and 20 companies have bid for it. With the end of this shadow toll, an annual payment of between 10 and 13 million euros comes to an end, according to the same outlet. In total, it is estimated that once the contract concludes, between 305 and 312 million euros will have been paid to the concessionaire. In its day, the highway was seen as a relief for residents of the Northwest and Río Mula districts. La Verdad explained that the road improved access to the towns in these areas but, above all, that it was a much safer alternative to the previous national road, which crossed through municipalities and was "the most dangerous road in the Region of Murcia."

Photo | Google Maps In Xataka | If the question is how to get rid of tolls, the European Union has a clear answer: be an electric truck

We are increasingly looking for human answers on Reddit. That is why Google's search engine is now a Reddit in disguise

Google has updated its search platform for the umpteenth time, but this time with an especially significant change. The user experience in its AI search features (both AI Overviews and AI Mode) is trying to become more "human": in these searches, Google will add more context to links, such as excerpts from internet forums and blogs. And if there is one beneficiary (or victim) of this move, it is Reddit.

Google was already a gateway to Reddit. There is a behavior Google has been seeing in its data for years and that for a long time it preferred not to acknowledge publicly: when someone wants a real answer to a real question, they add "Reddit" to the end of the search. Not because Reddit is necessarily a reliable source, but because Reddit brings together real people who have experienced the issue, tried to solve it, and written about it without anyone paying them to do so. Google, with all its infrastructure and all its algorithms, had not managed to replicate that. So instead of keeping trying, it is going to incorporate those answers directly.

What exactly has changed. The search update means that fragments from forums, social networks and other "first-person sources" will appear in AI Overviews. When someone searches for something with no single objective answer, Google's AI will include perspectives and opinions found in all kinds of (supposedly) human sources online. In doing so it will show the name of the creator of that content (or their avatar) and the source the perspective comes from. Google also promises to add more context about the origin of its AI-generated answers, similar to how ChatGPT or Claude include links supporting theirs.

Tired of so much SEO.
The reason is obvious: Google's organic results for practical and subjective questions ("what vacuum cleaner should I buy", "how do I treat my dog's ear", "what is the best neighborhood to live in in Valencia") are dominated by SEO and by techniques optimized to rank on Google. What matters is ranking, not answering the question well. That is precisely where Reddit, like other forums and personal blogs, has something that such content usually lacks: the real experience of someone who was in the same situation. Google sums it up bluntly in its own statement: "For many searches, people are increasingly looking to other people for advice."

The contradiction Google has not resolved. There is a potential problem in this new conception of AI search. AI Overviews were designed to answer questions directly and save the user the work of clicking, reading and researching. Now they will include diverse and even contradictory perspectives from forums and social networks. So, will AI Overviews answer the question, or send us back to the sources to find the answer? If it is the latter, it will not be very different from what the traditional Google search engine already did. There is an interesting tension here between "we give you the answer" and "we give you context so you can find the answer." In a sense, Google's decision complicates searches.

AI models are becoming less prone to failure. Famous cases like the glue-on-pizza recommendation are much less common now, and new models often boast significant reductions in their "hallucination" rates. GPT-5.5 Instant, released this week, "produced 52.5% fewer hallucinations than GPT-5.3 Instant," OpenAI said in its official announcement. The problem is that the remaining hallucinations are increasingly difficult to detect, because these chatbots hide their mistakes very well.
That the system now includes unverified content from networks like Reddit can be problematic: community votes do not always measure how truthful or useful a given thread is.

Using Reddit has its drawbacks. The platform has value precisely because it is not optimized for Google's algorithms: it is chaotic and contradictory. Sometimes there are brilliant responses from real people, but other times there are completely wrong comments. When users add "Reddit" to their search and read the results, they automatically weigh which comments are useful and which are not. That step disappears if Google extracts fragments of those discussions to include in an AI Overview. It eliminates that human filtering step and presents those answers with an authority they perhaps should not have. Google will have much more difficulty than a human in distinguishing the comment of someone who has worked in plumbing for twenty years from that of someone who tinkers as a hobby.

The shadow contract. This is not just an editorial or technological decision. In 2024, Google signed a deal worth 60 million dollars a year with Reddit to access its data and train its models. Google is not incorporating content from this social network as a public service: it is monetizing a commercial contract. Its message about highlighting "original voices" really means that it has paid for privileged access to Reddit content and now intends to exploit that access and make it profitable. That revenue is no doubt interesting for Reddit, but there is a problem: clicks.

The Stack Overflow precedent. There is no need to speculate much about what may happen, because it has already happened. Stack Overflow is the largest technical Q&A community on the internet, but it has lost most of its traffic in two years because AI companies started collecting all those answers to train their models and then serve them to their users directly.
That caused users to stop visiting Stack Overflow and experts to stop answering questions. The quality of new content on the network clearly suffered, and it became obvious that if the AI already gives you the answer without your having to visit Stack Overflow, why visit? The danger for Reddit is exactly the same. Google didn't have many alternatives. ChatGPT, Claude and Perplexity have been capturing … Read more

Its lunar lander has just passed its trial by fire at NASA

The race between SpaceX and Blue Origin to land on the Moon continues, and Jeff Bezos' company has just taken a big step forward. This week, testing of its MK1 lander in NASA's vacuum chamber was successfully completed. This demonstrates that it is ready to take NASA payloads to our satellite this year and, above all, that it is on the right track to also take Artemis astronauts to the lunar surface.

Everything under control. MK1 is an uncrewed cargo lander. NASA has chosen it to take two payloads to the Moon at the end of this year: on the one hand, stereoscopic cameras to study lunar plumes at the south pole; on the other, a retroreflective laser array that will help pinpoint the location of instruments placed in orbit. To verify that the landing system is ready, it has been tested in NASA's vacuum Chamber A, where the conditions it will face during its space trip are emulated. The results have been positive, which makes them a great advance for Blue Origin and a reason for SpaceX to worry.

A chamber to imitate space. NASA's Chamber A is a vacuum chamber 27 meters high, in which temperatures fluctuate between -50ºC and 30ºC. It is used to imitate space conditions and check the performance and stability of systems planned to go there. In the case of the MK1, the tests have shown that it has great structural resistance and withstands thermal stress as expected.

From MK1 to MK2. In general, the results of these tests have been very positive. Even so, there have been lessons that will be applied to the development of MK2, the crewed lander with which Blue Origin expects to perform commercial services for NASA. Logically, the main crewed commercial service Blue Origin wants to be part of is the Artemis program. But there is a competitor with the same goal.

The race against SpaceX.
NASA entrusted the development of the HLS crewed landing system for Artemis to SpaceX and Blue Origin. Both have received funding to develop their respective technologies, and the US space agency says it will keep whichever finishes first, as long as the result is reliable, of course. It seemed that SpaceX clearly had the upper hand in this competition. However, recent failures and delays are allowing Blue Origin to advance and stay in the fight. Will it overtake on the right? For now, the two are level. We'll have to wait and see what happens, but the success of the MK1 tests leaves plenty of room for optimism at Jeff Bezos' space company.

Images | NASA In Xataka | The launch pads are saturated for all space companies. For all but one: SpaceX

The almadraba has a reputation for being an ancient, artisanal and sustainable art. But behind it lies one of the wildest industrializations of the sea

The Phoenicians arrived on the coasts of Andalusia about 3,000 years ago looking for gold, silver and copper. They stayed for everything else. By the 5th century BC, the factories on the coast of the Strait were already shipping amphora after amphora filled with salted tuna across the Mediterranean. That, or so we think, was when the almadraba was invented. But it is only half the story. The other half is what happens to the tunas that, despite falling into the codend, do not die that day. Many of these tunas (the smallest ones) are captured and, still alive, transferred to marine cages, where they remain for up to four months feeding on fish (sardines, mackerel, horse mackerel or chinarros) until they reach the level of fat the market demands. Alongside the three-thousand-year-old trap that enters the codend "with blood and fire" and slaughters the tuna (to deep-freeze it), there is another practice that borders on the world of aquaculture, de-seasonalizes the supply and improves the quality of the product. The second, without a doubt, is the less known. And the images of that fattening system are spectacular. But it is a logical move. After all, the Cádiz traps only catch fish in a short window of time, normally between the end of April and mid-June. By holding back the smallest tuna and feeding them until September, the product can be sold at a much higher price. And that is the only reason to do so, because the feed conversion ratio of bluefin tuna in cages is the highest of any species raised or fattened in captivity: while our tunas need between 20 and 30 kilos of oily fish to gain one kilo (20:1 to 30:1), salmon need only about one kilo and pigs about three. It is not without problems, of course. We already know that filling the sea with fish farms is a huge source of ecological trouble. It is true that farming has had a brutal effect on democratizing fish consumption, but the cost is decimating wild fish populations. However, the case of tuna is different.
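As an aside, the feed-conversion ratios quoted above are easy to turn into a back-of-the-envelope comparison; the tuna figure below takes the midpoint of the 20:1 to 30:1 range the article cites:

```python
# kg of feed needed per kg of weight gained, as cited in the article
FCR = {
    "bluefin tuna (caged)": 25.0,  # midpoint of the 20:1-30:1 range
    "salmon": 1.0,
    "pork": 3.0,
}

def feed_needed_kg(species: str, kg_gained: float) -> float:
    return FCR[species] * kg_gained

# Fattening a tuna by 10 kg takes roughly 25x the feed pork would need
assert feed_needed_kg("salmon", 10) == 10.0
assert feed_needed_kg("pork", 10) == 30.0
assert feed_needed_kg("bluefin tuna (caged)", 10) == 250.0
```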
Its impact on the populations of oily fish that serve as its food is large, of course. But it is still small overall, simply because we have not yet learned to raise bluefin tuna from scratch: you have to fish them in order to fatten them. If the efforts of institutions like the CSIC succeed, the Strait will have a problem measured in thousands of tons of exports.

Image | SLADE | Big Dodzy In Xataka | Spain is going to keep fishing for eels until we have no more eels to catch

Its employees want a piece of the pie

Samsung has been one of the main beneficiaries of the memory-chip shortage triggered by high demand for these components for AI. In fact, in recent weeks the South Korean manufacturer has set market-capitalization records thanks to its strategic position as one of the main memory makers. However, despite the tailwinds pushing its share price, Samsung faces a serious problem that cannot be solved by manufacturing more chips: thousands of its workers have said enough and are threatening to stop the factories for 18 days. A scenario that only adds fuel to the fire of the RAM problem shaking the entire technology world. The workers are serious, and the conflict is really starting to get tense. The labor dispute is not new: worker unrest has been brewing for some time inside the South Korean company, and it has reached the point where, as Reuters reports, even Samsung's senior managers have had to come out publicly to ask for calm.

What are the workers asking for? Samsung's majority union in South Korea, which represents some 90,000 Samsung workers, demands two fundamental things. First, that the company remove the cap on performance bonuses, currently set at 50% of annual salary, as its competitor SK Hynix has done. A mid-level employee at Samsung "might earn 90 million won a year and receive 45 million more in bonuses, but at Hynix he would receive a bonus of 250 or 300 million won," Park Jun-young, a former employee in Samsung's semiconductor division who now writes about the industry, told the Financial Times. Second, they ask that 15% of the semiconductor division's operating profit go directly to the workers. That percentage would be equivalent to about 45 trillion won (30 billion dollars) distributed as extra bonuses for the staff.
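The effect of the bonus cap at the center of the dispute can be illustrated with the figures the Financial Times source gives (amounts in won; the uncapped "target" bonus is an assumption for illustration):

```python
def capped_bonus(annual_salary: int, target_bonus: int, cap_ratio: float = 0.5) -> int:
    # Samsung's current policy caps performance bonuses at 50% of annual salary
    return min(target_bonus, int(cap_ratio * annual_salary))

# Mid-level case cited in the article: on a 90M won salary the bonus tops out
# at 45M won, even if an SK Hynix-style payout would have been 250-300M won.
assert capped_bonus(90_000_000, 250_000_000) == 45_000_000
assert capped_bonus(90_000_000, 30_000_000) == 30_000_000  # below the cap, paid in full
```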
As the local outlet The Chosun Daily has highlighted, this figure would mean distributing among workers a bonus four times larger than the dividend Samsung paid its shareholders in 2025 (11 trillion won). The company has counteroffered with up to 13% of the division's profit. The union also demands a 7% salary increase, compared with the 6.2% Samsung initially proposed. 93.1% of members who took part in the union vote in early April supported going on strike, reflecting the accumulated level of discontent. The company argues that eliminating the bonus cap could harm employees in less profitable divisions, but the union does not accept that argument and maintains its position.

If the machines stop, your pocket will notice. As Reuters reported, some 40,000 affiliated workers gathered at the Pyeongtaek industrial complex, south of Seoul, to put pressure on the company's management. According to the union organization behind the rally, during that protest alone chip manufacturing fell 58% during the following night shift, and memory-chip output dropped 18%. Samsung declined to comment on the impact. In the current stressed supply chain, even a one-off stoppage can disrupt delivery times on a global scale, something especially sensitive for Samsung as it competes directly with SK Hynix for HBM memory orders for artificial intelligence projects. The chairman of the board of directors, Shin Je-yoon, broke his silence on May 5 with a message posted on the company's internal bulletin board.
According to the Korea Herald, the executive acknowledged that the situation had generated concern among shareholders, clients and public opinion, and warned that an escalation could leave workers and management "without options." The vice president and the executive president also issued a joint statement in which they agreed to negotiate with an "open attitude." According to economic analysts, a strike could generate more than 10 trillion won (about 6.8 billion dollars) in operating losses, not counting reputational damage. The union has set May 21 as the start date of the strike, which would run until June 7 if Samsung does not agree to its conditions. Those are 18 days that could directly affect the global supply of DRAM and NAND flash memory.

In Xataka | The RAM crisis is so big that even companies that had nothing to do with it are considering manufacturing memory. Like Tesla Image | Wikimedia Commons (Choi Kwang-mo), Intel, Unsplash (Liam Briese)

Firefox found and fixed more security flaws in one month than in the previous 15 months

A year ago, Mozilla fixed 31 security flaws in its Firefox browser. In April 2026 it corrected 423. The growth is spectacular and has a single party responsible: Claude Mythos Preview, the AI model Anthropic had decided not to release publicly, considering it too capable. A recent analysis by Mozilla experts has confirmed that Mythos was not just hype.

AI sees everything. Integrating Mythos into the analysis of Firefox vulnerabilities has caused a kind of technical "cleaning" explosion. It is not that Firefox's code is worse now, but that the eyes analyzing it are much sharper and seem to see everything. Mozilla's graph is compelling: with the help of Claude Mythos, the Firefox team found more security flaws in April than in the previous 15 months combined.

A nose for bugs. The model is not only faster at detecting these flaws; it has a certain "nose" that surpasses anything seen so far in commercial tools. The AI tool was able to identify 271 of the 423 bugs fixed, a figure traditional methods such as fuzzing or manual inspection cannot match. Mythos has shown it can evaluate its own work and filter out the noise, reasoning recursively and ruling out hallucinations.

Archaeological bugs. Among the most surprising discoveries in this process is a bug in the XSLT engine (bug 2025977) that had been present in the browser for a whopping 20 years. Mythos also unearthed a 15-year-old problem with an HTML element that could only be exploited through a complex combination of edge cases. The AI does not just find "typical" bugs: it combines all kinds of actions to find bugs that would be almost impossible to detect by traditional means.

Human patches. Mozilla has, however, been clear about something important: it still does not use AI to write the final code deployed in the version of the browser users run.
They do ask Mythos to suggest how to patch each problem, but the engineers have found that those proposals are often conceptual sketches not ready for production environments. For each of the 423 patches, at least one human engineer wrote the patch and another reviewed it. The AI is the elite detector, but it is still no substitute for a senior developer here.

A hopeful future (for Amodei). At a recent event, Anthropic CEO Dario Amodei was optimistic, stressing that these new tools ultimately benefit cybersecurity defenders. "If we handle this right, we could be in a better position than we were, because we've fixed all these mistakes. There's only a finite number of mistakes to find, so I think there's a better world in sight." At Mozilla they are not so sure. Brian Grinstead, a distinguished engineer at Mozilla, takes a more pragmatic and cautious view. He agrees that having these options available is slightly more advantageous for defenders, but warns that attackers are very likely already using similar techniques with their own models. The race will not be so much about who finds the bug, but about who fixes it first.

AI as part of the process. Mozilla's immediate plan is not only to analyze already-published code but to integrate this analysis into the software development process in real time. In other words: every time a new line of code is committed, analyze how it might introduce vulnerabilities. Firefox 150 is intended to be the most secure version of the browser to date, all thanks to that collaboration between human engineers and Anthropic's computing power.

The end of bounty hunters? The rise of Mythos as a great vulnerability detector could endanger one of the most specialized professions around: bug bounty hunting.
The famous bug bounty programs that encouraged human experts to find new bugs and rewarded them with juicy financial prizes may no longer make sense in the face of tools like Claude Mythos.

In Xataka | For decades, Linux has earned a reputation as a "shielded" operating system. Until now

The bracelet that measures your body without distracting you

Google has presented the Fitbit Air, a $99.99 fitness tracker with no screen, no notifications and no watch face. It collects data 24 hours a day, dumps it into an app, and disappears from your wrist. That is where the product announcement ends. The interesting thing is that the Fitbit Air is the fourth, perhaps the fifth, entry on a list that keeps getting longer.

What has happened. Whoop created the category ten years ago. Polar launched its Loop in September for $199. Amazfit released the Helio Strap for $99.99. Garmin has a bracelet called Cirqa in development, according to leaks. And now Google is joining in with a product that costs a hundred euros and shows absolutely nothing. Everyone's approach is practically identical: a plastic capsule with optical sensors set into a textile strap. No screen, no buttons, no notifications. A mobile app where, if you feel like it, you can look at the data.

Why it matters. We have spent eleven years convincing ourselves that more screen equals a better device. The Apple Watch won as a format thanks to its applications, its notifications, its ability to reply to messages: that is, its approach as a wrist-worn phone with a certain focus on health, but also on quick tasks and alerts. And it turns out that we now have products whose success is measured the other way around: by how little you remember you are wearing them. Whoop got more than 2 million subscribers paying between $199 and $359 a year for a bracelet that doesn't even tell the time. That Google is entering this format while already having its own Pixel Watch says a lot about the size of the audience for whom the smartwatch does not work.

The context. The easy narrative is that people are tired of the screen on their wrist. But the reality is more varied. There are roughly four profiles buying these products for different reasons, and only one fits the idea of being fed up.
The athlete or biohacker who already has a sports watch for training but doesn't want to sleep with it: the screenless bracelet is a second device, light and almost invisible. The person who has never wanted a watch with a screen, because they wear a mechanical one; they never considered an Apple Watch, or had one and abandoned it, and can now measure their body without giving up their jewelry. The smart-ring user, often women, who combines aesthetics with cycle, sleep and temperature tracking; the underlying logic is the same: data without a watch. And the ordinary person who does not play sports and does not want to wear anything at night, or anything that overwhelms or distracts them; a 100-euro bracelet with seven days of battery life opens the door.

Between the lines. The bracelet is an answer to the problem smartwatches have brought us: turning the wrist into another window of interruptions. Its great commercial virtue is precisely that renunciation of the screen. By not competing visually with anything, it stops competing with any other accessory you already wear: a mechanical watch, AirPods Pro 2, a ring, whatever. Zero conflict.

Going deeper. Then comes the question from the reader who is neither athlete nor biohacker: what exactly is this for? Sleep. It is not about knowing you slept six hours and twenty-three minutes, but about detecting trends. Many people believe they sleep seven hours and discover, once they measure, that the real figure is much lower. That pushes them to correct specific things: go to bed earlier, don't eat late, avoid the drink that breaks up deep sleep. Resting heart rate. An isolated value is not useful, but the trend over months is. If your heart beats faster than it did half a year ago, something is going on: stress, worse fitness, a brewing infection. Heart rate variability (HRV). This metric helps explain how well your nervous system responds to effort and rest.
It tells you when to train hard and when to back off.

Accumulated strain. Especially for seeing the pattern. The band doesn't tell you what to do; it gives you context about your body and you decide.

And now what. The Fitbit Air will not be the last. Garmin will presumably release the Cirqa this year. Apple could end up making a move in some similar format if demand for these devices keeps growing. And Whoop will keep defending its subscription model against four rivals that make life difficult for it. For ten years, the success of a wearable was measured by engagement: the more you looked at the screen, the better. If the next wave is about learning to disappear altogether, the smartwatch as we know it has a bigger problem than it seems.

Featured image | Google

In Xataka | After almost a decade with the Apple Watch, I have switched to a Garmin. And I understood what I was missing
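The point made above about resting heart rate, that a single reading means little but a months-long drift means a lot, can be sketched in a few lines. This is an illustration with synthetic numbers, not real Fitbit data or any vendor's actual algorithm; the 3 bpm threshold is an assumed example.

```python
# Illustrative sketch (synthetic data): compare this month's average
# resting heart rate against the average from six months ago, and flag
# a sustained rise. The threshold of 3 bpm is an illustrative assumption.
def monthly_average(readings: list[float]) -> float:
    """Mean of a month's worth of daily resting-HR samples (bpm)."""
    return sum(readings) / len(readings)

# Hypothetical daily resting-HR samples, oldest month vs. latest month.
six_months_ago = [58, 57, 59, 58, 57]
this_month = [63, 64, 62, 65, 63]

drift = monthly_average(this_month) - monthly_average(six_months_ago)
elevated = drift > 3  # a rise of a few bpm sustained over months is worth noticing
```

With these sample values the drift is about 5.6 bpm, which is the kind of trend the article says is informative, whereas either month's numbers on their own would be unremarkable.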

‘Vibe coding’ promised to democratize software. Its first gift: 5,000 apps with sensitive data exposed

An investigation by the firm RedAccess has found more than 5,000 applications created with vibe coding tools that practically lack authentication: anyone who stumbles upon their URL can get in. Of those 5,000, some 2,000 appeared to contain private data upon inspection. The finding covers apps generated with Lovable, Replit, Base44 and Netlify, four of the platforms that have done the most to popularize describing a program in words and letting an LLM write it.

Why it matters. The promise of vibe coding is that anyone, without knowing how to program, can build software. The catch is that this same "anyone" also doesn't know what questions to ask an application before releasing it onto the Internet. The result is a new category of breaches caused not by careless employees or advanced attackers, but by people who threw together an internal tool in an afternoon without running it past anyone on a security team.

In detail. The researchers located these applications with ordinary searches on Google and Bing, combining each platform's domains with generic terms. No hacking involved: it is closer to search-engine sleuthing than to breaking in. What appeared behind those URLs included hospital shift rosters with doctors' data, company strategy presentations, complete logs of chatbot conversations with customers (names and phone numbers included), and freight ledgers from transport companies. In some cases, access even let them gain administrator privileges and lock out other users.

Between the lines. The platforms involved have responded with the predictable argument: it is the user's fault. Replit notes that its apps can be marked as private with one click. Base44 maintains that its access controls are robust and that disabling them is a conscious decision. Lovable points out that its role is to provide tools, not to configure them for anyone. It is a valid argument and, above all, a comfortable one.
It is also the same one Amazon used with the misconfigured S3 buckets that leaked data from Verizon or WWE: the setting was there, but the user didn't find it.

The context. Vibe coding takes an old problem to a new level. Every time a layer of abstraction has democratized a craft (spreadsheets, AI wrappers, web templates), the newcomers have arrived without the baggage of good practices the previous generation had. What changes now is the speed: someone from a non-technical department can create a tool in two minutes and push it to production without it ever passing through IT.

Yes, but. The AI models that generate the code are not neutral agents. They do what is asked of them, no more, no less. If no one tells them "protect this in X way and implement Y," they won't do it. Security by default is still not a learned behavior in most of these tools, and that is a design decision of the platforms, not of the end user.

The consequence is foreseeable. There are going to be many more leaks like the ones RedAccess has caught before the industry internalizes that a "publish" button should not coexist with a privacy setting hidden three menus down.

In Xataka | I have lived the "miracle" of vibe coding: this is how I programmed an Android TV app without having any idea about programming

Featured image | Xataka
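The "security by default" the paragraph above asks for boils down to one habit: deny every request unless it carries valid credentials, rather than leaving endpoints open and hoping someone toggles a privacy setting later. Here is a minimal framework-free sketch of that default-deny pattern; the token value, the request shape and the `list_records` handler are all illustrative assumptions, not code from any of the platforms mentioned.

```python
# Minimal sketch of default-deny authentication: every handler is wrapped
# so that a request without a known token is rejected before the handler
# runs. Token and handler names are hypothetical, for illustration only.
import hmac

VALID_TOKENS = {"s3cr3t-token"}  # in practice, load from a secret store

def require_auth(handler):
    """Wrap a handler so unauthenticated requests get a 401 by default."""
    def guarded(request: dict):
        token = request.get("authorization", "")
        # compare_digest avoids leaking token contents via timing
        if not any(hmac.compare_digest(token, t) for t in VALID_TOKENS):
            return {"status": 401, "body": "unauthorized"}
        return handler(request)
    return guarded

@require_auth
def list_records(request: dict):
    # Without the decorator, this data would be one guessed URL away.
    return {"status": 200, "body": ["record-1", "record-2"]}
```

The design point is that the deny branch is the default path: forgetting to configure anything leaves the data closed, not open, which is the opposite of the behavior RedAccess found.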

How to stop AI from agreeing with you by default, so that Claude, Gemini and ChatGPT hallucinate less

Let's go over how to stop AI from agreeing with you by default, modifying its attitude so that it is less accommodating. By no longer straining to please you, the artificial intelligence will make fewer mistakes and hallucinate less. To do this, we will compose a prompt that you add to the configuration of whichever artificial intelligence you use, and which works for Claude as well as for ChatGPT, Gemini or any other. It is a prompt we will add to the AI's behavior settings so that it always takes it into account.

However, remember that this will not completely eliminate hallucinations, because making things up is relatively normal for AI. But since the response will no longer always be steered toward agreeing with you or pleasing you, you will reduce them somewhat. One more thing to keep in mind: doing this will change the user experience. The AI can get a little "blunt", because it will no longer flatter you. Sometimes it will tell you that an idea is bad or that you are wrong, and that is not a failure; it shows the prompt is working.

A prompt for a less complacent AI

To make your AI less complacent and verify information more, go to the settings of the one you use and find the custom instructions section to change its behavior. There you will have to write this entire prompt, which is quite long:

Always be honest, direct and rigorous. Your goal is not to please me, but to be accurate.

ACCURACY AND VERIFICATION
Before answering, do an internal check: is it a verified fact or an inference? If you don't know, say so. Do not invent data, dates, names or sources. If you're inferring or not 100% sure, use phrases like "It's likely that..." or "My information suggests..." instead of outright statements.

ANTI-COMPLACENCY (Zero Bias)
Don't agree with me by default. If my premise is false or my question is misdirected, correct me before executing the task.
Eliminate unnecessary polite phrases ("Sure!", "I understand", "Excellent question"). Get straight to the point. If my proposal has logical or technical flaws, criticize it constructively but bluntly.

NEUTRALITY AND DEBATE
On topics with multiple points of view, present the main positions in a balanced way, even if my question seems to seek a biased answer.

SELF-CORRECTION
If you spot an error in your own generated text, stop and correct it immediately.

THINK FIRST
For complex queries or questions involving verifiable data, briefly reason out loud before answering. For simple queries, answer directly.

These instructions apply to all types of queries: creative, technical, factual or personal.

How to add the prompt to ChatGPT

In ChatGPT, open the settings on the website or in the app. Once inside, go to the Personalization section. Put the prompt in the Custom instructions option. The writing field looks small, but you can copy and paste the prompt there without problems.

How to add the prompt to Gemini

In Gemini, click the Settings and help button, and in the drop-down menu click Personal context. Once inside the Gemini customization screen, press the Add button under Your instructions for Gemini, and a window will open where you can paste the prompt.

How to add the prompt to Claude

In Claude, go into the settings. Once inside, click the General section and write the prompt in the Instructions for Claude field. Here you can paste the whole thing without problem so that it is always taken into account.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
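If you use these models through an API rather than the apps above, the same idea applies: custom instructions are just a system message prepended before the user's query. The sketch below only builds the message list, the part common to most chat APIs; the shortened prompt text and the `build_messages` helper are illustrative, not part of any vendor's SDK.

```python
# Sketch: applying the anti-complacency instructions per request by
# prepending them as a system message, the role most chat APIs use for
# behavior instructions. Prompt text here is an abridged illustration.
ANTI_COMPLACENCY_PROMPT = (
    "Always be honest, direct and rigorous. Your goal is not to please me, "
    "but to be accurate. Do not invent data, dates, names or sources. "
    "If my premise is false, correct me before executing the task."
)

def build_messages(user_query: str) -> list[dict]:
    """Return a chat-style message list with the instructions first."""
    return [
        {"role": "system", "content": ANTI_COMPLACENCY_PROMPT},
        {"role": "user", "content": user_query},
    ]

messages = build_messages("Is skipping tests to ship faster a good idea?")
```

This list is what you would pass as the `messages` argument of a chat-completion call; putting the instructions in the system role rather than in the user turn is what keeps them applied to every query.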
