It’s called ‘data poisoning’ and it’s poisoning them from within.

AI is everywhere and every time add more users. The logical step is that it would also be the target of malicious attacks. We have already talked about the dangers of ‘prompt injection’, a surprisingly easy attack to execute. He’s not the only one. AI companies are also fighting data poisoning.

Poisoned data. It consists of introducing manipulated data into resources that will later be used for AI training. According to a recent investigationit does not take as many malicious documents to compromise a language model as previously believed. They found that with only 250 “poisoned” documents, models with up to 13 billion parameters were compromised. The result is that the model can be biased or reach erroneous conclusions.

Prompt injection. It is one of the Problems AI Browsers Face like ChatGPT Atlas or Comet. By simply placing an invisible prompt in an email or a website, you can get the AI ​​to deliver private information by not being able to distinguish what is a user instruction and what is a malicious instruction. In the case of AI agents it is especially dangerous since they can execute actions on our behalf.

AI to do evil. According to a Crowdstrike reportAI has become the weapon of choice for cybercriminals, who use it to automate and refine their attacks, especially ransomware. He M.I.T. analyzed more than 2,800 ransomware attacks and found that 80% used AI. The figure is overwhelming.

Collaboration. They count in Financial Times that leading AI companies such as DeepMind, OpenAI, Microsoft and Anthropic are working together to analyze the most common attack methods and collaboratively design defensive strategies. They are turning to ethical hackers and other independent experts to try to breach their systems so they can strengthen them.

Urgency. AI browsers and agents are already here, but we are on time because there has not yet been mass adoption. It is urgent to strengthen the systems, especially to prevent the injection of prompts that can so easily steal our data.

Image | Shayna “Bepple” Take in Unsplash

In Xataka | “The safety of our children is not for sale”: the first law that regulates ‘AI friends’ is here

Leave your vote

Leave a Comment

GIPHY App Key not set. Please check settings

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.