AI agents are one of the great AI trends of this year. There are high expectations for these AI models, which promise to complete a task from beginning to end for us, almost without our intervention. And yet one thing seems clear: for now it is better "not to take your hands off the steering wheel" and watch every step they take, so the AI agent does not end up getting us into trouble.
Autonomy and trust. Tesla's driver assistance system, misleadingly named Full Self-Driving (FSD), asks users to trust it enough to let go and allow the car to take them from a point of origin to a destination without human intervention. AI agents propose a similar idea: completing a task from beginning to end autonomously. But for that, we must trust that they are actually capable of doing so.
Decision making. Agents will require huge amounts of data and access to up-to-date sources of information in order to analyze that data and then make decisions. We have already seen that AI models are especially good at summarizing specific information or drawing conclusions from limited data, which is very useful for that kind of decision making.
Learning from mistakes. Tesla cars receive frequent FSD updates to improve their behavior. These updates are fed by the data the company collects whenever its FSD system is used, which allows it to refine the service. Something similar is expected to happen with AI agents, which will improve, especially at the beginning, as they are updated and "learn from their mistakes" while processing user requests.
AI agents and companies. These solutions will be especially attractive to companies, which can use them to automate processes that previously required total or partial human intervention. And precisely for that reason, this type of integration must be done in a very controlled way, because let's admit it: we cannot trust current AI models 100%.
Tesla knows that FSD is imperfect. This is certainly the case with Tesla's FSD, which since its introduction has been involved in various accidents, some of them fatal. One of the most recent was reported in October 2024: months earlier, low visibility had led a Tesla with FSD activated to run over a pedestrian. Tesla has been criticized on numerous occasions for misleading advertising and for skimping on radars and sensors in order to achieve greater profit margins. AI agents can be equally dangerous if used incorrectly and "without hands on the steering wheel." Users and companies that start using them must keep these risks very much in mind.
Hands on the wheel, please. The conclusion was already clear for Tesla's FSD system, and it applies to agents as well. They have only just begun to appear timidly on the market, but everything indicates that this will be one of the great AI trends of 2025. The problem is that AI models are imperfect and therefore make mistakes, and with agents the impact of those errors will only grow. Just ask Air Canada, which had to refund a passenger who received an erroneous response from the airline's chatbot. Or Chevrolet, whose chatbot was "tricked" by a user who managed to buy one of its cars for a dollar.
Domino effect. The accumulation of errors in sequential tasks is a fundamental problem for current AI models. We could call it a domino effect or compound error: a mistake in an early action distorts all subsequent decisions, producing results that drift further and further from what was expected. Imagine that in applications such as finance, medicine or logistics: the consequences could be terrible.
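To give a rough sense of the math behind that domino effect (a back-of-the-envelope sketch, not a figure from any vendor): if we assume each step of a multi-step agent is correct with some fixed probability and later steps depend on earlier ones, end-to-end reliability falls off quickly with the number of steps.

```python
# Hypothetical illustration of compound error in a multi-step agent:
# if each step is correct with probability p and later steps depend on
# earlier ones, the whole chain is only correct with probability p**n.

def chain_success_probability(p_step: float, n_steps: int) -> float:
    """Probability that every step in a sequential chain is correct,
    assuming independent per-step accuracy p_step."""
    return p_step ** n_steps

for n in (1, 5, 10, 20):
    print(f"{n:>2} steps at 95% per-step accuracy -> "
          f"{chain_success_probability(0.95, n):.0%} end-to-end")
# Roughly: 1 step -> 95%, 5 -> 77%, 10 -> 60%, 20 -> 36%
```

Even a model that is right 95% of the time on each individual subtask gets a twenty-step workflow fully right only about a third of the time under these simplifying assumptions.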
Solution: constant supervision. Several solutions have been proposed to avoid this problem. One of them is establishing checkpoints: at the end of each subtask, the system, and ideally a human user in what is called Human-In-The-Loop (HITL), should verify that everything is going well. It is also possible to minimize risk by using redundant systems, for example having the AI agent run different AI models separately and compare their outputs, or by taking advantage of known bounds: if an intermediate result produced by an AI agent deviates too much from what is expected, that step should be flagged and redone.
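As a minimal sketch of that checkpoint idea (hypothetical names, not tied to any specific agent framework): each subtask result is checked against expected bounds, and anything outside them is escalated to a human before the agent is allowed to continue.

```python
# Minimal sketch of the checkpoint + Human-In-The-Loop pattern described above.
# All names and bounds here are illustrative assumptions, not a real framework.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Checkpoint:
    name: str
    is_plausible: Callable[[float], bool]  # expected-bounds check for this subtask

def run_with_checkpoints(steps, checkpoints, ask_human) -> List[float]:
    """Run sequential subtasks, pausing for human review whenever a result
    falls outside its expected bounds, instead of letting the error propagate."""
    results: List[float] = []
    for step, cp in zip(steps, checkpoints):
        result = step(results)
        if not cp.is_plausible(result):
            # Escalate to a human before the next subtask builds on this value.
            if not ask_human(cp.name, result):
                raise RuntimeError(f"Checkpoint '{cp.name}' rejected; aborting run.")
        results.append(result)
    return results

# Toy usage: two dependent subtasks with sanity bounds on each intermediate value.
steps = [
    lambda prev: 120.0,            # e.g. an estimated invoice total
    lambda prev: prev[-1] * 0.21,  # e.g. tax computed from the previous step
]
checkpoints = [
    Checkpoint("invoice_total", lambda x: 0 < x < 10_000),
    Checkpoint("tax_amount",    lambda x: 0 <= x < 2_500),
]
print(run_with_checkpoints(steps, checkpoints,
                           ask_human=lambda name, value: True))  # auto-approve for demo
```

The point is not the toy numbers but the structure: cheap plausibility checks between subtasks give a human a chance to stop the domino chain early.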
And for now, (very) limited use cases. We are in an early phase, and AI agents are "learning to drive on their own", so to speak. The best way for them to learn is to go step by step, always starting with relatively simple and very limited scenarios. The ideal, then, is to apply them to very specific cases with a limited and well-known range of situations, so that their answers are as precise as possible.
Image | Erik Witsoe