For agentic AI to work effectively, models that act as critical thinkers must be trained on data that is as close to reality as possible. In other words, you must feed the model extensive information about specific goals, plans, actions, and results, and provide plenty of feedback on them. This process may require many cycles: it can take hundreds or thousands of iterations of plans and results before the model has enough data to act as a critical thinker.
Reliability and Predictability
The way people interact with computers today is predictable. When engineers build a software system, they write step-by-step instructions that tell the computer exactly what to do. Agentic AI works differently: instead of step-by-step instructions, you describe the result you want to achieve, and the agent decides on its own how to reach that goal. Because software agents have a certain level of autonomy, their output may contain some randomness.
A similar problem appeared in early ChatGPT and other LLM-based generative AI systems. Over the past two years, however, the consistency of generative AI has improved significantly thanks to fine-tuning, human feedback loops, and continuous model training and refinement. A comparable level of effort will be needed to reduce the randomness of agentic AI systems and make them more predictable and trustworthy, which in turn will secure the stability and reliability of these systems.
Data Privacy and Security
Some companies are hesitant to adopt agentic AI due to privacy and security concerns. These concerns are similar to those raised by generative AI, and in some cases even more serious. For example, when a user interacts with an LLM, information entered into the model may be retained and used for further training, and there is no reliable way to ask the provider to "forget" it later.
Security attacks, such as prompt injection, exploit this characteristic to attempt to trick models into leaking confidential information. Because software agents have a high degree of autonomy and can access a variety of systems, the risk of exposing personal data from more sources increases.
To address this problem, you need to start small. Data should be compartmentalized as much as possible so that it is not exposed beyond the internal domains where it is needed. You should also anonymize data, mask user information, and remove personally identifiable information (PII), such as Social Security numbers or addresses, from prompts before sending them to the model.
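As a rough illustration of the last point, PII can be stripped from a prompt before it leaves the internal domain. The sketch below is a minimal, assumption-laden example using simple regular expressions; the function name `scrub_pii` and the patterns are hypothetical, and a real deployment would rely on a dedicated PII-detection tool with locale-aware rules rather than hand-written regexes.

```python
import re

# Hypothetical patterns for a few common PII types (US-style formats).
# A production system would use a dedicated detection library instead.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(prompt: str) -> str:
    """Replace detected PII with placeholder tokens before the
    prompt is sent to an external model."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(scrub_pii("Contact Jane at jane@example.com, SSN 123-45-6789."))
```

Placeholder tokens (rather than outright deletion) keep the prompt readable for the model while ensuring the sensitive values themselves never leave the internal domain.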
Source: www.itworld.co.kr