Skip to main content

Agentic AI: Why World Powers Warn Against the Too-Rapid Deployment of Autonomous Agents

Ilustrační obrázek pro jarvis-ai.cz
Agentic AI represents the next step in the evolution of artificial intelligence – the transition from models that merely respond to queries to systems that independently plan and execute tasks. While this shift promises a massive increase in productivity, the security community of Five Eyes (the USA, the United Kingdom, Canada, Australia, and New Zealand) has issued a warning: overly rapid and uncontrolled deployment of these autonomous agents could lead to unpredictable and dangerous consequences for cybersecurity and system stability.

The world is at a moment where the boundary between generative AI and fully autonomous intelligence is beginning to blur. While standard models such as regular ChatGPT or Claude operate in a "query–response" mode, the new wave known as Agentic AI is designed to act. These systems can use tools, browse the internet, manage files, and independently decide on next steps within defined goals.

What exactly is Agentic AI and where does its power lie?

To understand the risk, we must understand the principle. According to an analysis by EY, Agentic AI is a bridge between generative AI and machine learning that enables autonomous decision-making and adaptive intelligence. While a classic model will write you a recipe for baking a cake, an agentic AI will search for the recipe itself, order ingredients via an online store, and then confirm the delivery date.

This shift brings enormous potential. Predictions suggest that by 2028 there will be up to 1 billion AI agents in global service and up to 80 % of customer service will be handled entirely autonomously. For companies, this means a drastic reduction in costs (estimated by up to 30 %), but for security experts, it means new, uncharted territory.

Five Eyes Warning: The Risk of Uncontrolled Deployment

The intelligence community of Five Eyes has expressed deep concern over the pace at which these technologies are being introduced into critical infrastructure and corporate processes. The main problem is unpredictability. Agentic systems operate in closed loops (perceive-plan-act), where every error in planning can trigger a chain reaction of incorrect actions.

A similar warning was issued by the US agency CISA. It emphasizes that organizations must exercise extreme caution when adopting these services. The main threats include:

  • Indirect Prompt Injection: An attack in which an agent reads data (e.g., an email or a web page) that contains a hidden command, which the agent then executes (e.g., sending sensitive data to the attacker).
  • Loss of control over decision-making: If an agent has access to banking system APIs or corporate databases, unintentional financial or data losses may occur.
  • Cyber exploitability: Autonomous agents can be exploited for automated, highly sophisticated cyberattacks that can adapt to defenses on their own.

Comparison: Generative AI vs. Agentic AI

For a better idea, we can compare these two concepts in the context of the best-known models:

Feature Generative AI (e.g., GPT-4o, Claude 3.5) Agentic AI (e.g., AutoGen, OpenAI Assistants API)
Main function Generating text, code, data analysis. Planning, using tools, task execution.
Interaction Reactive (waits for a prompt). Proactive (takes initiative within a goal).
Autonomy Low (requires constant human input). High (can work in cycles without a human).

Impact on Czech Companies and the European Market

For Czech entrepreneurs and developers, this topic has two crucial dimensions: regulation and security. Within the European Union, the key framework is the EU AI Act, which classifies AI systems by level of risk. Agentic systems capable of autonomous decision-making in critical areas (e.g., human resources, finance, infrastructure) will likely be subject to the strictest regulation.

What does this mean for you? If your company is considering implementing agentic tools (e.g., using the OpenAI Assistants API, which is available for a fee based on usage/tokens, or open-source frameworks such as AutoGen), you must implement the "Human-in-the-loop" principle. This means that critical steps (e.g., approving a payment, deleting data, changing configuration) must always be validated by a human. In the Czech environment, where the emphasis is placed on privacy protection and GDPR, auditing the decision-making processes of agents will be absolutely essential.

Price Accessibility and Tools

Currently, "agents" are not standalone products with a fixed price, but rather capabilities implemented into existing models.

  • OpenAI Assistants API: Pay-per-token (in USD), price depends on the volume of processing.
  • Microsoft AutoGen: Open-source (free), requires own computing capacity and API keys (e.g., from Azure OpenAI).
  • Claude (Anthropic): Offers strong reasoning capabilities, pricing is also modeled according to API usage.
All these tools are available to developers in the Czech Republic, and while English support is key for end users, thanks to the capabilities of LLMs, agents can be instructed even in Czech, allowing their deployment in local processes as well.

Conclusion

Agentic AI is a technological shift that will bring unprecedented efficiency, but at the same time it brings the highest level of risk in the history of the digital world. The Five Eyes warning is not an attempt to stop progress, but a call for responsible development. For Czech companies, it is a signal not to look only for savings, but to invest in secure architectures and clear control mechanisms.

Can agentic AI cause real harm in the real world?

Yes. If an agent has access to external tools (e.g., email, banking API, smart home or industrial sensor control), a faulty or manipulated command can lead to financial losses, infrastructure damage, or leakage of sensitive data.

How do I know if the tool I am using is "agentic"?

The main sign is the tool's ability to plan steps independently without your daily input. If the tool says, "I'll do it, I'll search it, and then I'll let you know," it is working with elements of agentic AI.

Is agentic AI compliant with the EU AI Act?

It depends on the application. If an agent makes decisions about important aspects of human life (e.g., approving loans or hiring employees), it will be classified as a high-risk system and must meet strict requirements for transparency and human oversight under European regulation.

X

Don't miss out!

Subscribe for the latest news and updates.