Listen to this article:
When AI stops being just a defense tool
On April 7, 2026, representatives from state administration, security forces, and the private sector gathered in Hanoi for a workshop titled "Security in the Era of Artificial Intelligence." The event was organized by the National Cybersecurity Association (NCA) in cooperation with Check Point Software Technologies. Its goal was to show that AI is not only a helper for security teams but also a vulnerable link in digital infrastructure.
Pham Tien Dung, Vice Governor of the State Bank of Vietnam and Vice President of the NCA, warned in his opening speech that the deep integration of artificial intelligence into operational activities makes systems more interconnected and complex. "Vulnerabilities are no longer limited to traditional 'outer layers' but can appear directly in the core of the system, where data, algorithms, and computational infrastructure intersect," Dung said. Once AI becomes part of critical processes, security risks can spread much faster and on a larger scale.
Three threats that know no borders
Colonel Dr. Nguyen Hong Quan, Deputy Director of the Cybersecurity and High-Tech Crime Prevention Department of the Vietnamese Ministry of Public Security, defined three key strategic challenges at the conference. Although he formulated them for the Vietnamese environment, they apply practically identically to the Czech Republic and the entire European Union.
The first challenge is data sovereignty protection. Artificial intelligence systems are extremely data-hungry — and control over its flow, storage, and usage methods is becoming a key national security issue. In the Czech context, this means, for example, the question of where sensitive data goes when using foreign cloud models.
The second challenge is dependence on foreign platforms and technologies. Vietnam, like the Czech Republic, faces the reality that the global market for large language models is dominated mainly by American and Chinese companies. "This concerns not only technological factors but also issues of security, economy, and digital sovereignty," emphasized Nguyen Hong Quan.
The third and most surprising challenge for jarvis-ai.cz readers is protection of the AI systems themselves, which are becoming a new target for cybercriminals. Models can be attacked by manipulating training data, inserting malicious code, or exploiting vulnerabilities in the operational process. In other words: the hacker no longer attacks just your computer, but directly the brain of the AI that is supposed to help you.
What an AI attack looks like in practice
For an ordinary user, an attack on artificial intelligence may sound abstract. In reality, however, it involves concrete and already documented methods. One of them is data poisoning — that is, poisoning training data. If an attacker manages to insert forged information into the dataset on which the model learns, they can influence its behavior months after deployment.
Another technique is prompt injection, in which the attacker uses an input command to force the model into unexpected activity — for example, leaking sensitive data or generating harmful content. Similarly, jailbreak works, i.e., bypassing the model's security restrictions using specifically formulated queries.
The most dangerous, however, are attacks at the infrastructure level. Large language models run on massive clusters of graphics processing units (GPUs), which are often connected into internal networks. If an attacker gets into this network, they can not only steal data but also modify the model itself or use computational capacity for their own purposes. That is why companies like Check Point propose security architecture already from the data center design phase.
Check Point and AI "factories"
Check Point, which participated in the workshop as a professional partner, presented its global view on AI security. According to Rumy Balasubramanian, the company's president for Asia and the Pacific, Check Point focuses on three pillars: visibility, control, and prevention across the entire artificial intelligence ecosystem.
For organizations that want to operate their own AI infrastructure, Check Point offers the so-called AI Factory Security Blueprint — a reference architecture that manages risks from the GPU level to large language models (LLM). In addition, the company promotes the AI Defense Plane platform, which is a multi-layered security solution defending against new threats, including attacks with real-time data injection, information leaks, and malicious actions of autonomous agents.
"Artificial intelligence is not just a tool, but is becoming a factor that changes the face of cybersecurity and national security," summarized Nguyen Hong Quan. "Proactive adoption, mastery, and ensuring the security of artificial intelligence technologies will be a decisive factor for the sustainable and secure development of every nation in the future."
What this means for the Czech Republic and Europe
The Vietnamese warning is not distant exoticism. The Czech Republic and the entire European Union face the same challenges — although on a different scale. The EU AI Act, which came into force during 2025, places strict requirements on operators of high-risk AI systems precisely in the areas of cybersecurity, transparency, and data governance. Companies must prove that their models are resistant to manipulation and that data does not leak to untrustworthy destinations.
In the domestic market, for example, the Czech National Bank is responding to this by recently announcing the construction of its own AI center for tens of millions of crowns. The goal is precisely to increase analytical capabilities for financial market supervision while maintaining full control over data and infrastructure. Similarly, the National Cyber and Information Security Agency (NÚKIB) repeatedly warns that deploying AI into critical infrastructure must go hand in hand with strengthening security protocols.
For ordinary companies and users, this means one thing: AI is not an all-in-one solution that we can deploy and forget. Artificial intelligence security requires the same attention as network, server, and endpoint security. And given that Czech companies often use foreign models — whether OpenAI, Google, or Anthropic — the question of data sovereignty and training data protection is absolutely fundamental.
From defense to integrated security
The key message of the Hanoi workshop was the transition from a purely defensive approach to integrated security already from the design and development phase of technologies. In other words — AI security should not be a patch that we stick on a finished product, but an integral part of it.
As artificial intelligence penetrates banking, healthcare, transportation, and public administration, cybersecurity is becoming the cornerstone of the entire digital ecosystem. Whether it's the Vietnamese central bank, European legislation, or Czech regulators, the consensus is clear: without secure AI, there can be no talk of sustainable digital development. And not only security experts should keep this in mind, but everyone who introduces artificial intelligence into their company or institution.
What is the difference between a classic hacker attack and an attack on an AI system?
A classic attack usually targets networks, servers, or user devices with the aim of stealing data or disrupting operations. An attack on an AI system goes one level deeper — it tries to manipulate the model itself, for example by poisoning training data, inserting malicious commands (prompt injection), or exploiting vulnerabilities in the infrastructure on which the model runs. The goal may be to change the AI's behavior, leak sensitive information, or exploit computational capacity.
Do Czech companies have to follow any rules when deploying AI?
Yes. The European AI Act, which came into force in 2025, places strict requirements on companies using high-risk AI systems in the areas of security, transparency, and data protection. In the Czech Republic, moreover, the National Cyber and Information Security Agency (NÚKIB) issues methodologies and warnings regarding the secure deployment of AI in critical infrastructure. Companies should therefore evaluate not only functionality but also the security and origin of the models they use.
How can I, as an ordinary user, protect my data when using ChatGPT or other AI?
The basic rule is not to enter sensitive personal, corporate, or customer data into public AI tools. If you work with internal data, use corporate instances with guaranteed processing within the EU or locally deployed models. Always verify what data processing policies the AI tool provider has — and whether your interactions are being used for further model training.