
Pentagon Negotiated Secret Contracts with AI Giants: Why Was Anthropic Left Out?

The Pentagon has officially confirmed the conclusion of strategic deals with the biggest players in the artificial intelligence market. Companies such as OpenAI, Google, Microsoft, Amazon, Nvidia, xAI and the startup Reflection have gained the right to provide their advanced models for the needs of the US military within classified networks. While these giants compete for access to American security infrastructure, one significant name is missing: Anthropic.

A fundamental shift is currently taking place in American geopolitics. Artificial intelligence is no longer just a tool for generating text or images, but is becoming a pillar of national security. According to reports from GNN and DW, the Pentagon has decided to unify the technological ecosystem so that it is capable of operating in extremely secure environments.

Who controls the digital battlefield?

The new contracts allow the US Department of Defense to use AI models directly within its secure, isolated networks. This means that data considered top secret will not have to leave the Pentagon's infrastructure in order to be processed.
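As a rough illustration of what running a model "within secure, isolated networks" means in practice: instead of sending data to a public cloud API, a client targets a model server hosted inside the closed network, so sensitive text never leaves the enclave. The endpoint host, model name, and payload shape below are purely hypothetical, sketched in the style of common chat-completion APIs, not any real Pentagon system.

```python
# Hypothetical sketch: addressing a model hosted INSIDE an isolated network.
# The host, model name, and request schema are illustrative only.

LOCAL_ENDPOINT = "http://inference.internal:8000/v1/chat/completions"  # internal host

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Assemble a chat-completion payload for a self-hosted model server.

    Because the request targets an internal host, the input text is
    processed locally and is never transmitted to an external cloud.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("local-llm", "Summarize this report.")
print(payload["model"])
```

The point of the sketch is architectural, not syntactic: the security property comes from where the endpoint lives, not from the model itself.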

The winners include:

  • OpenAI: Their models (likely the GPT-5 series or newer) will be key for data analysis and report generation.
  • Google: Through its cloud services and Gemini models, it will bring to the Pentagon the ability to massively process multimodal data (text, video, satellite imagery).
  • Nvidia: This is the key point. Nvidia provides not only software but also hardware infrastructure (Blackwell-type chips or newer), which is essential for running these models in local, secure centers.
  • xAI: Elon Musk's company is becoming a legitimate player in the field of state security thanks to these deals.
  • Microsoft and Amazon: These giants already have deep relationships with the Pentagon and their role in providing cloud infrastructure (Azure and AWS) is a given within these deals.

Why was Anthropic left out?

The biggest surprise of the announcement is the absence of Anthropic. Although the company was previously considered one of the main candidates and had already closed a deal worth 200 million dollars, the Pentagon decided to end its role. The reason is Anthropic's designation as a supply-chain risk.

According to information from The Verge, Anthropic refused to submit to certain conditions regarding "red lines" – that is, the ethical and safety boundaries that the company itself defined for its Claude models. However, the Pentagon requires absolute flexibility and control over how the models behave in extreme scenarios. The conflict between Anthropic's "safety ethics" and the Pentagon's "security needs" thus led to a breakup.

Technical comparison: How do the models stack up?

To understand the significance of these deals, it is worth looking at how the individual models perform on benchmarks (e.g. MMLU or HumanEval). In the context of 2026, we can observe the following dynamic:

Model / Family       | Main Advantage                | Benchmark (Estimated Performance) | Availability in the Czech Republic
GPT-5 (OpenAI)       | Versatility and logic         | Extremely high (SOTA)             | Yes (via API/ChatGPT)
Gemini 2.5 (Google)  | Multimodality (video/audio)   | High (in multimodal tasks)        | Yes (Google Workspace)
Claude 4 (Anthropic) | Ethics and safety "red lines" | High (in text logic)              | Yes (via Claude.ai)
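For context on what a benchmark score like those above actually measures: MMLU-style text benchmarks are typically graded as simple accuracy over a set of question-answer pairs. The tiny example below uses invented questions and a stub "model" (a dictionary lookup standing in for a real LLM call) purely to show the scoring mechanics.

```python
# Toy illustration of benchmark-style scoring (accuracy over Q&A pairs).
# The questions and the stub "model" are invented for demonstration.

def score(model_answer_fn, dataset) -> float:
    """Fraction of items the model answers exactly correctly."""
    correct = sum(1 for q, gold in dataset if model_answer_fn(q) == gold)
    return correct / len(dataset)

# A stub "model" that just looks answers up; a real run would query an LLM.
stub = {"2+2": "4", "Capital of France": "Paris", "H2O is": "water"}.get

dataset = [("2+2", "4"), ("Capital of France", "Paris"), ("H2O is", "ice")]
print(score(stub, dataset))  # the stub gets 2 of 3 items right
```

Real benchmark harnesses add prompt formatting, answer extraction, and statistical reporting on top, but the headline number is this same ratio.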

Note on prices: For regular users in the Czech Republic, prices are standard: ChatGPT Plus costs approx. 20 USD (approx. 460 CZK), Gemini Advanced is part of Google One for approx. 22 USD (approx. 500 CZK). Enterprise versions for companies are based on individual agreements.

Practical impact: What does this mean for us?

Even though these deals concern exclusively the US military, their impact will be felt worldwide, including the Czech Republic and the European Union.

1. Standardization of security: If OpenAI and Google can meet the Pentagon's extreme security standards, they become de facto global standards for "secure AI". This will push other companies to improve their models in the area of cyber resilience.

2. Geopolitical pressure on the EU: While the USA chooses the path of integrating AI into the military, the European Union focuses on regulation through the EU AI Act. This difference in approach can lead to a technological gap. Companies in the Czech Republic that use these models must be aware that the "military" version and the "civilian market" version may have different limits and capabilities.

3. Availability in Czech: As of 2026, all of the models mentioned (GPT, Gemini, Claude) offer a very high level of Czech localization. For Czech companies this means they can use the same technology for data analysis as the Pentagon does, subject to European personal data protection regulations (GDPR).

In conclusion, it can be said that the Pentagon's decision is not just about technology, but about trust. The winners of these contracts are those who can combine unlimited performance with absolute control over security and the supply chain.

What exactly does "supply chain risk" mean for Anthropic?

In the context of AI, this can mean concerns about how the model is developed, where its data is stored, or whether there is a hidden dependency on external entities that could affect the model's integrity in case of conflict.

Can Czech companies use these models for their own secret operations?

Not under these specific contracts, which are reserved for the US government. However, providers such as Microsoft (Azure) or Google offer "Government Cloud" or dedicated "Enclave" services that allow companies in the EU to create similarly secure environments for their own sensitive data.

Is Claude from Anthropic still a quality model even though the Pentagon didn't choose it?

Yes, Claude remains one of the best models in the world for writing, programming, and logical reasoning. Its "exclusion" is not a question of quality, but a question of political and security alignment with US military standards.
