
Google and the Pentagon: Secret AI Deal Sparks Ethical Tension in Silicon Valley

Google has signed a secret contract with the Pentagon to supply advanced artificial intelligence. The news, which leaked through a Gizmodo report, is sparking heated debate about ethics, security, and the future of military conflict, both across the tech world and within Google itself.

The world of tech giants is facing a new clash of interests. Google, a company that has long positioned itself as a leader in "AI for the benefit of humanity," is now becoming a key supplier to the US Department of Defense. This closed agreement, whose details remain classified, suggests that the boundary between civilian AI development and military applications is growing increasingly thin.

What does the agreement between Google and the Pentagon mean?

Although the exact parameters of the contract remain secret, experts and leaked information suggest that Google will provide infrastructure and models for real-time data analysis, processing of satellite imagery, and likely assistance in decision-making processes in complex operational situations. In the context of current AI development, this involves the use of so-called multimodal models, which can simultaneously process text, images, video, and audio signals.

For the Pentagon, this means access to technologies at the cutting edge of reasoning and of processing enormous volumes of data thanks to very long context windows. Here Google is drawing on its Gemini family of models. Compared with competitors such as OpenAI's GPT-4o or Anthropic's Claude 3.5 Sonnet, Google has a notable advantage in cloud-infrastructure integration and in handling extremely long contexts (millions of tokens), which is critical for analyzing intelligence data.
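To get a feel for what a million-token context window means in practice, here is a back-of-the-envelope estimate. The conversion ratios (tokens per word, words per page) are rough assumptions for English text, not official figures from any model vendor:

```python
# Rough estimate of how many pages of text fit into a given
# context window. The ratios below are ballpark assumptions.

TOKENS_PER_WORD = 1.5   # typical for English prose
WORDS_PER_PAGE = 500    # a dense, single-spaced page

def pages_in_context(context_tokens: int) -> int:
    """Approximate number of text pages a context window can hold."""
    tokens_per_page = TOKENS_PER_WORD * WORDS_PER_PAGE
    return int(context_tokens // tokens_per_page)

if __name__ == "__main__":
    for window in (128_000, 1_000_000, 2_000_000):
        print(f"{window:>9,} tokens ~ {pages_in_context(window):>5,} pages")
```

By this estimate, a one-million-token window holds on the order of a thousand pages, which is why long context matters so much for document-heavy intelligence work.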

Technical aspects: From cloud to "Edge AI"

One of the key pillars of this cooperation is likely the deployment of AI directly in the field, so-called Edge AI. This means the models will not run only in giant data centers but will also operate on resource-constrained devices directly in operational zones. That minimizes latency and increases system autonomy, which is crucial for a modern military.
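The latency argument can be sketched with simple arithmetic: cloud inference pays a network round trip on every call, while an edge device runs a slower model locally but pays no network cost. All the timings below are invented for illustration, not measurements of any real system:

```python
# Illustrative comparison of cloud vs. edge inference latency.
# Cloud: fast accelerator, but every call crosses the network.
# Edge: slower local chip, but zero network round trip.
# All millisecond values are made-up illustrative numbers.

def total_latency_ms(inference_ms: float,
                     network_rtt_ms: float = 0.0,
                     calls: int = 1) -> float:
    """Total wall-clock time for `calls` sequential inference requests."""
    return calls * (inference_ms + network_rtt_ms)

cloud = total_latency_ms(inference_ms=50, network_rtt_ms=200, calls=10)
edge = total_latency_ms(inference_ms=120, calls=10)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")
```

Even with a model that is more than twice as slow per call, the edge setup wins once the network round trip dominates, and it keeps working when connectivity is degraded or jammed.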

Ethical conflict: The shadow of Project Maven

Google's decision is not without controversy. The company previously faced internal resistance over its work on Project Maven (a system for automatically recognizing objects in drone footage). Employees united, and the pressure on management led Google to let that contract lapse and partially withdraw from military work.

Today's situation is different, however. In an era of global technological competition between the USA and China, arguments about "ethics" collide with arguments about "national security." Google employees are again asking: where does helping humanity end, and where does the development of autonomous weapons begin? This internal rift could hurt Google's ability to attract top talent unwilling to work on technologies for war.

Impact on the market and regulation: How will this affect Europe and the Czech Republic?

For us in Europe and the Czech Republic, this news has two main impacts:

  1. Regulatory pressure: The implementation of the EU AI Act (European Artificial Intelligence Act) creates strict rules for "high-risk" AI systems. If American companies massively invest in military applications, the market could further split into "civilian, regulated EU AI" and "military, unregulated American AI."
  2. Technological sovereignty: The Czech Republic, which is trying to build its own AI ecosystem (e.g., in the field of computer vision or natural language processing), must face the reality that the most powerful models will be increasingly controlled by states and their security agencies.

From an availability standpoint, it is important to note that models like Gemini are available to ordinary users in the Czech Republic through a Gemini Advanced subscription (approx. 20 USD / 480 CZK per month), while the basic version is free. However, the versions that Google provides to the Pentagon are completely different – they are closed, highly secure instances that are not available to the public or even as part of enterprise solutions.

Performance comparison (Benchmarks)

To understand why the Pentagon chose Google in particular, let's look at general trends in benchmarks (e.g., MMLU - Massive Multitask Language Understanding):

  • Gemini 1.5 Pro: Excels in complex analysis of long documents and videos.
  • GPT-4o: Offers excellent speed and interactive capabilities.
  • Claude 3.5 Sonnet: Is often preferred for precise programming and nuance in text.

In the context of military intelligence, the ability to "read" thousands of pages of documents or hours of video footage at once (which Gemini can do thanks to its architecture) is a decisive factor.

Can ordinary users in the Czech Republic use the same AI as the Pentagon?

No. The Pentagon version is highly classified, runs on isolated servers, and is optimized for specific military tasks. The public version of Gemini is intended for assistance in everyday life, work, and creativity.

What is the impact of this agreement on user privacy in the EU?

The agreement itself does not directly affect Czech users' data as long as that data is processed within the EU. However, it strengthens the pressure for a clear separation between data used for civilian purposes and data used for state security.

Will the EU respond to the deepening ties between tech giants and armies?

The EU, through the AI Act and other security directives, is trying to control risks. If it were shown that civilian models are being used to develop weapons in conflict with European values, it could lead to stricter regulations for service providers in the EU.
