Anthropic Considers Own AI Chips: Claude Revenue Surpassed $30 Billion and Nvidia Feels the Pressure

Anthropic, creator of the Claude chatbot, is considering developing its own AI chips. The reason is simple: demand for Claude is growing so fast that even a combination of chips from Google, Amazon, and Nvidia is not enough to meet the company's needs. Anthropic's annual revenue has more than tripled in the last four months — from approximately $9 billion at the end of 2025 to over $30 billion in April 2026. This is a signal that Nvidia investors are also reading: the era of dominance by a single chip supplier is slowly coming to an end.

Custom Chips: Anthropic is Currently in the Exploration Phase

According to a Reuters report from April 10, 2026, Anthropic is actively exploring the possibility of designing its own AI chips. However, the project is still in its early stages — the company has not yet established a specialized team or committed to any specific design. The possibility remains that it will ultimately continue to purchase ready-made chips from external suppliers.

Developing an advanced AI chip from scratch is a financially demanding undertaking. According to industry sources, such a project costs approximately $500 million — and that includes not only the hardware itself but also specialized engineering talent and certification of manufacturing processes.

This is not an exception in the industry. Meta and OpenAI are taking similar steps. Google is developing its own TPU chips in collaboration with Broadcom, Amazon is building the Trainium series, and Microsoft has the Maia project. The motivation is the same for all: to reduce dependence on Nvidia and save billions of dollars annually on operating AI systems.

How Claude Works Now — The Chip Mix

Anthropic currently operates its models on a combination of chips from three different suppliers:

  • Google TPU (Tensor Processing Units) — special chips developed by Google in collaboration with Broadcom
  • Amazon Trainium — AWS's own chips, where Anthropic has exceptionally deep integration into the hardware development itself
  • Nvidia GPU — standard accelerators that form the basis of most AI infrastructure worldwide

This diversification strategy is not accidental. Analysts describe Anthropic as one of the few AI companies that has a direct influence on hardware development from two different hyperscalers — Google and Amazon. Thanks to this, it achieves cheaper operation than if it relied exclusively on Nvidia GPUs.

Mega-deal with Google and Broadcom: 3.5 Gigawatts of TPU Power

In early April 2026, Anthropic announced an expanded partnership with Google and chip manufacturer Broadcom. The agreement will provide the company with access to approximately 3.5 gigawatts of TPU computing power starting in 2027 — roughly triple the capacity the company was using at the beginning of 2026.

Specifically, this involves approximately 400,000 TPUv7 (Ironwood) chips purchased directly from Broadcom for an estimated $10 billion, plus another 600,000 chips leased via Google Cloud. The total value of the commitment is around $52 billion, with the contract building on Anthropic's November pledge to invest $50 billion in US computing infrastructure.
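The deal figures above can be sanity-checked with simple arithmetic. A minimal sketch (all inputs are the article's reported numbers; the implied per-chip price is a derived estimate, not a figure Anthropic, Google, or Broadcom has confirmed):

```python
# Back-of-the-envelope check of the reported Broadcom/Google TPU deal.
purchased_chips = 400_000   # TPUv7 (Ironwood) bought directly from Broadcom
purchase_cost = 10e9        # ~$10 billion, per the article
leased_chips = 600_000      # additional chips leased via Google Cloud

# Derived estimate: average price per purchased chip.
price_per_chip = purchase_cost / purchased_chips
total_chips = purchased_chips + leased_chips

print(f"Implied price per purchased TPUv7: ${price_per_chip:,.0f}")
print(f"Total TPUv7 chips in the deal: {total_chips:,}")
```

The implied average of $25,000 per purchased chip is in the typical range for high-end AI accelerators, which suggests the reported totals are internally consistent.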

According to the analytical group Futurum Research, this agreement creates a structural advantage for Anthropic: at a gigawatt scale, the combination of TPU and Trainium chips is cheaper than a purely Nvidia infrastructure by an estimated $1–2 billion per month for comparable performance.

Amazon Trainium: The Secret of Deeper Collaboration

Anthropic's relationship with Amazon is a special chapter. According to SemiAnalysis, Anthropic — along with Google DeepMind — is the only AI lab that participates directly in hardware design. Anthropic engineers collaborate with Amazon's Annapurna Labs team on the development of Trainium chips to best meet the specific needs of training and operating Claude models.

The result? Part of Claude's operations now runs on over 1 million Trainium 2 chips. The total number of deployed Trainium chips across three generations exceeds 1.4 million units. TechCrunch reported in March 2026 that Trainium has also won over OpenAI and even Apple.

What This Means for Nvidia

Nvidia currently holds approximately 80–85% of the AI accelerator market. Fiscal year 2026 (through January 2026) brought the company revenues of $215.9 billion, up 65% year-over-year. Blackwell series chips (B200, GB200) are sold out months in advance.

However, analysts warn that Nvidia's market share could fall to approximately 75% by 2027, even as the overall market keeps growing in absolute terms. Hyperscalers' in-house chips are growing at over 44% annually, while GPU sales are growing at only around 16%. Nvidia will still earn more each year, but a large share of new spending will go elsewhere.
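The two growth rates cited above are enough to reproduce the ~75% projection. A rough sketch, assuming a starting split of 82.5% GPU (the midpoint of the 80-85% range quoted) and treating the two growth rates as constant:

```python
# Projecting Nvidia's share of the AI accelerator market from the article's
# figures: in-house hyperscaler chips growing ~44%/yr, GPU sales ~16%/yr.
# The 82.5% starting share is an assumption (midpoint of the quoted range).
gpu, custom = 0.825, 0.175  # normalized market segments today

for year in (1, 2):
    gpu *= 1.16       # GPU segment grows ~16%/yr
    custom *= 1.44    # in-house chips grow ~44%/yr
    share = gpu / (gpu + custom)
    print(f"Year {year}: Nvidia share ~ {share:.1%}")
```

After two years of compounding, the GPU share lands at roughly 75%, matching the analysts' 2027 estimate even though GPU revenue itself keeps rising throughout.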

For investors, it is crucial to distinguish between two scenarios: Nvidia losing contracts outright (unlikely in the short term), or Nvidia growing more slowly than the overall market (more likely). For now, the second scenario applies, and even that is enough to weigh on the stock's valuation.

Impact on the Czech Republic and Europe

Claude is available in Czech, both via its API for developers and the Claude.ai web interface. Anthropic's plans for its own chips will not directly affect the model's availability for Czech users. Indirectly, however, they could: cheaper operation of an AI model typically translates into lower API prices or more generous free access.

From the EU's perspective, the geopolitical dimension is also relevant: Anthropic has committed to investments in US infrastructure, not European. For European companies, this means continued dependence on the transatlantic cloud — and the EU AI Act does not change this fact.

Why Now?

Anthropic's revenues more than tripled from the end of 2025 to April 2026 — from $9 billion to over $30 billion annually. At such a rapid growth rate, even a small saving on one chip suddenly amounts to billions. Custom hardware is not a luxury — it is a necessary condition for maintaining profitability if the company wants to continue investing in the development of even more powerful models.
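The "small saving becomes billions" point is easy to make concrete. A sketch using the fleet size reported earlier in the article (1.4 million deployed Trainium chips); the per-chip saving is a hypothetical round number chosen purely for illustration:

```python
# Illustration of how a modest per-chip operating saving compounds at fleet
# scale. Fleet size is the article's figure; the per-chip saving is assumed.
fleet_size = 1_400_000        # deployed Trainium chips, per the article
saving_per_chip_month = 100   # hypothetical: $100 saved per chip per month

annual_saving = fleet_size * saving_per_chip_month * 12
print(f"Annual saving: ${annual_saving / 1e9:.2f} billion")
```

Even an assumed $100 per chip per month yields roughly $1.7 billion a year across such a fleet, which is why custom silicon can pay back a ~$500 million design effort quickly at this scale.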

As analysts from Futurum Research aptly summarize: "Whoever controls the silicon controls the AI." Anthropic is clearly taking this seriously.

When could Anthropic have its own chips ready?

Anthropic is only exploring the possibility of developing its own chips and does not yet have a team assembled or a binding design. Developing an advanced AI chip typically takes 3–5 years and costs approximately $500 million. Realistically, Anthropic's first custom chips could be expected no earlier than 2028–2030, if the company decides to pursue the project at all.

Why doesn't Anthropic exclusively use Nvidia chips, given they are the industry standard?

Nvidia chips are powerful but expensive. When operating at a gigawatt scale, a combination of Google TPU and Amazon Trainium is estimated to be $1–2 billion per month cheaper than a purely Nvidia infrastructure. Moreover, Anthropic works closely with Amazon on the development of Trainium chips, which allows it to tailor the hardware directly to the needs of the Claude model.

Is Claude available in Czech, and will that change?

Claude from Anthropic is fully functional in Czech — both via the Claude.ai web interface and via the API for developers. The planned changes in chip strategy will not affect availability in Czech. On the contrary, cheaper operation could mean more affordable prices or an expansion of the free plan in the future.