
DeepMind develops AI co-doctor: Can artificial intelligence fill the gap left by 10 million missing healthcare workers?

The World Health Organization warns that by 2030, the planet will be short of more than 10 million healthcare workers. While the situation is worst in low-income countries, even developed nations are not immune. Google DeepMind has now unveiled the research project AI co-clinician — artificial intelligence designed as a full-fledged member of the clinical team. Can a multimodal agent with eyes, ears, and a voice truly alleviate the global health crisis?

WHO: The Shortage of Healthcare Workers Grows Every Year

According to WHO estimates, the shortfall of healthcare workers will exceed an alarming 10 million by 2030. The hardest hit are low- and middle-income countries, which lack not only doctors but also nurses, midwives, and community health workers. Chronic underinvestment in education, the emigration of specialists abroad, and a mismatch between school curricula and the real needs of healthcare systems create a situation that conventional approaches cannot solve.

Europe faces different but equally serious challenges: an aging population, the rise of chronic disease, and staff burnout after the COVID-19 pandemic. Czech healthcare, too, regularly sees alarming reports of specialist shortages in the regions. Digital tools are therefore not a luxury but a necessity.

From Exam Tests to Telemedicine Practice

DeepMind's journey in medical artificial intelligence spans several years, with each step more ambitious than the last:

  • Med-PaLM (2023) — a model that passed medical exams at the level of medical school graduates. The results were published in the journal Nature.
  • AMIE (2025) — a text-based diagnostic consultant that achieved performance comparable to general practitioners in simulated conversations. In the real world, it was tested, for example, at the Beth Israel Deaconess Medical Center.
  • AI co-clinician (2026) — the latest evolution, which goes beyond the boundaries of text and enters the world of live audio, video, and real-time interaction.

DeepMind calls the concept "triadic care" — three-member care, in which an AI agent helps the patient under the supervision of the attending physician. Medicine has always been a team sport, and the goal is to add another reliable teammate to the field, not to replace the captain.

How the AI Co-Clinician Works Technically

The AI co-clinician builds on the architecture of the Gemini and Project Astra models. It is a multimodal agent — it understands not only written text, but also live audio and video. This means that during a telemedicine call, it can observe the patient's face, monitor breathing movements, listen to coughing, or watch how the patient uses an inhaler.

From a safety perspective, the system uses a dual-agent architecture:

  • Planner — continuously monitors the course of the conversation and checks whether the other agent remains within safe clinical boundaries.
  • Talker — communicates directly with the patient or doctor and carries out instructions.

This division is meant to prevent so-called hallucinations, situations in which the AI invents incorrect information. In addition, every clinical recommendation the system makes should be verified and cited, i.e. backed by a verifiable source.

What the Tests Showed

DeepMind does not want to rely on marketing superlatives, but on hard data. The research team therefore collaborated with academic physicians from Harvard and Stanford on a large-scale simulation study.

Safety According to NOHARM

To evaluate safety, the researchers adapted the NOHARM framework, which tests for "errors of commission" (stating incorrect information) and "errors of omission" (leaving out critical facts). Across 98 realistic primary-care questions, the system made zero critical errors in 97 cases and outperformed two widely used AI tools that doctors commonly consult.
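The two NOHARM error classes can be pictured as a per-case tally. The toy scorer below is an assumption for illustration only; the actual framework relies on expert clinical review, not string matching.

```python
# Toy tally of NOHARM-style error classes. The real framework uses expert
# clinical review; this merely illustrates commission vs. omission.

def score_answer(answer_facts: set[str],
                 required_facts: set[str],
                 incorrect_facts: set[str]) -> dict[str, int]:
    """Count commission errors (incorrect claims asserted) and
    omission errors (critical facts missing from the answer)."""
    return {
        "commission": len(answer_facts & incorrect_facts),
        "omission": len(required_facts - answer_facts),
    }

# Hypothetical case: the answer advises an ER visit but forgets to ask
# about allergies, and asserts nothing incorrect.
result = score_answer(
    answer_facts={"advise ER visit for chest pain"},
    required_facts={"advise ER visit for chest pain",
                    "ask about medication allergies"},
    incorrect_facts={"recommend double dose"},
)
# result: {"commission": 0, "omission": 1}
```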

Medication Knowledge at the Specialist Level

On the OpenFDA RxQA benchmark, which evaluates complex medication reasoning, the AI co-clinician outperformed available frontier models, especially on open-ended questions of the kind doctors actually ask in practice. The original test was designed as multiple choice, on which even general practitioners scored only moderately; DeepMind focused on the harder form, free-text answers.

Telemedicine Simulation

In a study with 20 synthetic clinical scenarios and 10 medical "patient-actors," 120 hypothetical telemedicine encounters took place. The AI was able to guide patients through complex physical examinations in real time, for example correcting inhaler technique or walking a patient through shoulder tests to detect rotator cuff injuries.

The results were mixed but promising. Expert physicians were better overall than the AI, especially at identifying "red flags" (warning signs) and conducting critical physical examinations. On the other hand, the AI co-clinician performed comparably to or better than primary care physicians in 68 of the 140 evaluated areas.

Real-World Deployment and Geographic Scope

Google DeepMind is currently launching phased real-world testing in collaboration with academic and clinical partners in the USA, India, Australia, New Zealand, Singapore, and the United Arab Emirates. Additional countries and healthcare facilities are joining gradually.

Important note: DeepMind explicitly states that these research activities are not yet intended for the diagnosis, treatment, mitigation, or prevention of diseases. It is purely research, not an approved medical product.

What It Means for the Czech Republic and Europe

For Czech readers, several aspects are key:

Language availability: The base Gemini model, on which the AI co-clinician is built, supports Czech. This means that the technological barrier for local deployment is not high. Translation into Czech would be technically feasible as soon as the system reached a commercial phase.

Regulation: The European Parliament has already approved the AI Act, which classifies healthcare AI as "high risk." This means that any tool for diagnosis or treatment recommendations will have to undergo a strict certification process as a medical device. For Czech patients, this is a guarantee of safety, but at the same time, it extends the path from research to practice.

Price and accessibility: The AI co-clinician is currently a research project without a stated price or commercial model. Historically, however, Google offers basic versions of its models for free (for example, through the Gemini app) and advanced features through subscription. In the context of healthcare, it would likely involve licensing for hospitals and clinics.

Practical impact: Even if the system never became a full-fledged "robotic doctor," its ability to perform triage, gather medical history, or check the correctness of medication could relieve overloaded offices in Czech hospitals and private practices.

The Limits of the Possible

DeepMind is extraordinarily cautious when it comes to expectations — and rightly so. Studies show that AI can be an excellent assistant, not a replacement for human judgment. Expert physicians still outperform artificial intelligence in overall clinical assessment, especially in sensitive situations where lives are at stake.

At the same time, it is indisputable that the progress over the last three years is dramatic. From exam tests through text chats to real-time video consultations — medical AI is evolving faster than most forecasts predicted. And at a time when the WHO is counting the missing millions of hands, every helping hand — even a digital one — may be invaluable.

Is the AI co-clinician available to Czech patients or doctors?

No. Currently, it is a research project being tested in a limited number of countries outside Europe. Commercial or clinical deployment in the Czech Republic is not planned in the near future.

How does the AI co-clinician differ from ordinary chatbots like ChatGPT?

While ChatGPT primarily works with text, the AI co-clinician is a multimodal agent capable of analyzing live video and audio. In addition, it uses a special safety architecture with two agents (Planner and Talker) and undergoes stricter medical benchmarks.

What risk does the use of AI in healthcare pose from the perspective of GDPR and the EU?

According to the EU AI Act, healthcare AI falls into the high-risk category. This means mandatory impact assessment, algorithm transparency, and human oversight. In addition, patients' personal health data are subject to a special protection regime under the GDPR.
