AI risks to patients prompt researchers to urge medical caution

AI-driven healthcare prediction models risk creating harmful “self-fulfilling prophecies” when accuracy is prioritized over patient outcomes, according to new research from the Netherlands. The study reveals that even highly accurate AI systems can inadvertently worsen health disparities if they’re trained on data reflecting historical treatment biases, potentially leading to reduced care for already marginalized patients. This warning comes at a critical time as the NHS increasingly adopts AI for diagnostics and as the UK government pursues its “AI superpower” ambitions.

The big picture: Researchers demonstrate that AI outcome prediction models (OPMs) can lead to patient harm even when their predictions remain highly accurate after deployment, because the predictions themselves shape the care patients receive.

  • OPMs use patient health histories and lifestyle information to help clinicians evaluate treatment options, but mathematical modeling shows they can reinforce existing healthcare disparities.
  • The study, published in data-science journal Patterns, calls for a fundamental shift in AI healthcare development priorities, moving away from predictive performance toward improvements in treatment approaches and patient outcomes.

Real-world implications: Professor Ewen Harrison illustrated the potential harms with a practical example of how prediction can become self-fulfilling.

  • An AI system predicting poor recovery prospects for certain patients might lead clinicians to provide less rehabilitation support, ultimately causing “a slower recovery, more pain and reduced mobility.”
  • This feedback loop could particularly impact patients from groups that have historically received inequitable healthcare based on race, gender, or socioeconomic factors.

Why human oversight matters: The research emphasizes that human clinical judgment remains essential when implementing AI-driven healthcare systems.

  • Researchers highlighted the “inherent importance” of applying “human reasoning” to algorithmic predictions to prevent reinforcing biases.
  • Dr. Catherine Menon warned that without proper oversight, these models risk “worsening outcomes for patients who have typically been historically discriminated against in medical settings.”

Current applications: AI is already being used throughout England’s National Health Service for various diagnostic functions.

  • The technology currently assists clinicians in reading X-rays and CT scans and helps accelerate stroke diagnoses.
  • Prime Minister Sir Keir Starmer has positioned AI as a potential solution to NHS waiting lists as part of his broader vision to establish the UK as an “AI superpower.”
