A new study from Johns Hopkins, Georgetown, and Yale researchers reveals that AI-enabled medical devices from publicly traded companies are nearly six times more likely to be recalled than those from private firms. The findings suggest investor pressure may be driving companies to rush AI medical tools to market without adequate testing, potentially putting patients at risk.
What you should know: The research analyzed nearly 1,000 AI-enabled medical devices cleared or approved by the FDA and found a stark disparity in recall rates between public and private companies.
• Publicly traded firms developed 53.2% of the devices studied but were responsible for more than 90% of recall events.
• Among recalled devices from established public companies, 77.7% lacked clinical validation; among those from smaller public firms, the figure rose to 96.9%.
• By contrast, only 40% of recalled devices from private companies lacked testing on real human data.
The big picture: The FDA’s 510(k) clearance pathway allows AI medical devices to reach patients without prospective human testing, creating a concerning gap in safety validation.
• Among AI medical device recalls, 43.4% occurred within the first year of clearance—about twice the rate reported for all 510(k) devices.
• Many AI medical tools are reaching clinical settings without being evaluated in real-world conditions.
What they’re saying: Lead researcher Tinglong Dai, a professor at Johns Hopkins Carey Business School, expressed alarm at the concentration of recalls among public companies.
• “The lopsided nature of these recalls should give every advocate for medical AI pause,” Dai said. “Publicly traded companies, the big fish in this still-small pond, built just over half the devices but were responsible for nearly all the recalled units.”
• “We were stunned to find that nearly half of all AI device recalls happened in the very first year after approval,” he added. “If AI hasn’t been tested on people, then people become the test.”
Why this matters: The association between recalls and public companies suggests that investor-driven pressure for faster product launches may be compromising patient safety in the rapidly growing AI healthcare market.
Proposed solutions: Researchers recommend several reforms to address these risks and improve device safety.
• Require human testing or clinical trials before devices are cleared or approved.
• Incentivize companies to conduct ongoing studies after launch.
• Collect real-world performance data to ensure continued safety and effectiveness.
• Make approvals conditional, expiring unless evidence shows the device works as intended.
Research team: The study was published in JAMA Health Forum and included contributions from Yale’s Joseph S. Ross, Johns Hopkins’ Joshua Sharfstein, and several medical students from both institutions.