
A growing number of Chinese hospitals have embraced DeepSeek’s AI models for clinical decision-making, but leading researchers are sounding the alarm over the risks of premature adoption. In a recent paper published in the Journal of the American Medical Association (JAMA), a group of Chinese medical experts warned that overreliance on DeepSeek’s models could lead to diagnostic errors and treatment biases, and that rushed deployment opens the door to cybersecurity vulnerabilities.
As of March 2025, more than 300 hospitals across China had integrated DeepSeek’s low-cost large language models (LLMs) into their systems. While the models have drawn global praise for their performance and affordability, medical professionals fear they are being deployed too quickly, without adequate oversight or safeguards.
Researchers Caution Against Diagnostic Errors and Overreliance
The paper, co-authored by Wong Tien Yin, founding head of Tsinghua Medicine, offers a rare note of caution amid China’s AI boom. The researchers note that DeepSeek’s models, despite their impressive reasoning abilities, tend to generate “plausible but factually incorrect outputs”, a dangerous flaw in clinical environments.
Professor Wong, a renowned ophthalmologist, warned that doctors may become too dependent on DeepSeek’s output, especially under time pressure. “In time-sensitive clinical settings, verifying AI recommendations manually creates an additional burden, while blind trust in the system could lead to harmful decisions,” he and his team wrote.
They emphasized that human oversight remains vital, but in practice busy clinicians may lack the time or resources to verify AI-generated diagnoses and treatment plans, leaving room for avoidable errors.
Privacy and Cybersecurity Gaps in Rapid AI Rollout
Though many hospitals use on-site deployments of DeepSeek’s AI to avoid sending sensitive data to the cloud, researchers pointed out that this merely shifts responsibility for cybersecurity to local healthcare providers, many of which lack robust digital security infrastructure.
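To make that trade-off concrete: an on-site deployment typically means the hospital runs the model on its own servers and clinical applications query it over the local network, so patient data never leaves the premises, but securing that server becomes the hospital’s responsibility. The sketch below is illustrative only, assuming a locally hosted model exposed through an OpenAI-compatible endpoint (a pattern supported by open-source servers such as vLLM and Ollama); the URL, model tag, and prompt are hypothetical and not drawn from any actual hospital deployment.

```python
# Minimal sketch of querying an on-premises LLM, assuming the hospital
# hosts a DeepSeek model behind an OpenAI-compatible endpoint (as servers
# such as vLLM or Ollama provide). URL and model name are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local server: requests stay inside the hospital network
    api_key="not-needed-for-local",       # placeholder; many local servers ignore the key entirely
)

response = client.chat.completions.create(
    model="deepseek-r1",  # hypothetical local model tag
    messages=[
        {"role": "system", "content": "You are a clinical decision-support assistant."},
        {"role": "user", "content": "Summarize the differential diagnoses for acute chest pain."},
    ],
)
print(response.choices[0].message.content)
```

Note that nothing in a setup like this encrypts traffic, authenticates callers, or audits access by default; each of those safeguards is part of the “digital security infrastructure” the researchers say many local providers lack.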
The warning comes amid broader concerns about AI safety. A separate study from the Chinese University of Hong Kong recently found that many AI agents built on mainstream LLMs are vulnerable to cyberattacks, with DeepSeek-R1 being especially at risk.
The researchers also argued that China’s healthcare context compounds the risk. The country faces significant disparities in access to primary care, while widespread smartphone adoption has given underserved populations direct access to AI-powered health advice, often without any medical supervision.
A Surge in AI Adoption Amid Growing Scrutiny
DeepSeek, hailed as a national success story for its affordable, high-performance AI models, has become a centerpiece of China’s generative AI rollout in healthcare. In April, fintech giant Ant Group launched nearly 100 AI medical agents on Alipay, while Tsinghua-backed startup Tairex began testing a virtual hospital platform staffed by 42 AI doctors across 21 departments.
Despite this momentum, the JAMA paper urges policymakers and health officials to hit pause and invest in more rigorous validation, oversight, and cybersecurity for these tools.
Innovation Must Not Outpace Responsibility
The DeepSeek hospital AI controversy reflects a growing tension between innovation and responsibility in the healthcare sector. As China accelerates its AI-driven medical transformation, experts warn that speed must not come at the cost of safety, accuracy, or patient trust.