
FDA to Use AI for Faster Drug Approvals, Aims for Full Integration by June 2025
The U.S. Food and Drug Administration (FDA) has announced a major shift in how new medicines will be reviewed and approved, confirming the use of AI in drug approval processes across all its centers. The decision follows a successful pilot program involving generative AI tools, which significantly accelerated scientific review tasks.
According to Jinzhong Liu, deputy director at the FDA’s Center for Drug Evaluation and Research (CDER), the AI system has reduced work that typically took three days to just a few minutes. “This disruptive technology has transformed how we approach scientific reviews,” said Liu.
AI Poised to Transform Scientific Review Work
The FDA’s decision builds on the growing momentum to modernize outdated regulatory systems. Generative AI tools are expected to eliminate repetitive tasks, allowing reviewers to focus on critical decision-making rather than administrative bottlenecks.
FDA Commissioner Marty Makary, in a tweet shortly after the announcement, questioned why drug approvals often take a decade or more. He called AI "a key to necessary reform," echoing broader calls to modernize how the U.S. healthcare system handles drug development.
The agency did not name specific partners in its official statement, but Wired recently reported that the FDA had been in talks with OpenAI and that a confidential project called “cderGPT” is underway. Sources say meetings included OpenAI staff and government representatives connected to Elon Musk’s efficiency initiatives.
Previous AI Research and Ongoing Expansion Plans
The FDA has been exploring AI medical review tools internally for years. In 2023, it reportedly funded a study on large language models (LLMs) and how they might support internal reviewers. Former commissioner Robert Califf noted that while AI has promise, its effectiveness depends on which parts of the review process it supports.
The agency is also considering applying AI beyond final reviews, into broader areas like safety analysis, drug label drafting, and post-market surveillance.
Concerns Over AI Accuracy in Medical Use
Despite the potential, the FDA's AI implementation carries real risks. One former FDA employee pointed out that ChatGPT-like models had generated inaccurate or misleading responses in clinical-testing contexts, underscoring the danger of relying on such models to support or inform consequential medical decisions.
Experts are concerned that AI, if misused or mismanaged, could introduce new biases or errors into a system meant to safeguard public health. "AI is not perfect," one said. "In medicine, those mistakes can cost lives."
Conclusion
The FDA’s AI rollout marks a historic move toward modernizing drug approvals. By targeting faster, more efficient processes, the agency hopes to speed up life-saving treatments without compromising safety.
Yet, the road ahead is not without challenges. Ensuring that AI in drug approval enhances rather than undermines scientific rigor will be the key to sustainable success.