FDA Discusses Regulatory Concerns for Use of AI in Drug Development
(Thursday, May 11, 2023)
The last five years have seen extensive use of artificial intelligence (AI) algorithms in drug and medical device development. Until earlier this year, regulatory discussions about AI focused on medical devices, but the FDA has now released two white papers on the use of this technology for drugs and biologics as well. The FDA recognizes that AI can potentially accelerate drug development, but the technology also carries several risks that must be addressed. AI is being developed and adopted much faster than new regulations can be written. Hence, the FDA wants to collect industry feedback to develop a regulatory framework that keeps the safety and efficacy of AI-based medical products in step with the technology's evolution.

The FDA's first white paper, released in March, addressed issues related to the use of AI in drug manufacturing. Its second white paper, released this week, discusses the use of AI in drug development, reviewing recent experiences with AI in drug and biologic development and presenting various scenarios. AI has recently been used for drug target identification and for screening compounds against those targets. It has also been used in in silico nonclinical research and in a variety of clinical research tasks: screening large patient databases to identify potential recruits, selecting and stratifying participants, designing studies, monitoring participants, managing and analyzing data, and assessing clinical endpoints. Practically anything that was until now done by humans could be taught to an AI/ML tool with the intent of expediting the work. Over the last five years, the FDA has developed several internal processes to evaluate, on a case-by-case basis, the applications containing AI/ML tools submitted to it for review. Now, to create a comprehensive and balanced regulatory framework, the FDA needs feedback on three aspects of this issue.
First, the FDA wants industry feedback on human-led governance, accountability, and transparency of AI and machine learning (ML) applications in the drug development process. Based on its experience, the FDA believes that AI/ML requires human-led governance to be trustworthy.

Second, the FDA wants to establish standards for the data used to train AI/ML tools, with the goals of reducing bias and improving integrity, privacy, security, explainability, relevance, replicability, reproducibility, and representativeness. Again, the agency's regulatory experience has been that the AI tools presented to it so far needed more work in these areas, and the FDA would like to establish clear directions for future developers of such tools.

Third, one of the key challenges in regulating AI in drug development is the "black box" problem. AI algorithms can be highly complex, and it can be difficult to understand how they arrive at their conclusions, which makes it hard to assess the reliability and accuracy of AI-based drug development tools. To address this challenge, the FDA is encouraging developers of AI-based medical products to provide transparency into their algorithms and data sources. AI models are influenced by the totality of evidence used for a specific decision and by the consequences of such decisions. The FDA wants to define risk criteria for AI/ML model development, performance, monitoring, and validation.

These white papers add to the ongoing discussion on using AI in the development of medical devices. Overall, the FDA recognizes the potential of AI in drug development and medical devices, but it also recognizes the need for robust regulatory oversight to ensure patient safety and public health. The FDA's approach to regulating AI in drug development is based on the principles of risk-based classification, premarket review, and ongoing monitoring and surveillance.
Dr. Mukesh Kumar
Founder & CEO, FDAMap
Linkedin: Mukesh Kumar, PhD, RAC