FDA and EMA List Their Requirements for AI in Drug Development

Artificial intelligence is increasingly embedded within FDA-regulated drug development activities, from nonclinical modeling to clinical trial execution and post-market surveillance.
As AI systems begin to influence regulatory decision-making, FDA expectations for validation, control, and documentation are converging with established GxP software requirements.
This week, the FDA and EMA jointly released the Guiding Principles of Good AI Practice in Drug Development to clarify how AI must be governed to meet their standards for trust, reliability, and regulatory acceptability.

The document describes a principles-based framework that aligns closely with existing FDA regulatory infrastructure, including 21 CFR Part 11, Quality Management System (QMS) requirements, and the FDA’s Computer Software Assurance (CSA) paradigm. Importantly, the document reinforces that AI systems used in regulated contexts are not exempt from compliance; they are extensions of regulated computerized systems. From an FDA perspective, AI systems that generate, analyze, or support regulatory evidence should be treated as GxP-impacting software, subject to lifecycle controls, risk management, and documented assurance activities.

Consistent with FDA’s quality system principles, AI technologies must have a clearly defined intended use and context of use, which together drive risk classification and the control strategy. Human oversight remains mandatory, ensuring accountability for decisions affecting product quality, patient safety, or data integrity. This aligns with FDA’s expectation that automated systems support, not replace, qualified personnel and that responsibility remains clearly assigned within the QMS. Risk-based approaches to validation and control mirror FDA’s CSA guidance, where assurance activities are proportional to patient risk and system impact, rather than driven by rigid documentation checklists.
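As a rough illustration of how intended use and context of use might drive a proportional, risk-based assurance strategy, the Python sketch below maps a system's regulatory impact and degree of human oversight to an assurance tier. The impact categories, field names, and recommended activities are hypothetical assumptions for illustration; they are not classifications defined in the guidance or in CSA.

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    """Hypothetical impact tiers for an AI system's context of use."""
    LOW = 1     # e.g., internal productivity aid with no GxP record influence
    MEDIUM = 2  # e.g., supports analysis that feeds regulatory evidence
    HIGH = 3    # e.g., directly affects product quality or patient safety

@dataclass
class AISystemProfile:
    intended_use: str
    impact: Impact
    human_in_the_loop: bool  # is a qualified person reviewing the outputs?

def assurance_level(profile: AISystemProfile) -> str:
    """Return a proportional assurance strategy (illustrative only)."""
    if profile.impact is Impact.HIGH and not profile.human_in_the_loop:
        return "full lifecycle validation, independent review, continuous monitoring"
    if profile.impact is Impact.HIGH:
        return "risk-based validation with documented human oversight and periodic re-evaluation"
    if profile.impact is Impact.MEDIUM:
        return "scripted and unscripted testing focused on intended use, plus change control"
    return "vendor assessment and basic fitness-for-use checks"

# Example: a hypothetical model that pre-screens adverse event narratives for triage
profile = AISystemProfile(
    intended_use="triage of adverse event reports for human review",
    impact=Impact.MEDIUM,
    human_in_the_loop=True,
)
print(assurance_level(profile))
```

The point of the sketch is simply that the documented intended use, not the sophistication of the algorithm, determines how much assurance effort is warranted.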

The guidance’s emphasis on data provenance, traceability, and governance directly supports 21 CFR Part 11 compliance. AI systems must ensure data are attributable, legible, contemporaneous, original, and accurate (ALCOA), as well as complete, consistent, enduring, and available (ALCOA+). Training data, model inputs, outputs, and decision logic must be documented and protected through appropriate access controls, audit trails, and system security. Model development and deployment should follow software engineering best practices integrated into the sponsor’s QMS, including change control, configuration management, and deviation handling. Black-box models without explainability or traceability pose inherent challenges to FDA inspection readiness.
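To make the traceability expectation concrete, here is a minimal sketch of an append-only, hash-chained audit trail record for AI system actions. The field names and chaining scheme are illustrative assumptions, not a format prescribed by the guidance or by Part 11; a real implementation would live inside a validated, access-controlled system.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One immutable audit-trail entry for an AI system action (illustrative)."""
    user_id: str     # attributable: who performed or approved the action
    timestamp: str   # contemporaneous: recorded when the action occurred
    action: str      # e.g., "training_data_import", "model_inference"
    record_ref: str  # pointer to the original data or model version
    prev_hash: str   # links to the prior entry, making tampering detectable

    def digest(self) -> str:
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

def append_record(trail: list[AuditRecord], user_id: str,
                  action: str, record_ref: str) -> AuditRecord:
    """Append a new entry chained to the hash of the previous one."""
    prev_hash = trail[-1].digest() if trail else "genesis"
    entry = AuditRecord(
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        action=action,
        record_ref=record_ref,
        prev_hash=prev_hash,
    )
    trail.append(entry)
    return entry

trail: list[AuditRecord] = []
append_record(trail, "jdoe", "training_data_import", "dataset_v3")
append_record(trail, "asmith", "model_inference", "model_v1.2/run_0042")
```

Each record captures who did what, when, and to which data or model version, which is the substance of the attributability and traceability expectations described above.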

The principles strongly align with FDA’s CSA framework by emphasizing assurance of intended performance over exhaustive validation documentation. Performance testing, evaluation metrics, and verification activities should focus on whether the AI system reliably performs its intended function within its regulatory context. Lifecycle management is critical. AI systems must undergo continuous monitoring to detect data drift, performance degradation, or unintended behavior. Periodic re-evaluation, change impact assessments, and corrective and preventive actions (CAPA) should be integrated into the sponsor’s QMS, ensuring sustained compliance throughout the product lifecycle. Clear, plain-language communication regarding system limitations, updates, and reliance boundaries further supports FDA expectations for transparency and appropriate use.
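As one way the continuous-monitoring expectation might be implemented in practice, the sketch below compares a production input feature against its training-time reference distribution using a two-sample Kolmogorov-Smirnov test. The 0.05 threshold, the synthetic data, and the deviation/CAPA-style escalation are illustrative assumptions, not values or procedures taken from the guidance.

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(reference: np.ndarray, production: np.ndarray,
                        alpha: float = 0.05) -> dict:
    """Flag distribution drift between training-time and production data (illustrative)."""
    statistic, p_value = ks_2samp(reference, production)
    drifted = p_value < alpha
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        "drift_detected": bool(drifted),
        # In a QMS, a detected drift would typically open a deviation or CAPA
        # and trigger a change impact assessment before any model update.
        "recommended_action": "open deviation and re-evaluate model" if drifted else "none",
    }

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)   # training-time distribution
production = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted production inputs
print(check_feature_drift(reference, production))
```

The monitoring output would feed the periodic re-evaluation and change impact assessments described above rather than silently triggering a model update.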

Taken together, these principles confirm that AI is subject to the same regulatory discipline as any other GxP computerized system. FDA and EMA acceptance will depend on demonstrable control, traceability, and risk-based assurance, not algorithmic novelty. For sponsors adopting AI in drug development, from discovery to clinical development, alignment with Part 11, QMS, and CSA is foundational to regulatory success.
