Summary and assessment of EMA’s reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle

As the title suggests, the reflection paper sets out the views of the EMA on AI across the medicinal product lifecycle, from discovery through non-clinical and clinical development, into regulatory review and beyond, through manufacturing and post-authorization. It also covers aspects relating to veterinary medicines and touches on AI embedded in medical devices and in vitro diagnostics when used in clinical trials.

Risk-based assessment (Section 2.1)

Having recognized the potential utility of AI when ‘used correctly’, EMA’s reflection paper moves quickly to a discussion of the risks. Specifically, sponsors are invited to consider whether the AI system has the potential to give rise to risk to the patient and, if so, to seek early regulatory advice. Knowing what to share with EMA when seeking regulatory advice is a key question, and one which will become clearer with accumulating experience. For now, ‘everything and anything’ seems a useful guide (and one has to presume that the Scientific Advice Working Party (SAWP) is going to be kept busy!).

On the other hand (section 2.2.5), EMA also suggests that certain AI tools (generative language models are specifically called out) may pose less risk: for instance, when deployed as part of a business process intended only to enhance efficiency. In these cases, it may be sufficient to ensure that AI applications are used ‘under close human supervision’.

AI in clinical development (section 2.2.3)

Unsurprisingly, EMA makes it clear that all the usual guidance and expectations apply to AI/ML-based approaches deployed in clinical development. This includes GCP and the need to ensure technologies (including AI systems) have been validated and are auditable. But it also means that regulatory guidance relating to data analysis and inference is applicable, including the expectation of pre-specification (i.e., fixed prior to study unblinding). This is particularly significant when we consider that AI models can be designed to continue to learn in-use (as opposed to those with fixed functionality). EMA indicates that such continuously learning models are unlikely to be acceptable in pre-approval settings.

AI used to determine treatment assignment or patient dosing, or to individualize treatment based on patient-related factors or characteristics, is also called out as a higher-risk area, given its direct bearing on patient safety.

Post-authorization applications of AI (section 2.2.7)

Applications of AI in the post-authorization phase are granted greater leeway for the use of more flexible AI/ML modeling approaches. Here, the value of incremental learning to continuously improve the model is recognized in the context of adverse event reporting and signal detection. 

Technical aspects of AI (section 2.4)

As it made clear at the workshop hosted in late 2023 to receive feedback on the reflection paper, EMA is aware that there is still much to understand about AI and is working hard to prepare itself to review the many expected submissions that include AI/ML systems. The general lesson to draw from its current position therefore seems to be: expect to have to share everything and anything related to AI system development, deployment, and in-use monitoring. That will include:

  • The way the AI algorithm was trained, and the data sources used to do so, particularly with regard to managing the potential for biases in the training sources that might create misleading or erroneous outcomes 
  • Details of the model that was selected (and how it was chosen) to underpin the AI solution, paying regard to the a priori preference for transparent, simpler, interpretable models; also, how model performance has been assessed 
  • The extent to which the AI system is ‘explainable’, and how any lack of explainability will be mitigated (this is a particular concern with AI based on LLMs where model complexity, as well as uncertainty about the training data source, may be problematic) 
  • Systems in place to monitor for potential AI performance degradation over time, and action steps deployed to mitigate this risk 
  • Details of SOPs and other processes designed to assure quality when in use 
  • Relevant cyber security and data protection measures implemented to support AI systems 
  • Evaluation of data integrity risks attached to the AI system, including the risk of patient re-identification 
  • Evidence that the AI system is designed to be ethical, trustworthy and fair 
What status does a ‘reflection paper’ have?

EMA reflection papers express the Agency’s viewpoints on topics related to drug development, regulation, or use. They are intended to promote dialogue with interested stakeholders. EMA regulatory guidance, by contrast, sets out in detail how to meet and comply with regulatory requirements and expectations. In this case, it seems reasonable to expect that the reflection paper, once finalized and published, will be followed in time by regulatory guidance.

When can we expect to see regulatory guidance?

EMA has indicated that the current draft reflection paper is expected to be finalized during 2024. The appearance of formal regulatory guidance for AI in drug development is therefore unlikely to be before 2025, and perhaps later. 

For more on this topic, read our full blog: Regulatory acceptability of AI: Current perspectives


