Summary and assessment: Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products: Discussion Paper and Request for Feedback (FDA)
The FDA discussion paper begins with a lengthy section covering potential use cases for AI/ML in drug development. There is some overlap here with a 2023 publication by CDER and CBER staffers1, although the discussion paper goes considerably further, covering potential future use cases as well as those FDA has already seen.
It follows with a series of questions on which it seeks stakeholder feedback. These fall under three broad headers:
- Human-led governance, accountability, and transparency
- Quality, reliability, and representativeness of data
- Model development, performance, monitoring, and validation
Since the text poses questions but does not offer an explicit viewpoint, one must infer what the authors may have in mind. That said, many of the themes echo those raised by EMA.
Human-led governance, accountability, and transparency
FDA’s first question is interesting: where is more regulatory clarity needed when it comes to AI in drug development? One is tempted to answer ‘everywhere!’. However, the question may speak to a desire to draw on existing guidances where they exist, something EMA also highlights. Questions in this section relate to definitions of transparency and the extent to which ‘how the AI system operates’ can be understood in detail, leaning on the concept of explainability. FDA also asks about human oversight of functioning AI systems and how this can be made ‘meaningful’, by which it would seem they mean, simply, how do we know the system works as intended? Traceability and auditability are also raised, as aspects of technology validation ordinarily required of technology systems deployed in clinical development. Additionally, FDA asks about pre-specification, presumably regarding AI systems that continue to be trained while in use.
Quality, reliability, and representativeness of data
In this section, FDA asks how we ensure the data used to train AI algorithms are fit-for-purpose. This suggests that an issue uppermost in the agency’s mind is the potential for biased training data to produce bias in the AI system’s performance characteristics. On AI system performance, FDA also asks about practices deployed to enhance reproducibility and replicability, perhaps in part reflecting on known behaviors of generative AI models.
They also ask what AI system developers are doing to ensure the integrity of their training data pipelines, manage issues such as missing data, and ensure the data are accurate, consistent, and complete. As with EMA, data privacy and data security are highlighted as areas where FDA wants to understand how these twin issues are managed.
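To make the data-quality questions more concrete, the snippet below is a minimal, illustrative sketch of the kind of basic fit-for-purpose checks a developer might run on a training dataset: missing-data rates, duplicate records, and the representation of a demographic subgroup. It is not drawn from the FDA paper, and the column names (`age`, `sex`, `outcome`) are hypothetical.

```python
# Illustrative only: simple data-quality checks of the kind an AI developer
# might run on training data (missing values, duplicates, subgroup representation).
# Column names (age, sex, outcome) are hypothetical, not taken from the FDA paper.
import pandas as pd

def basic_quality_report(df: pd.DataFrame, subgroup_col: str = "sex") -> dict:
    return {
        # Share of missing values per column
        "missing_rate": df.isna().mean().to_dict(),
        # Count of fully duplicated records
        "duplicate_rows": int(df.duplicated().sum()),
        # How each subgroup is represented in the training data
        "subgroup_share": df[subgroup_col].value_counts(normalize=True).to_dict(),
    }

if __name__ == "__main__":
    data = pd.DataFrame({
        "age": [34, 51, None, 29, 62, 62],
        "sex": ["F", "M", "F", "F", "M", "M"],
        "outcome": [1, 0, 1, None, 0, 0],
    })
    print(basic_quality_report(data))
```

Checks of this kind do not resolve bias on their own, but they give an auditable starting point for the “fit-for-purpose” and completeness questions FDA raises.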
Model development, performance, monitoring, and validation
This final set of questions seeks input that may help establish a shared view on relevant best practices for AI developers. These relate to system documentation, algorithm development, validation, model evaluation, model accuracy, and model explainability. There is also an interesting reference to open-source AI, with FDA asking when its use might be appropriate.
When can we expect to see regulatory guidance?
With so many questions to be answered, it seems unlikely that a formal regulatory guidance will be published very quickly. Given the complexity to be addressed, however, this is perhaps to be welcomed, even if what sponsors and other participants in clinical development want more than anything else is regulatory clarity.
For more on this topic, read our full blog: Regulatory acceptability of AI: Current perspectives
References
Related Insights
- Blog: AI Milestones: FDA’s ISTAND program accepts AI-based assessment tool for depression (Mar 19, 2024)
- Blog: Leveraging the draft FDA Guidance on PBPK for your drug development program (Feb 24, 2021)
- Blog: Summary and assessment of EMA’s reflection paper on the use of artificial intelligence (AI) in the medicinal product lifecycle (Mar 7, 2024)
- Blog: Regulatory acceptability of AI: Current perspectives (Mar 7, 2024)
- Podcast: RBQM Podcast Series | Episode 3: Staying within the Guardrails: How to Push the Boundaries in a Highly Regulated Industry (Jun 16, 2022)
- Playbook: Are you using real-world evidence? (Feb 1, 2023)
- Article: Q&A Project Optimus: What you need to know (Oct 11, 2022)
- Article: New FDA Guidance Addresses the Need for Data-Generation Strategies Across the Drug Development Lifecycle (May 10, 2022)
- Blog: Maintaining Data Integrity for Quality and Compliance – Essential Despite Pandemic Disruptions (May 16, 2022)
- Article: 8 things you need to know about eCTDs in China (Jul 1, 2022)
- Blog: Preparing for the New Era of Hybrid Regulatory Inspections (Jul 11, 2022)
- Blog: Digital Biomarkers – The Future of Precision Medicine (Jul 21, 2022)