Assessing appropriate use of ECAs in clinical trials
Katherine Cloninger:
Hello and welcome to Parexel Biotech's Innovation Readiness Webinar series. I'm Katherine Cloninger, Senior Director of Marketing Strategy, and delighted to introduce today's theme and speakers. In this second of three webinars, we'll discuss the appropriate use of external control arms (ECAs). We'll open with Carrie Jones, who leads the Orphan Disease Practice at Health Advances. From Parexel, we have David Brown, who heads up Epidemiology and Real-World Science. There's a lot to cover today, so I'll pass the mic to Carrie to get the discussion started.
Carrie Jones:
Thank you, Katherine. Thanks to everyone who's attending today. I'm really excited for the webinar. First, I wanted to introduce a survey that Health Advances and Parexel jointly conducted at the end of 2022 with a cohort of over 30 industry executives to assess their experience and trends in innovative trial designs, including external control arms. We are hosting, as Katherine said, a series of webinars covering a number of these innovative trial designs, and today, we're focused on ECAs. As you can see on the slide, the industry executives that we surveyed hold roles across R&D, clinical development, and clinical operations, as well as in the C-suite.
They are responsible for clinical development across phases of studies, and for reference, the majority of the executives work at companies headquartered in the US and about 75% are in pre-commercial phases. To kick off the discussion, we wanted to review why a sponsor might opt to use an ECA, given that randomized controlled trials are widely recognized as the gold standard for interventional trial design. In cases where a randomized controlled trial may not be feasible due to limited patient accessibility, or may be unethical, as we sometimes see in rare diseases, an ECA may be a preferred path. Alternatively, ECAs may enable more efficient or less costly development timelines, potentially facilitating faster access to new therapies for patients.
Last but certainly not least, ECAs may provide more robust control data if you're able to bring in longitudinal patient data sets, expand the trial's diversity, or increase confidence by pulling in multiple external data sources. Looking at the results from our survey of industry executives, 14 respondents were highly familiar with external control arms, and they reported several benefits from utilizing them. Reading the slide, the greenish-yellow and blue on the left represent respondents whose experience with external control arms surpassed expectations.
Gray in the middle was a neutral experience, and the pink and purple on the right represent cases where an external control arm did not meet expectations. You can see that the benefits reported from utilizing external control arms were notably in improving the data quality of the study, accelerating timelines, and, in some cases, reducing costs. Clearly, we have more work to do to drive towards higher satisfaction with ECAs, and that's partially what we're aiming to do through this webinar as we discuss the draft guidance coming out of the FDA on considerations for the design and conduct of ECAs, as well as share our framework for evaluating when an ECA may be an attractive, fit-for-use option, which my colleague David will discuss in the second half of the webinar.
Over the last 5 to 10 years, we've seen a notable increase in the number of approved rare disease products that have incorporated external control arms into their regulatory submissions. Several key trends have led to this increased use. Along the right side of the slide: increasing access to high-quality, real-world data sources; greater industry focus on alternative trial formats, which was reflected in the survey that we conducted; a growing number of rare disease assets in clinical trials in general, and we often see ECAs in rare disease development programs; and improved industry and regulatory familiarity with ECAs, which we'll talk more about.
And also improvements in analytic methods for addressing confounders and biases, which have historically been a big barrier to the use of real-world evidence, and specifically external control arms, in studies. We wanted to share a couple of examples of FDA approvals where ECAs have been used successfully. The first is Zolgensma, which provides an example of an approved product in which an ECA was incorporated into the label. Zolgensma is a one-time gene therapy approved for the treatment of children with spinal muscular atrophy (SMA).
The trial design of its pivotal study compared a single treatment arm with an external comparator based on patient data from a natural history data set, and using an ECA enabled all of the patients in the study to receive treatment; a placebo arm would have been viewed as unethical. A key aspect of this study design was being able to address baseline differences between the treatment and external control arms. As we'll see in a few slides on the FDA guidance, one of the key themes is aiming for high similarity between your control arm and your treatment arm, and, where differences exist, being able to identify biases and confounders and address them adequately.
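To make that comparability check concrete, here is a minimal sketch of screening baseline differences between a treatment arm and an external control using standardized mean differences, a common diagnostic in ECA work. The column names and the 0.1 threshold are illustrative assumptions, not details from the Zolgensma submission.

```python
# Minimal sketch (illustrative column names; the 0.1 cutoff is a common
# convention, not an FDA requirement): screen baseline differences between a
# treatment arm and an external control via standardized mean differences.
import numpy as np
import pandas as pd

def standardized_mean_difference(treated: pd.Series, control: pd.Series) -> float:
    """Difference in means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
    return float((treated.mean() - control.mean()) / pooled_sd) if pooled_sd > 0 else 0.0

def baseline_balance_table(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """One row per baseline covariate; flag |SMD| > 0.1 as imbalanced."""
    treated = df[df["arm"] == "treatment"]
    control = df[df["arm"] == "external_control"]
    rows = []
    for cov in covariates:
        smd = standardized_mean_difference(treated[cov], control[cov])
        rows.append({"covariate": cov, "smd": round(smd, 3), "imbalanced": abs(smd) > 0.1})
    return pd.DataFrame(rows)
```

Covariates flagged by a screen like this would then feed the confounder discussion and the statistical adjustment plan described here.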
A second example is Alnylam's Amvuttra, which was approved for the polyneuropathy of hereditary transthyretin-mediated (hATTR) amyloidosis and provides an example of an FDA approval where the comparator was the placebo arm from a previous Phase III study in this indication, so a slightly different source of data. This ECA design enabled all patients in the study to receive the RNA-silencing therapy, so no patients received placebo in this study either. Similar to the Zolgensma case, the placebo arm of the previous Phase III study, APOLLO, was determined to be a fit-for-purpose data set given the similarity of the patient populations in the two studies, which had similar recruitment criteria.
So, again, an ECA study design should strive for high similarity between the treatment and external control arms to minimize confounders and biases; you'll see that's a key theme in the FDA guidance. Moving on to the guidance itself: as interest in and use of ECAs has grown across the industry, the FDA has shown its commitment to advancing the use of real-world evidence through initiatives under PDUFA VI and the 21st Century Cures Act, and real-world evidence continues to be a priority in PDUFA VII. The agency has issued a series of guidances on real-world evidence for regulatory activities, most recently releasing a draft guidance on external control arms in February.
While these guidances are intended as a series, and sponsors interested in these topics would be best served by reviewing all of them, we're going to focus today on some of the key takeaways from February's external control arm guidance. Comments on the guidance are due next week, so if you are planning to submit any, you should do so soon. The guidance stresses the importance of careful study design up front: first, ensuring your ECA is as similar to your treatment arm as possible to minimize the number of differences you have to control for, and then planning ahead to make sure you'll be able to assess the effect of your treatment in a robust enough way to be convincing.
The last statement highlighted on this slide echoes throughout the entire guidance. Each study design will have to be sponsor- and asset-specific; there won't be a one-size-fits-all approach. Because of this, a critical takeaway from the guidance is that it's extremely important to take the FDA on the journey with you. You should work with the relevant FDA division during your design phase and outline up front all aspects of the study, including your rationale for using an ECA study design, your sources for the control arm and the rationale for why they're fit for use, and your statistical analysis plan, including analytic methodologies for addressing confounders, biases, missing data, and the like.
If there are changes to the plan during the study itself, these should be reviewed with the relevant FDA division as well. It's important to document everything relating to your ECA and explain the reasoning behind every decision so that the FDA can follow along. The guidance also indicates that the agency will want access to patient-level data from the study and the analyses you performed on that data. The guidance does acknowledge that it's going to be close to impossible to completely eliminate all forms of confounding and bias in an ECA; the goal is to make sure the impact of bias is reduced to a level at which the agency can feel confident in the trial results.
While these are overarching considerations, there are more specific recommendations throughout the guidance, examples of which are highlighted here on the right. I won't go through these verbatim, but they are valuable considerations, or watchouts, that indicate the types of questions FDA reviewers will ask as they review study designs and submissions involving ECAs. Stemming from the guidance, one potential structural framework for thinking through trial design, and for documenting it, is to consider the six steps on the slide. Starting at the bottom, the first step is to identify all your confounding factors and potential sources of bias.
From there, you have to determine whether these factors will impact whether your external control arm can actually be an effective control for your treatment arm; if there are issues, you'll have to explain their extent as well. Once you've done that, you'll have to explain how you're going to reduce the impact of confounders and sources of bias in your trial design and matching protocol. That's the comparability approach in step three. Following that, you can discuss what level of impact you're anticipating from confounders on your analysis of the treatment effect and, hand in hand with that, how you're planning to adapt your statistical methods to mitigate it. Those are steps four and five on analytics.
Finally, once you've outlined all of this, the final step is summarizing the implications for the data itself and why the steps you've taken indicate that the outputs of your trial and analyses are valid. Of course, the next question is which confounding factors might influence how comparable the ECA is to your treatment arm. The guidance provides a helpful table listing 10 potential factors to consider. At a recent webinar on the guidance, the FDA was clear that this list is not set in stone, but you will be well served to acknowledge these factors and show that you've considered the impact of each on your study.
Additionally, the guidance goes into more detail on each of these factors, which is helpful as you go through study design and prepare for discussions with reviewers. Per Parexel's recent research in collaboration with RhetorIQ, the FDA has historically found issues with external control arms in rare disease studies most commonly because of the structure of the trial, specific forms of bias that were identified, or concerns around the analytic approach, again tying back to what we now know is important from the guidance. With proper planning, these issues should be avoided, or at least identified early enough that you can make an informed decision about whether to proceed before significant investment has occurred.
Circling back to our survey, we asked the executives to indicate what they view as the key barriers to implementing ECAs. The highest perceived barrier is lack of acceptance among regulators and clinicians. Hopefully, this barrier will start to be addressed as sponsors increasingly work with regulators on ECA study designs and both regulators and sponsors become more experienced and comfortable with the successful use of external control arms. Executives also view the use of ECAs as a risk to the outcome of their studies and cite a lack of high-quality data sets for ECAs.
So, there are certainly areas to improve. I'm now going to turn it over to my colleague David, who is going to help us think about how to address some of these barriers and will present a decision-making framework for assessing when to utilize ECAs. Thank you for your attention today. David?
David Brown:
Thank you, Carrie, and good morning or good afternoon, everybody. I'm going to build off of Carrie's presentation. My name is David Brown, and I'm the head of Epidemiology and Real-World Science here. I've been at Parexel for 12 years and in the industry for 21. I joined the same year the ICH E10 guidance came out, which addresses the choice of control groups in clinical trials; Section 5 covers external controls. Since the 2018 publication of FDA's framework for its real-world evidence program, we've been engaged by a number of marketing authorization holders on external controls. So, I'm going to start with the challenge here.
The right data are critical, but if the very first question asked is, "What data can we use for an external control arm?", and we raise this because we've been asked exactly that, it's the wrong first question. It's not the first question I ask. So, what is the right first question, or questions? Next slide. It's the question you're trying to answer. Who's the target audience? What are the target stakeholders' expectations? How will study results be used? What's the target trial framework, the trial that you're trying to emulate? What are the answers to the five screening questions from ICH E10? What's the right source of patient data, and how can we choose amongst multiple sources of real-world data?
What's the best fit-for-purpose data, and is there a structured approach by which we can identify it? And lastly, is there a structured approach or model for decision making that we ought to follow to make sure we have a comprehensive approach and are able to address all the topics of consideration and concern in the FDA guidance? I'm going to start with seven lessons, and we're going to end with the same seven lessons. The first is to use a structured approach in evaluating whether an external control is feasible or appropriate for a specific combination of product and indication. Second, feasibility is not monolithic; it is iterative and multifaceted. I'm going to share with you in a moment five types of feasibility.
It's important to understand that there are five types. Third, we need to rigorously and critically apply the epidemiologic principles of observational research to the study design and all aspects of methods and operations. Let me pause for a moment. We're using real-world data, but the goal is to transform that real-world data into clinical trial evidence. We're trying to mimic a clinical trial, transforming real-world data using statistical methods and observational research methods. Number four, we need a systematic model and method to evaluate potential sources of real-world data. As an epidemiologist, I've recently been involved in orphan drug designation applications.
I was asked to help on an oncology application, where you both estimate prevalence and justify the estimate. I did a survey, and there are 1,100 different sources of oncology data in the EU. How in the world do you prosecute 1,100 different sources? At Parexel, we have a team whose responsibility is to identify, characterize, catalog, and make available to us these sources of data and their characteristics, but I believe that in the future, AI will be part and parcel of this cataloging and retrieval. Number five, we need to understand the regulatory expectations across the board in terms of precedent, but in particular with respect to statistical methods, especially for cohort balancing and/or matching and for endpoint analysis.
Number six, and Carrie mentioned this in connection with the FDA guidance: we need to systematically evaluate for potential biases and resolve those that are present. If you identify a real or potential bias, you need to explain, either in a briefing package for the FDA or in the protocol or both, which biases are potential and how you've mitigated them in the study design or in the analysis. If they're not there, explain why they're not there. I list several that FDA has discussed with marketing authorization holders in the reviews that we've seen: immortal time bias, lead time bias, selection bias, and information bias are some examples.
Lastly, number seven: we need to engage early and often with the FDA to align on the study objectives and the methodology. That's critically important; the FDA itself said in a 2018 Pink Sheet article that you ought to do so. More on that in just a bit. All right, I want to spend a moment on the past, the present, and the future of external controls. You'll notice that I've lumped the present and the future together, because that's where we are. As I said, I've been in the industry 21 years. Twenty-one years ago, observational research and observational studies were called dirty-data studies. Then came registry studies, and now real-world studies, but underlying all those names are observational research principles.
So, external controls are nothing new. As I said, the ICH E10 guidance came out in 2001 with its Section 5, but there was limited use in NDA and BLA submissions. Now external controls are being used increasingly for efficacy evaluations, for treatment effects. What's driving that? Legislation and regulations that are driving RWE innovation, and a move beyond traditional safety uses to support efficacy evaluations. In addition, there are what the industry considers novel statistical methods for cohort balancing that have been in observational research texts for the last 20 to 25 years, for example, estimating propensity scores and using them for cohort balancing. Something that is new is the novel epidemiologic thinking on the target trial framework.
I'll explore that briefly with you. Then, as I said, there has been a proliferation over the last 20 years, particularly the last 5 to 8, of secondary data sources, as well as the associated technologies to ingest and process them. So, where do I think the future is going? Once we resolve with regulatory agencies, and convince ourselves, that we can meet the statutory threshold of a well-controlled study, I think this is applicable to non-rare diseases as well: diabetes, cancer, areas where the databases are replete with data and the data are not knowingly missing.
All right, a couple of key considerations. In that Pink Sheet article I referenced, from 2018 or maybe early 2019, the FDA came out and said it expects marketing authorization holders to provide and defend the rationale for their proposed use of real-world data and real-world evidence in their submission. It recommended that they take a systematic approach in clinical development and said they should engage early and often with FDA. So, out of this, and because sponsors were coming to us, we developed a structured approach to rapidly facilitating a go/no-go decision, and "rapidly" has a range of definitions.
This is our decision-making framework. We even tested it with a top-10 pharmaceutical company, and they had some great criticisms, comments, and suggestions for change. It's not the only framework available, but it illustrates, in a systematic and comprehensive way, how you have to think through not only the study design but the process. In each of these nodes, there's a series of questions that have to be asked and answered. It's important to document and memorialize in writing the decisions made at that time so you don't revisit them later. For your edification, across the top we have evidentiary requirements and potential applications; here, we're really referring to regulatory ones.
We have the regulatory criteria, the target trial specification, the screening questions, and the initial feasibility, that's the five types. Then we have the due diligence. On the right, we have the 15 key go/no-go decision points that we have summarized as very critical, very important. Then at the very end, you have cohort balancing and then analysis. It's very much an iterative process, because as you think about the evidentiary requirements, you really need to think about how you're going to analyze this and how you're going to balance the cohorts, because you need to capture those variables. For each of these nodes, we go even further: we have a very substantive process flow that we walk through.
The top left is the five screening questions from ICH E10. On the top right, we go through trial evidence, target audience, regulatory criteria, and the target trial framework. On the bottom left, we have routine clinical practice, data sources, and data collection options. You see that arrow there. I stop on that arrow to point out one example from our experience, where a sponsor came wanting to do an external control, but the indication did not yet have an ICD-9 or ICD-10 code, which means it hadn't really been medicalized yet, and the natural history wasn't well known. So, it was very much a challenge to proceed.
At the end of the day, they changed from an external control to more of a contextualization against a natural history study. On the bottom right, we have cohort balancing and endpoint analysis, which are important. So, as I said at the start, the challenge is not "What's the right data?" but "What are the right questions to ask?" At each of these nodes, there's a series of questions you ask. Let me give you an example. This is the evidence requirements matrix. Number one, let's suppose it's regulatory. Number two, let's suppose it's the initial submission.
Number three, what's the trial design? Are we talking about a single arm to which you're adding an external control? Or is it a randomized trial you're thinking of, where you want to substitute the existing control, whether placebo or active, with a real-world data control arm, or add an additional control? Remember, ICH E10 says it's okay to have more than one control, so that's a possibility. Or lastly, maybe you're augmenting existing control arm data: you have the treatment arm and a control arm comprised of placebo or active control, and you're augmenting that with real-world data. Then, on number five, here are the types of data in hierarchical order of preference.
We're mostly talking about real-world data for external controls here, but actually, you should start by thinking about what trial data are available in the post-approval space that you could use as the comparator. Second, you can use a combination of trial data and RWD, or third, you can use RWD alone. Down at the bottom is the pathway: you take real-world data, you apply the inclusion-exclusion criteria from the trial, and you come up with a potential external control candidate pool. Then you transform that candidate pool, through statistical methodology and epidemiologic thinking, into an external control arm. Not everybody in the candidate pool will necessarily make their way into the external control arm; a minimal sketch of that filtering step follows.
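As one way that inclusion-exclusion pass might look in practice, here is a short pandas sketch; the column names and cutoffs are invented for illustration and are not taken from any specific trial.

```python
# Minimal sketch (invented columns and cutoffs, not from any specific trial):
# apply a trial's inclusion-exclusion criteria to real-world data to form the
# external control candidate pool.
import pandas as pd

def build_candidate_pool(rwd: pd.DataFrame) -> pd.DataFrame:
    """Filter real-world patients with the emulated trial's I/E criteria.

    The result is only the candidate pool: cohort balancing and index-date
    assignment still have to transform it into the external control arm.
    """
    return rwd[
        (rwd["age_at_diagnosis_months"] <= 6)      # inclusion: early diagnosis
        & (rwd["diagnosis_confirmed"])             # inclusion: confirmed diagnosis
        & (rwd["baseline_motor_score"].notna())    # inclusion: baseline endpoint recorded
        & (~rwd["invasive_ventilation"])           # exclusion: ventilator dependence
    ].copy()
```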
So, in the same way, not everybody in a natural history study automatically makes their way into an external control. The control can be contemporaneous, i.e., in near real time, or it can be historical. I mentioned the target trial framework: this is the application of design principles from randomized trials to the analysis of observational data, thereby explicitly tying the analysis of the externally controlled trial to the trial it is emulating. The framework comprises an underlying set of principles and a standardized approach to emulating a clinical trial; we'll see this on the next slide. The purpose is to improve the quality of observational research through the application of trial design.
What's the application to external controls? It applies to the transformation of real-world data, that is, of the external control arm candidate pool, into an external control arm that serves as the comparator to the investigational trial arm. What are the steps involved? It's very simple: two steps. First, you specify the target trial. What does that mean? Suppose you're proposing a trial, but instead of a randomized clinical trial, you're saying, "I want an investigational arm and an external control." Fine, so we're doing a substitution. You would still specify the clinical trial that you would propose as the gold-standard trial to conduct.
You specify it in terms of the research question, the objectives, the endpoints, the variables to measure the endpoints, and the cadence and frequency of data collection for those variables. Once you establish that, you then try to emulate that target trial using real-world data as an external control compared to the investigational arm. Now I'm going to introduce something that is really important: the topic of index date, or time zero. In fact, I was recently asked to referee an article defining the index date for externally controlled trials. The question is, in a randomized controlled trial, when does follow-up begin? It's when patients enroll in the study, when they're randomized.
In a randomized clinical trial, both the investigational arm and the comparator arm are randomized, so the index date, when follow-up or time zero begins, is the time of randomization. What about an externally controlled trial? You don't have randomization. In the investigational arm, the start of follow-up could be the time of enrollment into the arm, or you might decide to make it the time of treatment with the investigational therapy. But how do you make the decision about time zero in the comparator arm? That's the subject of a whole body of literature and work, and it's very important to consider it and get it right. If not, you can induce bias, including a type called immortal time bias.
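As a hedged illustration of that time-zero choice, the sketch below anchors the external control's index date at a clinical milestone that mirrors trial eligibility, and quantifies how much immortal time a naive, later anchor (for example, registry entry) would build in. All column names and the milestone rule are assumptions for the example, not a recommendation from the guidance.

```python
# Hedged sketch (column names and the milestone rule are assumptions): assign
# time zero in the external control at a milestone mirroring trial eligibility,
# and quantify the immortal time a naive, later anchor would build in.
import pandas as pd

def assign_index_date(control: pd.DataFrame) -> pd.DataFrame:
    out = control.copy()
    # Time zero: the clinical milestone comparable to trial enrollment.
    out["index_date"] = out["diagnosis_confirmed_date"]
    # A naive anchor such as registry entry requires surviving until that date;
    # the days in between would be "immortal" person-time if used as time zero.
    out["immortal_days_if_naive"] = (
        out["registry_entry_date"] - out["index_date"]
    ).dt.days
    return out
```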
All right, I mentioned the five screening questions, and they are these. Number one, does the disease have a well-documented, highly predictable course? The natural history of the disease is important to understanding how a placebo arm might respond. Number two, are the covariates that influence the outcome of the disease well characterized? That's important because knowledge of confounders is critical to statistical cohort balancing. Third, are observational variables similar to trial variables? Meaning, are the variables in the real-world data defined in the same way as in the trial? Are they captured at the same frequency and cadence as in the clinical trial?
I'll give you an example. In oncology, treatment response is measured by RECIST criteria. That's a clinical trial construct, and it's often not captured in the real-world setting under routine clinical practice, so it's not necessarily a variable that can easily be captured in real-world evidence. Number four, is the primary study endpoint objective or subjective? In oncology, survival is an objective endpoint, whereas responder status is sometimes highly subjective. Regulatory agencies and payers prefer a repeatable, objective endpoint. Lastly, is the treatment outcome markedly different? Meaning, is there any evidence of a large effect size? That's important because any bias that's introduced can bias the association between treatment and outcome towards the null.
If the effect size is large, any bias would not pull the association all the way to the null, and you would still see a treatment effect. All right, this slide represents the questions in pictorial, flow-chart form. The yellow nodes are the questions: natural history, outcome covariates, real-world variables, primary endpoint, effect size. For each of those, you ask the question, and the answer here is sometimes subjective; you don't have to have a clear yes or no. It could be, "Well, we partly know the natural history, but we don't know it fully." So, there's going to be some corporate risk. You could proceed at risk, you could stop, or you could go get the natural history and understand it better.
That would help you both on the natural history and on the outcome covariates. If the real-world variables are not like the trial variables, you have to go variable by variable, identify the differences, and explain how they might impact any interpretation of study results in the externally controlled trial. So, this is how we prosecute the screening questions systematically. I mentioned that on the very right of that flow chart were the 15 key decision points. We created these out of necessity. They're slightly different from the FDA's; we're now remapping them to be consistent with FDA, and we'll wait until the final guidance is out. But these are the decision points we've identified, and they include not only study design but process and methods.
It starts with the regulatory statutory criteria and FDA regulatory criteria, then evidence objectives and the target trial framework. Each of these has a series of questions that need to be asked and answered. Not only that, but the answers need to be documented in writing, along with who the decision makers were. We really encourage key decision makers to be involved, so that you're not reversing decisions later; reversing decisions that were carefully made with the right people costs additional time and delays drug approval. I mentioned that the five types of feasibility are multifaceted and highly iterative, which means you might move between one and another, and one might impact the other.
Let me give you an example of scientific feasibility. Suppose you ask, "Can we do an external control for this specific drug and indication combination?", but a biomarker is needed to identify the patients or to characterize the treatment outcome. If that biomarker or those biomarkers are not available in real-world data collected under routine clinical practice, then an externally controlled trial is unlikely to be feasible right from the start. We've had a number of examples and requests like this. And that's for real-world data; the next question I ask starts with the preferred level of data, which is trial data: does any trial data exist that you could potentially use as an external control?
Regulatory feasibility includes the precedent, the regulatory expectations, and the agency's engagement with you or with others that you know of, or that we know of, because we work with a number of sponsors. Data feasibility includes what type of data it is, its availability, whether you get patient-level or aggregate data, whether you can get the data at all, whether the site or the source keeps it, and what the access to that data is. We need to do a medical informatics assessment, because you get data from multiple sources and you need to make sure the data are captured and defined in the same way across those sources. And what data collection modalities are available?
Then the bottom one is site and HCP feasibility. That's the traditional type of feasibility: are the site investigators interested? Are they experienced in this type of research? Do they have the capacity and the capability? More importantly, do they have the patients for a contemporaneous control or a historical control? And we don't want to lose sight of payers, so we add the fifth one here: payer expectations. It's important because in Europe, you need cost-effectiveness, and in the US, you need to get on the formulary, so you need to establish that it's going to be cost-effective as well. So, what are the payer expectations? Do they discount the evidence because it's real-world data and not a randomized clinical trial?
If so, how much do they discount it? And do you have the right comparator for this external control? Again, that might be different from what gets you a regulatory submission. All right, there are three data options for external controls; I mentioned clinical trial data only, a hybrid of trial data and real-world data, and real-world data only. There are multiple sources of these data, so the question becomes: how do I evaluate all these various sources? There are electronic health records; claims and billing data; product and disease registries; and natural history studies, whether one you may be running yourself or one you commissioned through a literature review or a database study.
There's patient-generated data, including from in-home use settings, and data gathered from other sources that can inform on health status, such as mobile devices and the digital world. So, how does one identify and prosecute that data? Remember, if the first question asked is, "What data can we use for an external control?", that's the wrong first question. Start back up at the challenge questions. And if you were pressed for the next five questions, go to the ICH E10 guidance document for the series of five screening questions that starts with, "Do we know the natural history? Do we know the outcome covariates? Do we know if the primary endpoint is objective?", et cetera, and [inaudible 00:37:33].
So, when we have multiple sources of data, and I mentioned that one orphan drug designation application I worked on had 1,100 different sources of oncology data, what's one approach you can take? There's a publication that came out in January of 2022 by Nicolle Gatto, Robert Reynolds, et al. describing a structured process to identify fit-for-purpose data: a data feasibility assessment framework, or SPIFD. This is an excellent quantitative method for evaluating different data sources. You can then say, "I've applied this systematically and comprehensively to each of the data sources, and I've got a score for each," so you know how one data source measures up against another, as in the toy sketch below.
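To show the quantitative idea in miniature, the sketch below scores candidate sources against weighted fit-for-purpose criteria. The criteria, weights, and ratings are invented for the example and are not SPIFD's published ones; see Gatto, Reynolds, et al. (2022) for the real framework.

```python
# Toy sketch in the spirit of SPIFD (criteria, weights, and ratings invented
# for illustration): score candidate real-world data sources against weighted
# fit-for-purpose criteria and rank them.
WEIGHTS = {  # illustrative criteria; weights sum to 1.0
    "key_variables_captured": 0.30, "population_match": 0.25,
    "follow_up_duration": 0.20, "data_access": 0.15, "recency": 0.10,
}

def score_source(ratings: dict[str, int]) -> float:
    """Weighted sum of 0-5 ratings for one candidate data source."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

sources = {
    "registry_A": {"key_variables_captured": 4, "population_match": 5,
                   "follow_up_duration": 3, "data_access": 2, "recency": 4},
    "ehr_network_B": {"key_variables_captured": 3, "population_match": 3,
                      "follow_up_duration": 5, "data_access": 3, "recency": 5},
}
for name in sorted(sources, key=lambda s: score_source(sources[s]), reverse=True):
    print(name, round(score_source(sources[name]), 2))  # registry_A 3.75, ehr_network_B 3.6
```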
So, SPIFD is a great example of a framework for evaluating multiple sources of data and identifying the one that is fit for purpose, particularly for a regulatory submission. All right, how do you bring together the right questions, the lessons learned, the five screening questions, the 15 key decision points, the five types of feasibility, the three data options, the evidence generation matrix, the decision-making framework (that flow diagram from beginning to end), and the structured process for evaluating fit-for-purpose data for a regulatory submission? Our answer, and the solution we recommend because there are so many moving parts, is a scenario planning workshop.
We do this in essentially four phases. Phases one, two, and three are designed to get you in front of a regulatory agency with a sufficient level of information, encoded in a protocol abstract or synopsis, that addresses all the prior issues FDA has commented on or criticized in earlier applications with respect to external controls or the use of real-world data. Phase one is getting together the individuals we call the A-team, those who are, and who know, the key decision makers; aligning on your business needs; and conducting an evidence gap assessment. What do we know about those five screening questions? What do we know about the purpose and how we're prosecuting this external control?
Then, using the target trial framework, you come up with the evidence objectives, clearly understanding the trial you're emulating and how you're going to emulate it. Phase two is what we call the scenario planning workshop, where you plan for and facilitate the workshop itself. The idea is to go through the lessons learned, the screening questions, and the 15 key decision points. When we say decision points, each is a point at which you could say, "You know what? Based on what we just learned and found, we've decided not to proceed with the external control." I think there needs to be sufficient permission, latitude, and opportunity, once you engage on an external control, to have many relief valves or exit opportunities for saying it is not feasible.
You need a go/no-go decision, and you need to make that decision rapidly. So, built into our thinking are this pathway, this workshop, and this process that allow you to go through those decision points, the five types of feasibility, et cetera. The goal is to come away with a protocol abstract or even a draft protocol, keeping in mind this is an iterative approach and it's a draft, plus a regulatory briefing package that addresses all the prior issues FDA has raised. By then you will have already identified the potential sources of bias in your program and how you've mitigated them through the study design. One we helped somebody with was on lead time bias.
In that therapeutic area, a medical procedure was done, and the sponsor was then looking for a specific event and wanted to treat that event. The concern was that in the clinical trial setting, that event would be monitored for and observed more frequently than in the real-world setting, where patients could instead go on to an adverse outcome. That gives you what's called lead time bias, because you're looking for that event more frequently in the clinical trial data. The regulatory briefing package and the protocol abstract or draft protocol will address all potential sources of bias and how they're mitigated, either through design or through analysis.
At that point, you can either wait for FDA feedback or press ahead. You can start feasibility with sites across the five areas: scientific, regulatory, site, data, and payer. You could do this in parallel; there's an opportunity cost if you don't. If you wait until you get FDA buy-in and comments on your protocol, you're only starting at that point, but if you decide to move ahead, it costs money to do so. So, there's a corporate risk decision related to that. Once FDA or the regulatory agency approves your particular approach, study design, and methods, at that point you are doing full-fledged feasibility.
You'll notice on the decision-making framework, the data flow if you will, that at the very end are these 15 key go/no-go decisions. It's an iterative approach, like a hamster wheel: you go around and around until you resolve all the questions and issues across all the topics here. You exhaust the questions related to the evidence objectives, regulatory expectations, payer expectations, data source options, et cetera, and walk all the way through sample size, endpoints, cohort balancing, and the statistical methods related to them. All right. Now, these timelines vary; the timeline we show here is six to eight weeks.
It could be longer, depending on where you are in the process, but we include this as an example, and the black triangles represent go/no-go decision points. Truthfully, there are probably multiple decision points that occur during the scenario planning workshop, and in preparation for it back in phase one as well. So, this is an example, and I think it's important to note the decision points that allow one to proceed along that pathway and to make fully informed decisions about whether to proceed, or to say, "We've gotten this far, and it no longer looks scientifically, regulatorily, site-, or data-feasible to conduct this." All right, lessons learned. I started here, so I'll stop here. First, use a structured approach.
Second, feasibility: I shared that there are five types of feasibility, and they are additive, not monolithic; they're multifaceted and highly integrated. Third, we need to identify the appropriate epidemiologic principles of observational research and apply them to the study design and all aspects of the methods and operations, with a real emphasis on design, process, and methodology. Fourth, you really need a systematic method to evaluate potential sources of real-world data; the structured fit-for-purpose framework I described is a good example.
Number five is to understand the regulatory expectations: their thinking about an external control for your specific combination of drug and indication, their thinking about the statistical methods, and their thinking about cohort balancing. Is it an estimated propensity score? Is it inverse probability weighting, or some combination? What do they think about the endpoint analysis? What is their thinking about the need to validate the endpoints you might have?
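To make those two balancing options concrete, here is a minimal sketch that estimates propensity scores by logistic regression and derives inverse-probability-of-treatment weights. The covariates are illustrative assumptions, and a real submission would pre-specify all of this in the statistical analysis plan.

```python
# Minimal sketch (illustrative covariates; a real SAP would pre-specify all of
# this): estimate propensity scores by logistic regression, then derive
# inverse-probability-of-treatment weights so the external control resembles
# the treated population (ATT-style weighting).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

COVARIATES = ["age", "baseline_severity", "disease_duration_months"]

def add_ps_and_weights(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    model = LogisticRegression(max_iter=1000)
    model.fit(out[COVARIATES], out["treated"])  # treated: 1 = trial arm, 0 = ECA
    out["ps"] = model.predict_proba(out[COVARIATES])[:, 1]
    # ATT weights: treated patients keep weight 1; external controls are
    # re-weighted by the propensity odds ps / (1 - ps).
    out["iptw"] = np.where(out["treated"] == 1, 1.0, out["ps"] / (1 - out["ps"]))
    return out
```

After weighting, one would rerun balance diagnostics (for example, the SMD check sketched earlier) before any endpoint analysis.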
Then, number six: you need to systematically evaluate for potential biases. This is an area that draws a lot of FDA criticism of marketing authorization holders, because biases are not appropriately identified or addressed. And lastly, number seven: alignment with FDA on study objectives and methodology. That concludes the presentation; thank you for your attention. At this time, Katherine, I think there are some questions from the audience, and we'd be happy to answer them. Katherine, you're on mute.
Katherine Cloninger:
All right. Thank you, David. It looks like a few questions have already come in, but I'd also like to remind the audience: if you have questions, please use the Q&A box at the bottom of your screen to send them in. David, it looks like the first question we've received has your name on it. The question reads: you discussed the importance of proper trial design. Can you elaborate on the level of rigor in study designs for ECAs that are most likely to be accepted as the formal comparator by FDA?
David Brown:
Right, and I think both Carrie and I discussed this, as does FDA in its guidance: you have to meet the regulatory statutory threshold for a well-controlled study. Because we're using observational data and observational research methods, we know there are potential biases. So, for a regulatory submission, this should be done with the utmost regulatory and scientific rigor. In fact, if I bring this slide back up for the audience, you'll actually see the trial evidence in the top right.
We talked about regulatory criteria and target audience, and we go through the purpose. There are different levels of rigor, and a regulatory submission for a new product demands the highest scientific, methodological, and process rigor. That's why I think a systematic, structured decision-making framework is so important.
Katherine Cloninger:
Thank you, David. I think the next question, I'll ask Carrie to answer. You mentioned in-depth engagement with FDA when using ECAs. At what stage of planning should you engage with FDA?
Carrie Jones:
Yeah, that's a great question, and it's pretty well covered in the draft guidance. The FDA does expect to have a discussion early on; I think our mantra is work with the FDA early and often. So, in the trial design phase, once you've got your trial design, your protocol, your plan for your statistical analysis, and your analytic methodologies for addressing confounders and biases, that's the time to work with the appropriate division within the FDA to review your plan. Then if you do have to make changes to your analytic approach later on, which they discourage, circling back to the FDA and going over the rationale for those changes and your new approach at that stage is going to be critical.
So, I think it aligns well with David's slide, going through the phased approach, having your workshop where you're putting together your briefing for the FDA once you've got your protocol draft put together.
Katherine Cloninger:
Carrie, it looks like the next question is probably best answered by you as well. In the survey you referenced, what were the biggest benefits associated with ECAs?
Carrie Jones:
Yeah, let me pull up that slide, one second. We talked about this a little at the front. In the survey we conducted, the respondents who were familiar with ECAs saw advantages in improving the data quality of their study, accelerating timelines, reducing costs, and, to a lesser degree, improving patient retention or engagement and reducing risk. But I would say improving data quality was the number one area where ECAs surpassed their expectations.
Katherine Cloninger:
David, it looks like it's back over to you for the next question. What do you see as the most common design flaws in ECAs?
David Brown:
This goes back to the prior question of what rigor is needed and which external controls are most likely to be accepted: those that address the flaws. I'll characterize the flaws in three areas, process flaws, methodological flaws, and design flaws, and I'll start in reverse order with an example of a design flaw. A design flaw would be the wrong comparator, where the comparator is different and therefore the risk for the outcome is different from the investigational arm, as well as a subjective endpoint. In all the communications we've seen, FDA prefers objective endpoints; subjective endpoints are often criticized because of their lack of dependability. Then on process, I think of things like a charter.
One of the very first letters I saw from FDA to a company discussed the charter. There was going to be a systematic review of medical records, electronic health system records, using an algorithm on signs and symptoms to identify putative patients. They established a charter: once initial information was given to an expert committee, the committee would say, "We think there's sufficient information here to say this is in fact a case; abstract more information and come back to us." So, there was a charter in place around that. One of the criticisms was that the charter changed over time.
I think Carrie mentioned this as well: you have to think about all these things up front, all of the potential considerations, and try to address them initially. Yes, protocol amendments are inevitable, but try to anticipate the charter and keep it consistent, particularly with respect to the type of analysis you're going to do. Then the other one I'd mention under process is what is effectively a firewall. When you do cohort balancing, and people may not appreciate this, you need two biostatisticians: one who optimizes the cohort balancing and a second who's responsible for the endpoint analysis. The first biostatistician optimizes the balance between the cohorts and passes that off, but there's a firewall between the two.
That way the cohorts can't be balanced in a way that maximizes the treatment effect. Then on methods, of course, there are all the different sorts of bias, missing data, and the fact that real-world data are sometimes captured at a different cadence and frequency, and with different definitions, than trial data. So, those are some examples of the biggest flaws in process, methods, and study design.
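As a toy sketch of that firewall, the structure below withholds the outcome column from the balancing step entirely, so the analyst optimizing balance cannot tune weights against the treatment effect. The function names, columns, and placeholder weighting are ours, purely for illustration.

```python
# Toy sketch of the two-biostatistician firewall (names, columns, and
# placeholder weighting are illustrative): the balancing step never receives
# the outcome, so cohorts cannot be tuned to maximize the treatment effect.
import pandas as pd

OUTCOME = "event_free_survival_months"

def balancing_step(df: pd.DataFrame) -> pd.Series:
    """Biostatistician 1: derive balancing weights from covariates only."""
    blinded = df.drop(columns=[OUTCOME])  # the firewall: outcome withheld
    # ...estimate propensity-based weights from `blinded` here; uniform
    # weights stand in so the sketch runs as written.
    return pd.Series(1.0, index=blinded.index, name="weight")

def endpoint_step(df: pd.DataFrame, weights: pd.Series) -> float:
    """Biostatistician 2: pre-specified weighted difference in mean outcome."""
    d = df.assign(weight=weights)
    wmean = lambda g: (g[OUTCOME] * g["weight"]).sum() / g["weight"].sum()
    return wmean(d[d["treated"] == 1]) - wmean(d[d["treated"] == 0])
```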
Katherine Cloninger:
Well, and David, I'll ask you to take the last question that it looks like we have. What resources or partnerships do you recommend to execute an ECA?
David Brown:
Well, as I said, I've been in the industry for 21 years, and I've seen the evolution of the use of observational data, what we now call real-world data, and the evolution of the statistical and observational research methods that go with it. ICH E10 came out in 2001, and interestingly, another ICH guidance is expected in 2025 on pharmacoepidemiology studies for drug safety. So, there's been an evolution. Just in the last 18 months or so, FDA has come out with three guidance documents related to real-world data and real-world evidence, and they now have an excellent draft external control guidance document. If you're going to do this internally, you need an intimate understanding of the landscape; it is evolving rapidly.
You need to understand the policy implications and the regulatory implications. So, if you do this internally, you have to have a team and resource it adequately. Or, if you partner with somebody, they need to help you understand what the right questions are and when to ask them, in a way that enables you to make the most efficient, I would say even brutally efficient, go/no-go decisions. The landscape is changing so fast that you have to dedicate resources to the external control space for regulatory submissions, as well as to the use of real-world data to generate real-world evidence, not only in support of traditional drug safety, but in efficacy evaluations of therapeutics.
Katherine Cloninger:
Well, it looks like that's the last of our questions, but I'd like to invite anyone in the audience: if you've got any unanswered questions, or if there are additional areas you'd like to explore or discuss with Health Advances or Parexel, please reach out; we look forward to continuing the conversation. On a final note, I'd like to remind everyone that our next webinar is May 18th, so please join us for an expert discussion on adaptive trials, with additional experts from Parexel and Health Advances leading the discussion.
David Brown:
Thank you, Katherine. Thank you, everybody.
Carrie Jones:
Thank you.