2018 Webinars

Advancing the Interpretation of Patient-Reported Outcome Data
Joseph Cappelleri (Pfizer), Lisa A. Kammerman (AstraZeneca) & Kathleen W. Wyrwich (Eli Lilly)
February 22

This webinar discusses approaches for interpreting patient-reported outcome (PRO) data that are intended to support labeling and promotional claims or to extend beyond them, for instance, for publications. PRO measures used for claims and publications should have interpretation guidelines that are useful and meaningful to patients in clinical studies. Two conventional ways to interpret PRO scores are anchor-based methods and distribution-based methods. Anchor-based approaches use a criterion measure that is clinically interpretable and correlated with the targeted PRO measure of interest. Examples include reference-based interpretation, content-based interpretation, and responder analysis. Distribution-based approaches use the statistical distribution of the data to gauge the meaning of PRO scores. Examples include effect size, probability of relative benefit, and cumulative distribution functions. In addition, two novel approaches – bookmarking and qualitative explorations – will be featured. We will also discuss the interpretation of PRO data in the presence of missing data and in the context of estimands. Moreover, some regulatory considerations will be highlighted. Illustrations and real-life applications will be given throughout.
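
As a concrete illustration of the distribution-based quantities named above, here is a minimal Python sketch computing an effect size and a simple responder rate. All data and the 8-point responder threshold are purely hypothetical, chosen only to show the arithmetic:

```python
import statistics

# Hypothetical PRO change-from-baseline scores (illustrative data only)
treatment = [8, 12, 5, 10, 15, 7, 11, 9, 13, 6]
control   = [4, 6, 2, 7, 5, 3, 8, 5, 6, 4]

def cohens_d(a, b):
    """Effect size: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    sa, sb = statistics.stdev(a), statistics.stdev(b)
    pooled = (((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled

def responder_rate(scores, threshold):
    """Responder analysis: proportion improving by at least a chosen threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

print(f"effect size d = {cohens_d(treatment, control):.2f}")
print(f"responders (>= 8 points): treatment {responder_rate(treatment, 8):.0%}, "
      f"control {responder_rate(control, 8):.0%}")
```

The same machinery extends to plotting the full cumulative distribution of change scores, which shows responder rates at every possible threshold at once.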

Thirty years of Numbers Needed to Treat (NNT): Why They Don't Mean What Many People Think They Do
Stephen Senn (Luxembourg Institute of Health)
March 13

The Wikipedia entry on NNTs (consulted on 15 February 2018) states the following:
The number needed to treat (NNT) is an epidemiological measure used in communicating the effectiveness of a health-care intervention, typically a treatment with medication. The NNT is the average number of patients who need to be treated to prevent one additional bad outcome (e.g. the number of patients that need to be treated for one of them to benefit compared with a control in a clinical trial). It is defined as the inverse of the absolute risk reduction. It was described in 1988 by McMaster University's Laupacis, Sackett and Roberts. The ideal NNT is 1, where everyone improves with treatment and no one improves with control. The higher the NNT, the less effective is the treatment.
The article is generally helpful, yet in my opinion the second sentence encourages misunderstanding. NNTs seem to suffer from the same problem as P-values: the take-home message for many users is simply wrong.

The problem is not necessarily inherent to NNTs but is partly a side effect of wanting to calculate them. Many clinical trial outcome measures, for example, are not naturally binary, yet patients are frequently classified as ‘responders’ or ‘non-responders’ based on either a dichotomization of a continuous outcome at a given time-point or a dichotomization of a time-to-event measure at a given time of follow-up. I shall show how both of these cases cause problems.
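
The cutoff-dependence of responder definitions can be sketched numerically. The following Python snippet is an illustrative simulation only (the 0.5-SD treatment shift and the cutoffs are arbitrary assumptions, not taken from the talk): it dichotomizes a continuous outcome at several thresholds and computes NNT = 1 / absolute risk reduction for each, showing that the same trial yields very different NNTs depending on where the responder line is drawn:

```python
import random

random.seed(1)

# Hypothetical continuous outcome: treatment shifts the mean by 0.5 SD
n = 10000
control   = [random.gauss(0.0, 1.0) for _ in range(n)]
treatment = [random.gauss(0.5, 1.0) for _ in range(n)]

def nnt(treated, untreated, cutoff):
    """NNT = 1 / absolute risk reduction, with 'response' defined as outcome >= cutoff."""
    p_t = sum(x >= cutoff for x in treated) / len(treated)
    p_c = sum(x >= cutoff for x in untreated) / len(untreated)
    return 1.0 / (p_t - p_c)

for cutoff in (0.0, 0.5, 1.0, 1.5):
    print(f"cutoff {cutoff:+.1f}: NNT = {nnt(treatment, control, cutoff):.1f}")
```

The underlying treatment effect is constant, yet the NNT grows as the responder cutoff moves into the tail of the distribution.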

The talk is not deep, but in my view statisticians should be doing more to explain the problem. I am confident that some of the audience will regard it as obvious and that others will regard it as wrong. That’s excuse enough for giving it.

Adaptive Enrichment Trial Designs: Statistical Methods, Trial Optimization Software, and Case Studies
Michael Rosenblum (Johns Hopkins)
May 15

This webinar focuses on adaptive enrichment designs, that is, designs with preplanned rules for modifying enrollment criteria based on data accrued in an ongoing trial. For example, enrollment of a subpopulation where there is sufficient evidence of treatment efficacy, futility, or harm could be stopped, while enrollment for the complementary subpopulation is continued. Such designs may be useful when it is suspected that a subpopulation may benefit more than the overall population. The subpopulation could be defined by a risk score or biomarker measured at baseline. Adaptive enrichment designs have the potential to provide stronger evidence than standard designs about treatment benefits for the subpopulation, its complement, and the combined population. We present new statistical methods for adaptive enrichment designs, simulation-based case studies in stroke and heart disease, and open-source adaptive design optimization software. The tradeoffs involved in using adaptive enrichment designs, compared to standard designs, will be presented. Our software searches over hundreds of candidate adaptive designs with the aim of finding one that satisfies the user’s requirements for power and Type I error at the minimum sample size, which is then compared to simpler designs in terms of sample size, duration, power, Type I error, and bias in an automatically generated report.

Assessing Biosimilarity and Interchangeability: Issues and Recent Development
Shein-Chung Chow (Duke University)
June 19

Biological drugs are much more complicated than chemically synthesized, small-molecule drugs. For instance, their size is much larger and their structure is more complicated. In addition, they can be sensitive to environmental conditions such as light, temperature or pressure. Moreover, they may expose patients to immunogenic reactions. Consequently, the assessment of biosimilarity and interchangeability calls for greater circumspection than the evaluation of bioequivalence. The FDA recommends the use of a stepwise approach for obtaining the totality-of-the-evidence for demonstrating biosimilarity and interchangeability. The stepwise approach involves analytical similarity assessment; animal studies for toxicity; pharmacokinetic and pharmacodynamic (PK/PD) studies for pharmacological activities; and clinical studies, including immunogenicity, for safety, tolerability, and efficacy. The present communication discusses some current issues and recent developments related to the assessment of biosimilarity and interchangeability of biosimilar products. The current issues include (1) biosimilar versus biobetter, (2) how many biosimilar studies are required?, (3) multiple reference products, (4) criteria for highly variable drug products, (5) development of a biosimilarity index, (6) analytical similarity assessment for critical quality attributes, (7) drug interchangeability in terms of switching and alternation, (8) study designs that are useful for the assessment of biosimilarity and drug interchangeability, (9) the issue of (post-approval) non-medical switching, and (10) extrapolation of data (both analytical and clinical) across different indications. These issues and the corresponding recent developments will be discussed.

An Innovative Design to Combine Proof-of-Concept and Dose Ranging
Naitee Ting & QiQi Deng (Boehringer-Ingelheim)
September 27

In Phase II clinical development of a new drug, the two most important deliverables are proof of concept (PoC) and dose ranging. Traditionally, a PoC study is designed as the first Phase II clinical trial. In this PoC study, there are two treatment groups: a high dose of the study medication and a placebo control. After the concept is proven, the next Phase II study is a dose ranging design with many test doses. This presentation proposes a two-stage design with the first stage attempting to generate an early signal of efficacy. If successful, the second stage will adopt a “Go Fast” plan to expand the current study and add lower doses of the test drug to explore the efficacious dose range. Otherwise, a “Go Slow” strategy is triggered, and the study will stop at a reduced sample size with the high dose and placebo only.

Use of Historical Data in Clinical Trial: An Evidence Synthesis Approach (Methods, Applications)
Satrajit Roychoudhury (Pfizer) & Sebastian Weber (Novartis)
October 11

A Bayesian approach provides a formal framework for incorporating external information into the statistical analysis of a clinical trial. There is intrinsic interest in leveraging all available information for the efficient design and analysis of clinical trials, which allows trials with smaller sample sizes or with unequal randomization (more subjects on treatment than on control). External data are nowadays used in earlier phases of drug development (Trippa, Rosner and Muller, 2012; French, Thomas and Wang, 2012; Hueber et al., 2012), occasionally in phase III trials (French et al., 2012), and also in special areas such as medical devices (FDA, 2010a), orphan indications (Dupont and Van Wilder, 2011) and extrapolation in pediatric studies (Berry, 1989). Recently, the 21st Century Cures Act and PDUFA VI have encouraged the use of relevant historical data for efficient design. In this webinar we will provide a statistical framework for incorporating trial-external evidence, illustrated with real-life examples.
During the first part of the webENAR we will introduce the meta-analytic predictive (MAP) model (Neuenschwander, 2010). The MAP model is a Bayesian hierarchical model that combines the evidence from different sources. The MAP approach provides a prediction for the current study based on the available information while accounting for the inherent heterogeneity in the data. This approach can be used in a wide range of clinical trial applications.
In the second part of the webENAR we will focus on three key applications of the MAP approach in clinical trials. These applications will be demonstrated using the R package RBesT (the R Bayesian evidence synthesis tools), which is freely available from CRAN. The aim of the webinar is to teach the MAP approach and to enable participants to apply it themselves with the help of RBesT.
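
RBesT itself is an R package, so as a language-neutral illustration here is a simplified Python sketch of the meta-analytic predictive idea in a normal-normal setting. It assumes a *fixed* between-trial standard deviation and a flat prior on the population mean; the full MAP model instead places a prior on the heterogeneity and is fit by MCMC. The historical estimates below are invented:

```python
# Historical control-arm estimates (hypothetical): (estimate, standard error) per trial
historical = [(0.35, 0.06), (0.42, 0.05), (0.38, 0.07), (0.30, 0.08)]

TAU = 0.05  # assumed between-trial standard deviation (fixed for illustration)

def map_predictive(trials, tau):
    """Approximate MAP prior for a new trial in a normal-normal model with known tau.

    Under a flat prior on the population mean, each trial estimate has marginal
    variance se^2 + tau^2; the predictive distribution for a new trial's true
    parameter is Normal(mu_hat, var(mu_hat) + tau^2).
    """
    weights = [1.0 / (se**2 + tau**2) for _, se in trials]
    total = sum(weights)
    mu_hat = sum(w * est for w, (est, _) in zip(weights, trials)) / total
    var_mu = 1.0 / total
    return mu_hat, (var_mu + tau**2) ** 0.5

mean, sd = map_predictive(historical, TAU)
print(f"MAP predictive prior: Normal({mean:.3f}, {sd:.3f})")
```

Note how the predictive standard deviation is wider than the uncertainty in the pooled mean alone: the tau term carries the between-trial heterogeneity into the prior for the new study.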

Pragmatic Benefit:Risk Evaluation: Using Outcomes to Analyze Patients Rather than Patients to Analyze Outcomes
Scott Evans (George Washington University)
October 18

Randomized clinical trials are the gold standard for evaluating the benefits and risks of interventions. However, these studies often fail to provide the evidence needed to inform practical medical decision-making. The important implications of these deficiencies are largely absent from discourse in medical research communities.

Typical analyses of clinical trials involve intervention comparisons for each efficacy and safety outcome. Outcome-specific effects are tabulated and potentially combined, systematically or unsystematically, in benefit:risk analyses, with the belief that such analyses capture the totality of effects on patients. However, such approaches do not incorporate associations between the outcomes of interest, suffer from competing-risk challenges, and, since efficacy and safety analyses are conducted on different analysis populations, leave unclear the population to which the benefit:risk analyses apply.

This deficit can be remedied with more thoughtful benefit:risk evaluation with a pragmatic focus in future clinical trials. Critical components of this vision include: (i) using outcomes to analyze patients rather than patients to analyze outcomes, (ii) incorporating patient values, and (iii) evaluating personalized effects. Crucial to this approach is an improved understanding of how to analyze one patient before analyzing many. Newly developed approaches to the design and analysis of trials, such as partial credit and the desirability of outcome ranking (DOOR), are being implemented to better inform patient treatment.
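
As an illustration of the DOOR idea of analyzing patients rather than outcomes, the following Python sketch (hypothetical ordinal DOOR categories; not the speaker's analysis) estimates the probability that a randomly chosen treated patient has a more desirable overall outcome than a randomly chosen control, with ties split evenly:

```python
# Hypothetical DOOR categories per patient (1 = worst ... 5 = best overall outcome),
# combining benefit and harm into a single ordinal rank for each patient
treatment = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
control   = [3, 2, 4, 1, 3, 2, 3, 4, 2, 3]

def door_probability(treated, untreated):
    """P(treated patient has a more desirable outcome than control), ties split 50/50."""
    wins = ties = 0
    for t in treated:
        for c in untreated:
            if t > c:
                wins += 1
            elif t == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(treated) * len(untreated))

p = door_probability(treatment, control)
print(f"DOOR probability = {p:.3f}")  # > 0.5 favors treatment
```

Because every pairwise comparison ranks whole patients, associations between efficacy and safety are automatically respected within each patient's single DOOR rank.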