2017 Webinars

An Introduction to Bayesian Nonparametric Methods for Causal Inference in Pharmacoepidemiology
Jason Roy (University of Pennsylvania)
February 23

In this webinar we provide an overview of Bayesian nonparametric (BNP) approaches to causal inference from observational data. One concern about using fully Bayesian methods in these kinds of studies has been possible misspecification of models for high-dimensional conditional or joint distributions (such as the conditional distribution of outcome given confounders). Recent advances in Bayesian nonparametric methods, however, open the door to fully Bayesian methods that make minimal assumptions about the observed data. The combination of the observed data model and causal assumptions allows for identification of any type of causal effect: differences, ratios, or quantile effects, either marginally or for subpopulations of interest. In the first half of the webinar we will review BNP methods. In the second half, we will focus on causal inference problems and illustrate with examples in pharmacoepidemiology. Software and implementation will also be discussed.
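As a concrete illustration of the flavor of these methods, here is a minimal R sketch of BNP-style g-computation using BART as the nonparametric outcome model. The simulated data and the choice of the dbarts package are illustrative assumptions, not the speaker's implementation:

```r
# Illustrative only: BART-based g-computation for an average treatment effect.
# Data are simulated; dbarts is one of several BART implementations.
library(dbarts)

set.seed(1)
n <- 500
x <- matrix(rnorm(n * 3), n, 3)                  # confounders
a <- rbinom(n, 1, plogis(x[, 1] - 0.5 * x[, 2])) # treatment depends on confounders
y <- 1 + 2 * a + x[, 1] + sin(x[, 2]) + rnorm(n) # outcome with nonlinear confounding

# Fit a flexible outcome model for y | a, x, then predict under a = 1 and a = 0
fit <- bart(x.train = cbind(a, x), y.train = y,
            x.test  = rbind(cbind(1, x), cbind(0, x)),
            verbose = FALSE)

# fit$yhat.test: posterior draws (rows) of E[Y | a, x] at the 2n test points
post <- fit$yhat.test
ate_draws <- rowMeans(post[, 1:n]) - rowMeans(post[, (n + 1):(2 * n)])
c(mean = mean(ate_draws), quantile(ate_draws, c(0.025, 0.975)))
```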

Bayesian Biopharmaceutical Applications Using SAS
Fang Chen (SAS Institute) & Frank Liu (Merck)
April 11

This two-part tutorial first introduces the general-purpose MCMC simulation procedure in SAS, then presents a number of pharma-related data analysis examples and case studies. The objective is to equip attendees with useful Bayesian computational tools through worked-out examples of problems that are often encountered in the pharma industry. The MCMC procedure is a general-purpose Markov chain Monte Carlo simulation tool designed to fit a wide range of Bayesian models, including linear and nonlinear models, multilevel hierarchical models, models with nonstandard likelihood functions or prior distributions, and missing data problems. The first part of the tutorial provides a brief introduction to PROC MCMC and demonstrates its use with a number of simple applications, such as Monte Carlo simulation, regression models, and random-effects models. The second part of the tutorial takes a topic-driven approach to cover a number of case studies encountered in the pharmaceutical field. Topics include posterior predictions, borrowing historical information, analysis of missing data, and topics in Bayesian designs and simulations. This tutorial is intended for statisticians who are interested in Bayesian computation. Attendees should have a basic understanding of Bayesian methods (the tutorial does not allocate time to covering basic concepts of Bayesian inference) and experience using the SAS language.
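For readers without SAS, the base-R sketch below shows the kind of random-walk Metropolis simulation that PROC MCMC automates, here for a simple linear regression. It is an illustrative analogue, not SAS syntax:

```r
# Minimal random-walk Metropolis for y = b0 + b1*x + e, e ~ N(0, sigma^2).
# PROC MCMC automates this kind of sampling; this base-R analogue is
# for illustration only.
set.seed(42)
n <- 100
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n, sd = 0.5)

log_post <- function(th) {               # vague N(0, 100) priors; sigma on log scale
  b0 <- th[1]; b1 <- th[2]; ls <- th[3]
  sum(dnorm(y, b0 + b1 * x, exp(ls), log = TRUE)) +
    sum(dnorm(c(b0, b1), 0, 10, log = TRUE)) +
    dnorm(ls, 0, 10, log = TRUE)
}

niter <- 5000
draws <- matrix(NA, niter, 3, dimnames = list(NULL, c("b0", "b1", "log_sigma")))
cur <- c(0, 0, 0); cur_lp <- log_post(cur)
for (i in 1:niter) {
  prop <- cur + rnorm(3, sd = 0.1)       # random-walk proposal
  prop_lp <- log_post(prop)
  if (log(runif(1)) < prop_lp - cur_lp) { cur <- prop; cur_lp <- prop_lp }
  draws[i, ] <- cur
}
colMeans(draws[-(1:1000), ])             # posterior means after burn-in
```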

Sequential and Adaptive Analysis with Time-to-Event Endpoints
Scott S. Emerson (University of Washington)
April 18

A great many confirmatory phase 3 clinical trials have as their primary endpoint a comparison of the distribution of time to some event (e.g., time to death or progression-free survival). The most common statistical analysis models include the logrank test (usually unweighted, but possibly weighted) and/or the proportional hazards regression model. Just as commonly, the true distributions do not satisfy a proportional hazards assumption. Provided users are aware of the nuances of those methods, such departures need not preclude the use of those analytic techniques any more than violations of the location shift hypothesis preclude the use of the t test. However, with the increasing interest in the use of adaptive sample size re-estimation, adaptive enrichment, response-adaptive randomization, and adaptive selection of doses and/or treatments, there are many issues (scientific, ethical, statistical, and logistical) that need to be considered. In fact, when considering references to “less well understood” methods in the draft FDA guidance on adaptive designs, many of the difficulties in adaptive time-to-event analyses likely relate as much to aspects of survival analysis that are “less well understood” as to aspects of the adaptive methodology that have not been fully vetted. In this webinar we discuss some aspects of the analysis of censored time-to-event data that must be carefully considered in sequential and adaptive sampling. In particular, we discuss how the changing censoring distribution during a sequential trial affects the analysis of distributions with crossing hazards and crossing survival curves, as well as issues that arise owing to the ancillary information about eventual event times that might be available on subjects who are censored at an adaptive analysis.
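The following hypothetical R simulation, using the survival package, illustrates one of these points: under crossing hazards, a logrank comparison can look quite different at an interim look (heavy administrative censoring) than at the final analysis. All distributional choices are assumptions for the sketch:

```r
# Illustrative simulation: with crossing hazards, the logrank comparison can
# change materially between an interim look and the final analysis, because
# administrative censoring weights early and late differences differently.
library(survival)

set.seed(7)
n <- 400
arm <- rep(0:1, each = n / 2)
# Weibull event times whose hazards cross: increasing hazard in arm 1,
# decreasing hazard in arm 0
t_ev <- ifelse(arm == 1, rweibull(n, shape = 2.0, scale = 1.6),
                         rweibull(n, shape = 0.8, scale = 1.2))
entry <- runif(n, 0, 1)                       # staggered accrual

analyze <- function(cutoff) {                 # administrative censoring at a calendar time
  obs    <- pmin(t_ev, pmax(cutoff - entry, 0))
  status <- as.integer(t_ev <= cutoff - entry)
  keep   <- cutoff > entry                    # enrolled before this look
  survdiff(Surv(obs[keep], status[keep]) ~ arm[keep])
}

analyze(cutoff = 1.5)                         # interim look
analyze(cutoff = 4.0)                         # final analysis
```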

Statistical Methods for Dynamic Treatment Regimens and Sequential Multiple Assignment Randomized Trial
Abdus S. Wahed & Yu Cheng (University of Pittsburgh)
May 9

Dynamic treatment regimens (DTRs) are sets of decision rules for choosing effective treatments for individual patients, based on their characteristics and intermediate responses. In DTRs the treatment level and type can vary depending on evolving measurements of subject-specific determinants of treatment. Since these regimens provide treatment that is adapted to individual needs, they often produce a more favorable clinical outcome and are cost-effective by avoiding overtreatment or undertreatment. With increasing emphasis being put on individualizing treatments, DTRs are becoming common in recent medical studies. DTRs operationalize how clinicians practice medicine. By creating evidence-based DTRs that take as input the available information on the patient up to that point and dictate the next treatment from among the available options, each patient’s treatment is individually optimized and thus the population of patients is best treated. DTRs consider treatment as a sequential multi-stage decision-making process rather than single steps of treatment, with the aim of achieving better long-term outcomes for patients. In the first part of the webinar, we will discuss the definition and goals of a DTR and its relationship with clinical practice. The evaluation of a DTR is often laid out in terms of causal inference and potential outcomes. We will introduce key assumptions in the causal inference framework, including the assumption of “no unmeasured confounders”, and discuss some existing estimation and modeling strategies in the literature.
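As a minimal illustration of one such estimation strategy, the sketch below uses inverse-probability weighting to estimate the value of a hypothetical two-stage regime from simulated data, assuming no unmeasured confounders. The regime, data, and models are all assumptions for the example:

```r
# Illustrative IPW estimate of the value of a two-stage dynamic regime:
# "give A1 = 1; then give A2 = 1 only to non-responders", assuming no
# unmeasured confounders. Everything here is simulated for the sketch.
set.seed(3)
n  <- 2000
x  <- rnorm(n)
a1 <- rbinom(n, 1, plogis(0.4 * x))            # stage-1 treatment
r  <- rbinom(n, 1, plogis(0.5 * a1 + 0.3 * x)) # intermediate response
a2 <- rbinom(n, 1, plogis(0.2 * x - 0.5 * r))  # stage-2 treatment
y  <- x + a1 + (1 - r) * a2 + rnorm(n)         # final outcome

p1 <- fitted(glm(a1 ~ x,     family = binomial))  # stage-1 propensity
p2 <- fitted(glm(a2 ~ x + r, family = binomial))  # stage-2 propensity

consistent <- (a1 == 1) & (a2 == 1 - r)        # observed path follows the regime
p_a2 <- ifelse(r == 0, p2, 1 - p2)             # P(A2 = regime value | x, r)
w <- consistent / (p1 * p_a2)                  # IPW weight; 0 if regime not followed
sum(w * y) / sum(w)                            # estimated regime value E[Y^regime]
```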

Although there is nice conceptual theory, inference on DTRs is often challenging in practice. If the data are from observational studies such as registries, the most pressing issue is how we can ensure that sufficient information has been collected so that the key assumption of “no unmeasured confounders” is tenable. More definitive inferences can be obtained if the data come from Sequential Multiple Assignment Randomized Trial (SMART) studies, where patients are randomly assigned to the initial treatments and then randomized to available treatments in subsequent stages based on their intermediate responses. Thus, the focus of the second part of this webinar is on SMART designs. We will first introduce some common SMART designs and applications, and discuss guiding principles in designing a SMART study and existing methods to analyze various outcomes from SMART studies. We will also go over some practical issues in implementing SMARTs, such as sample size considerations and the creation of multiple randomization lists, as well as methodological challenges in analyzing SMART data.
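As one small practical illustration, the base-R sketch below builds separate permuted-block stage-2 randomization lists for responders and non-responders to the initial treatment. The block size and arm labels are assumptions for the example:

```r
# Illustrative stage-2 randomization lists for a SMART: separate permuted-block
# lists by intermediate response status. Block size and arm labels are
# hypothetical choices for the sketch.
set.seed(11)
make_list <- function(arms, n_blocks, block_size = 4) {
  stopifnot(block_size %% length(arms) == 0)   # balanced blocks only
  unlist(replicate(n_blocks,
                   sample(rep(arms, block_size / length(arms))),
                   simplify = FALSE))
}
responder_list    <- make_list(c("maintain", "augment"), n_blocks = 25)
nonresponder_list <- make_list(c("switch",   "augment"), n_blocks = 25)
head(responder_list); head(nonresponder_list)
```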

Key Multiplicity Issues in Clinical Trials
Alex Dmitrienko (Mediana Inc)
June 2

The webinar will review key multiplicity issues arising in confirmatory clinical trials with multiple objectives, including multiple endpoints, multiple dose-control comparisons, and multiple patient populations. An overview of multiplicity adjustments used in traditional problems with a single source of multiplicity, as well as recent advances in this area, including methods for “multidimensional” multiplicity problems (gatekeeping procedures), will be presented. Gatekeeping procedures have attracted much attention in clinical trials with complex multiple objectives because they enable trial sponsors to enrich product labels by including information on relevant secondary objectives. The webinar will offer a well-balanced mix of theory and applications, with case studies based on real clinical trials and a detailed discussion of regulatory considerations, including the FDA’s recently released draft guidance on multiple endpoints.
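As a minimal illustration (with made-up p-values), the R sketch below applies standard within-family adjustments via p.adjust and a simple serial gatekeeping rule of the kind the webinar covers in far more generality:

```r
# Illustrative multiplicity adjustments on a set of raw p-values, plus a
# simple serial gatekeeping rule: secondary endpoints are tested only if
# the primary family is fully rejected. All p-values are hypothetical.
p_primary   <- c(dose_high = 0.009, dose_low  = 0.020)
p_secondary <- c(endpoint2 = 0.004, endpoint3 = 0.030)

alpha <- 0.025                                  # one-sided significance level
p.adjust(p_primary, method = "hochberg")        # within-family adjustment

# Serial gatekeeper: pass alpha to the secondary family only if all
# primary hypotheses are rejected after adjustment.
if (all(p.adjust(p_primary, method = "hochberg") <= alpha)) {
  print(p.adjust(p_secondary, method = "holm") <= alpha)
} else {
  message("Primary family not fully rejected; secondary endpoints not tested.")
}
```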

Introduction to Statistical Approaches to Comparative Effectiveness Research
Sharon-Lise Normand (Harvard Medical School)
September 13

Comparative Effectiveness Research (CER) refers to a body of research that generates and synthesizes evidence on the comparative benefits and harms of alternative interventions to prevent, diagnose, treat, and monitor clinical conditions, or to improve the delivery of health care. The evidence from CER is intended to support clinical and policy decision making at both the individual and the population level.  While the growth of massive health care data sources has given rise to new opportunities for CER, several statistical challenges have also emerged.  This tutorial provides an overview of the types of research questions addressed by CER, reviews the main statistical methodology currently utilized, and highlights areas where new methodology is required. Inferential issues in the “big data” context are identified.  Examples from cardiology will illustrate methodological issues.
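As a small illustration of one widely used CER tool, the following R sketch estimates a treatment effect from simulated observational data via propensity-score inverse-probability-of-treatment weighting (IPTW); the covariates and models are deliberately simplistic assumptions:

```r
# Illustrative propensity-score IPTW comparison of two treatments using
# simulated observational data; a real CER analysis would use a much
# richer covariate set and robust (sandwich) standard errors.
set.seed(5)
n <- 5000
age  <- rnorm(n, 65, 10)
risk <- rnorm(n)
trt  <- rbinom(n, 1, plogis(-0.03 * (age - 65) + 0.5 * risk))   # confounded uptake
y    <- rbinom(n, 1, plogis(-1 + 0.4 * trt + 0.02 * (age - 65) + 0.6 * risk))

ps <- fitted(glm(trt ~ age + risk, family = binomial))  # propensity score
w  <- ifelse(trt == 1, 1 / ps, 1 / (1 - ps))            # ATE weights

fit <- glm(y ~ trt, family = quasibinomial, weights = w)  # weighted outcome model
summary(fit)$coefficients["trt", ]
```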

Thanks to the Center for Devices and Radiological Health, FDA (U01-FD004493) and to the National Institute of General Medical Sciences (R01-GM111339) for providing funding for the methodology used in the context of medical devices and Bayesian approaches.

Building a Bayesian Decision-Theoretic Framework to Design Biomarker-Driven Studies in Early Phase Clinical Development
Danny Yu (Eli Lilly & Co)
September 29

Decision theory, as a subfield of artificial intelligence, provides a quantitative strategy to guide decision makers, based on the possible outcomes of all scenarios. It enables an optimal selection that balances the potential gain and loss. For clinical drug development, prior to the start of a series of trials, it is crucial to characterize and quantify all the possible results that may follow a decision. Such a quantitative strategy can potentially help decision makers select the trials delivering a relatively high response rate with high predicted probability while minimizing the cost of unnecessary studies. Furthermore, a common question in clinical studies concerns the sample size for a trial. The conventional approach (controlling type I and type II errors for a typical effect size) may lead to a large sample size, which is typical for phase III confirmatory studies with a known tailoring biomarker. However, this frequentist approach may not be appropriate for designing multiple pilot experiments or small trials for the purpose of biomarker identification in early phase studies. Therefore, the Bayesian decision-theoretic framework will be explored and applied to construct a tree of probabilities. At the root of the tree are the possible actions (i.e., a set of pilot studies or small trials for drug development). The branch nodes are the factors affecting the outcomes (such as safety, efficacy, cost, population size, etc.). The leaves are the outcomes. This presentation will focus on the benefits as well as the challenges of the Bayesian decision-theoretic approach in biomarker-driven studies.
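A toy R sketch of the core expected-utility calculation over such a tree is shown below; all probabilities, payoffs, and costs are hypothetical:

```r
# Illustrative expected-utility calculation over a small decision tree:
# choose among candidate pilot studies by weighing the predicted probability
# of success, the payoff if successful, and the cost. Numbers are made up.
studies <- data.frame(
  action = c("biomarker_A_trial", "biomarker_B_trial", "all_comers_trial"),
  p_succ = c(0.35, 0.25, 0.15),   # predicted probability of success
  gain   = c(100,  120,  200),    # payoff if successful (arbitrary units)
  cost   = c(10,   12,   25)      # cost of running the study
)
studies$exp_utility <- studies$p_succ * studies$gain - studies$cost
studies[order(-studies$exp_utility), ]   # act to maximize expected utility
```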

Quantitative Sciences for Safety Monitoring during Clinical Development
Greg Ball (Merck), Judy Li (Regeneron) & William Wang (Merck)
October 10

In an effort to better promote public health and protect patient safety, there is growing interest in developing a systematic approach for the safety evaluation of pharmaceutical products, not only for post-marketing safety surveillance but also for pre-marketing safety monitoring. Recent regulatory guidance documents, such as CIOMS VI, ICH E2C, and the FDA IND safety reporting guidance (2012, 2015), have highlighted the importance of, and given recommendations on, aggregate safety evaluation. Biostatisticians and other quantitative scientists can closely engage with clinical and regulatory scientists and play a vital role in these efforts. To better enable this, the ASA Biopharm section established a working group on clinical safety monitoring in 2015.

This webinar will present the work that has been done by this ASA Safety Monitoring working group, with the following components:

 - Global regulatory landscape for quantitative safety evaluation
 - Results of thought leader interviews and an industry-wide survey on current process and technology enablement
 - Discussion of various statistical methods for safety monitoring, such as the following (see the sketch after this list):
      - Blinded vs unblinded analyses
      - Frequentist vs Bayesian approaches
      - Premarketing vs postmarketing strategies
      - Static vs dynamic assessments
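
The sketch below illustrates just one Bayesian flavor of such monitoring (not the working group's specific method): a conjugate beta-binomial posterior probability that a pooled, blinded adverse-event rate exceeds a background rate of concern. All numbers are hypothetical:

```r
# Minimal sketch of one Bayesian approach to blinded safety monitoring:
# given pooled adverse-event counts (blinded to treatment arm), compute the
# posterior probability that the event rate exceeds a pre-specified
# background rate, using a conjugate Beta-Binomial model. Numbers are made up.
n_exposed <- 250                       # pooled subjects, blinded to arm
n_events  <- 12                        # observed adverse events
rate_0    <- 0.03                      # background rate of concern

a0 <- 1; b0 <- 1                       # Beta(1, 1) prior on the event rate
post_prob <- 1 - pbeta(rate_0, a0 + n_events, b0 + n_exposed - n_events)
post_prob                              # P(rate > rate_0 | data); flag if large, e.g. > 0.8
```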

We will conclude with a Q&A session.

Regression Models for Censored Data: When is it NOT a good idea to use PH, AFT, and other such models?
Sujit Ghosh (NC State)
November 14

In many clinical applications of survival analysis with covariates, the majority of practitioners routinely choose proportional hazards (PH) based regression models even when such models may not be appropriate and may even lead to erroneous inference. The commonly used semiparametric assumptions (e.g., AFT, PH, and proportional odds) may turn out to be stringent and unrealistic, particularly when there is scientific background to believe that survival curves under different covariate combinations will cross during the study period. This webinar presents a very flexible class of nonparametric regression models for the conditional hazard function. The methodology presented has three key features: (i) the smooth estimator of the conditional hazard rate has been shown to be the unique solution of a strictly convex optimization problem for a wide range of applications, making it computationally attractive; (ii) the model has been shown to encompass a proportional hazards structure; and (iii) large sample properties, including consistency and convergence rates, have been established under a set of mild regularity conditions. Following a brief introduction of the newly proposed methodology, the webinar will focus primarily on illustrating the empirical performance of the methods using several simulated and real case studies. Attendees are encouraged to read the published paper (see below), and related R code will be provided at the webinar.
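The hypothetical R example below shows the kind of setting that motivates the webinar: two groups whose survival curves cross, where a standard Cox fit can pass unnoticed unless the PH assumption is checked explicitly (e.g., with cox.zph from the survival package):

```r
# Illustrative PH diagnostic: simulate two groups whose hazards (and survival
# curves) cross, fit a Cox model, and test proportional hazards with scaled
# Schoenfeld residuals. A small p-value signals that a single hazard ratio
# misleads, the setting where flexible hazard regression models are needed.
library(survival)

set.seed(9)
n <- 600
grp <- rep(0:1, each = n / 2)
t_ev <- ifelse(grp == 1, rweibull(n, shape = 2.5, scale = 1.3),   # increasing hazard
                         rweibull(n, shape = 0.7, scale = 1.0))   # decreasing hazard
cens <- runif(n, 0.5, 3)                                          # random censoring
d <- data.frame(time = pmin(t_ev, cens),
                status = as.integer(t_ev <= cens),
                grp = grp)

fit <- coxph(Surv(time, status) ~ grp, data = d)
cox.zph(fit)      # tests PH; a significant result argues against a single HR
```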