A few additional comments following from Ed Gracely's:
1) On your point about trying to re-create the p-values: though I have not fully re-created the calculations myself, please be advised that the COVID vaccine trials are generally not testing against a null hypothesis of 0% efficacy but against a null hypothesis of <=30% efficacy. This is because the US FDA set a requirement that vaccines demonstrate a minimum of >30% efficacy to receive approval (the rule was actually something like "point estimate showing >50% efficacy with a lower bound of the 95% CI >30% efficacy," if I recall; as it turns out, several have exceeded that by quite a bit). That may partially explain your difficulty in recreating the calculations, though of course to fully recreate them you should read the sponsor's statistical analysis plan and ensure that you are using the same statistical approach. Some may prove difficult to fully recreate without individual patient data if they adjusted for baseline covariates - at least one of the major vaccine trials did this, though I forget which one. Note also that the sponsors applied various interim-analysis and alpha-spending strategies, which may matter if the reported p-values have been adjusted to account for the interim looks at the data (though usually sponsors seem to report the unadjusted p-value and simply compare it against the adjusted threshold for success).
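To make the effect of the 30%-efficacy null concrete, here is a minimal sketch. This is a hedged illustration, not any sponsor's actual analysis plan: it uses the common conditional-binomial approach (given the total case count, 1:1 randomization, and equal follow-up, the vaccine-arm case count is binomial), and the case split (8 of 170) is a hypothetical chosen only to be roughly the scale of the large trials.

```python
# Illustrative only: how the p-value changes when the null is
# H0: VE <= 30% rather than H0: VE = 0%. Assumes 1:1 randomization
# and equal person-time per arm; numbers are hypothetical.
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def one_sided_p(vax_cases, total_cases, null_ve):
    """Conditional-binomial p-value: given the total case count, the
    vaccine-arm case count is Binomial(total, irr/(1+irr)) under the
    null vaccine efficacy."""
    irr = 1 - null_ve          # incidence rate ratio implied by the null VE
    p_null = irr / (1 + irr)   # null probability a given case is in the vaccine arm
    return binom_cdf(vax_cases, total_cases, p_null)

# Hypothetical split: 8 of 170 total cases in the vaccine arm (VE ~ 95%)
print(one_sided_p(8, 170, null_ve=0.30))  # tested against VE <= 30%
print(one_sided_p(8, 170, null_ve=0.00))  # tested against VE = 0%
```

Both p-values are tiny for a split this lopsided, but the point is directional: the p-value against the 30% null is always larger than against the 0% null, which is one reason a re-created "0% null" p-value will not match a reported one.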
2) On the more general subject of the "absolute risk reduction" from vaccine trials, this line of argument is advanced by people who clearly do not understand the trial designs employed for vaccines. I wrote a piece for Medscape about this; admittedly this requires an account, but I believe it is free to create a Medscape account with an email address:
Why Number Needed to Treat Can Be Misleading for Vaccines (medscape.com)

I will briefly summarize the key point here. The vaccines are designed using "event-driven" analyses, meaning that the analyses are performed when a certain number of cases of the primary endpoint have occurred. This imposes a "cap" on the "absolute risk reduction" that even *can* be observed in the vaccine trials; if you actually look at the number of participants enrolled and the number of events at which the analyses were scheduled, it is impossible for any of the vaccine trials to show more than about a 1% absolute risk reduction even if all of the cases occur in the placebo arm (i.e., even if the vaccine were absolutely perfect at preventing disease). This is a feature, not a bug, despite a lot of people seeming to think it's some sort of "gotcha!" about the vaccines not being all that effective. The reason is that for vaccines to be rolled out in time to do much good on a population level, they need to be rolled out before the entire population has contracted the disease. If the vaccine trials were required to demonstrate a large absolute risk reduction, they would take longer to complete and require a large proportion of the placebo arm to contract disease...but since the placebo arm is likely to *very roughly* approximate the incidence of infection in the general population, this would effectively require that a large proportion of the general population contract the disease - e.g.
if you want the vaccine to demonstrate a 20% absolute risk reduction, this can only be done if at least 20% of the placebo arm has contracted the disease, which would likely correspond to at least 20% of the general population having contracted the disease - and that's just during the time frame in which the trial was being carried out (it is more complicated than this, of course, as one might point out that vaccine trial participants may behave differently than the general population, which is one of the reasons why we need a control group instead of just vaccinating people and comparing their incidence to the general-population incidence). The vaccine trials are carried out in a very short time frame using event-driven analyses so the vaccines can be deployed in a time frame that allows them to actually have some public health effect before everyone in the population contracts the disease. The "ARR" computed from a vaccine trial must be interpreted in the context of the time frame in which the trial was carried out (most of these are just a few weeks) and the incidence in the population over that time frame. The precise reason we report VE on the relative scale is not (contrary to popular belief) that it overstates protection; it's that we can estimate that quantity in a way that is relatively invariant to time and circulating disease prevalence/incidence, whereas the ARR cannot be computed or reported without those other contextual factors.
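The "cap" on the observable ARR follows directly from arithmetic on the design parameters. A minimal sketch, using illustrative numbers that are merely roughly the scale of the large COVID-19 trials (about 170 adjudicated cases triggering the final analysis, about 17,000 participants per arm):

```python
# Hedged sketch of the event-driven "cap" on observable ARR.
# Numbers are illustrative, not taken from any specific trial.
def max_possible_arr(total_events, n_per_arm):
    """Largest ARR the design can possibly show: every one of the
    pre-specified events lands in the placebo arm, zero in the
    vaccine arm (a hypothetically perfect vaccine)."""
    risk_placebo = total_events / n_per_arm
    risk_vaccine = 0.0
    return risk_placebo - risk_vaccine

print(max_possible_arr(170, 17000))  # 170/17000 = 0.01, i.e. ~1 percentage point
```

With the analysis scheduled at ~170 events and ~17,000 per arm, even a perfect vaccine cannot show an ARR above ~1% - which is the point being made above.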
Anyone seriously writing a piece arguing that the vaccine trials demonstrate a small ARR, or that the "NNT" for vaccines is higher than acceptable, simply does not understand the above points about vaccine trial design. I am sorry if anyone is offended by that level of bluntness but it's a simple fact. I have encountered quite a bit of this on social media and in the comments of Medscape; my general impression is that those who don't back down after being confronted with this are typically not engaging in good faith, though I believe some also are simply too thick to understand the statistical arguments and public health implications.
------------------------------
Andrew D. Althouse, PhD
Assistant Professor of Medicine
Center for Research on Health Care Data Center (CRHC-DC)
Center for Clinical Trials & Data Coordination (CCDC)
University of Pittsburgh School of Medicine
200 Meyran Avenue, Suite 300
Pittsburgh, PA 15213
Email: ada62@pitt.edu
Twitter: @ADAlthousePhD
------------------------------
Original Message:
Sent: 05-11-2021 07:24
From: Edward Gracely
Subject: Covid vaccine success rates
Two comments:
I don't understand their p-values for the relative risk reductions. How can a 95% CI of 90-97% have a p-value of only 0.016? With a null hypothesis of 0% reduction, the p value would be much smaller.
Secondly, their main point is well taken in that absolute risk reductions are important - but mainly if your decision whether to implement is based on an individual-level risk-benefit analysis. You have to vaccinate in the neighborhood of 100 people to prevent one case. Knowing that, you could balance the harms (some people getting nasty side effects) against the gains (preventing one case per 100 people vaccinated, with that one case having some chance of serious illness or death).
And, of course, this calculation is heavily affected by the current incidence of the disease. The number of people who must be vaccinated to prevent one case will decrease as cases increase, and increase when the disease is rarer.
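The incidence-dependence of the NNT can be sketched with the usual relation NNT = 1/ARR, where (to first approximation) ARR = placebo-arm incidence over the trial window times VE. A hedged illustration with hypothetical incidences, not trial data:

```python
# Hedged sketch: how NNT scales with background incidence.
# Incidence values below are hypothetical attack rates over the
# trial's observation window, not figures from any actual trial.
def nnt(incidence_over_window, ve):
    """NNT = 1 / ARR, with ARR approximated as incidence * VE."""
    arr = incidence_over_window * ve
    return 1 / arr

print(nnt(0.01, 0.95))  # ~1% attack rate, 95% VE -> NNT around 105
print(nnt(0.05, 0.95))  # 5x the incidence -> NNT shrinks ~5x
```

This is the quantitative version of the point above: the same vaccine, at the same VE, has a very different NNT depending on how much disease is circulating during the window considered.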
But with an infectious disease and the goal of stopping the infection population-wide, the individual-level calculations are less useful. We need to know whether the vaccine will reduce the reproductive rate (R) of the virus enough to help control it. That calculation is based on the relative risk reduction, not the absolute risk difference. Vaccinating everyone with a 90% effective vaccine multiplies R by 0.10 (or, equivalently, reduces it by 90%). That's what we need to know.
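The R calculation above can be sketched with the common simple approximation R_eff = R0 * (1 - coverage * VE). This is a hedged illustration with hypothetical R0 and coverage values, and it carries the additional assumption (not established in this thread) that VE acts against transmission rather than only against disease:

```python
# Hedged sketch of the population-level effect of relative risk reduction.
# Assumes VE applies to transmission and mixing is homogeneous - both
# simplifying assumptions. R0 and coverage values are hypothetical.
def effective_r(r0, coverage, ve):
    """Simple approximation: R_eff = R0 * (1 - coverage * VE)."""
    return r0 * (1 - coverage * ve)

print(effective_r(3.0, 1.0, 0.90))  # full coverage, 90% VE -> R multiplied by 0.10
print(effective_r(3.0, 0.7, 0.90))  # 70% coverage -> R stays above 1 here
```

Note that the calculation uses only the relative reduction (VE); the ARR never enters, which is the design choice being defended.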
Since most diseases we vaccinate against are rare, the absolute risk reduction for vaccination is usually small. That's OK. Our purpose isn't just protecting that one person, but controlling the infection population-wide.
------------------------------
Edward Gracely
Drexel University
Original Message:
Sent: 05-11-2021 02:28
From: Thomas Schmitt
Subject: Covid vaccine success rates
Is anyone aware of more discussion/articles on this topic, especially from members of the ASA community? The only one I could find so far is this: Outcome Reporting Bias in COVID-19 mRNA Vaccine Clinical Trials