ASA Connect


Metas epub now available

  • 1.  Metas epub now available

    Posted 07-18-2023 08:44

    E-Publication on meta-analysis of clinical trials: Be rigorous.

    Jonathan J. Shuster

    The scientific truth.  Until now, mainstream random-effects meta-analysis advocates have never proven its distribution theory.  But now, we have strongly disproven its scientific basis.  Going forward, science must trump tradition.

    Here is why mainstream random-effects meta-analysis should never be used in applications of public health importance. It relies on a theory that if assumptions X (listed below) are true, then the statistical properties Y follow.  Both links below contain independent, peer-reviewed proof that Assumption A4 is seriously incorrect.  If X is false, then one cannot assume Y holds.  You will be shown serious bias in the point estimator, an incorrect variance estimator, and real coverage of the purported 95% confidence interval that converges to 0% as the number of studies gets very large!  Using the mainstream method is a danger to public health policy, and that is the only reason for my blogs.  Some of my credentials are listed below. This is in no way quackery; science speaks for itself.  As long as society respects rigor over tradition, major statistical malpractice can be prevented in the future.  I call what has occurred in the past innocent mis-practice.

    Assumptions X (the mainstream acknowledges these):

    • A1: The true primary effect sizes for each study are drawn independently from a single large conceptual "urn" of primary effect sizes.
    • A2: The true primary effect sizes in the urn follow a normal (bell-shaped) distribution whose unweighted mean is the target parameter of interest.
    • A3: Each individual study provides an unbiased estimate of its study-specific true primary effect size, with an approximate normal distribution about that true effect size.
    • A4: Up to a strong approximation, the weights are "constants" rather than seriously random variables. In other words, if you repeat the total experiment under the same assumptions A1-A3 and the same urn, this assumption presumes you obtain identical weights up to a strong approximation.
    • A5: There is no association between study weight and study true effect size. For example, if big studies tend to have higher (lower) effect sizes than smaller studies, the method will tend to overestimate (underestimate) the overall effect size, respectively. This could lead to unacceptable bias.
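    As a concrete illustration (my own sketch, not drawn from either linked paper, and with purely hypothetical numbers), assumptions A1-A3 describe a two-level sampling model that is easy to simulate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # One draw from the hierarchical model described by A1-A3.
    # All parameter values here are illustrative, not from the papers.
    k, tau, mu = 10, 0.5, 0.3                   # number of studies, urn SD, urn mean (target)
    theta = rng.normal(mu, tau, size=k)         # A1-A2: true effects drawn from one normal "urn"
    n = rng.integers(30, 300, size=k)           # hypothetical per-study sample sizes
    se = 1.0 / np.sqrt(n)                       # within-study standard errors
    est = rng.normal(theta, se)                 # A3: unbiased, approximately normal study estimates
    print(np.column_stack([theta, est]).round(2))
    ```

    Everything the mainstream method sees is the `est` column; the `theta` column (and the urn mean it targets) is unobserved.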

    To avoid future statistical malpractice in random-effects meta-analysis of collections of clinical trials and other meta-analyses of important public health issues, it is imperative that the statistics community cease and desist from using or advocating mainstream methods (weighting inversely proportional to the estimated study-specific variance).  It happened innocently in the past, but the statistical community is at least partly responsible for the future.  Continuing with business as usual can cause public health disasters rooted in statistical practice. I now bring up some critically important facts gleaned from the two papers linked below.

    From the first link, you will note that two of the most cited mainstream applications to collections of clinical trials reached unsupportable conclusions and arguably (then innocently) caused harm to patients. Example 1 was published in the Journal of the American Medical Association, and there is no evidence that the invasive intervention (now common medical practice) has any value. Example 2 was published in the New England Journal of Medicine; had survey sampling methods been available and used, three years of wide use of rosiglitazone could have been prevented, thereby saving thousands of myocardial infarctions. These were not anyone's fault then, but continued use of the mainstream methods is sure to cause similar public health disasters in the future.  There are 800 PubMed reports of meta-analyses of randomized clinical trials each year.

    Here are some critical facts from the second link that have been rigorously proven under one of the most stringent peer-reviews ever.

    1.      Other than equal weighting (which no one advocates), the weights are seriously random variables, making the treatment of the estimate as a linear combination invalid (assumption A4 is seriously violated). To use the mainstream results, the weights must be, at least to a strong approximation, constants.
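    To see point 1 numerically, here is a minimal sketch (my own, with illustrative numbers) that redraws the same k-study experiment twice under A1-A3 and recomputes the usual inverse-variance weights; they do not come back the same:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def replicate_weights(k=6, n=40, tau=0.5, sigma=2.0):
        """Redraw the 'same' k-study experiment under A1-A3 and recompute
        the usual normalized inverse-variance weights (illustrative numbers;
        tau^2 is treated as known to keep the sketch short)."""
        theta = rng.normal(0.0, tau, size=k)                  # true effects from the urn
        data = rng.normal(theta[:, None], sigma, size=(k, n)) # raw study data
        v_hat = data.var(axis=1, ddof=1) / n                  # estimated within-study variances
        w = 1.0 / (v_hat + tau**2)                            # inverse-variance weights
        return w / w.sum()

    w1, w2 = replicate_weights(), replicate_weights()
    # Two replications of the same experiment yield different weights:
    # the weights are random variables, not constants.
    print(w1.round(3))
    print(w2.round(3))
    ```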

    2.      Formula (4) tells you that whenever the correlation between the weight and the estimated effect size is not exactly zero, (a) the estimate of effect size is biased and (b) the estimate is inconsistent as the number of studies being combined gets large.  In other words, no matter how small a truly non-zero correlation between weight and estimated effect size is in your sequence of added studies, the coverage probability of your purported 95% confidence interval converges to 0%.
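    A quick Monte Carlo sketch (my own, not the paper's formula (4); the link between study size and true effect below is hypothetical) shows the bias that a non-zero weight-effect correlation induces. Big studies get big weights, so when big studies also have bigger true effects, the weighted estimate is pulled above the urn mean:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_meta(k=50):
        """One simulated meta-analysis in which larger studies have larger
        true effects (a hypothetical violation of A5), inducing a positive
        correlation between weight and effect size."""
        n = rng.integers(20, 2000, size=k)                       # study sizes
        theta = 0.2 + 0.3 * (np.log(n) - np.log(n).mean())       # effect tied to size; urn mean 0.2
        est = rng.normal(theta, 1.0 / np.sqrt(n))                # study estimates
        w = n.astype(float)                                      # size-proportional weights
        return (w * est).sum() / w.sum(), est.mean()             # weighted vs unweighted estimate

    weighted, unweighted = np.mean([simulate_meta() for _ in range(2000)], axis=0)
    # The target is the unweighted urn mean, 0.2. The unweighted estimator
    # recovers it; the weighted estimator is biased upward, and adding more
    # studies does not make the bias go away.
    print(round(weighted, 3), round(unweighted, 3))
    ```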

    3.      The correct formula for the weighted variance estimate is given in equation (5). The usual formula (3) is wrong, as it inappropriately treats the weights as constants rather than the seriously random variables that they truly are.
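    Equations (3) and (5) themselves are in the linked paper, so here is only an indirect, simulated check (my own illustrative numbers). I use 1/&Sigma;w, the standard constant-weight variance of the pooled estimate, as a stand-in for the mainstream quantity; with estimated (hence random) weights, the true Monte Carlo variance of the weighted mean exceeds what that constant-weight formula reports:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def meta_with_estimated_weights(k=5, n=5, tau=0.4, sigma=2.0):
        """One random-effects meta-analysis with *estimated* weights.
        Small per-study n makes the variance estimates, and hence the
        weights, very noisy. All numbers are illustrative."""
        theta = rng.normal(0.0, tau, size=k)                   # true effects, urn mean 0
        data = rng.normal(theta[:, None], sigma, size=(k, n))  # raw study data
        est = data.mean(axis=1)
        v_hat = data.var(axis=1, ddof=1) / n                   # noisy within-study variance estimates
        w = 1.0 / (v_hat + tau**2)                             # tau^2 treated as known here
        return (w * est).sum() / w.sum(), 1.0 / w.sum()        # estimate, constant-weight variance

    draws = np.array([meta_with_estimated_weights() for _ in range(5000)])
    true_var = draws[:, 0].var()        # actual sampling variance of the weighted mean
    claimed_var = draws[:, 1].mean()    # what the constant-weight formula reports on average
    print(round(true_var, 3), round(claimed_var, 3))
    ```

    The understated variance is exactly the mechanism by which purported 95% intervals become too short.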

    Actions needed:

    A.    Never employ the mainstream methods in situations of public health importance. 

    B.     If you review an important paper that uses mainstream methods, make sure a rigorous method is employed.

    C.     If you teach a course that includes meta-analysis, make sure it includes rigorous methods, and assess the validity of the mainstream methods.

    D.    If you are involved in current software, make sure it takes account of these recent developments. Business as usual could be damaging to public health.

    Challenge: So far, despite invitations for comments from the developers of the two major commercial software packages (Comprehensive Meta-Analysis and REVMAN), no scientific pushback has been received from anyone.

    Two blog readers questioned whether this was just Hartung-Knapp, but the answer is NO!  The only thing the two have in common is the use of the t-distribution.  Hartung-Knapp uses weights inversely proportional to the estimated study-specific variance, whereas my approach uses ratio estimation. It also uses different degrees of freedom and a ratio of minimum-variance unbiased estimators.

    A scientific argument supporting the mainstream is welcome.   But prove your points in a model-free setting.

    Some of my credentials:

    • Over 400 peer-reviewed articles with zero errata for statistics.  A fair number of these articles deal with meta-analysis.
    • Over $30 million in career NIH grants as Principal Investigator
    • 15 years' service on NIH standing grant review committees
    • Over 10 years as editorial board member of Research Synthesis Methods
    • Invited by Ingram Olkin to review the Cochrane Handbook, second edition, which I published in Research Synthesis Methods.
    • Invited instructor on meta-analysis of clinical trials at 2019 Eastern North American Regional meeting of the American Statistical Association, Philadelphia.

     My translational science meta-analysis paper is now an e-publication at the link below.

    http://dx.doi.org/10.18053/jctres.09.202304.22-00019

     

    My biostatistical paper on the subject in the journal Biostatistics and Biometrics is at the link below.

     

    Meta-Analysis 2020: A Dire Alert and a Fix (juniperpublishers.com)



    ------------------------------
    Jonathan Shuster
    ------------------------------