ASA Connect

  • 1.  Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-17-2022 12:08

    Pacira Biosciences: When are scientists legally liable for a bad statistical analysis?

    A few years ago, I posted about a legal case called FTC v. Quincy Bioscience Holding Co. According to the district court opinion, Quincy had claimed that its product was successful based on a post-hoc subgroup analysis which found success, unadjusted for multiple comparisons, in a small number of subgroups. The FTC (the product was not subject to FDA regulation) had sued for fraud over this claim.

    At the time, I posted on this board saying that I don't think scientists should be liable for things like fraud over the results of a statistical analysis whose data are accurate and whose methodology is disclosed, even though I'd completely agree that a post-hoc analysis of this kind is worthless and the positives it found likely false.

    At the time, I got a number of messages disagreeing, and there is a statistician or two in this community who completely stopped communicating with me after that post, perhaps believing I favor coddling fraudsters.

    Two developments have happened since then that suggest giving this issue a second look. And perhaps the statisticians who stopped speaking to me might even want to reconsider.

    The first development was in the Quincy case. The 2nd Circuit Court of Appeals said the district court had gotten the facts completely wrong. Quincy hadn't merely advertised success in the subgroups based on the post-hoc analysis. Instead, it had claimed that the product was successful in the entire population based on the subgroup analysis. This misrepresentation of the subgroup analysis results was the basis of the fraud claim, not the mere fact that a post-hoc subgroup analysis was done. The 2nd Circuit reversed the dismissal of the fraud claim and sent the case back to the district court for trial.

    This is a completely different statement of what happened than what the district court had said. Since I have no personal knowledge of what happened and based my earlier post only on the district court's assessment of the facts, I also got the facts wrong.

    A recent case explains why it might be advisable not to make it too easy to find scientists liable. The American Society of Anesthesiologists published a meta-analysis in Anesthesiology, its peer-reviewed journal, that concluded that Pacira Biosciences' main product, EXPAREL (liposomal bupivacaine), was not superior to much cheaper generic bupivacaine, together with an editorial suggesting EXPAREL was not worth the higher price.

    Pacira Biosciences sued the ASA, the journal's editors, and the paper's authors for libel. It claimed the meta-analysis was scientifically flawed, pointing to methodological errors: among other defects, it alleged, the meta-analysis omitted studies favorable to EXPAREL, was based on "crude pooling" rather than a more sophisticated pooling method, and did not properly assess heterogeneity.

    On February 4, the district court dismissed the case. It said that scientific disputes are immune from libel claims. "Scientific controversies must be settled by the methods of science rather than by the methods of litigation." Accordingly, "scientific conclusions are protected speech to the extent they are 'drawn…from non-fraudulent data, based on accurate descriptions of the data and methodology underlying these conclusions, on subjects about which there is legitimate ongoing scientific disagreement.'" And as a result, "absent an allegation that the author of a scientific article falsified the data from which she drew her conclusions, a plaintiff cannot sustain a claim for trade libel by alleging that some methodological flaw led to a scientifically 'incorrect' answer."

    Applying these principles, the district court said that Pacira Biosciences was alleging nothing more than methodological flaws. Whether these alleged flaws invalidate the conclusions or not is a matter for scientists, not courts. It dismissed the libel claims and threw out the case.

    This case suggests that it might be best not to make it so easy for people to sue scientists for allegedly flawed published papers. If well-endowed individuals and corporations get to hire lawyers and experts to persuade lay jurors that a paper is methodologically wrong, scientists and non-profit scientific institutions who publish research that aggrieves the powers that be may quickly find themselves under siege, and they may find jurors favoring the side with the smoother talkers.

    It was for this reason that I had thought the district court had been right to dismiss the fraud claim in Quincy Bioscience Holding Co. Under the district court's statement of the facts, the FTC was claiming that post-hoc subgroup analyses are inherently fraudulent. That's an overstatement. While such analyses are not conclusive, they can be valuable generators of hypotheses for future research, and prohibiting their use would diminish science. Moreover, it would be an assault on the basic principle that scientific disputes should be resolved by scientific means, not by judges.

    And that principle is worth preserving even though it unquestionably allows bad science through, and not infrequently.

    This is different from government agencies imposing rules for acceptability of studies for particular societal purposes, e.g. clinical trials for drug approval, environmental, toxicology, and engineering studies, etc.

    https://storage.courtlistener.com/recap/gov.uscourts.njd.469581/gov.uscourts.njd.469581.92.0.pdf



    ------------------------------
    Jonathan Siegel
    Director Clinical Statistics
    ------------------------------


  • 2.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-18-2022 14:46
    Another case to note is the wire fraud conviction of Scott Harkonen in United States v. Harkonen. See, e.g.,


    ------------------------------
    David Kaye
    Visiting Professor
    Yale Law School
    ------------------------------



  • 3.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-20-2022 15:52
    Hi David,

    Appreciate this. I'm not a lawyer, and I don't comment about these things with any knowledge of all their underlying facts, so I need to be careful.

    I would just point out that the 9th Circuit said in the Harkonen case that Harkonen had not allowed any members of InterMune's clinical team (i.e., any of the scientists) to review the press release before sending it out. The opinion suggested this fact was evidence he knew that if ethical professionals reviewed it, they would object, i.e., that he knew that what he was saying was false.

    Perhaps for that reason, the court said that the case had nothing to do with any scientific dispute. The facts thus appear to be a little different from the Pacira Biosciences case.

    If the scientists, the authors of the article, had been prosecuted for the contents of their published peer-reviewed article, rather than a non-scientist executive for a press release deliberately created without any input from any scientist, the legal status of the case might perhaps have been different. 

    I think one consequence of the difference between the two cases is to suggest the value of company executives consulting with the scientists before issuing communications like press releases. Communications grounded in science, even less-than-perfect science, get a certain amount of legal protection. Communications disconnected from the scientists do not.






  • 4.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-19-2022 23:37
    Jonathan,

    What was the response of the editors of Anesthesiology to the suit by Pacira Biosciences?

    The article (Hussain et al. Anesthesiology 2021;134:147-164) illustrates the risk of relying on peer review. The authors' meta-analysis has obvious flaws.

    They say that their meta-analysis used the Mantel-Haenszel random-effects method. As a practical matter, no such method exists; the Mantel-Haenszel method is only a fixed-effect method. The authors cite the 1986 paper by DerSimonian and Laird, which, as I recall, does not discuss a Mantel-Haenszel random-effects method. The method is, however, not quite non-existent. The authors report using the Cochrane Collaboration's Review Manager software, which has an option for random-effects meta-analysis that could be interpreted as the Mantel-Haenszel random-effects method. It differs from the DerSimonian-Laird method only in the fixed-effect weights used in the initial stage: DL uses inverse-variance weights, whereas the M-H option substitutes the weights from the Mantel-Haenszel fixed-effect analysis. The "Mantel-Haenszel random-effects method" has received little, if any, attention or evaluation in the meta-analysis literature. A technical document describing the statistical methods in RevMan says that the results of using the M-H option will differ little from those of the usual DL method.
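
    For concreteness, here is a minimal sketch of the DL pooling computation, assuming each study is summarized by an effect estimate (e.g., a log odds ratio) and its within-study variance; the function name and the five-study example are made up for illustration, and the comment marks the one place where the RevMan M-H option would differ.

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """DerSimonian-Laird random-effects pooling of study-level effects."""
        y = np.asarray(effects, dtype=float)
        v = np.asarray(variances, dtype=float)

        # Initial stage: fixed-effect (inverse-variance) weights.
        # The RevMan "Mantel-Haenszel random-effects" option would substitute
        # the M-H fixed-effect weights here; everything below is unchanged.
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)

        # Cochran's Q and the DL moment estimator of between-study variance.
        q = np.sum(w * (y - y_fixed) ** 2)
        df = len(y) - 1
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - df) / c)

        # Random-effects weights and pooled estimate.
        w_re = 1.0 / (v + tau2)
        pooled = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return pooled, se, tau2, q, df

    # Five hypothetical studies (log odds ratios and their variances):
    pooled, se, tau2, q, df = dersimonian_laird(
        [0.10, -0.30, 0.20, 0.05, -0.10], [0.04, 0.09, 0.06, 0.02, 0.05])
    print(f"pooled={pooled:.3f}, SE={se:.3f}, tau^2={tau2:.3f}")
    ```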

    In effect, then, Hussain et al. used the DL method, whose shortcomings have long been documented in the meta-analysis literature. A 2014 article by Cornell et al. (Random-effects meta-analysis of inconsistent effects: a time for change. Annals of Internal Medicine 160:267-270) points out that it can produce biased estimates with falsely high precision. Thus, a meta-analysis that used the DL method should receive careful scrutiny.

    In assessing heterogeneity of study-level effects, Hussain et al. used the popular I-squared statistic, usually interpreted as the percentage of the variability in effect estimates that is due to heterogeneity rather than sampling error. Unfortunately, results in the literature show that that interpretation is not valid.
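
    As a reminder of what is actually computed, the usual I-squared statistic is just a transformation of Cochran's Q (continuing the variables from the sketch above; this is the standard Higgins-Thompson formula, not anything specific to Hussain et al.):

    ```python
    def i_squared(q, df):
        """I^2 (percent), computed from Cochran's Q with df = k - 1."""
        return max(0.0, 100.0 * (q - df) / q)

    print(f"I^2 = {i_squared(q, df):.1f}%")
    ```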

    A thorough review should have called attention to these and other problems in the manuscript. In response to the litigation, it would have been reasonable for the editors of Anesthesiology to consult a group of experts in meta-analysis and to consider withdrawal of the article by Hussain et al. as a possible outcome.


    ------------------------------
    David C. Hoaglin
    Adjunct Professor
    Department of Population and Quantitative Health Sciences
    UMass Chan Medical School
    ------------------------------



  • 5.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-20-2022 15:36
    Hi David,

    Appreciate this. 

    Editors of Anesthesiology were named defendants in the Pacira lawsuit. I haven't looked beyond the judge's opinion into the arguments the various parties' lawyers raised in their briefs. But I suspect that what the defendants argued was similar to what the judge said in the opinion dismissing the case. They also issued press releases condemning the lawsuit. 

    I would certainly agree that there are many examples of bad statistical analyses published in peer-reviewed journals.

    However, whether a reputable journal should accept an analysis, and what standards it should use, is a different question from whether the court system should hold the authors civilly or criminally liable for it, and what standards it should use in doing so.






  • 6.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-21-2022 16:13
    Hi, Jonathan.

    I agree that the civil court system is not an appropriate place to resolve scientific disputes.

    My question, then, is: what have the editors of Anesthesiology done to attempt to settle the controversy by the methods of science? If they are taking no action, what explanation have they offered?

    I'm quoting your most recent message here, to avoid including the full thread.




  • 7.  RE: Pacira Biosciences: Should scientists be liable for a flawed statistical analysis?

    Posted 02-22-2022 07:59
    As a Biochemistry and Physics undergrad, most of what I learned about statistical analysis was either very simple or very wrong, compared to what I learned in my Linear Regression and Design of Experiments classes. So, what constitutes a flawed statistical analysis?

    For example, if someone uses a one-way ANOVA to test for differences between multiple groups, then uses 2-sample pooled t-tests to see if there are any differences between the groups AND doesn't use any corrections for multiple comparisons, is that a flawed statistical analysis? If so, every "Intro to Stats" textbook I have ever used does this. Many intro stats teachers and profs do this too.
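
    To make that concrete, here is a minimal simulation (my own toy example, not from any textbook) of the family-wise error rate when all pairwise pooled t-tests among five identical groups are run with no multiplicity adjustment:

    ```python
    import itertools
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_sims, n_groups, n_per_group, alpha = 2000, 5, 20, 0.05
    n_pairs = n_groups * (n_groups - 1) // 2  # 10 pairwise tests

    any_unadjusted = any_bonferroni = 0
    for _ in range(n_sims):
        # All groups drawn from the SAME distribution: no real differences.
        groups = [rng.normal(0.0, 1.0, n_per_group) for _ in range(n_groups)]
        pvals = [stats.ttest_ind(a, b).pvalue
                 for a, b in itertools.combinations(groups, 2)]
        any_unadjusted += min(pvals) < alpha            # no correction
        any_bonferroni += min(pvals) < alpha / n_pairs  # Bonferroni

    print(f"FWER, unadjusted: {any_unadjusted / n_sims:.2f}")  # well above 0.05
    print(f"FWER, Bonferroni: {any_bonferroni / n_sims:.2f}")  # about 0.05
    ```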

    During the Flint Water Crisis, the scientists who analyzed the data used something called the Q-test, a statistical procedure to determine if a sample result is "too much" of an outlier. It assumes the data come from a normal distribution. As taught in Analytical Chemistry, these outliers can be discarded. In that case, a couple of the readings should be thrown out. The remaining data show there isn't much to worry about.... except THERE WAS!!! The data should have been treated as, say, log-normal. But log-normal isn't taught in the sciences, nor in many intro stats classes. Even when it is, it gets maybe a section in a chapter and is treated as a side note or afterthought. When, in reality, it IS more important than the normal distribution for most real-world data.
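
    Here is a toy version of that failure mode (my own sketch, not the actual Flint data or analysis): apply Dixon's Q test for a high outlier to clean samples, once from a normal distribution and once from a log-normal one. The skewed data get their legitimate high readings "rejected" far more often than the nominal 5%.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    Q_CRIT_N10 = 0.466  # approximate 95% critical value for n = 10

    def dixon_q_high(x):
        """Dixon's Q statistic for the largest observation: gap / range."""
        s = np.sort(x)
        return (s[-1] - s[-2]) / (s[-1] - s[0])

    flag_norm = flag_lognorm = 0
    n_sims = 5000
    for _ in range(n_sims):
        flag_norm += dixon_q_high(rng.normal(0.0, 1.0, 10)) > Q_CRIT_N10
        flag_lognorm += dixon_q_high(rng.lognormal(0.0, 1.0, 10)) > Q_CRIT_N10

    print(f"'outliers' flagged, normal data:     {flag_norm / n_sims:.2f}")     # ~0.05
    print(f"'outliers' flagged, log-normal data: {flag_lognorm / n_sims:.2f}")  # much higher
    ```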

    Having worked in chemistry labs for 10+ years, most of the data we generated would be considered "repeated measures," not true replicates as we (statisticians) like to define them. At best, most of the data are NOT independent. But they will use 2-sample t-tests on it anyway. And when we (statisticians) analyze their data, we rarely think of it as repeated measures or highly correlated either.

    Having worked in chemistry labs for 10+ years, the version of "quality control" we had in our labs was poor, at best. One QC manager, I call him a QC mangler, believed that if our CCV (Calibration Curve Verification) and IS (Internal Standard) were between the lines, everything was fine. Had another QC mangler, at a different facility, who felt the same way. Our QC criterion was, "If the reported concentration of the IS is between 70% and 130%, the sample passes QC protocols." When the reported concentration of the IS goes from 72% to 128% over the course of, say, 15 samples, in a very linear way, it "meets" the QC criteria. So, even if the samples were independently made and run on the instrument, because of the systematic bias induced by the instruments, they are no longer "independent" samples.
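
    A toy illustration of that point (made-up numbers, not real lab data): a recovery drifting linearly from 72% to 128% passes a point-wise 70-130% rule on every run, yet consecutive runs are perfectly correlated, so the samples are anything but independent.

    ```python
    import numpy as np

    recovery = np.linspace(72.0, 128.0, 15)  # % IS recovery, in run order
    passes_qc = bool(np.all((recovery >= 70) & (recovery <= 130)))

    # Lag-1 autocorrelation of recoveries: exactly +1 for a pure linear drift.
    r = np.corrcoef(recovery[:-1], recovery[1:])[0, 1]
    print(f"all 15 runs pass 70-130% QC: {passes_qc}; lag-1 correlation: {r:.2f}")
    ```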

    On those same instruments, if I have a sample with a reported "toxic level" of some chemical and, say, the IS reports 102%, then I rerun that sample twice in a row the next day and the IS reports, say, 74% and 77% and the reported level does not meet that "toxic" threshold, it is common to report all 3 results, but claim that we did 2 "confirmation analyses" that showed the level wasn't "toxic" after all. Just close.

    As a graduate student in the sciences, we had to read articles from different scientists on different topics. In one article, an author used multiple simple linear regressions on their data instead of one multiple linear regression. So, instead of Y = F(X1, X2, X3, ...), they had Y = F(X1), Y = F(X2), Y = F(X3), etc.
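
    For anyone who hasn't seen why that matters, here is a minimal sketch with made-up data: when X1 and X2 are correlated, the separate simple regression for X2 absorbs the effect of X1 and reports a large "effect" for a variable that has none.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)  # x2 correlated with x1
    y = 2.0 * x1 + rng.normal(size=n)         # x2 has NO effect on y

    # Separate simple regression for x2 (slope = cov / var): badly biased.
    b2_simple = np.cov(x2, y)[0, 1] / np.var(x2, ddof=1)

    # One multiple regression via least squares: slope for x2 near zero.
    X = np.column_stack([np.ones(n), x1, x2])
    b_multi = np.linalg.lstsq(X, y, rcond=None)[0]

    print(f"simple-regression slope for x2:   {b2_simple:.2f}")   # ~1.6
    print(f"multiple-regression slope for x2: {b_multi[2]:.2f}")  # ~0.0
    ```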

    The director of that program denied my thesis proposal, one that used factorial designs to optimize the extraction efficiency of a chemical extraction, because, "You took all these stats classes and you STILL DON'T KNOW that you simply CANNOT CHANGE MORE THAN ONE THING AT A TIME DURING AN EXPERIMENT!!!!" And yes, he did yell that.
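
    For the record, the kind of design he was yelling about is entirely standard. A minimal sketch of a 2^3 full factorial (the factor names and levels here are hypothetical, not my actual proposal): all three factors are varied together, and 8 runs support estimates of every main effect and interaction.

    ```python
    import itertools

    # Hypothetical extraction factors, each at a low and a high level.
    factors = {"solvent_ratio": (0.5, 2.0),
               "temperature_C": (25, 60),
               "time_min": (10, 30)}

    # Full 2^3 factorial: every combination of levels, 8 runs in all.
    for run_no, levels in enumerate(itertools.product(*factors.values()), 1):
        print(run_no, dict(zip(factors, levels)))
    ```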

    In all of these cases, scientists didn't use proper statistical methodologies, simply because they were never taught how. Or worse, were taught they couldn't do something, even though they can.

    Should scientists be held liable? Or should those that looked the other way, never bothered to teach the right thing, never bothered to correct the scientists, and blindly accepted their data as "good quality" without even bothering to check, be held liable?

    Just so we are clear, I was fired from 2 of those chemistry jobs, kicked out of the graduate program, and forced to give a highly flawed departmental final exam and had several classes taken away from me at that same university. I've stood up for good statistical analyses and statistical procedures. It's brought nothing but unemployment, pain and misery.

    ------------------------------
    Andrew Ekstrom

    Statistician, Chemist, HPC Abuser;-)
    ------------------------------