I do not think a free society will ever want to criminalize what legislators or putative experts regard as bad science, though marketing products or securing funding for research while knowingly relying on or employing bad science may often be fraud.
Subgroup effects, at least those involving binary outcomes, are an especially problematic area for criminalizing bad science. Consider an intervention that improves cancer survival. Typically, it will cause a larger proportionate decrease in mortality in the group with the lower baseline mortality rate (e.g., young subjects) while causing a larger proportionate increase in survival in the group with the higher baseline mortality rate (e.g., older subjects). In fact, it would be remarkable for this not to happen whenever an intervention has a substantial effect and the groups being compared have substantially different baseline mortality/survival rates. Yet, so far as the published record reveals, few if any persons analyzing subgroup effects are aware that it is even possible for one group to experience the larger proportionate benefit from an intervention with respect to the decrease in mortality while the other group experiences the larger proportionate benefit with respect to the increase in survival.
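A minimal numerical sketch of the point above, with invented rates (10% and 50% baseline mortality are illustrative assumptions, not data from any trial):

```python
# Hypothetical illustration (numbers invented): an intervention lowers
# mortality in two groups with different baseline mortality rates.
young_before, young_after = 0.10, 0.05   # mortality: 10% -> 5%
old_before, old_after = 0.50, 0.30       # mortality: 50% -> 30%

def rel_change(before, after):
    """Proportionate change relative to the baseline rate."""
    return (after - before) / before

# Proportionate change in mortality (the adverse outcome).
young_mort = rel_change(young_before, young_after)   # -0.50 (50% decrease)
old_mort = rel_change(old_before, old_after)         # -0.40 (40% decrease)

# Proportionate change in survival (the opposite side of the dichotomy).
young_surv = rel_change(1 - young_before, 1 - young_after)  # 0.90 -> 0.95
old_surv = rel_change(1 - old_before, 1 - old_after)        # 0.50 -> 0.70

print(f"mortality change: young {young_mort:+.1%}, old {old_mort:+.1%}")
print(f"survival change:  young {young_surv:+.1%}, old {old_surv:+.1%}")
# The young group shows the larger proportionate mortality reduction (50% vs 40%),
# while the old group shows the larger proportionate survival increase (40% vs ~5.6%).
```

Each group thus shows the "larger" benefit, depending solely on which side of the mortality/survival dichotomy one examines.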
Further, subgroup analyses involving binary outcomes usually are premised on the belief that, absent a subgroup effect, a factor affecting an outcome will cause the same proportionate change in different groups' baseline rates for that outcome (or, more precisely, for whichever side of the favorable/adverse outcome dichotomy one happens to be looking at). Yet such an expectation is illogical: when two groups have different baseline rates for an outcome, a factor cannot cause equal proportionate changes in their rates for that outcome while also causing equal proportionate changes in their rates for the opposite outcome. Whenever a factor causes equal proportionate changes in two groups' different baseline rates for an outcome, it necessarily causes unequal proportionate changes in their rates for the opposite outcome.
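The impossibility is easy to verify arithmetically. In this sketch (baseline rates invented for illustration), both groups experience the identical 50% proportionate reduction in mortality, yet their proportionate increases in survival necessarily differ:

```python
# Hypothetical numbers: two groups with different baseline mortality rates,
# each experiencing the SAME 50% proportionate reduction in mortality.
baselines = {"group A": 0.10, "group B": 0.50}
for name, before in baselines.items():
    after = before * 0.5  # equal proportionate mortality reduction in both groups
    surv_change = ((1 - after) - (1 - before)) / (1 - before)
    print(f"{name}: mortality {before:.0%} -> {after:.0%}, "
          f"survival change {surv_change:+.1%}")
# group A: survival 90% -> 95%, roughly +5.6%
# group B: survival 50% -> 75%, +50.0%
# Equal proportionate changes on one side of the dichotomy force unequal
# proportionate changes on the other side whenever baseline rates differ.
```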
Regarding a related matter, using an effect observed in a clinical trial to calculate the number needed to treat where baseline rates differ from those in the trial commonly involves applying the observed relative change in the outcome rate being examined to the baseline rate for the same outcome (i.e., favorable or adverse) in the patient's group. I assume the same holds for the number needed to harm. But virtually unknown to physicians is that one will always derive different, and often dramatically different, numbers needed to treat (harm) depending on whether one employs the observed relative effect on the favorable outcome or the observed relative effect on the adverse outcome. In such circumstances, a patient who experienced adverse consequences of an intervention undertaken because of a recommendation that was more optimistic than a reasonable approach to calculating the number needed to treat (harm) would have supported ought to have a cause of action for negligence. The physician, of course, would have certain defenses, including reliance on sources of presumptive expertise. Possibly those sources should also be liable civilly.
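A sketch of the divergence, with invented trial and patient-group rates (20%/10% trial mortality and a 40% patient-group baseline are assumptions chosen only to make the arithmetic visible):

```python
# Hypothetical trial (numbers invented): control mortality 20%, treated 10%.
trial_ctrl_mort, trial_trt_mort = 0.20, 0.10
rr_mortality = trial_trt_mort / trial_ctrl_mort              # 0.5
rr_survival = (1 - trial_trt_mort) / (1 - trial_ctrl_mort)   # 0.90 / 0.80 = 1.125

# Apply each relative effect to a patient group with 40% baseline mortality.
patient_mort = 0.40

# (a) Apply the relative effect on the ADVERSE outcome (mortality).
mort_a = patient_mort * rr_mortality            # 0.40 * 0.5 = 0.20
nnt_a = 1 / (patient_mort - mort_a)             # 1 / 0.20 = 5

# (b) Apply the relative effect on the FAVORABLE outcome (survival).
surv_b = (1 - patient_mort) * rr_survival       # 0.60 * 1.125 = 0.675
nnt_b = 1 / (surv_b - (1 - patient_mort))       # 1 / 0.075 ~= 13.3

print(f"NNT via mortality effect: {nnt_a:.1f}")   # 5.0
print(f"NNT via survival effect:  {nnt_b:.1f}")   # ~13.3
```

The same trial result thus yields an NNT of 5 or roughly 13 for the same patient group, depending solely on which side of the dichotomy the relative effect is taken from.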
But criminalization is another matter. Further, legislation about perceived bad science may inadvertently validate science that is at least as bad.
As suggested in my November 2016 Comments for the Commission on Evidence-Based Policymaking, my October 2015 letter to the American Statistical Association, and my "Race and Mortality Revisited," Society (July/Aug. 2014), I consider it manifestly inappropriate, probably seriously negligent, for observers to discuss effects of policies on some measure of demographic difference without consideration of the way the measure tends to be affected by the prevalence of an outcome (especially in circumstances where there is no mention that different measures yield opposite conclusions). The same holds for any discussion of reasons for a perceived subgroup effect without consideration of the same issues. But, apart from possible fraud in certain circumstances where the discussants do or should know better, and when the discussants are promoting products or seeking funding, criminalization seems inappropriate.
Further, as discussed in each of these items, Congress is a manifestly innumerate entity, as reflected by, among other things, its mistaken belief that reducing adverse outcomes tends to reduce, rather than increase, relative racial and other demographic differences in rates of experiencing the outcomes. See "Innumeracy at the Department of Education and the Congressional Committees Overseeing It," Federalist Society Blog (Aug. 24, 2017). One should be cautious in urging it to consider subjects that it may not be able to understand.
------------------------------
James Scanlan
James P. Scanlan Attorney At Law
------------------------------
Original Message:
Sent: 03-03-2020 17:19
From: Jonathan Siegel
Subject: Legal status of reporting post-hoc subgroup results
Some time ago, we discussed the question whether reporting results of a post-hoc subgroup analysis without clearly indicating the nature of the analysis performed and the exploratory nature of the results is illegal and whether it is or should be made a crime.
At the time, my view was that, while the practice can be highly misleading and lead people to act on error and noise, it is unfortunately so entrenched that as a pragmatic matter it would probably need to be criminalized explicitly. We are positively inundated with chance phenomena presented as meaningful, and with subgroups presented as significant when they were actually selected from large families of candidate subgroups in a manner that goes unreported. To pick one example of many, look at how financial funds are rated. We see lots of funds with good-looking 5-year performance ratings. But the only funds that receive 5-year performance ratings are those that have survived 5 years, and the only funds that survive 5 years are those that have performed reasonably well, quite possibly by chance. We never see the funds that were closed.
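A toy simulation of the fund example, under assumptions invented for illustration (pure-noise returns, a fund closing after any losing year): even when every fund's true expected return is zero, the survivors look skilled.

```python
import random

random.seed(0)

def simulate(n_funds=10_000, n_years=5, close_below=0.0):
    """Return the mean annual 'return' of each fund that survives all years.

    Returns are pure noise (mean 0); a fund closes if any single year's
    return falls below close_below, so only lucky funds get a 5-year rating.
    """
    survivors = []
    for _ in range(n_funds):
        yearly = [random.gauss(0.0, 0.10) for _ in range(n_years)]
        if min(yearly) >= close_below:
            survivors.append(sum(yearly) / n_years)
    return survivors

survivors = simulate()
avg = sum(survivors) / len(survivors)
print(f"{len(survivors)} of 10000 funds survive; "
      f"mean annual return among survivors: {avg:+.1%}")
# Every fund's true expected return is zero, yet the surviving funds show
# a clearly positive average return: selection, not skill.
```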
While I don't know the outcome of the upcoming elections, it is possible that we will have a Congress interested in revisiting and clarifying the trade practice laws.
If so, this might be the sort of thing that the ASA should take a policy position on.
I understand that the ASA does not like to take positions on statistical practice. But there are certain practices that are so egregious that explicit clarification of the law might be in everyone's interest.
------------------------------
Jonathan Siegel
Director Clinical Statistics
------------------------------