4/12/17
Dear Dr. Greenland,
Regarding your response to my comment on "Results-Blind Manuscript Evaluation," thank you for your interesting remarks.
You note: "...I agree with all said but for one important detail: the use of power to judge the quality of the study. We all seem to recognize the damage inflicted by NHST [Null Hypothesis Significance Testing], yet many continue to endorse the NHST paradigm via emphasis on power. ... Small, 'underpowered' studies can contribute high-quality data to the overall pool of evidence...".
You make some valid points, especially that any journal requirement that a power analysis be done can be seen as presupposing & implicitly endorsing the use of NHST, although using power analysis to plan for the precision of confidence intervals may rest on more solid ground. Nevertheless, I would say that, all else being equal, if two manuscripts are competing for the same limited space in a journal, the one with the larger sample size (and/or smaller error variance) should probably be given priority.
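To make concrete the idea of planning for confidence-interval precision rather than for a significance test, here is a minimal sketch (my own illustration, not part of our exchange; it assumes a known standard deviation and a normal approximation for a single mean) of choosing a sample size to achieve a target CI half-width:

```python
import math
from statistics import NormalDist

def n_for_ci_halfwidth(sigma, halfwidth, conf=0.95):
    """Smallest n so the two-sided CI for a mean has the desired
    half-width, assuming known SD sigma (normal approximation)."""
    # two-sided critical value, e.g. ~1.96 for 95% confidence
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return math.ceil((z * sigma / halfwidth) ** 2)

# e.g., SD of 10 and a desired 95% CI half-width of 2 units:
print(n_for_ci_halfwidth(10, 2))
```

The point of such a calculation is that it targets how informative the study will be (the width of the interval), with no reference to a null hypothesis or a significance threshold.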
But in any case, the overriding point I'm trying to make is that good methodology should be emphasized over what the results of a study are. To put it more strongly, what the results turned out to be should generally be given NO weight at all in the decision as to whether to publish. However, I can't predict all possible contingencies. In the rare case in which the outcome of a study is considered of such importance (e.g., scientific, clinical, or ethical) that it should be given weight in the publication decision, the authors can make that case in their cover letter to the editor, or perhaps in the Introduction or Methods sections, & the editor can decide on a case-by-case basis. But even then, I would ask: should an "important" result be reported as a research finding if the methodology is judged questionable in peer review? One could argue that its very importance necessitates especially stringent methodological scrutiny. A cost-benefit analysis of false positive/negative results might be relevant in such a case.
If our focus is on methodology and not results, many important questions raised by statisticians regarding the overuse & misuse of p values/NHST, the use of power analysis, Bayesian statistics, reproducibility, confidence intervals, etc., would be settled, or at least fought, on the playing field of methodology, as they should be, blind to what the results are. If someone wants to argue that NHST/p values, or any other statistical method for that matter, is valid for their particular study, alone or in conjunction with a proposed ensemble of methods, let them do so in the Statistical Analysis subsection of the Methods section, and let the reviewers then decide, not knowing how the results turned out. Reviewers, and therefore authors, would then have to be concerned with whether the use of NHST & p values is valid in a particular application (or valid among an ensemble of complementary methods); authors would have to defend that use, and no one would be concerned at all with what the p value turned out to be, e.g., whether it fell below or above an arbitrary 0.05 cutoff.
BTW, you also say: "...Simply mandating confidence intervals is not enough... they further need correct, direct interpretation as indicators of the precision or information content of the study, rather than as null tests... This problem is not addressed by blind analysis...". That's right: interpretation of results appears primarily in the Discussion section. But if the paper has passed the Stage 1 screening of methodology & appropriate statistical techniques, any needed corrections of interpretation will hopefully be raised by a reviewer in the Stage 2 review/editing of the Results & Discussion. It seems unlikely that a manuscript judged acceptable in the Stage 1 methodology review, but later found to contain an apparently egregious error of interpretation in the Discussion, would have to be rejected; that would happen only if the authors refused to modify, or failed to adequately justify, the criticized interpretation in response to what would be relatively minor revision requests in an otherwise good paper.
Joseph J. Locascio, Ph.D.,
Biostatistician,
Neurology Dept.,
Massachusetts General Hospital,
Boston, Massachusetts
Phone: (617) 724-7192
Email: JLocascio@partners.org