ASA Connect


How to upset the statistical referee?

  • 1.  How to upset the statistical referee?

    Posted 01-28-2016 02:39
    Hi all,
    I am preparing a talk for February 15th, based on an article by Dr. Bland. For those of you with experience reviewing articles, especially medical articles and clinical trials: what upsets you most, even to the point of rejecting a submission?

    Thanks, and have a good day.


  • 2.  RE: How to upset the statistical referee?

    Posted 01-29-2016 01:50

    Hi,

    What frustrates me most is raising an issue of concern in a review, spending extra time supporting that concern with relevant citations, and then having the authors respond by ignoring or dismissing it as unimportant. I had an experience recently with a manuscript that used latent variable modeling on data collected from students nested within schools, but did not treat the data as multilevel. I pointed out to the authors how their approach could result in inflated Type 1 error rates for statistical tests, provided relevant references, and suggested two different ways they could address the issue in the specialized software program they had used to run their analyses. I even contacted the software vendor to make sure my suggestions could be implemented in the version of the software the authors had used, and I checked with a knowledgeable colleague to make sure my recommendations were sound. Yet the authors still refused to address the multilevel nature of the data.

    There were three rounds of review. In my final review, I suggested that the editor seek an additional opinion from a specific multilevel latent variable modeling expert located in the authors' country, and invite the authors to defend their assertion that considering the multilevel structure of the data was unnecessary, either via citations to the relevant literature or via simulations of their own demonstrating empirically that multilevel techniques were not needed. That was the last I heard of the manuscript, but it was frustrating to review the paper three separate times and never have my concerns addressed.

    To be clear, I wasn't insisting that they use multilevel techniques. Rather, I was asking that they either follow the recommendations in the literature for best practices for the analysis of clustered data, which is to take that clustering into account in the analysis process, or to demonstrate empirically with simulation evidence why they would not need to follow the standard best practices in this particular instance, but they seemed unwilling to do either. That was upsetting.
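    The Type 1 error inflation from ignoring clustering is easy to demonstrate with a quick simulation. This is a minimal sketch, not from any actual manuscript: the cluster counts, cluster-effect standard deviation, and replication number below are made-up values chosen to make the inflation visible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def naive_rejection_rate(n_clusters=20, n_per=25, cluster_sd=1.0, reps=2000):
    """Simulate a null treatment effect on data with cluster-level
    random intercepts, then test it with an ordinary two-sample
    t-test that ignores the clustering. Returns the fraction of
    p-values below 0.05 (nominally this should be about 0.05)."""
    rejections = 0
    for _ in range(reps):
        u = rng.normal(0.0, cluster_sd, n_clusters)                # cluster effects
        y = (u[:, None] + rng.normal(0.0, 1.0, (n_clusters, n_per))).ravel()
        arm = np.repeat(np.arange(n_clusters) % 2, n_per)          # arms assigned by cluster
        _, p = stats.ttest_ind(y[arm == 0], y[arm == 1])
        rejections += p < 0.05
    return rejections / reps

rate = naive_rejection_rate()
print(rate)  # far above the nominal 0.05
```

    With these settings the intraclass correlation is 0.5, and the naive test rejects the true null far more often than 5% of the time; a multilevel model or cluster-robust standard errors would restore the nominal rate.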

    A much smaller pet peeve for me, but one that crops up with surprising frequency in the papers I've reviewed, is forgetting to list the sample sizes in tables, whether in the title, the table body, or a table footnote.

    Tor Neilands

    ------------------------------
    Torsten Neilands
    Professor of Medicine
    UCSF Center for AIDS Prevention Studies



  • 3.  RE: How to upset the statistical referee?

    Posted 01-29-2016 05:07

    Hi, Eduardo.

    Naturally, if the statistical methods are not appropriate or the analysis is not done well, I don't give a favorable report.  Usually the authors should have an opportunity to correct such problems. If they do not, a rejection should result.

    A common failing: inadequate documentation of the statistical analysis.  I expect authors to follow the guidance of the International Committee of Medical Journal Editors: Describe the statistical methods in sufficient detail that a knowledgeable reader with access to the original data could  verify the results.  I do not expect the article to have space for those details, but these days it should be possible to put them in a supplemental file.  Without such information the article is incomplete.  Research should be reproducible.

    Regards,

    ------------------------------
    David Hoaglin



  • 4.  RE: How to upset the statistical referee?

    Posted 01-29-2016 07:22

    Eduardo, I'm not sure what upsets me most, but high on the list would be the Bayesian interpretations of frequentist results: pretending a p-value is a posterior probability of the tested hypothesis, a conditional power is a predictive power, a confidence interval is a credible interval, & so on.

    That said, in pharmaceutical statistics, I now see fewer such misinterpretations, which may be both a cause & an effect of the more recent use of, & familiarity with, Bayesian inference.
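    The gap between a p-value and a posterior probability can be shown with a small simulation. This is a sketch with invented settings (half the tested hypotheses are true nulls, and the alternative is a modest effect): among results that are "just significant" at the 5% level, the tested null turns out to be true far more than 5% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps, n = 20000, 50

h0_true = rng.random(reps) < 0.5            # half the nulls are true
mu = np.where(h0_true, 0.0, 0.5)            # modest true effect otherwise
x = rng.normal(mu[:, None], 1.0, (reps, n))
t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))
p = 2 * stats.t.sf(np.abs(t), df=n - 1)

# Among "just significant" results, how often is H0 actually true?
band = (p > 0.04) & (p < 0.05)
frac_null = h0_true[band].mean()
print(frac_null)  # far larger than 0.05
```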

    ------------------------------
    Andrew Hartley
    Associate Statistical Science Director
    PPD, Inc.



  • 5.  RE: How to upset the statistical referee?

    Posted 01-29-2016 07:32

    It is always important to understand what the study was about, and whether there are any covariates that may have affected it. You mentioned it is an article, so I'm assuming it is published work. I think it's fine to give your honest opinion of the conducted study, and if there are specifics you are not on board with, point them out in your presentation. It may also be of interest to describe challenges with the analysis and how they were handled.

    Good luck!

    ------------------------------
    Valeriia Sherina



  • 6.  RE: How to upset the statistical referee?

    Posted 01-29-2016 08:03

    I've encountered a few situations where the Editor-in-Chief of a medical journal wants the paper reviewed even though the design of the study is clearly flawed (i.e., apples vs. oranges) and no covariate adjustment could save it! It appears that there may be bias at a higher level: for political reasons the journal may want to publish the findings, or the opposite may also be true.

    Also, please provide the reference for the particular Bland article--thanks!

    Kathy

    ------------------------------
    Katherine Freeman
    President
    EXTRAPOLATE



  • 7.  RE: How to upset the statistical referee?

    Posted 01-29-2016 09:05

    Hello Eduardo,

    My brother and I are both statisticians, but he (unlike me) does a great deal of reviewing of medical research journal submissions.  We often "talk statistics," and he seems to have three common refrains about this reviewing.  First, he often rejects articles because the researchers used a statistical technique or tool that was inappropriate for the circumstances, the assumptions, or the level of measurement of the data.  Second, when this is done, it also seems to go hand in hand with poor writing.  His third common complaint is that when predictive models are used, the researchers will tend to claim incredible predictive power without having used even a small hold-out sample with which to actually test the model's validity or predictive power.  Sorry to say that I know he has other common complaints, but they do not come to mind right now.

    ------------------------------
    Gretchen Donahue
    ------------------------------


  • 8.  RE: How to upset the statistical referee?

    Posted 01-29-2016 09:13

    One thing I do not appreciate is the use of subjective adjectives, for example: the result was HIGHLY statistically significant.  I prefer just saying statistically significant and giving the p-value so the reader can make up his or her own mind.  Also, instead of using notation like NS for not significant, or just saying not significant, I would give the exact p-value: there is a difference in possible interpretation between a p-value of 0.07 and one of 0.77.  Both are not significant at the 5% level, but I like to see the exact value.  If you are testing multiple hypotheses or multiple drug groups, you should address issues such as multiplicity and multiple comparisons.  Conclusions need to be consistent with the results.  The article should be well written: check grammar, typos, etc.  A description of the statistical methods used and a clear description of the experiment are a must.  What is the main hypothesis, and how is it being measured?  Good luck with your submission.
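    On multiplicity, a step-down adjustment is straightforward to apply. Here is a minimal sketch of the Holm correction (the p-values in the example are invented for illustration):

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values: sort ascending, multiply the
    k-th smallest of m p-values by (m - k + 1), cap at 1, and take a
    running maximum so the adjusted values stay monotone."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    scaled = np.minimum((m - np.arange(m)) * p[order], 1.0)
    adj = np.empty(m)
    adj[order] = np.maximum.accumulate(scaled)
    return adj

# Adjusted values: 0.04, 0.09, 0.09, 0.30
print(holm_adjust([0.01, 0.04, 0.03, 0.30]))
```

    An adjusted p-value below 0.05 then controls the familywise error rate at 5% across all four comparisons.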

                      Steve

    ------------------------------
    Steve Ascher
    Sr. Director, Biostatistics
    Janssen Research & Development



  • 9.  RE: How to upset the statistical referee?

    Posted 01-29-2016 10:39

    Having served as a reviewer for several different journals in the last few years, here are some things I have observed.

    1. Interpreting the strength of an association based on how low the p-value is.  This is a very common mistake, especially in clinical papers. 

    2. Related to my point in #1, many papers still lack the reporting of effect sizes when comparing groups on continuous variables.  I will say that papers submitted to psychology journals tend to be a little bit better about reporting a Cohen's d value with a t-test or an eta-squared with an ANOVA, but this practice is still lacking in other medical and health disciplines.

    3. In studies that use linear regression models, I often see beta values reported, but not their corresponding confidence intervals.

    4. Diagnostic accuracy studies that report sensitivity, specificity, and AUC values often neglect to report the confidence intervals of these values.
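    On points 3 and 4, reporting the interval is usually little extra work. As one sketch (with hypothetical counts, not from any study), a Wilson score interval for a reported sensitivity:

```python
import numpy as np
from scipy import stats

def wilson_ci(successes, n, conf=0.95):
    """Wilson score interval for a binomial proportion, e.g. a
    reported sensitivity or specificity."""
    z = stats.norm.ppf(1 - (1 - conf) / 2)
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical study: 45 of 50 diseased cases test positive
lo, hi = wilson_ci(45, 50)
print(f"sensitivity 0.90, 95% CI ({lo:.3f}, {hi:.3f})")
```

    The same function covers specificity; for an AUC, a DeLong-type interval is the usual choice.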

    Good luck in your talk!

    Mike

    ------------------------------
    Mike Malek-Ahmadi
    Banner Alzheimer's Institute



  • 10.  RE: How to upset the statistical referee?

    Posted 01-29-2016 16:18

    Eduardo - 

    I have refereed a number of articles, theses, and one or two dissertations on survey statistics.  What bothers me is for an author to mechanically derive equations without any indication of the logic behind their work.  I'd like to know: Why are you doing this?  How does this relate to your goal?  What is your foundation?  What is your motivation?  How do I know that your basic idea isn't logically flawed?  I suppose such a concern for clarity carries over to all areas of statistics, and basically to all scientific papers, or to any writing at all, even a short story or novel.  What story are you trying to tell, and what is the foundation for that story? 

    Too many authors of statistics articles may just grind out equations without thinking about what they mean, and how they should be interpreted.  Can this approach possibly be correct?   What is my logic for doing this?  When authors are not clear on this, it shows. 

    Best wishes.  

    ------------------------------
    James Knaub
    Lead Mathematical Statistician
    Retired



  • 11.  RE: How to upset the statistical referee?

    Posted 02-01-2016 10:00
    My personal favorite review experience was a manuscript that explained in passing that a 95% confidence interval was an interval that contained 95% of the data. This was not the only problem with the manuscript, as you might imagine.
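    For what it's worth, the difference is easy to demonstrate: a 95% confidence interval for a mean covers the true mean in about 95% of repeated samples, but it typically contains far less than 95% of the data. A quick simulation sketch (all the numbers below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu, sigma, n, reps = 10.0, 2.0, 30, 2000

covers_mean = 0
frac_data_inside = []
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    se = x.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(0.975, n - 1)
    lo, hi = x.mean() - tcrit * se, x.mean() + tcrit * se
    covers_mean += lo <= mu <= hi                       # did the CI catch the true mean?
    frac_data_inside.append(((x >= lo) & (x <= hi)).mean())  # data points inside the CI

coverage = covers_mean / reps
data_inside = float(np.mean(frac_data_inside))
print(coverage)     # close to 0.95: coverage of the true MEAN
print(data_inside)  # far below 0.95: share of the DATA inside the CI
```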