Hi,
When I review a manuscript, raise a concern, and spend extra time documenting the reasons for that concern with relevant citations, it is frustrating to have the authors respond by ignoring the concern or dismissing it as unimportant. I had an experience recently with a manuscript that collected data from students nested within schools for latent variable modeling but did not treat the data as multilevel. I pointed out to the authors how their approach could result in inflated Type I error rates for their statistical tests, provided relevant references, and suggested two different ways they could address the issue using the specialized software program they had used to run their analyses. I even contacted the software vendor to confirm that my suggestions could be implemented in the version of the software the authors had used, and I checked with a knowledgeable colleague to make sure my recommendations were sound. Yet the authors still refused to address the multilevel nature of the data. There were three rounds of review, and in my final review I suggested that the editor seek an additional opinion from a specific multilevel latent variable modeling expert located in the authors' country, and that the editor invite the authors to defend their assertion that accounting for the multilevel structure of the data was unnecessary, either with citations to relevant literature or with simulations of their own demonstrating empirically that multilevel techniques were not needed for their data. That was the last I heard of the manuscript, but it was frustrating to review the paper three separate times and not have my concerns addressed.
To be clear, I wasn't insisting that they use multilevel techniques. Rather, I was asking that they either follow the best-practice recommendations in the literature for the analysis of clustered data, which are to take that clustering into account in the analysis, or demonstrate empirically with simulation evidence why the standard best practices were unnecessary in this particular instance. They seemed unwilling to do either, which was upsetting.
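For list members who haven't seen this phenomenon firsthand, a toy simulation makes the Type I error inflation easy to demonstrate. The sketch below uses illustrative assumptions (30 schools of 20 students, a school-level predictor with no true effect, and a school random intercept equal in variance to the student-level residual); it is not the design from the manuscript I reviewed. It compares a naive student-level OLS t-test that ignores clustering against a simple cluster-means analysis:

```python
import numpy as np

def ols_t(x, y):
    """t statistic for the slope in a simple OLS regression of y on x."""
    n = len(x)
    xc = x - x.mean()
    yc = y - y.mean()
    b = (xc @ yc) / (xc @ xc)          # slope estimate
    resid = yc - b * xc
    s2 = (resid @ resid) / (n - 2)     # residual variance
    se = np.sqrt(s2 / (xc @ xc))       # naive (independence-assuming) SE
    return b / se

def rejection_rates(n_sims=2000, n_clusters=30, cluster_size=20, seed=0):
    """Simulate clustered data with NO true effect of a cluster-level
    predictor and return the empirical Type I error rate of (a) a naive
    student-level OLS t-test and (b) an OLS t-test on cluster means."""
    rng = np.random.default_rng(seed)
    naive_rej = agg_rej = 0
    n_total = n_clusters * cluster_size
    for _ in range(n_sims):
        x_cl = rng.normal(size=n_clusters)       # school-level predictor
        u = rng.normal(size=n_clusters)          # school random intercepts
        x = np.repeat(x_cl, cluster_size)
        y = np.repeat(u, cluster_size) + rng.normal(size=n_total)  # no x effect
        # (a) naive analysis at the student level, independence assumed
        if abs(ols_t(x, y)) > 1.96:
            naive_rej += 1
        # (b) aggregate to school means -- one defensible fix when the
        # predictor is purely cluster-level
        y_bar = y.reshape(n_clusters, cluster_size).mean(axis=1)
        if abs(ols_t(x_cl, y_bar)) > 2.048:      # t critical value, df = 28
            agg_rej += 1
    return naive_rej / n_sims, agg_rej / n_sims

naive, aggregated = rejection_rates()
print(f"naive Type I error: {naive:.3f}, cluster-means Type I error: {aggregated:.3f}")
```

With these settings the design effect is roughly 1 + (20 - 1) x 0.5 = 10.5, so the naive test rejects a true null far more often than the nominal 5%, while the cluster-means analysis stays near 5%. This is exactly the kind of evidence one could ask authors to produce, in the other direction, if they believe clustering is ignorable for their data.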
A much smaller pet peeve for me, but one that crops up with surprising frequency in the papers I've reviewed, is forgetting to list the sample sizes in tables, whether in the table title, the table body, or a table footnote.
Tor Neilands
------------------------------
Torsten Neilands
Professor of Medicine
UCSF Center for AIDS Prevention Studies
Original Message:
Sent: 01-28-2016 02:39
From: Eduardo Castanon
Subject: How to upset the statistical referee?
Hi guys
I am preparing a talk for February 15th based on an article by Dr Bland. If you have experience reviewing articles, especially medical articles, medical trials... what upsets you the most, even to the point of rejecting a paper?
Thanks and have a good day