As I'm sure we are all aware, the peer-review process is ripe for improvement. In particular, the handling of statistical analyses by many scientists is based more on convenience than expertise. Nevertheless, there is a world of difference between a process that needs improvement and a process that is useless or "doesn't work".
Sometimes, I almost get the impression that the peer-review process is expected to produce error-free publications. Such an expectation is unrealistic and perhaps even damaging to scientific institutions that are, however flawed, engaging in good research.
In any case, the reasons for hope are plentiful. To offer but one example:
AAAS Peer-Review Evaluation
------------------------------
Jamie Farren
Earth Science SI & Research
West Texas A & M University
Original Message:
Sent: 11-29-2016 09:17
From: Sidney Young
Subject: This New Study May Explain Why Peer Review in Science Often Fails
It is not a question of peer review often failing. It fails at least 50% of the time. In 1988, two papers appeared showing that, across more than 50 scientific questions, there were roughly equal numbers of papers on each side of each issue; by my count, it worked out to about 2.4 versus 2.6 papers per question. All of those papers were peer reviewed. In certain areas of science, once a paradigm is established, it can be very difficult to get an anti-paradigm paper published. Peer review can become peer censorship. Government funding can flow to one side of a question, and publication bias compounds the problem. I will read this paper, but I am not hopeful that there is a simple solution.
Readers should add to a list of things that can help produce valid claims. I will start the list with data access:
1. The analysis data set should be built in a way that allows it to be placed in a public repository. For example, micro-aggregate human data so that personal identities are protected. The analysis data set should also be submitted to whoever funded the work.
2. ....
------------------------------
Sidney Young
Retired
Original Message:
Sent: 11-28-2016 12:00
From: Kelly Zou
Subject: This New Study May Explain Why Peer Review in Science Often Fails
This New Study May Explain Why Peer Review in Science Often Fails
http://www.vox.com/science-and-health/2016/11/23/13713324/why-peer-review-in-science-often-fails
"Among researchers actually contributing to peer review, 70 percent dedicated 1 percent or less of their research work-time to peer review while 5 percent dedicated 13 percent or more of it," the researchers wrote. So again, most researchers didn't dedicate much time to peer review, while a prolific minority spent a relatively large chunk of time going over the research of their peers.
For example, according to data from Elsevier and Wiley, two of the world's largest scientific publishers, the job of peer review disproportionately fell on US-based scientists. By contrast, "Chinese researchers seem to publish twice as many articles as the number they are peer reviewing, despite their willingness to peer review," the researchers wrote.
Please also see:
The Global Burden of Journal Peer Review in the Biomedical Literature: Strong Imbalance in the Collective Enterprise
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0166387