Dr. Ross,
Is there any possibility of getting a copy of the questionnaire upon which your paper is based? It would be interesting to see what questions were asked.
Original Message:
Sent: 04-13-2020 18:11
From: Paul Ross
Subject: Reminder to complete Peer review survey by 4/17
Hello ..... I've NOT examined your paper. I'm generally suspicious of meta-analyses as "trying to mine truth from studies of mediocre design with non-comparable outcome assessments." The evidence supporting my view is that so many meta-analyses report correlations of r = 0.2 or r = 0.3; since the variance explained is r², such correlations account for less than ten percent of the variance in an outcome, which is of little scientific or practical value.
Peer review is poorly done (as practiced) and nearly useless. See ...
Ross, P. F. (2019). "The status of peer review in the sciences and the implications." Significance, June issue, p. 116. Available on the Significance website at https://www.significancemagazine.com/files/peer-review-status-20181001.pdf
... a paper that reports unpopular findings and has been rejected, in this and earlier forms, at least twenty times by at least a dozen journals between 1980 and 2019. Even on the Significance website in its current form, I doubt it is seen or read very often. I'd be surprised if it has ever been cited.
The "secrets" to valid peer review -- and valid peer review is possible -- are that (a) every reviewer uses a questionnaire that takes the reviewer through a standard list of considerations, (b) every manuscript gets two, three, or more reviewers, and (c) the decision to publish is made by a regression equation built upon the responses of those two, three, or four reviewers. Psychology (psychometrics) has known these fundamentals about job performance review for more than half a century ... but bosses don't like the findings. "Letting others participate in the job performance review of my subordinate reduces my influence" is what the boss thinks, and s/he doesn't like that. Further, nearly all scientists and organizational leaders (a) know next to nothing about the behavioral sciences and (b) regard the behavioral sciences as "not sciences" and worthy of no attention.
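The regression-based decision rule described above can be sketched numerically. Everything here is hypothetical -- the item scores, the coefficients, and the cutoff are invented for illustration; a real equation would be fitted on past editorial outcomes:

```python
# Hypothetical sketch of the decision rule: every reviewer answers the
# same standard questionnaire, and a regression equation fitted in
# advance combines the pooled item responses into a publish score.

# Item scores (1-5) from three reviewers on a five-item questionnaire
# (invented numbers).
reviews = [
    [4, 3, 5, 4, 4],  # reviewer 1
    [3, 3, 4, 4, 3],  # reviewer 2
    [5, 4, 4, 5, 4],  # reviewer 3
]

# Average each item across reviewers, pooling the responses of the
# two, three, or four reviewers.
k = len(reviews)
n_items = len(reviews[0])
item_means = [sum(r[j] for r in reviews) / k for j in range(n_items)]

# Illustrative regression coefficients (intercept first); these are
# made up, not estimated from any real data.
beta = [-2.0, 0.3, 0.2, 0.25, 0.15, 0.1]

score = beta[0] + sum(b * x for b, x in zip(beta[1:], item_means))
publish = score >= 1.0  # illustrative decision cutoff
```

The point of the sketch is only that the editor's decision becomes a reproducible function of the reviewers' structured responses rather than of free-form essays.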
The forty-five-item questionnaire on which the Ross (2019) paper is based took about ten minutes to read and mark ... less time than it takes the reviewer to write an essay justifying the reviewer's recommendation to the editor.
Every scientist is responsible for "peer review," particularly academicians since they take it upon themselves to do most of the journal editing and peer reviewing. Peer review won't get better until, somehow, we collectively "straighten up and fly right," accepting and using the scientific work already done with respect to improving peer review.
Paul F. Ross, Ph.D., ABPP
Industrial and Organizational Psychologist (retired)
Diplomate, American Board of Professional Psychology
Worked in America's Fortune Fifty corporations in North America from 1955 until retirement in 1998; continues self-selected, self-sponsored research.
Member
American Psychological Association
American Statistical Association
Association for Computing Machinery
Psychometric Society
Society for Industrial and Organizational Psychology
American Association for the Advancement of Science
------------------------------
Paul Ross
Original Message:
Sent: 04-12-2020 18:30
From: Jon Shuster
Subject: Reminder to complete Peer review survey by 4/17
For those who have so far completed it, thank you. The deadline for completion is Friday 4/17 at 5PM Eastern.
Dear Members of the Consultant Forum (new information is pasted in below)
This survey was approved by my current editor after my article was screened by the Associate Editor, who asserted that the content was well known.
Please reply directly to me (the sender), as I want independent assessments. My e-mail is shusterj@ufl.edu. Note that the enclosure is self-contained, meaning that no prior knowledge of meta-analysis is needed.
I will not disclose your identity.
I have written a very important paper that (1) shows that the current, almost standard practices of meta-analysis are invalid, and (2) proposes a rigorous fix. When submitted, it has gone to meta-analysis experts who are developers of these methods, major users of them, teachers, or software developers. It is understandable that they do not want to be told they are wrong. The paper has been screened out without justification, and one past submission, after two further resubmissions, finally went to a single reviewer who agreed with the content but rejected it anyway. The Consultant Forum of the ASA is an ideal place to get a critical mass of unconflicted peer reviews. Since we are all home-bound during the COVID-19 crisis, I hope a large number of you will be willing to assume this peer-review role for me. I will be eternally grateful to any of you who weigh in. The crucial issue is (1) above.
I will feed the results back to you. My target date for hearing from you is Friday, April 17.
Best wishes and Take Care,
Jonathan Shuster shusterj@ufl.edu
New Information.
Michael Borenstein is the lead author of the classic text Introduction to Meta-Analysis and of the new book referenced below. He is the lead developer of the program Comprehensive Meta-Analysis. His new book seems to vote for the "seriously random" option over the "near constant" option. If the study-specific point estimates are random, how can the ingredients from the studies that determine the weights -- clearly random under assumption B, below -- not yield the conclusion that the weights are seriously random? And if the weights are fixed, why is it not equally valid to say the point estimates are constants?
Section 7.4.3 of Borenstein (2019), page 26 (dictated and proofed):
Assumptions of the random-effects model
A. The universe to which we will be making an inference is defined clearly and is the correct universe in the sense that it is relevant to policy.
B. The studies that were performed are a random sample from that universe.
C. The studies that we include in our analysis are an unbiased sample of the studies that were performed.
D. The analysis includes enough studies to yield a reliable estimate of the between-study variance, τ².
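To make the role of the between-study variance concrete, here is a minimal numeric sketch of the widely used DerSimonian-Laird estimate of τ² and the random-effects weights 1/(vᵢ + τ²) built from it. The study estimates and variances below are invented for illustration. Note that τ² is itself computed from the (random) study results, which is the sense in which the random-effects weights are random quantities rather than fixed constants:

```python
# DerSimonian-Laird estimate of tau^2 and random-effects weights.
# Invented data: three study-specific point estimates and their
# within-study variances v_i.
ests = [0.1, 0.6, 0.3]
variances = [0.02, 0.03, 0.04]

# Fixed-effect weights 1/v_i and the fixed-effect pooled mean.
w = [1.0 / v for v in variances]
fe_mean = sum(wi * yi for wi, yi in zip(w, ests)) / sum(w)

# Cochran's Q statistic and the DL tau^2 estimate, truncated at zero.
q = sum(wi * (yi - fe_mean) ** 2 for wi, yi in zip(w, ests))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(ests) - 1)) / c)

# Random-effects weights 1/(v_i + tau^2) and the pooled estimate.
# Because tau2 depends on the observed study results, so do these weights.
w_re = [1.0 / (v + tau2) for v in variances]
re_mean = sum(wi * yi for wi, yi in zip(w_re, ests)) / sum(w_re)
```

With too few studies, the Q-based estimate of τ² is unreliable, which is exactly what assumption D guards against.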
Reference:
Borenstein, M. (2019). Common Mistakes in Meta-Analysis and How to Avoid Them. Biostat Inc.: New Jersey. (Michael Borenstein is the president of Biostat Inc.)
------------------------------
Jon Shuster
------------------------------