ASA Connect

  • 1.  Reminder to complete Peer review survey by 4/17

    Posted 04-12-2020 18:31
    For those who have already completed it, thank you. The deadline for completion is Friday 4/17 at 5 PM Eastern.

    Dear Members of the Consultant Forum (new information is pasted in below)

    This survey was approved by my current editor after my article was screened out by the Associate Editor, who asserted that the content was already well known.

    Please reply directly to my e-mail address, or just use "reply to sender," as I want independent assessments. My e-mail is shusterj@ufl.edu. Note that the enclosure is self-contained: no prior knowledge of meta-analysis is needed.

    I will not disclose your identity.  

    I have written a very important paper that (1) shows that the current, almost standard practices of meta-analysis are invalid, and (2) proposes a rigorous fix. Whenever it has been submitted, it has gone to meta-analysis experts who are developers of these methods, major users of them, teachers, or software developers. It is understandable that they do not want to be told they are wrong. The paper has been screened out without justification, and one past submission, after two further resubmissions, finally went to a single reviewer who agreed with the content but rejected it anyway. The Consultant Forum of the ASA is an ideal place to get a critical mass of unconflicted peer reviews. Since we are all home-bound during the COVID-19 crisis, I hope a large number of you will be willing to assume this peer-review role for me. I will be eternally grateful to any of you who weigh in. The crucial issue is (1) above.

    I will report the results back to you. My target date for hearing from you is Friday, April 17.

    Best wishes and Take Care,

     

    Jonathan Shuster shusterj@ufl.edu

     

    New Information.

    Michael Borenstein is the lead author of the classic text "Introduction to Meta-Analysis" and of the new book referenced below, and he is the lead developer of the program Comprehensive Meta-Analysis. His new book seems to come down on the side of the "seriously random" option over the "near constant" option. If the study-specific point estimates are random, how can the study-level ingredients that determine the weights, which are clearly random under Assumption B below, not lead to the conclusion that the weights are seriously random? And if the weights are fixed, why is it not equally valid to say the point estimates are constants?
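    To make the question concrete, here is the standard inverse-variance random-effects estimator written out (the notation here is mine, not Borenstein's):

        \hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad w_i = \frac{1}{\hat{v}_i + \hat{\tau}^2}

    where \hat{\theta}_i is the point estimate from study i, \hat{v}_i its estimated within-study variance, and \hat{\tau}^2 the estimated between-study variance. Under Assumption B below, the \hat{\theta}_i, the \hat{v}_i, and hence \hat{\tau}^2 are all computed from a random sample of studies, so the weights w_i are themselves random quantities rather than fixed constants.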

     

    Section 7.4.3 of Borenstein (2019, page 26; dictated and proofed)

     

    Assumptions of the random-effects model

     A. The universe to which we will be making an inference is defined clearly and is the correct universe in the sense that it is relevant to policy.

     B. The studies that were performed are a random sample from that universe.

     C. The studies that we include in our analysis are an unbiased sample of the studies that were performed.

     D. The analysis includes enough studies to yield a reliable estimate of the between-study variance, τ².
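
    As a concrete illustration of Assumption D (this sketch is mine, not part of Borenstein's text), here is a minimal Python example of the standard DerSimonian-Laird moment estimator of τ² and the resulting random-effects weights, using made-up numbers:

        import numpy as np

        # Made-up study-level point estimates and within-study variances.
        theta = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study estimates
        v = np.array([0.04, 0.09, 0.06, 0.05, 0.08])       # within-study variances

        # Fixed-effect (inverse-variance) weights and pooled estimate.
        w_fe = 1.0 / v
        theta_fe = np.sum(w_fe * theta) / np.sum(w_fe)

        # DerSimonian-Laird moment estimator of the between-study variance tau^2.
        Q = np.sum(w_fe * (theta - theta_fe) ** 2)          # Cochran's Q
        df = len(theta) - 1
        c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
        tau2 = max(0.0, (Q - df) / c)                       # truncated at zero

        # Random-effects weights: every input here is estimated from the
        # sampled studies, so the weights are themselves random quantities.
        w_re = 1.0 / (v + tau2)
        theta_re = np.sum(w_re * theta) / np.sum(w_re)

        print(f"tau^2 = {tau2:.4f}, pooled estimate = {theta_re:.4f}")

    Note that with only a handful of studies, as Assumption D warns, the estimate of τ² (and hence the weights) can be quite unreliable.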

     

    Reference:

     

    Borenstein M. (2019). Common Mistakes in Meta-Analysis and How to Avoid Them. Biostat Inc.: New Jersey. (Michael Borenstein is the president of Biostat Inc.)

     



    ------------------------------
    Jon Shuster
    ------------------------------




  • 2.  RE: Reminder to complete Peer review survey by 4/17

    Posted 04-13-2020 18:11
    Hello ... I've NOT examined your paper. I'm generally suspicious of meta-analyses as "trying to mine truth from studies of mediocre design with non-comparable outcome assessments." The evidence in support of my personal view is that so many meta-analyses report correlations of r = 0.2 or r = 0.3; since r = 0.3 corresponds to r² = 0.09, such results explain less than ten percent of the variance in an outcome, which is not of much scientific or practical value.

    Peer review is poorly done (as practiced) and nearly useless.  See ...
     

    Ross, P. F. (2019). "The status of peer review in the sciences and the implications." Cited in Significance, June issue, p. 116. Available at https://www.significancemagazine.com/files/peer-review-status-20181001.pdf

    ... a paper that reports unpopular findings and has been rejected, in this and its earlier forms, by at least a dozen journals, at least twenty times in total, between 1980 and 2019. Even on the Significance website in its current form, I doubt that it is seen or read very often. I'd be surprised if it has a single citation.

    The "secrets" to valid peer review -- and valid peer review is possible -- are that (a) every reviewer uses a questionnaire that takes the reviewer through a standard list of considerations, (b) that every manuscript gets two or three or more reviewers, and (c) that the decision to publish is made by a regression equation built upon the responses of the two-three-four- reviewers.  Psychology (psychometrics) has known these fundamentals about job performance review for more than half a century ... but bosses don't like the findings.  "Letting others participate in the job performance review of my subordinate reduces my influence" is what the boss thinks, and s/he doesn't like that.  Further, nearly all scientists and organizational leaders (a) know next to nothing about the behavioral sciences and (b) regard the behavioral sciences as "not sciences" and worthy of no attention.

    The forty-five-item questionnaire on which the Ross (2019) paper is based took about ten minutes to read and mark ... less time than it takes a reviewer to write an essay justifying his or her recommendation to the editor.

    Every scientist is responsible for "peer review," particularly academicians since they take it upon themselves to do most of the journal editing and peer reviewing.  Peer review won't get better until, somehow, we collectively "straighten up and fly right," accepting and using the scientific work already done with respect to improving peer review.

    Paul F. Ross, Ph.D., ABPP
    Industrial and Organizational Psychologist (retired)
    Diplomate, American Board of Professional Psychology

    Worked in America's Fortune Fifty corporations in North America from 1955 to retirement in 1998 and continues to work in self-selected, self-sponsored research.

    Member
       American Psychological Association
       American Statistical Association
       Association for Computing Machinery
       Psychometric Society
       Society for Industrial and Organizational Psychology
       American Association for the Advancement of Science


     



    ------------------------------
    Paul Ross
    ------------------------------



  • 3.  RE: Reminder to complete Peer review survey by 4/17

    Posted 04-14-2020 12:30

    Dr. Ross,

    Is there any possibility of getting a copy of the questionnaire upon which your paper is based?  It would be interesting to see what questions were asked.



    ------------------------------
    Brian Cocolicchio
    Rochester Institute of Technology
    ------------------------------



  • 4.  RE: Reminder to complete Peer review survey by 4/17

    Posted 04-14-2020 10:04
    Here are several papers on the problems with meta-analysis (MA). Among other issues, the papers on which these analyses are based are themselves problematic: publication bias from both editors and authors; multiple unreported tests and covariates, so that the reported alpha is not close to the real alpha; and contrary evidence ignored by authors or editors.
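    To see why unreported multiple tests matter, a standard back-of-the-envelope calculation (the "ten tests" figure is illustrative, not taken from the papers below): if ten independent tests are each run at a nominal alpha of 0.05 and only the significant ones are reported, the chance of at least one false positive is

        1 - (1 - 0.05)^{10} ≈ 0.40,

    so the effective alpha behind a reported result is roughly 0.40, not 0.05.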

    Young SS, Karr A. (2011). Deming, data and observational studies: a process out of control and needing fixing. Significance, September, 122-126.
    Young SS, Acharjee MK, Das K. (2018). The reliability of an environmental epidemiology meta-analysis, a case study. Regulatory Toxicology and Pharmacology, 102:47-52.
    Young SS, Kindzierski WB. (2019). Combined background information for meta-analysis evaluation. https://arxiv.org/abs/1808.04408
    Young SS, Kindzierski WB. (2019). Evaluation of a meta-analysis of air quality and heart attacks, a case study. Critical Reviews in Toxicology. doi: 10.1080/10408444.2019.1576587



    ------------------------------
    Terry Meyer
    ------------------------------



  • 5.  RE: Reminder to complete Peer review survey by 4/17

    Posted 04-15-2020 09:29
    I respectfully disagree with both Dr. Terry Meyer and Dr. Paul Ross, for the reasons I give below. They both made blanket assertions about meta-analysis in general. This area will continue to be one of the most published areas of statistical application, with or without professional statistical support, so for those of us who are interested, our role should be to push toward more rigorous applications.

    The applications in my article are meta-analyses of randomized clinical trials, an area which today should have no publication bias. Since 2008, clinical trials in the US and Europe have been required to be registered and to report results when complete. If you start with the registries, you can easily identify all completed trials, whether published or not. In addition, the target population can include trials of diverse designs as long as the main outcome parameter is essentially the same (e.g., differences in systolic blood pressure); covariate and baseline adjustments are fine either way, since the random-effects analysis takes care of this via its definition of the target population of studies.

    Further, to say that a low correlation is not clinically relevant is not necessarily correct. The international ISIS studies of cardiovascular outcomes target tiny differences, because if you can lower the event rate by even a small amount (say 1%), you will have a huge impact. And today, per the recent special issue of The American Statistician, the emphasis is on interval estimates of effect size, not on P-values.
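    For reference, the interval estimate in question is the standard one: with the random-effects weights w_i = 1/(\hat{v}_i + \hat{\tau}^2) written out earlier in this thread, an approximate 95% confidence interval for the pooled effect is

        \hat{\theta} \pm 1.96 / \sqrt{\sum_i w_i},

    since the estimated variance of the pooled estimate is 1/\sum_i w_i.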

    Not voting on my question is absolutely your right, but I did ask recipients not to post responses to the general audience; it would have been fine to do so after the close of business on 4/17.


    ------------------------------
    Jon Shuster
    ------------------------------