Some people check whether zero falls inside the 95% credible interval of the posterior distribution and, when it doesn't, take that as evidence of statistical significance. At that point one can report the mode of the posterior as the maximum a posteriori (MAP) point estimate and call it a day, so yes. That's the easy answer, but Andrew Gelman and Richard McElreath seem to be emphasizing a few caveats.
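For concreteness, here is a minimal sketch of that recipe in Python. It assumes you already have posterior draws for the parameter; the normal draws below are just a stand-in for output from a real sampler.

import numpy as np
from scipy import stats

# Stand-in for draws from a real posterior (e.g., MCMC output); illustrative only.
rng = np.random.default_rng(0)
posterior = rng.normal(loc=0.8, scale=0.3, size=10_000)

# 95% credible interval from the 2.5% and 97.5% quantiles of the draws.
lo, hi = np.percentile(posterior, [2.5, 97.5])
excludes_zero = not (lo <= 0.0 <= hi)

# Approximate the posterior mode (the MAP estimate) with a kernel density estimate.
kde = stats.gaussian_kde(posterior)
grid = np.linspace(posterior.min(), posterior.max(), 1_000)
map_estimate = grid[np.argmax(kde(grid))]

print(f"95% credible interval: ({lo:.2f}, {hi:.2f}); excludes zero: {excludes_zero}")
print(f"MAP estimate (KDE mode): {map_estimate:.2f}")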
I'm still catching up on everything they (and others) are saying, but here are a few points.
McElreath's book does a great job of reminding the reader that model choice matters, and that statistically significant positive results from a model don't necessarily reflect truths about the real world. He has a series of lectures on YouTube where he talks to students about these issues, encouraging them to grapple with the hard problems of model selection and evaluation.
https://www.youtube.com/watch?v=WFv2vS8ESkk&list=PLDcUM9US4XdMdZOhJWJJD4mDBMnbTWw_z

Gelman points out that statistical significance can be detected even when the sign of the estimated effect is wrong. He often blogs about sensational findings that are based on noisy data; the simulation sketch after the references below makes the point concrete.
Gelman & Carlin (2014), "Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors"
http://www.stat.columbia.edu/~gelman/research/published/PPS551642_REV2.pdf
http://andrewgelman.com/2016/10/25/how-not-to-analyze-noisy-data-a-case-study/
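In the spirit of the retrodesign calculations in that paper, here is a rough simulation sketch of how a small true effect measured noisily can still yield "significant" estimates with the wrong sign or a wildly exaggerated magnitude (the effect size and standard error are made up for illustration):

import numpy as np

# Assumed (made-up) scenario: small true effect, noisy measurement.
rng = np.random.default_rng(1)
true_effect = 0.1
se = 1.0
n_sims = 100_000

# Simulate unbiased estimates and keep those clearing the two-sided p < 0.05 bar.
estimates = rng.normal(true_effect, se, n_sims)
significant = np.abs(estimates) > 1.96 * se

power = significant.mean()
# Type S: among significant results, how often is the sign wrong?
type_s = (np.sign(estimates[significant]) != np.sign(true_effect)).mean()
# Type M: among significant results, how exaggerated is the magnitude on average?
type_m = (np.abs(estimates[significant]) / abs(true_effect)).mean()

print(f"power ~ {power:.3f}, Type S ~ {type_s:.3f}, exaggeration ~ {type_m:.1f}x")

With these numbers the study is badly underpowered, a nontrivial share of the significant estimates have the wrong sign, and the significant estimates overstate the true effect many times over.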
Smaldino and McElreath argue that when statistical significance is the main support for positive results, and novel results are rewarded with prestigious publications, academics will tend to produce more false positives.

Smaldino & McElreath (2016), "The natural selection of bad science"
http://rsos.royalsocietypublishing.org/content/royopensci/3/9/160384.full.pdf

Gelman tries to dissuade readers from simply swapping p-values for superficially similar Bayesian alternatives. He doesn't seem to want another on/off switch like p < 0.05; I think he wants a deeper understanding of the subtle issues involved in using data to support science.
http://andrewgelman.com/2015/09/04/p-values-and-statistical-practice-2/

I'm excited to see the rest of McElreath's lectures (URL above), because he's aware of these issues and yet appears to be doing well in educating students.
------------------------------
Edward Cashin
Research Scientist II
------------------------------
Original Message:
Sent: 03-01-2017 08:57
From: Eugene Komaroff
Subject: Post Hoc Parameter Estimation
Thank you for the references to books. However, Bayesians do not compute p-values, so I doubt those books offer any help to someone asking how to estimate the alternative parameter when the null parameter has been significantly rejected by a p-value.

I am familiar with empirical Bayes from running mixed models, but now I suspect that the CIs for the "fixed effects", as estimators of an alternative parameter, also lack credible coverage. Again, I used to believe (and teach) that the CI was a useful estimator for the alternative parameter(s) after a significant p-value. Is there any book or article out there that supports this point of view - in other words, that provides a counterpoint to the Meeks & D'Agostino argument?

At this point, all I can tell students when they reject the null parameter with a significant p-value is "don't worry, be happy" - I don't know what you found, but you can still publish an article. Incidentally, an "effect size" is needed for a priori sample size calculations with regard to an appropriate non-central probability distribution, but not for post hoc estimation/explanation.
Eugene Komaroff
Professor of Education
Keiser University Graduate School