ASA Connect


significant

  • 1.  significant

    Posted 11-12-2014 14:03

    I'm tired of saying or writing the adverb-adjective pair "statistically significant". I would propose the word "staticant" to replace those clumsy nine syllables.

    stat-i-cant (stăt-ĭ-kănt) adj. 1. statistically significant



  • 2.  RE: significant

    Posted 11-13-2014 07:32
    That would help, Robert, in the sense that it would reduce our use of the word "significant," which is so often taken, wrongly, to correlate closely with "practically significant."

    -------------------------------------------
    Andrew Hartley
    Associate Statistical Science Director
    PPD, Inc.
    -------------------------------------------




  • 3.  RE: significant

    Posted 11-14-2014 04:16
    I very much like the new word, but I think both it and the phrase "statistically significant" are redundant once the p-value itself is indicated. One need only write "Statisticians consumed more doughnuts (M = 2.75) than biologists (M = 1.06), p = .012."

    -------------------------------------------
    Francis Dane
    Chair
    Jefferson College of Health Sciences
    -------------------------------------------




  • 4.  RE: significant

    Posted 11-16-2014 17:44
    How about "Statsig"? "Staticant," to me, does imply "significant."

    -------------------------------------------
    Michael Mout
    MIKS
    -------------------------------------------




  • 5.  RE: significant

    Posted 11-13-2014 07:41
    I am tired of seeing students write the words "statistically significant" too. However, I would propose that they simply not be allowed to use those words, which in my opinion just encourage them to leave out the context. Instead, I encourage them to write something like this:

    Based on the p-value of 0.0027, we find evidence that the average number of donuts eaten on a weekly basis is larger for statisticians than it is for biologists.

    OR

    Since our sample resulted in a p-value of 0.38, we lack evidence that the average weekly rainfall in New York City is associated with the number of cars crossing into Manhattan.
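
    A minimal sketch of how such a write-up might be generated, assuming hypothetical doughnut data and using scipy.stats (the numbers and variable names here are illustrative, not from the post):

        # Sketch: a two-sample t-test reported with context rather than the
        # bare phrase "statistically significant". Data are hypothetical.
        from scipy import stats

        statisticians = [3, 2, 4, 3, 2, 3, 4, 2]   # weekly doughnuts (made up)
        biologists    = [1, 2, 1, 0, 1, 2, 1, 1]

        t, p = stats.ttest_ind(statisticians, biologists, equal_var=False)
        print(f"Based on the p-value of {p:.4f}, we find evidence that the "
              f"average number of doughnuts eaten weekly is larger for "
              f"statisticians than for biologists (Welch t = {t:.2f}).")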

    Cheers,
    Joe

    -------------------------------------------
    Joseph Nolan
    Associate Professor of Statistics
    Director, Burkardt Consulting Center
    NKU Department of Mathematics & Statistics
    -------------------------------------------




  • 6.  RE: significant

    Posted 11-14-2014 09:01
    Joseph, I like your first suggestion, as it points to the meaninglessness of statistical significance.

    However, I wonder if your second suggestion is misleading, since it seems to assume some definition of "evidence." "Evidence" is defined within the Bayesian world (see writings from Richard Royall, Steven Goodman, Veronica Vieland, ...), & the p-value does NOT measure that type of evidence. In the frequentist world, though, "evidence" is not a defined concept.
    -------------------------------------------
    Andrew Hartley
    Associate Statistical Science Director
    PPD, Inc.
    -------------------------------------------




  • 7.  RE: significant

    Posted 11-17-2014 10:50
    Andrew is right that the p-value is not, directly, a measure of evidence for the truth of H0; but based on simulations that can be conducted, I'd suggest that this is not the p-value's main problem, because under certain conditions one could estimate an evidence measure from the p-value.

    I'd suggest that this is the bigger problem: "...Why, when estimating a parameter, we provide a range (a confidence interval), but when testing a hypothesis about a parameter (e.g. μ = x) we proceed as if "=" entails exact equality of the parameter with x? That ... is not the standard expected for power calculations, where we are satisfied to reject H0 if the result is merely 'detectably' different from (exact) H0." In practical terms, H0's "thickness" matters a lot.

    In each simulated case (of thousands) in an experiment, (a) a sample was drawn from a population with a known "true" mean value (the simulated true mean changed for each case), and (b) based on the sample, the null hypothesis that the true mean = 100 was tested conventionally; then (c) data were recorded for the p-value for that test, the "actual" distance between the "true" mean and that sample's mean, and (in experiments where these were varied) the simulated value for sigma, etc.
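
    For anyone who wants to try replicating this, here is a minimal sketch of the simulation as described in (a)-(c); the sample size, sigma, and range of true means are my own illustrative choices, not necessarily the settings Bill used:

        # Sketch of steps (a)-(c): draw each sample from a population whose
        # true mean varies by case, test H0: true mean = 100, and record the
        # p-value alongside the actual distance from 100. Settings are made up.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        records = []
        for _ in range(10_000):
            true_mean = rng.uniform(95, 105)       # (a) true mean changes per case
            sample = rng.normal(true_mean, 10, size=30)
            t, p = stats.ttest_1samp(sample, 100)  # (b) conventional test of mu = 100
            records.append((p, abs(true_mean - 100),
                            abs(sample.mean() - 100)))  # (c) record the results

    One could then tabulate how often small p-values coincide with effect sizes smaller than a chosen H0 "thickness," along the lines of the rule of thumb below.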

    Based on the data, here's a rule of thumb I've found to be quite robust to changes in the details of the experiment. (I'd love to see someone replicate or improve it.):
    "If the effect size is not at least as large as the specified H0 thickness (e.g. the detectable distance you'd use when calculating sample size), or, preferably, a bit larger, then the best guess is to stick with H0 as likely true (in the 'thick' sense of 'true') -- regardless of what p-value you obtain. If, on the other hand, the effect size is quite a bit larger than the H0 thickness, then rejecting H0 is safer -- even if the p-value on that occasion is not that persuasive."

    Here's where I'm quoting from:  http://www.statlit.org/pdf/2010GoodmanASA.pdf

    Best regards,
    Bill

    -------------------------------------------
    William Goodman
    University of Ontario Institute of Technology
    -------------------------------------------




  • 8.  RE: significant

    Posted 11-18-2014 10:42
    William,
    I like your "rule of thumb" -- but I think that many users of statistics unfortunately do not think in terms of a "detectable difference". Indeed, my experience is that most users in the social sciences, in particular, calculate power and sample size in terms of "standardized effect sizes" (Cohen's d, etc.) -- and thereby avoid thinking about detectable differences, which need to be considered in terms of the raw effect size. So to me, just getting people to look at detectable differences in raw effects would be a huge improvement. For example (a situation that often occurs in the social sciences), if the outcome variable is measured on a scale of 1 through N, with only integer values, then any difference less than one would not be detectable, and therefore not practically significant.
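
    As a concrete illustration of this point, here is a minimal sketch contrasting a sample-size calculation driven by a raw detectable difference with one driven only by a standardized effect size; the numbers (sigma, the target difference) are hypothetical:

        # Sketch: per-group sample size from a RAW detectable difference,
        # via the usual two-sample normal approximation. Numbers are made up.
        from scipy.stats import norm

        alpha, power = 0.05, 0.80
        sigma = 2.0       # assumed SD of the 1..N integer-valued outcome
        raw_delta = 1.0   # smallest raw difference worth detecting
        d = raw_delta / sigma                          # implied Cohen's d
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_per_group = 2 * (z / d) ** 2
        print(f"Implied d = {d:.2f}; about {n_per_group:.0f} per group.")
        # Starting from d alone (e.g. a "medium" d = 0.5) hides whether the
        # raw difference it implies is even detectable on an integer scale.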

    -------------------------------------------
    Martha Smith
    University of Texas
    -------------------------------------------




  • 9.  RE: significant

    Posted 11-19-2014 12:30

    William, I meant to state that the p-value p does not measure evidence ONLY IF one adopts the Bayesian / evidentialist interpretation of evidence. It's meaningless to say that p does or does not measure evidence, because frequentism (in which p is generated) has no concept of evidence. Thus, p might measure evidence as I conceive of evidence, but not as you conceive of evidence. Or vice versa.

    If we're identifying problems with p, maybe a good one to point out is that everybody argues about whether p measures evidence, but most of the time we don't say what we mean, so our talk is meaningless. I don't see your JSM paper addressing this question, either.

    I agree that the discrepancy between estimation (using CIs) & testing is a problem. Indeed, it's strange that, in calculating p for one-sided testing (say, Ho: mu <= 0), we don't assume Ho but rather the boundary between Ho & the alternative hypothesis Ha. This practice arises because, if we were to assume Ho itself, we'd need to calculate p by integrating over the parameter space. Since that type of integration is not allowed in frequentism, we are left with calculating p assuming a single point in Ho, & that point is the boundary. We choose the boundary because that's the point that maximizes p. In other words, if p (calculated at the boundary) < a for some a, then p (assuming any other point in Ho) is also < a.
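
    A quick numerical check of that last claim, as a sketch with hypothetical numbers: for Ho: mu <= 0, the p-value computed at any point mu0 < 0 is smaller than the one computed at the boundary mu0 = 0:

        # Sketch: for Ho: mu <= 0 vs Ha: mu > 0, the one-sided p-value at a
        # point mu0 in Ho is P(Xbar >= xbar_obs | mu = mu0), largest at the
        # boundary mu0 = 0. The observed mean and SE below are made up.
        from scipy.stats import norm

        xbar, se = 1.2, 1.0
        for mu0 in [0.0, -0.5, -1.0]:
            p = 1 - norm.cdf((xbar - mu0) / se)
            print(f"mu0 = {mu0:+.1f}: p = {p:.4f}")
        # The output decreases as mu0 moves into Ho, so "p < a" at the
        # boundary implies "p < a" at every other point in Ho.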

    I hope to have a closer look at your JSM paper. Do you find it interesting that, after 100+ years, people still can't agree on the meaning of p, & yet most statisticians still keep using it? My opinions about the reasons for that appear in a book "Christian and Humanist Foundations for Statistical Inference." 

    In any case, thanks for reminding everyone (as does SN Goodman!) of the difference between hypothesis testing & significance testing; the common conflation of the two also generates much confusion.
    -------------------------------------------
    Andrew Hartley
    Associate Statistical Science Director
    PPD, Inc.
    -------------------------------------------




  • 10.  RE: significant

    Posted 11-15-2014 19:27

    I make mine do both: "statistically significant," and "the p-value is less than the alpha level, so a Type I error may have been made." I also make them interpret the confidence interval and use the critical ratio and critical values, to build familiarity with what they are doing.
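
    A minimal sketch of that full routine on hypothetical data -- the p-value against alpha, the critical ratio against the critical value, and the confidence interval:

        # Sketch: one-sample t procedure reported several ways. Data are made up.
        import numpy as np
        from scipy import stats

        x = np.array([101.2, 99.5, 103.1, 98.7, 102.4, 100.9, 104.0, 97.8])
        mu0, alpha = 100, 0.05
        t_stat, p = stats.ttest_1samp(x, mu0)
        t_crit = stats.t.ppf(1 - alpha / 2, df=len(x) - 1)   # critical value
        ci = stats.t.interval(1 - alpha, df=len(x) - 1,
                              loc=x.mean(), scale=stats.sem(x))
        print(f"p = {p:.3f} vs alpha = {alpha}")
        print(f"critical ratio t = {t_stat:.2f} vs critical value ±{t_crit:.2f}")
        print(f"{100 * (1 - alpha):.0f}% CI: ({ci[0]:.2f}, {ci[1]:.2f})")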
    -------------------------------------------
    Jeffrey Culver
    Lecturer
    Eastern Washington University
    -------------------------------------------




  • 11.  RE: significant

    Posted 11-13-2014 09:19
    Lol. That's funny!
    Significant statistically vs. statistically significant? Adverbs always come after verbs, but either way in the case of adjectives?
    The p-value is less than 0.05, and therefore the difference is staticant! But you are making the pronunciation difficult (three third tones in Pinyin); it will take time to sink in. I'm all for a rich lexicon that expresses concepts accurately with fewer words. You might say that I am parsimonious.

    Gracias,

    -------------------------------------------
    Beimar Iriarte
    Sr Statistician
    Abbott Laboratories
    -------------------------------------------




  • 12.  RE: significant

    Posted 11-13-2014 09:43
    There's already a surfeit of jargon.
    -------------------------------------------
    Emil M Friedman, PhD
    emil.friedman@alum.mit.edu (forwards to day job)
    emilfrie@alumni.princeton.edu (home)
    http://www.statisticalconsulting.org
    -------------------------------------------




  • 13.  RE: significant

    Posted 11-14-2014 16:00
    Saying "statistically significant" can indeed seem like a pain in the neck -- but using it is important to emphasize that "statistically significant" is different from "practically significant." Putting up with the pain in the neck is better than just slipping into the ambiguous, unmodified "significant," which is what many people do. And I think using your suggestion of "staticant" misses the (often missed) opportunity to be upfront about the different between statistically and practically significant.

    -------------------------------------------
    Martha Smith
    University of Texas
    -------------------------------------------




  • 14.  RE: significant

    Posted 11-17-2014 10:59

    Robert,

    I posted this on LinkedIn (About Data Analysis) to get more feedback.

    Randy
    -------------------------------------------
    Randy Bartlett
    -------------------------------------------