
Displaying Uncertainty in Rankings: We're #1 ... Maybe

Rankings are everywhere. Every week there is a ranking of the top 40 songs on the radio. Each year US News & World Report publishes rankings of the best colleges and the best hospitals. And it's big news when a company like SAS is ranked near or at the top of Fortune's Best Companies to Work For list.

These rankings are fun to read and discuss, but they suffer from a major shortcoming: they don't indicate the uncertainty in the underlying measurements. Statistically speaking, no measure of popularity or quality is without uncertainty. For example, colleges and universities are ranked by applying a formula that combines various measures of "quality" such as the average SAT scores of incoming freshmen. Because an average is computed from a sample, there is uncertainty in the estimate.
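To make that concrete, here is a minimal SAS sketch that computes a 95% confidence interval for a sample mean. The SAT scores are made up purely for illustration; the point is that PROC MEANS reports confidence limits alongside the estimate.

/* Hypothetical data: SAT scores for a small sample of incoming freshmen */
data sat_sample;
   input SAT @@;
   datalines;
1380 1420 1350 1490 1300 1440 1410 1370 1460 1330
;
run;

/* The CLM keyword requests 95% confidence limits for the mean,
   which quantify the uncertainty in the estimate */
proc means data=sat_sample n mean clm alpha=0.05;
   var SAT;
run;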

A ranking is based on numbers (also called scores), and two items that have different numbers—no matter how small the difference!—result in different ranks. But statistically oriented people need to ask the question, "Are the underlying numbers significantly different?" In 2010, the top-ranked universities were Harvard, Princeton, and Yale, but if you look at the raw data behind the rankings, these schools have almost identical scores in the measured variables.

Further down the list is U. California, Berkeley, at #22 and U. California, Los Angeles, at #25. Should the students at Berkeley chant "We're #22! We're better than UCLA!"?

Unlikely. The measured data for these schools are similar. In the face of uncertainty, a better chant would be "We're most likely #22, but we could be as high as #16 or as low as #37!"

Somehow, I don't think that chant will catch on with the students, but it should be chanted (LOUDLY) by statisticians and statistical programmers.

There have been several excellent articles about how to indicate uncertainty in rankings. The simplest approach is to include confidence intervals (also called "error bars" in some disciplines) on a plot of the ranked items.
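As a quick sketch of that approach in SAS (with hypothetical schools, scores, and confidence limits), the SCATTER statement in PROC SGPLOT supports the YERRORLOWER= and YERRORUPPER= options for drawing error bars:

/* Hypothetical scores and 95% confidence limits for ranked items */
data ranks;
   input School $ Score Lower Upper;
   datalines;
Alpha  91.2  89.5  92.9
Beta   90.8  88.9  92.7
Gamma  90.5  88.2  92.8
Delta  87.4  85.0  89.8
;
run;

/* Plot each item's score with error bars that display the uncertainty */
proc sgplot data=ranks;
   scatter x=School y=Score / yerrorlower=Lower yerrorupper=Upper
           markerattrs=(symbol=CircleFilled);
   yaxis label="Overall Score (with 95% confidence limits)";
run;

In this invented example the intervals for the top three schools overlap substantially, which is exactly the kind of information a bare ranking hides.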

Recently, D. Spiegelhalter has proposed using a "funnel plot" instead of a table of rankings. A funnel plot graphically displays the raw score for each song/college/company against a measure of the precision of that score. He argues that a funnel plot is "very attractive to consumers of data" and that it avoids "spurious ranking."
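Here is a rough SAS sketch of the idea. The event counts and sample sizes are made up, and the funnel limits use a simple normal approximation for a proportion around the overall rate; this is just one way to construct the limits, not necessarily Spiegelhalter's exact formulation.

/* Hypothetical data: event counts and sample sizes for several units */
data units;
   input Unit $ Events N;
   Prop = Events / N;
   datalines;
A 18 120
B 35 300
C 10  60
D 52 450
E 25 200
;
run;

/* The overall proportion serves as the target (center line) of the funnel */
proc sql noprint;
   select sum(Events)/sum(N) into :theta trimmed from units;
quit;

/* 95% limits narrow as the sample size (precision) grows */
data limits;
   do N = 50 to 500 by 10;
      Lower = &theta - 1.96*sqrt(&theta*(1-&theta)/N);
      Upper = &theta + 1.96*sqrt(&theta*(1-&theta)/N);
      output;
   end;
run;

data funnel;
   set units limits;
run;

/* Funnel plot: raw proportion vs. sample size, with funnel-shaped limits */
proc sgplot data=funnel noautolegend;
   band x=N lower=Lower upper=Upper / transparency=0.8;
   refline &theta / axis=y;
   scatter x=N y=Prop / datalabel=Unit;
   xaxis label="Sample Size (precision)";
   yaxis label="Proportion";
run;

Units that fall inside the funnel are consistent with the overall rate, so there is no statistical basis for ranking them against one another.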

As statistical programmers, we have an obligation to provide uncertainty estimates as part of an analysis. Confidence intervals and funnel plots are two ways to visually display uncertainty in rankings.

Comments

04-29-2011 09:17

Computing decision limits for ANOM charts that are adjusted for multiple comparisons: http://bit.ly/lxOHsG

04-25-2011 13:42

A comparison of funnel plots and analysis of means (ANOM) plots: http://bit.ly/eWU78c

04-15-2011 09:41

How to create funnel plots in SAS: http://bit.ly/eL6qGa

03-30-2011 14:27

How to compute rankings with confidence intervals in SAS: http://bit.ly/ihq41C