No. The joker that made the graph is (to my knowledge) completely unknown. We know which organization made the graph – an organization called "Americans United for Life". We have no way to know whether this organization has any statisticians in their employ. Congressman Chaffetz reused someone's bad work – and is still rightly called out for doing that – but he didn't make the graph.
Others in this thread have mentioned that professions like law and medicine can remove people from the discipline for malpractice. That point doesn't translate to statistics. The public doesn't care whether a graph carries a label like "created by a certified statistician, license number XXXX" that someone could verify. They'll consume the bad product whether it's endorsed or openly decried by the ASA or other statisticians. This is the culture we need to find a way to change, but how do we make others care?
Original Message:
The joker that made the graph has been called out in public forums and on TV.
I can think of dozens of larger issues dealing with the use and abuse of statistics in the sciences.
On the other hand, most (90%+) scientists still believe you can only change one thing at a time during an experiment. For anyone on this forum, try reading through some of the "latest and greatest" articles in "top notch" journals and ask how often the authors made errors in interpreting their data. As a former chemistry student, I had to read dozens of articles, and I still haven't found a journal article from the American Chemical Society where I thought the authors used the correct method of analysis. I've seen plenty of papers where the authors ran multiple t-tests or fit several simple linear regressions to one data set. These issues go unnoticed by the scientists because they don't know any better. (If you ask them, they know what they are doing and don't need your help.)
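The multiple-t-test problem is easy to make concrete. Under the simplifying assumption that the tests are independent, the chance of at least one false positive grows quickly with the number of tests; a minimal Python sketch:

```python
# Familywise error rate (FWER) for k independent tests, each at level alpha.
# Independence is a simplifying assumption; correlated tests inflate error
# somewhat less dramatically, but the problem does not go away.
alpha = 0.05

def familywise_error(k, alpha=alpha):
    """P(at least one false positive) among k independent level-alpha tests."""
    return 1 - (1 - alpha) ** k

for k in (1, 3, 10, 20):
    print(f"{k:2d} tests -> FWER ~ {familywise_error(k):.3f}")

# A Bonferroni correction (run each test at alpha/k) caps the FWER near alpha:
print(f"Bonferroni, 20 tests: {familywise_error(20, alpha / 20):.3f}")
```

With 10 uncorrected tests the chance of at least one spurious "significant" result is already about 40%, which is exactly why a stack of pairwise t-tests on one data set is so misleading.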
Reproducibility and reliability have been issues in academic science for a long time. Yet I don't hear ANYTHING coming from statisticians and data scientists about how to fix them. Publishing an op-ed piece in a stats journal is about as useful as writing it on a bathroom stall wall: the people who need to hear the message never see it, and those who do read it have heard it all before. Perhaps statisticians and leaders in the ASA should stand up and write op-eds for the big journal factories: ACS, Nature, Springer, etc. We know how to fix this mess. We need to let others know!
To us, Design of Experiments is a branch of statistics that goes back at least 150 years; we used split plots and factorial designs before the invention of the t-test. Yet, talking with most scientists, you'd never know they exist. For those of you in academic positions, take a walk through the hallways of the science departments. Look at the posters proudly displayed on the walls and ask yourself, "If my student submitted that analysis for that data, what grade would I give them?" I wouldn't go much above a D+ most of the time. But I, as a student, cannot tell Dr. Dimwit the analysis is wrong. Cuz I'm just a student, and as such, it's my job to sit down and get to work. Not to mention, Dr. Dimwit and colleagues have been "doing it that way" for decades.
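The factorial-versus-OFAT point can be shown in a few lines. A hedged sketch (the response model and its coefficients are purely illustrative): a 2^2 full factorial in four runs recovers both main effects and the interaction, while an OFAT plan with the same four runs never visits the (+1, +1) corner and so cannot see the interaction at all.

```python
from itertools import product

# 2^2 full factorial in coded units: four runs cover every corner.
runs = list(product([-1, 1], repeat=2))

def true_response(a, b):
    # Hypothetical true model with an interaction (coefficients are made up)
    return 10 + 3 * a + 2 * b + 4 * a * b

ys = [true_response(a, b) for a, b in runs]

def effect(signs):
    # Standard contrast estimate: (mean response at +1) - (mean at -1)
    return sum(s * y for s, y in zip(signs, ys)) / (len(ys) / 2)

main_a = effect([a for a, _ in runs])           # 2 * 3 = 6
main_b = effect([b for _, b in runs])           # 2 * 2 = 4
interaction = effect([a * b for a, b in runs])  # 2 * 4 = 8
print(main_a, main_b, interaction)
```

An OFAT experimenter who varies A with B held at -1, then B with A held at -1, would report main effects biased by the unseen interaction and would have no run at (+1, +1) to detect it.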
If the ASA is going to take a stand against bad stats, ANSWER THESE OP-EDS! Go on Science Friday and discuss how designed experiments beat one-factor-at-a-time (OFAT) methods every time. Write about how Plackett-Burman designs and definitive screening designs increase productivity and decrease the chances of these types of errors, and put those articles in non-stats journals. Explain how a Box-Behnken design or a mixture design can be used in science labs! Do something! Preaching to the choir won't fill your seats.
Weak statistical standards implicated in scientific irreproducibility
http://www.nature.com/news/weak-statistical-standards-implicated-in-scientific-irreproducibility-1.14131
Animal studies produce many false positives
http://www.nature.com/news/animal-studies-produce-many-false-positives-1.13385
Policy: NIH plans to enhance reproducibility
http://www.nature.com/news/policy-nih-plans-to-enhance-reproducibility-1.14586
Number crunch
http://www.nature.com/news/number-crunch-1.14692
Scientific method: Statistical errors
http://www.nature.com/news/scientific-method-statistical-errors-1.14700
Uncertainty on trial
http://www.nature.com/news/uncertainty-on-trial-1.13868
------------------------------
Andrew Ekstrom
------------------------------
Original Message:
Sent: 09-30-2015 17:43
From: Morris Olitsky
Subject: Crooked graph
I agree with Dr. Cobb that statisticians should not shy away from challenging misleading uses of statistics, and many members have mentioned compilations of such. We should recognize, however, that these misuses occur on both sides of the political spectrum, and that calling out only those that occur on one side can be misleading in and of itself.
------------------------------
Morris Olitsky
Statistician
USDA
------------------------------
Original Message:
Sent: 09-29-2015 18:59
From: George Cobb
Subject: Crooked graph
Many of my/our fellow statisticians may have seen the deliberately distorting graph presented at the congressional hearing about Planned Parenthood. If you haven't seen it yet you can find it at
http://www.msnbc.com/msnbc/congressman-chaffetz-misleading-graph-smear-planned-parenthood
The abuses of axes suggest, among many other egregious falsehoods, that 935,573 < 289,750. (For those who teach statistics, show the graph to your class and ask how many distortions they can identify.)
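The trick behind that distortion is worth spelling out: each series is drawn against its own invisible axis, so vertical position carries no shared meaning. A small sketch of the arithmetic (the axis ranges and the 400-pixel height are illustrative assumptions; only the two counts come from the discussion above):

```python
def to_pixels(value, lo, hi, height=400):
    # Linear map from the data range [lo, hi] onto [0, height] vertical pixels
    return (value - lo) / (hi - lo) * height

# Honest rendering: both series share one axis, so the bigger count sits higher.
honest_screenings = to_pixels(935_573, 0, 2_500_000)
honest_abortions = to_pixels(289_750, 0, 2_500_000)

# Crooked rendering: each series is scaled to its own hidden range
# (these ranges are illustrative assumptions, not taken from the chart).
crooked_screenings = to_pixels(935_573, 900_000, 2_100_000)
crooked_abortions = to_pixels(289_750, 280_000, 330_000)

print(honest_screenings, honest_abortions)    # larger count plots higher
print(crooked_screenings, crooked_abortions)  # smaller count now plots higher
```

Once each line gets its own scale, any visual comparison between them is meaningless — which is precisely what makes 289,750 appear to tower over 935,573.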
This is a challenge to our Association: Do we take a public stand (e.g., letters from our ED, our President, and our Board, to Congress and to major newspapers) or do we sit complacently on our tight little alphas, afraid to commit a Type I error? Theory tells us: the more we shrink our alpha, the weaker our power.
George Cobb
Former ASA VP