ASA Connect


Looking for experts on a story about statistics and the law

  • 1.  Looking for experts on a story about statistics and the law

    Posted 02-13-2018 11:27
    Hi all!

    I'm a science/stats journalist currently working on a freelance project with Significance Magazine at the intersection of statistics and the law.
    Without going into too much detail, an engineer collected data on yellow light timings in his city of residence and, after comparing them against the city's required yellow light duration, he sued the city for violating their own regulations. He ended up losing that case.
    I'm writing a story about him, about his use of statistics (which was limited), and about how statistics can be used to persuade in the courtroom. I was hoping to talk to some experts in the use of statistics as a tool for legal persuasion. This seemed like a good place to turn. By any chance, would anyone be interested in a brief discussion on the topic?

    I'd be happy to send you more information about the story or about my previous writing. 

    Thanks!

    Nick


    ------------------------------
    Nick Thieme

    ------------------------------


  • 2.  RE: Looking for experts on a story about statistics and the law

    Posted 02-14-2018 16:18
    I do not have experience with presenting statistical results in court, but this document might be of interest to you:

    Reference Manual on Scientific Evidence. It was published by the Federal Judicial Center to help judges understand how to handle scientific evidence and expert testimony. There are several sections that might interest you: How Science Works, the Guide on Statistics, the Guide on Multiple Regression, and the Guide on Survey Research. Hope this helps.



    ------------------------------
    Sarah Kalicin
    Intel Corporation
    ------------------------------



  • 3.  RE: Looking for experts on a story about statistics and the law

    Posted 02-15-2018 08:24
    Hi Nick, 

    In case you haven't already found him, you should contact Jay Kadane at CMU.  He's done a lot of work at the intersection of statistics and law.  He has written a book and taught classes on it.

    Michelle

    ------------------------------
    Michelle Dunn
    founder, Data Collaboratory
    granted.datacollaboratory.com
    ------------------------------



  • 4.  RE: Looking for experts on a story about statistics and the law

    Posted 02-15-2018 13:24
    If you remember the 2000 Florida presidential contest between Al Gore and George W. Bush, the Florida high court heard statistical-sampling testimony from two statisticians, one of them an English gentleman (with, I believe, an M.Sc. in statistics) who was extremely vocal and persuasive and had been hired by the Republican Party. I do not remember the names or any other details, but one should be able to find them on the Internet. Nathan Mantel was a well-recognized statistician from D.C. who used to testify as a statistician expert witness. I do not know whether this would help. Good luck.

    Ajit K. Thakur, Ph.D.
    Retired Statistician





  • 5.  RE: Looking for experts on a story about statistics and the law

    Posted 02-16-2018 07:41

    Sarah,

     

    Check out this website.  It has lots of resources. https://forensicstats.org/resources/   Good luck.

     

    Rahul Parsa

    Iowa State University






  • 6.  RE: Looking for experts on a story about statistics and the law

    Posted 02-16-2018 08:07


    I'm very interested in communicating statistical results to courts, and communicating statistical ideas and conclusions to people more generally. I've testified in many cases in a variety of areas. I'll give two examples, one that I wrote about and the other in which I testified, although in an unusual way.

    The first is O.J. Simpson's murder trial. It's the most important courtroom example in the U.S. where statistics and communicating statistics played critical roles, although in a less than positive way. Internationally, in my view, the most important cases regarding the importance of communicating statistics are the Sally Clark case in the U.K. and Lucia de Berk in the Netherlands. (Google them.) As regards the former, Peter Donnelly's rendition is arguably the most elegant TED talk that exists: <https://www.ted.com/talks/peter_donnelly_shows_how_stats_fool_juries/transcript>

    I wrote an article (Berry DA. DNA, Statistics and the Simpson Case. Chance 7(1994)(4):9-12) in advance of the Simpson trial. The article did not predict the result, of course, but it did try to illuminate courts' ability to understand numbers and what they mean. And that turned out to be a major, possibly defining, issue in the actual trial. Here's one of the paragraphs from my article:

    "A common defense tactic is to muddy the waters. A way to do this is to find experts who will testify to something different from the testimony of prosecution experts--it doesn't really matter what it is as long as it's different. Testimony involving numbers is especially vulnerable to smoke screens. Match proportions depend on assumed bin size, measurement standard deviation, database, etc. In one California case, match proportions presented to the court varied from 1 in 70,000 to 1 in 700 million, depending on assumptions made. Unsophisticated jurors can become confused--numerical calculations designed to make evidence more informative can have the opposite effect. In this particular case, the court was confused as well--it ruled that the discrepancies warranted excluding all match proportion estimates!"

    I'll say here a bit more about these two very different match proportions. At issue was the molecular weights of DNA fragments. A particular sample had shown a single fragment. But there had to be two fragments, one maternal and the other paternal. The defense assumed that the other fragment had washed off the end of the gel and therefore its weight was censored at the size that could be weighed by the process. Result: 1/70,000. The prosecution expert assumed homozygosity, that both fragments were measured and were on top of each other. Result: 10,000 times smaller. Disallowing the evidence to be presented in the case was amazingly uninformed because the evidence was strongly incriminating regardless of which expert was right.

    Bruce Weir was the statistician for the prosecution in the actual Simpson trial. In his testimony on a Friday he presented some calculations of prevalence of DNA bands in the general population. Pushed by the defense, over the weekend he realized he had made a mistake. He had forgotten a factor of 2. It's the Hardy-Weinberg 2. For example, the probability that a couple's two children are a boy and a girl is approximately ½ x ½ = ¼, right? Well, only if the boy-girl question is something like, the older is a boy and the younger is a girl. If it's just boy-girl in any order then the right answer is twice ¼ or ½. Nothing in statistics is simpler than this. We all make mistakes similar to Weir's, but we usually catch them before we announce them. On Monday Weir testified to his mistake. For example, when he had calculated 1/1600 for the blood on the famous glove being Simpson's, it should have been 1/800.
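    That factor of 2 can be checked by brute-force enumeration. Here is a minimal sketch in Python (an illustration of the boy-girl arithmetic only, not Weir's actual calculation):

```python
from itertools import product

# The four equally likely (sex-of-older, sex-of-younger) outcomes.
outcomes = list(product("BG", repeat=2))

# Ordered question: the older is a boy AND the younger is a girl.
p_ordered = sum(o == ("B", "G") for o in outcomes) / len(outcomes)

# Unordered question: one boy and one girl, in either order.
p_unordered = sum(set(o) == {"B", "G"} for o in outcomes) / len(outcomes)

print(p_ordered, p_unordered)  # 0.25 0.5 -- the Hardy-Weinberg factor of 2
```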

    The defense attorneys had a field day with Weir's mistake. They were unrelenting. They focused on his reliability and not so much on the fact that the jury had been presented with two numbers for the same quantity. See the report in the next day's LA Times: <http://articles.latimes.com/1995-06-27/local/me-17506_1_dna-analysis>.

    As in the earlier example, both match probabilities, 1/800 and 1/1600, are incriminating. But the defense attorneys were successful in turning the evidence on its head and worked to convince the jury that they couldn't believe any of it. Apparently, they succeeded.

    *          *          *

    The other example is athlete doping. In 2011 I represented an Estonian Olympic gold medalist in cross-country skiing at the Court of Arbitration for Sport (CAS) in Lausanne, Switzerland. (Pro bono, by the way.) He had been banned from sport for three years for using human growth hormone in out-of-competition training. Tests had been carried out by the World Anti-Doping Agency (WADA). He appealed the ban to the CAS. For the first time, the ban was overturned. And, as clearly stated in the CAS ruling, the exclusive reason was statistics. (There has been one other ban overturned by CAS since that time. That was a Russian runner whose sample was tested beyond the statute of limitations.)

    The CAS hearings are different from most hearings. The CAS agreed with the athlete's request that I be allowed to sit with the CAS hearing panel and ask questions of witnesses from both sides. Krista Fischer and I chronicled our experiences in this article: (Fischer K, Berry DA. Statisticians introduce science to international doping agency: The Andrus Veerpalu case. Chance 27(2014)(3):10-16.) Krista is one of a handful of statisticians in Estonia, the smallest Baltic country at 1.3 million people.

    At the hearing the WADA statistician presented the way he determined the decision limit (DL), the measurement above which the athlete is determined to be a doper. He had fewer than 200 cases that had been collected from standard competitions. So some of these values might have been from dopers. So he deleted the highest values! Then he fit a log-normal distribution and found the DL to be the 99.99 percentile of the distribution. So he (and WADA) claimed a false-positive rate for the test of 1 in 10,000.

    There were two critical problems with this method that I pointed out at the hearing … two critical "communications." First, I asked the WADA statistician how he could conclude that only 1 in 10,000 non-dopers would have a value above the DL when his sample contained fewer than 200 people.

    I also posed the other critical problem with his analysis, his rejection of "outliers," as a question:

    At the hearing, Berry asked the statistician representing WADA the following rhetorical question: "So you're saying that you cut off the tail of the distribution and then claimed that it had a small tail, is that correct?"

    I could tell by the follow-up questions of this witness that one particular member of the CAS panel really got it … he understood what I was saying. So the fact that the athlete's ban was lifted may have had more to do with this person being on the CAS panel than with my asking these questions.
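    The tail-cutting problem can be made concrete with a small simulation (invented numbers, not WADA's actual data): draw "non-doper" values from a known log-normal, fit the 99.99% decision limit with and without the highest observations, and compute the true tail probability beyond each limit.

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
# Hypothetical non-doper measurements: log-normal, so logs ~ N(0, 1).
sample = [random.lognormvariate(0.0, 1.0) for _ in range(200)]
z = NormalDist().inv_cdf(0.9999)  # z-score of the 99.99th percentile

def log_scale_dl(values):
    """Moment-fit a log-normal; return the 99.99% decision limit on the log scale."""
    logs = [math.log(v) for v in values]
    return mean(logs) + z * stdev(logs)

dl_full = log_scale_dl(sample)
dl_trimmed = log_scale_dl(sorted(sample)[:-10])  # delete the 10 highest "outliers"

# True chance that a non-doper's log-value exceeds each decision limit:
fp_full = 1 - NormalDist().cdf(dl_full)
fp_trimmed = 1 - NormalDist().cdf(dl_trimmed)
print(f"trimmed-fit false-positive rate: about 1 in {round(1 / fp_trimmed):,}")
```

    Cutting off the tail shrinks the fitted standard deviation and lowers the decision limit, so the true false-positive rate lands well above the claimed 1 in 10,000.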

    If you want further reading on the statistical issues of doping: Berry DA. Commentary: The science of doping. Nature 454(2008):692-693. Up until the day the article went to press its title was "The science of doping … or lack thereof." But Nature's editors dropped the last three words from the title under threat of a lawsuit by WADA. Despite my raking WADA over the coals in this article and in the Chance article with Krista Fischer, I've never been contacted by them regarding how to improve their science. After the Veerpalu debacle WADA did hire some local (mediocre) statisticians in Montreal to endorse what they're doing and, surprise!, they did endorse what WADA had been doing.

    Also, here's a news article regarding Veerpalu: <https://www.outsideonline.com/1925761/whats-wrong-world-anti-doping-agency>

    *          *          *

    If you're in the mood for a bit of humor, a lot of funny things happen in court … but nobody laughs! Here's the funniest anecdote from my courtroom experience. It was the 1980s. I was testifying (also pro bono) for the Attorney General of Minnesota. The State was prosecuting the defendant for operating a marketing business that was in effect a pyramid scheme, with a product sold only to cover up the scam and skirt the laws against pyramids. There was only one other witness for the State, someone from the Attorney General's office who testified that the scam was in fact operating in Minnesota. But the room was packed, mostly with lawyers from the "big city." The judge was Esther Tomljanovich. I was on the stand sitting next to her. The State's attorney began asking questions of me. One question elicited "I object" from one of the big city lawyers. He then explained his objection, talking to me! He said the question didn't deal with mathematics or statistics, which were my areas of competence. He launched into a detailed explanation and not incidentally was doing mathematics and statistics along the way … which ironically happened not to be among his competences. And he finished up addressing me saying, "And therefore it's not mathematics or statistics, is it, sir?" Now, I knew my place. I turned to the judge. She looked at me and said, "Do you want to rule on this or should I?"

    That's the story. Remember I said this was the 1980s. But the rest of the story is kinda cute. She started to say things, looking alternately at the suitably red-faced lawyer and me. It became clear that she didn't know what to rule. So I spoke up, asking her if it would help for me to say what I understood the State's attorney to be asking. She said "Yes." So I did. And she replied, "Objection overruled!"

    I suppose the real "rest of the story" is the result of the case: the State won.

     



    ------------------------------
    Donald Berry
    Professor
    Univ of Texas MD Anderson Cancer Center-Department of Biostatistics
    ------------------------------



  • 7.  RE: Looking for experts on a story about statistics and the law

    Posted 02-19-2018 08:35
    Check Richard Gill's webpage. He is an English mathematician and statistician teaching at the University of Leiden. He was heavily involved in the Lucia de B case (mentioned by Donald Berry), and several others.

    Richard D. Gill's home page






    ------------------------------
    Anthony Warrack
    Associate Professor
    North Carolina A&T State University
    ------------------------------



  • 8.  RE: Looking for experts on a story about statistics and the law

    Posted 02-19-2018 09:38
    Don Berry,

    Thank you for your lengthy and informative response.  I especially liked the pyramid scheme story.


    ------------------------------
    David Mangen
    ------------------------------



  • 9.  RE: Looking for experts on a story about statistics and the law

    Posted 02-18-2018 01:57
    Nick, good luck with your story! I look forward to reading it eventually in Significance.

    One interesting case that might warrant a 'comparative analysis' would be that of Oregon man Mats Järlström, who faced an 'inquisition' for daring to speak mathematical truth to government power.

    http://ij.org/press-release/lawsuit-challenges-oregon-law-prohibiting-mathematical-criticism-without-license/

    Kind regards,
    David


    ------------------------------
    David C. Norris, MD
    Precision Methodologies, LLC
    Seattle, WA
    ------------------------------



  • 10.  RE: Looking for experts on a story about statistics and the law

    Posted 02-18-2018 12:16
    I also suggest contacting Prof. Mary Gray of American University in D.C. She has published many articles in this area, holds degrees in both math and law, and has worked with statistics and statisticians!



    ------------------------------
    Hasan Hamdan
    Professor of Statistics
    James Madison University
    ------------------------------



  • 11.  RE: Looking for experts on a story about statistics and the law

    Posted 02-21-2018 10:36

    A recent essay of Mary Gray's is: ADJUSTING THE ODDS, Chance 30(2), May 2017, http://chance.amstat.org/2017/04/adjusting-the-odds/ It focuses on matters of "equity" and "disparate impact," claims about which are decided a thousand times more often in the human resources departments and "diversity and equity" offices of corporations, universities, etc. than in the courts.

    Here is one curious excerpt from Gray's essay:

    "Litigation is expensive, so banding a group together in what is termed a "class action" case is a useful technique. Unfortunately, because of court decisions, this has become increasingly difficult in recent years. However, even in the case of a single complainant, statistical evidence of similar discriminatory conduct may be helpful; DOZENS, IF NOT HUNDREDS, OF REGRESSION STUDIES OF SALARIES IN HIGHER EDUCATION HAVE SHOWN THAT WOMEN, ON THE AVERAGE, ARE PAID LESS THAN MEN EVEN WHEN MEASURABLE CHARACTERISTICS ARE THE SAME."

    And here's a question. If her essay had been subjected to rigorous peer review by experts in this matter, would the sentence I've put in all CAPS have been defensible? One sees this sort of claim often in blogs, op-eds, non-peer-reviewed articles, etc., but I believe it to be false. If ANYONE KNOWS OF A SINGLE STUDY DONE in the last few decades that, taking into account all the major variables known to justifiably determine salary levels of academics, found evidence of sex discrimination, I'd be interested to know of it. Ditto for claims of sex discrimination in hiring.

    One of the few outfits that does not tolerate shoddy or superficial statistics on these matters (or a high reliance on anecdotes) is the Cornell Institute for Women in Science (https://www.human.cornell.edu/hd/research/labs/ciws/home). They not long ago had a paper in PNAS that found women applicants for academic positions get preferential consideration.

    Similar close analysis of 30 hiring decisions in my own department and of earlier overheated claims in the University of California found no credible statistical evidence of bias. A review of these can be found in:

    RACE, SEX, AND FACULTY SEARCHES, DEPARTMENT OF BIOLOGY, SDSU, 1988-2002, WITH COMMENTARY ON POLICIES AND ACTIONS OF THE SDSU ADMINISTRATION. San Diego State University, San Diego CA. 16 pp., 2003 (Now available as: NAS Article, National Association of Scholars, New York NY, September 11, 2017.)

    Statisticians who want to get a black belt by helping win big court cases on these issues, might first aspire to getting their green belt by whistle-blowing on the almost universal misuse of statistical data in their own universities.



    ------------------------------
    Stuart Hurlbert
    Emeritus Professor of Biology
    San Diego State University
    ------------------------------



  • 12.  RE: Looking for experts on a story about statistics and the law

    Posted 02-23-2018 13:04

    There are two problems with analyses showing that women or any group are systematically underpaid even after adjusting for characteristics.  The same holds for analyses of differences in loan terms received.

    One is that, as pointed out, they generally fail to adjust adequately for characteristics.  But there is also a more fundamental problem: it is not possible to draw inferences about decision-making processes based solely on information regarding persons who accepted some outcome or situation (i.e., without consideration of information on persons subject to the decision-making process at issue who declined to accept the outcome or situation).

    I recently treated this issue here:  https://www.fed-soc.org/blog/detail/eeoc-omb-and-the-collection-of-data-that-cant-be-analyzed

    As it happens, virtually all highly successful discrimination litigations have involved analyses that failed to examine the entire universe at issue.  These include the Countrywide Financial ($335 million) and Wells Fargo Bank ($175 million) settlements discussed in my December 2012 Amstat News column. http://magazine.amstat.org/blog/2012/12/01/misguided-law-enforcement/

    But I did not there discuss this aspect of those cases.



    ------------------------------
    James Scanlan
    James P. Scanlan Attorney At Law
    ------------------------------



  • 13.  RE: Looking for experts on a story about statistics and the law

    Posted 02-20-2018 09:33

    This is a scary subject given how much confusion there is in the statistics profession and literature over relatively simple statistical issues, such as the meaning of P, the need to fix alphas, and when, if ever, to use 1-tailed tests.

     

    When Lombardi and I prepared our monograph on misuse of 1-tailed tests (Lombardi, C.M. and S.H. Hurlbert, 2009. Misprescription and misuse of one-tailed tests. Austral Ecology 34:447-468, Appendix -- ATTACHED), we had to omit, for the sake of brevity, the section below, which is now about ten years out of date. Feel free to borrow from it, everyone!

     

    ABUSE IN THE COURTS

                Medical research is not the only area where poor decisions have been made as a result of an obsession with the 0.05 level of significance and inappropriate use of 1-tailed tests. Confusion over 1-tailed testing seems to have had, and to be having, dramatic real consequences for plaintiffs and defendants in the U.S. legal system. Datko (1989) gives an illuminating review of how this issue has influenced litigation on employment discrimination under the 1964 Civil Rights Act of the U.S. Congress. That act "forbids the making of decisions regarding employment on the basis of race, sex, religion and several other factors." Statistical evidence frequently is a major factor determining the disposition of claims brought under this act.

                Let us first list some of the disparate findings by various courts as to when 1- and 2-tailed tests are admissible and then critique the logic of some of Datko's own recommendations.

                One court or another has held: 1) that chance could be ruled out only when "the observed result was two or three standard deviations from the average event"; 2) that 2-tailed tests are more appropriate than 1-tailed in discrimination cases; 3) that 1-tailed tests are appropriate when there is "independent" evidence of intent to discriminate; and 4) that 1-tailed tests can be justified simply because the plaintiff claims to have suffered discrimination. That the courts seem completely befuddled by the issue is perhaps not surprising given that statisticians collectively are as well.

                Datko's review gives a good description of a bad state of affairs. He makes some legitimate points, but three of his key premises or recommendations are unacceptable. Thus we do not believe he has provided the clarification he had hoped to. First, he believes that α should be fixed and claims that "Most courts and social scientists believe that the minimum level of significance should be .05", citing Hayes (1981) and a specific court case as his authorities. It is a bit vague, but he gives the impression that some cases have indeed been decided on the basis of P values of 0.05 or slightly less. In such legal settings, decisions as to what value of P should be regarded as "conclusive evidence" would seem to be a matter of legal, not statistical, principle. Just as a judgment against a defendant requires stronger evidence in a criminal than in a civil case (e.g. Gray 1993), so the critical level of P should vary according to the nature or import of the fact being tried. That is, 0.05 might be sufficient to establish a minor or subsidiary fact, but it would seem a very loose criterion for determining, solely from statistical evidence, the fact of discrimination itself. After all, if there were, say, 100,000 corporations in the U.S. and if not one were to discriminate against females in employment, one could in fact "prove", using an α = 0.05, that about 5,000 of them do discriminate against females. Wouldn't that make a jolly pile of fees, awards and penalties! A P value in the neighborhood of 0.05 is perhaps a reasonable basis for reaching a tentative scientific conclusion or for concluding a result is worth publishing, but it would seem a dangerous basis for any highly consequential and non-tentative legal decisions, civil or criminal.
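    The arithmetic behind the 100,000-corporations example is easy to verify; a quick simulation in Python (illustrative, with an invented random seed):

```python
import random

random.seed(1)
n_firms, alpha = 100_000, 0.05
# Under the null, no firm discriminates, yet each test "convicts" with
# probability alpha. Expected number of false findings of discrimination:
expected = n_firms * alpha  # 5,000
# Simulate one realization: each firm's p-value is uniform under the null.
false_findings = sum(random.random() < alpha for _ in range(n_firms))
print(expected, false_findings)  # the simulated count lands near 5,000
```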

    Additionally, we would argue that P values should be considered only in conjunction with the apparent magnitude of the alleged discrimination, as measured by some appropriate index. In recognition that important variables not easily quantified are neglected in such analyses and that the data gathered and methods used also have their uncertainties, it would make sense to require that this apparent magnitude exceed a certain level before purely statistical evidence of discrimination be judged conclusive. At one point the U.S. government agency charged with the administration of Title VII "propounded the so-called four-fifths rule: a selection rate for minorities less than four-fifths of the highest selection rate is taken to constitute evidence of discrimination. Courts were never very enamored of this rule, and statisticians soon convinced them to substitute a measure of statistical significance" (Gray 1993). It is not clear why both P values and effect sizes cannot be taken into account.
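    The four-fifths rule itself is simple arithmetic; a sketch with invented selection rates:

```python
def four_fifths_flag(rate_protected, rate_highest):
    """Return True if the ratio of selection rates falls below 4/5,
    the rule-of-thumb threshold for evidence of adverse impact."""
    return rate_protected / rate_highest < 0.8

# Invented rates for illustration:
print(four_fifths_flag(0.30, 0.50))  # True: ratio 0.6 -> flagged
print(four_fifths_flag(0.45, 0.50))  # False: ratio 0.9 -> not flagged
```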

                A second fault in Datko's analysis is that he accepts the idea that α be set at 0.05 regardless of the tailedness of the test. This makes no sense, as we have argued earlier. It is the same error made by those who claim that simply by switching from 2- to 1-tailed tests medical experiments can be made less costly and ethically superior, by using fewer patients, with no diminution in the force of the evidence.

                Thirdly, Datko argues, as have some courts, that 1-tailed tests should or can be used if "independent evidence" or "independent proof" of discrimination exists, whereas otherwise a 2-tailed test is appropriate. This again is based on the incorrect premise that the conclusiveness of the evidence, as indicated by a P value, can be judged independently of whether a test was 1- or 2-tailed. At least where the probability distribution of the test statistic is symmetric under the null hypothesis, the force of the evidence is completely unaffected by the tailedness of the test. We also note that the statistical evidence and the other evidence cannot be regarded as logically "independent" when both the decision to carry out a test and the nature of the test selected are determined by the "other" evidence. To his credit, Datko rejects the idea that the claim of a plaintiff is by itself sufficient grounds to justify a 1-tailed test. Unfortunately, a well-known statistics text for lawyers (Finkelstein and Levin 1990:125) does advocate that idea.
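    For a symmetric test statistic the point is easy to verify numerically; a sketch with an illustrative z value (a normal test statistic is assumed here purely for demonstration):

```python
from statistics import NormalDist

nd = NormalDist()                        # standard normal null distribution
z = 1.8                                  # an illustrative observed test statistic
p_one_tailed = 1 - nd.cdf(z)             # about 0.036
p_two_tailed = 2 * (1 - nd.cdf(abs(z)))  # exactly double, by symmetry

print(round(p_one_tailed, 3), round(p_two_tailed, 3))
# The same data and the same z yield a p-value that simply doubles when the
# second tail is added: the force of the evidence is unchanged, only the
# reporting convention differs.
```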

                Despite our disagreements with him, we concur with Datko's (1989) conclusion that this "issue is ripe for hearing and resolution at the U.S. Supreme Court level. The guidelines for the use of statistics in employment discrimination settings is an important topic since statistics are generally crucial and, on occasion, the only evidence produced by plaintiffs to prove discrimination."

    Datko, D. 1989. Should the tail wag the dog? One- versus two-tailed statistical tests in Title VII employment discrimination litigation. Capital University Law Review 18:445-461.

    Gray, M.W. 1993. Can statistics tell us what we do not want to hear? The case of complex salary structures. Statistical Science 8:144-179.



    ------------------------------
    Stuart Hurlbert
    Emeritus Professor of Biology
    San Diego State University
    ------------------------------

    Attachment(s)

    pdf
    2009MisprescrpnOneTail.pdf   228 KB 1 version
    pdf
    2009OneTailAppendix.pdf   233 KB 1 version


  • 14.  RE: Looking for experts on a story about statistics and the law

    Posted 02-26-2018 12:40
    The first name that comes to my mind is Joe Gastwirth -- he has published fairly extensively on the subject and has presented on the topic for the WSS.

    https://statistics.columbian.gwu.edu/joseph-l-gastwirth
    jlgast@gwu.edu

    ------------------------------
    Grant Izmirlian
    Mathematical Statistician
    National Cancer Institute
    ------------------------------



  • 15.  RE: Looking for experts on a story about statistics and the law

    Posted 02-21-2018 09:20
    I'm just seeing this now, hope I'm not too late to be helpful. You've already received some great suggestions on contacts/resources.  A couple more thoughts:
    - For understanding how scientific evidence is used in legal settings, Bill Thompson, a law professor and sociologist at UC Irvine who's been involved in forensics reform, is a great resource. He's not a statistician.
    - You might want to look at civil and criminal situations as distinct, as things seem to play out differently in the two realms.  The OJ Simpson case is unusual in that it involved a high-powered, expensive defense that brought statistical issues to light.  Unsupported (and untrue) statistical statements have played a big role in wrongful convictions in large part because they are frequently unchallenged in criminal cases -- which often have poor defendants and limited defenses.  I think the biggest change in admissibility of scientific evidence came through a civil case.
    - On the criminal side, the need for good statistics comes up primarily in evaluating forensic evidence. This comes up in the 2009 National Academies report "Strengthening Forensic Science in the US: A Path Forward" and in a more recent (2016?) report from the President's Council of Advisers on Science and Technology. (Those would be the advisers to the previous president.)  Karen Kafadar, incoming ASA president, was part of the 2009 report.
    - There's a NIST-funded center of excellence working on those issues (CSAFE, http://forensicstats.org/) that's headquartered at Iowa State, with PI Alicia Carriquiry.  She's really articulate and can explain the statistical issues clearly -- it's helpful to distinguish between the concept of error rates and probabilities of inclusion or likelihood ratios. Full disclosure: I'm part of CSAFE at Carnegie Mellon.  So is Jay Kadane, who someone else already suggested here. 
    - There was a good panel on these issues at the AAAS conference last weekend.  It included Alicia, Hal Stern from UC Irvine (also from CSAFE), Lynn Garcia from the Texas Forensic Science Commission, and Peter Neufeld, co-founder (with Barry Scheck) of the Innocence Project. Peter was there to encourage scientists to get involved in these issues. He explained that the system that governs what kinds of statistics get into court is basically what's proposed by lawyers -- people who probably went to law school because they didn't like math and science (I'm paraphrasing, but it was his point) -- and accepted by judges, who are former lawyers.
    - Under the current administration, there have been decisions to defund and limit this work. The National Commission on Forensic Science was allowed to sunset and the president's proposed budget may zero-out CSAFE.  Peter Neufeld had a slide on all of the backsliding at his talk last weekend.
    - Steve Pierson, ASA policy director, also knows a lot about these issues and could probably point you to resources.

    Good luck!  I'm really excited that you're doing this story and look forward to reading it.

    Best,
    Robin

    ------------------------------
    Robin Mejia
    Special Faculty
    Carnegie Mellon University
    ------------------------------



  • 16.  RE: Looking for experts on a story about statistics and the law

    Posted 02-21-2018 10:13
    And I just read your post again and realize much of my note may be a bit afield from what you need -- but maybe there's something in there.  I do look forward to reading it, and am excited to see someone with a stat/CS background getting into journalism!

    ------------------------------
    Robin Mejia
    Special Faculty
    Carnegie Mellon University
    ------------------------------



  • 17.  RE: Looking for experts on a story about statistics and the law

    Posted 02-26-2018 23:33
    Hey all,

    I just wanted to make clear how (positively) overwhelmed I am by this outpouring of advice and support. These recommendations have led me down paths I would never have stumbled on otherwise, and I'm immensely grateful for the time and effort each of you put into helping me research this subject. I think the article is going to be a good one, and that's largely due to the ASA community.

    Thank you truly,
    Nick

    ------------------------------
    Nick Thieme
    University of Maryland
    ------------------------------