James,
Thank you for the opportunity to contribute in some regard to this important discussion.
Whatever the ultimate merits of the case, I certainly disagree with Ayres' implication that methods not his own are undeserving of a hearing. (And I'd contest his definition of a p-value.)
What Ayres seems not to recognize or accept is the suggestion in your brief that, underlying the distribution patterns you present for certain sorts of data, there might be some mathematical principle or pattern at play which runs counter to our intuitions about what to expect--namely, one with the consequence that improving a group's relative success rate may appear to degrade that group's relative failure rate.
If this is indeed the type of claim you are making, then Ayres' argument that you need to consider more, extraneous empirical variables is somewhat off the mark. His better argument would have been to ask you to explain more clearly, and to defend, exactly what this a priori (if you will) aspect of the data is that brings on such effects.
As you can see by the hour, I got quite curious about whether such an underlying explanation exists; to that end, I developed and ran some simulations.
I do think I have found something (see Case 2, below); but if it's indeed applicable here, then I think your paper may be partly misstating what the phenomenon is.
Case 1: Suppose you take large samples from two groups, and whether any member succeeds or fails is a binary variable; and suppose further there is (or starts out) a systematic difference in the success rates of the two groups.
If you resample thousands of times from such groups and compare their success and failure ratios as you discuss, the outcome is actually much what one would expect: a strong and highly significant positive correlation between the advantaged group's relative success ratio (Advantaged over Disadvantaged) and the other group's relative failure ratio (D over A). (So in this scenario, if the advantaged group's relative success ratio drops, expect the disadvantaged group's relative failure ratio to drop as well... which is the conventional expectation.)
I found this result robust to variations in the expected success rates, the shapes of the group data distributions (uniform, or normal with different variances), and the size of the systematic difference between the groups' rates--even when those rates themselves vary.
Case 2: Where your examples differ is that success or failure is not exactly treated as binary. It's more as if each data column (for the A and D groups, respectively) contains continuous values, and you are successively varying the cutoff value that defines "success"--all with respect to the same data columns. If outputs are compared as if this were Case 1, the cutoffs become a confounder.
...But that's not necessarily bad news for your case: there is a reasonably strong, and definitely significant, negative correlation between the cutoff value used and the disadvantaged group's relative failure ratio (D over A). So despite the possible virtues of simply lowering the cutoff for success, which may seem to improve relative success rates, the cutoff-lowering itself will apparently increase the disadvantaged group's D/A percent-fail ratio.
On reflection, this pattern appears to hold somewhat 'a priori' after all, so, as I mentioned above, it is not a matter of needing extra data: the curve for (percent fail D/A) versus (cutoff value) slopes asymptotically towards a ratio of 1.00. This makes sense: if the cutoff were set to the highest possible value, then both groups would always fail (equally), so their comparative ratio would be 1.0. The lower the cutoff, the more chance for the group with any residual systematic advantage to pull ahead.
I realize this point differs from your paper, but you might find it helps your case. If the above finding is valid, it would certainly be unfair for an organization to be encouraged to lower its cutoff scores, and then punished when the resulting higher D/A failure ratio is a predictable artifact of that same policy.
Hope this helps.
Bill
-------------------------------------------
William Goodman
University of Ontario Institute of Technology
-------------------------------------------
Original Message:
Sent: 02-11-2015 13:22
From: James Scanlan
Subject: Supreme Court amicus curiae briefs on measurement
Below are links to two Supreme Court amicus curiae briefs recently filed in the case of Texas Department of Housing and Community Development, et al. v. The Inclusive Communities Project, Inc. Together the briefs raise issues about both the measurement of demographic differences and the quality and integrity of legal and scientific discourse that should be of interest to ASA members. The case, which involves the issue of whether disparate impact claims are cognizable under the Fair Housing Act, is considered quite important in many circles.
The first amicus curiae brief is one I filed on November 17, 2014, addressing certain measurement issues that I had to some extent addressed previously in a Fall 1994 Chance article titled "Divining Difference," a Spring 2006 Chance guest editorial titled "Can We Actually Measure Health Disparities?," and a December 2012 Amstat News Statistician's View column titled "Misunderstanding of Statistics Leads to Misguided Law Enforcement Policies," as well as a July/Aug. 2014 Society article titled "Race and Mortality Revisited" (each of which may be easily found online). As reflected in the articles, the measurement issues addressed in the brief pertain to a wide range of subjects beyond the matter specifically before the Supreme Court.
The second amicus curiae brief is one filed by Yale Law Professor Ian Ayres on December 23, 2014. The Ayres brief purports to respond to my brief and to correct misunderstandings and misapprehensions contained in it. In addition to being a lawyer, Ayres has a Ph.D. in economics from MIT and has published extensively on a range of statistical issues. He occasionally blogs on the Freakonomics Blog.
A third item below is my January 13, 2015 letter to Ayres' counsel requesting that she withdraw the Ayres brief because it is materially misleading and advising that I would seek to have her sanctioned by the Supreme Court Bar. I will shortly be submitting my complaint to the Supreme Court Bar. Any views members may have as to why the Ayres brief is or is not misleading would be appreciated.
1. Scanlan brief: http://jpscanlan.com/images/13-1371tsacJamesP.Scanlan.pdf
2. Ayres brief: http://jpscanlan.com/images/Ian_Ayres_Amicus_Brief_13-1371.pdf
3. Letter to Ayres counsel: http://jpscanlan.com/images/Letter_to_Rachel_J._Geman.pdf
-------------------------------------------
James Scanlan
James P. Scanlan Attorney At Law
-------------------------------------------