ASA Connect

  • 1.  Statistical Misunderstanding in Keep Kids in School Act

    Posted 03-20-2015 01:06
    This message has been cross-posted to the following eGroups: Washington Statistical Society and ASA Connect.
    -------------------------------------------

    In a March 9, 2015 post regarding the Department of Justice's March 4, 2015 report on racial disparities in Ferguson, Missouri's law enforcement practices, I provided a link to a March 9, 2015 letter to officials of the Department of Justice and the City of Ferguson,[1] in which I explained that, contrary to the premise of the DOJ report, reducing the frequency of adverse interactions between the citizens of Ferguson and the city's police and courts would tend to increase (not decrease) the proportion blacks comprise of persons experiencing those interactions.  In the post, I referenced a December 2012 Amstat News Statistician's View column,[2] in which I had addressed some of the law enforcement anomalies arising from the fact that the DOJ and other federal agencies have long proceeded under the mistaken view that relaxing standards, and thereby reducing the frequency of adverse outcomes (like rejection of mortgage loan applications or suspension from public schools), will tend to reduce relative racial differences in rates of experiencing those outcomes.

    It appears that on March 4, 2015, Senator Robert P. Casey, Jr. introduced S. 627, the Keep Kids in School Act, which is also based on the mistaken view that generally reducing discipline rates will tend to reduce relative demographic differences in discipline rates.  Item 3 below is my March 20, 2015 letter to the Senate Committee on Health, Education, Labor and Pensions, explaining that generally reducing discipline rates will tend to increase, not reduce, relative differences in discipline rates.  As the letter reflects (and as with the March 9, 2015 letter to the Department of Justice), this is not my first letter explaining the matter to the recipient entity.  Thus, the Keep Kids in School Act will provide further reason for the American Statistical Association to take a role in educating the government on this subject.

    1.  http://jpscanlan.com/images/Letter_to_Department_of_Justice_and_City_of_Ferguson_Mar._9,_2015_.pdf

    2.  "Misunderstanding of Statistics Leads to Misguided Law Enforcement Policies," Amstat News (Dec. 2012) http://magazine.amstat.org/blog/2012/12/01/misguided-law-enforcement/

    3. http://jpscanlan.com/images/Letter_to_Senate_Committee_on_Health,_Educ,_Labor_and_Pensions_March_20,_2015_.pdf



    -------------------------------------------
    James Scanlan
    James P. Scanlan Attorney At Law
    -------------------------------------------


  • 2.  RE: Statistical Misunderstanding in Keep Kids in School Act

    Posted 03-23-2015 09:58
    It's not whether the proposed change will reduce the level of disparity.  It's just a case of deciding how to define disparity.  If we go from a 40%/20% rate of some particular outcome to a 10%/2% rate we've reduced the difference between the two groups but increased the ratio.  In other words, there are a lot of things that we can't talk intelligently about without discussing how to define them.
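    For concreteness, the two measures in that example can be computed directly (a minimal sketch; the function names are mine, not from the post):

    ```python
    # Sketch of the 40%/20% -> 10%/2% example: the absolute (percentage-point)
    # difference shrinks while the ratio of the two rates grows.
    def absolute_difference(p1, p2):
        """Percentage-point gap between two rates."""
        return abs(p1 - p2)

    def rate_ratio(p1, p2):
        """Relative difference: higher rate divided by lower rate."""
        return max(p1, p2) / min(p1, p2)

    for pair in ((0.40, 0.20), (0.10, 0.02)):
        print(pair, absolute_difference(*pair), rate_ratio(*pair))
    ```

    The first pair yields a 20-point gap and a ratio of 2; the second an 8-point gap and a ratio of 5, so "the disparity" moves in opposite directions depending on the measure chosen.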
    ------------------------------
    Emil M Friedman, PhD
    emil.friedman@alum.mit.edu (forwards to day job)
    emilfrie@alumni.princeton.edu (home)
    http://www.statisticalconsulting.org
    ------------------------------




  • 3.  RE: Statistical Misunderstanding in Keep Kids in School Act

    Posted 03-23-2015 16:44

    I agree with Dr. Friedman with respect to the nature of the important question, save for the use of the word "define," which, to my mind, suggests that we might choose among measures (or merely need to clarify the measure used) to appraise a disparity, and possibly that relative and absolute differences in discipline rates are the measures we would choose between.

    Society's interest in examining a pair of outcome rates of an advantaged group and a disadvantaged group is to appraise the forces causing the outcome rates to differ.  Observers make such an appraisal, in the first instance, to determine whether those forces should be deemed strong or weak.  Then, when comparing a situation involving one pair of outcome rates with a situation involving a different pair of outcome rates for the same groups at a different point in time, observers make such an appraisal to determine whether the forces have grown stronger or weaker.  We can effectively pursue these goals only with a sound understanding of the ways particular measures tend to be systematically affected by a general change in the frequency of an outcome, by which I mean a change akin to that effected by the lowering of a test cutoff (or, in the school discipline context, akin to that effected by increasing the number of infractions necessary to trigger a suspension).  Neither the relative difference in the adverse outcome, nor the relative difference in the corresponding favorable outcome, nor the absolute difference between rates can serve that purpose unless the observer understands the way the measure tends to be affected by the frequency of an outcome.  For example, as a test cutoff is lowered, relative differences in failure rates tend to increase while relative differences in pass rates tend to decrease.  As a test cutoff is lowered from a very high point to a somewhat lower point, absolute differences tend to increase; when the cutoff is lowered from a fairly low point to an even lower point, absolute differences tend to decrease.  As the frequency of an outcome changes, the absolute difference tends to change in the same direction as the smaller relative difference.
    While the absolute difference and both relative differences can all change in the same direction (in which case we can infer a true change in the strength of the forces causing the rates to differ), anytime the relative difference an observer mentions changes in the opposite direction of the absolute difference, the unmentioned relative difference will necessarily have changed in the opposite direction of the mentioned relative difference and in the same direction as the absolute difference.
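    The cutoff patterns described above can be illustrated with a simple model (my construction, not taken from the references): two normal score distributions with equal spread, the advantaged group's mean half a standard deviation above the disadvantaged group's.

    ```python
    # Illustration of how lowering a test cutoff moves the measures:
    # relative differences in failure rates grow, relative differences in
    # pass rates shrink, and absolute differences rise and then fall.
    from statistics import NormalDist

    adv = NormalDist(mu=0.5, sigma=1.0)  # advantaged group's scores
    dis = NormalDist(mu=0.0, sigma=1.0)  # disadvantaged group's scores

    for cutoff in (1.5, 0.5, -0.5, -1.5):  # progressively lower cutoffs
        fail_dis, fail_adv = dis.cdf(cutoff), adv.cdf(cutoff)
        fail_ratio = fail_dis / fail_adv                # relative diff, adverse outcome
        pass_ratio = (1 - fail_adv) / (1 - fail_dis)    # relative diff, favorable outcome
        abs_diff = fail_dis - fail_adv                  # absolute difference
        print(f"cutoff {cutoff:+.1f}: fail ratio {fail_ratio:.2f}, "
              f"pass ratio {pass_ratio:.2f}, abs diff {abs_diff:.3f}")
    ```

    Running this shows the fail ratio climbing and the pass ratio falling steadily as the cutoff drops, while the absolute difference peaks in the middle of the range, just as described above.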

    I describe these patterns, and the implications of the failure to understand them in various contexts, in reference 1.  An illustration of the patterns by which measures change across the distribution may be found in its Table 5 (at 335), and in Figure 3 (at 41) (based on test score data) and Figure 7 (at 63) (based on income data) of reference 2.  Graphical illustrations based on income data may also be found in Figures 2 to 5 of reference 3.  The discussion of Table 5 in reference 1 (at 335-36) is intended to refute the notion, which is increasingly asserted with respect to health and healthcare disparities research, that one may choose a measure of disparity based on a value judgment (an assertion generally made in circumstances where the absolute difference and the relative difference the observer happens to be examining have changed in opposite directions).

    Reference 1 also describes a method for appraising the strength of the forces causing a pair of outcome rates to differ that is not affected by the frequency of an outcome.  In the case of Dr. Friedman's posited change from a pair of outcome rates of 40% and 20% to one of 10% and 2% (which the UCLA Civil Rights Project, based on absolute differences, would call a dramatic decrease in disparity (as would observers relying on the relative difference in the favorable outcome), and which the Department of Education, based on relative differences in the adverse outcome, would call a dramatic increase in disparity), the referenced method would show that there in fact occurred an increase (from a .59 standard deviation difference between the underlying means in the former case to a .77 standard deviation difference in the latter case).
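    The standard-deviation figures above can be reproduced with the inverse normal CDF; this is a sketch under the assumption that each rate is treated as the tail of a unit-variance normal distribution (the function name is mine):

    ```python
    # Estimate the gap between the underlying group means, in standard
    # deviation units, implied by a pair of adverse outcome rates.
    from statistics import NormalDist

    def sd_gap(rate_disadvantaged, rate_advantaged):
        """Difference between the z-scores corresponding to the two rates."""
        z = NormalDist().inv_cdf
        return z(rate_disadvantaged) - z(rate_advantaged)

    print(round(sd_gap(0.40, 0.20), 2))  # 0.59
    print(round(sd_gap(0.10, 0.02), 2))  # 0.77
    ```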

    But consider a case where the .59 standard deviation figure did not change as there occurred an overall decrease in the adverse outcome (as, for example, where the disadvantaged group's rate dropped from 40% to 20% while the advantaged group's rate dropped from 20% to 7.5%) - that is, where there is no basis to infer anything about a change in the strength of the forces causing the rates to differ.  Those relying on absolute differences would still find a decrease in disparity (as would those relying on relative differences in rates of avoiding discipline), while those relying on relative differences in discipline rates would still find an increase in disparity.  And some might then mistakenly devote resources to trying to figure out the reason for the perceived increase or the perceived decrease in disparity.  See the discussion near the end of page 329 of reference 1 regarding the value of studying reasons for increasing or decreasing disparities in poverty rates without consideration of the ways measures tend to change simply because poverty increases or decreases.
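    That hypothetical can be checked the same way (again a sketch of mine, treating each rate as a normal tail): the implied gap between the underlying means stays essentially fixed while the other three measures all move.

    ```python
    # 40%/20% vs. 20%/7.5%: the adverse-outcome ratio rises, the
    # favorable-outcome ratio and the absolute difference fall, yet the
    # implied standard-deviation gap between the underlying means barely moves.
    from statistics import NormalDist

    z = NormalDist().inv_cdf

    for dis_rate, adv_rate in ((0.40, 0.20), (0.20, 0.075)):
        print(f"rates {dis_rate:.3f}/{adv_rate:.3f}: "
              f"adverse ratio {dis_rate / adv_rate:.2f}, "
              f"favorable ratio {(1 - adv_rate) / (1 - dis_rate):.2f}, "
              f"abs diff {dis_rate - adv_rate:.3f}, "
              f"SD gap {z(dis_rate) - z(adv_rate):.2f}")
    ```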

    The letter to the Senate Committee, which involves a bill whose sponsor specifically believes reductions in discipline rates will tend to reduce relative differences in discipline rates (a nearly universal, though manifestly incorrect, belief), was intended at least to educate the Committee with respect to the fact that reducing an outcome tends to increase relative differences between rates of experiencing the outcome (something that, so far within the federal government, seems to be known only at the National Center for Health Statistics, as discussed in reference 1 at 331-35).  A starting point for understanding that pattern is to recognize such things as that lowering test cutoffs tends to increase relative differences in failure rates while reducing relative differences in pass rates, or that generally reducing poverty tends to increase relative differences in poverty rates while reducing relative differences in rates of avoiding poverty.  These are things that, while hardly disputable, seem to be largely unknown.

    The letter's more ambitious goal is to cause the Committee to recognize that virtually all efforts to appraise demographic differences in outcome rates are statistically unsound as a result of the failure to understand the ways that the measures commonly employed to appraise those differences tend to be systematically affected by the frequency of an outcome, and that entities like Congress or the Department of Justice cannot effectively perform their functions, where the interpretation of data on demographic differences between outcome rates is involved, without understanding the statistical patterns described in reference 1 and the other places cited at page 3 of the letter.  As discussed at 343-45 of reference 1, Congress and the Department of Justice are by no means the only institutions whose activities pertaining to the appraisal of group differences suffer from failure to understand these patterns.

    It is certainly true that there exists a great deal of discussion of disparities without understanding that a measure different from the one the discussant relies upon could (or in fact does) yield an opposite conclusion about the direction of change.  Such discussion can be of little, if any, value.  But the same holds for discussion that fails to recognize the reasons that neither of the measures is sound.

     1. "Race and Mortality Revisited," Society (July/Aug. 2014) http://jpscanlan.com/images/Race_and_Mortality_Revisited.pdf

     2. "Rethinking the Measurement of Demographic Differences in Outcome Rates," Methods Workshop, Maryland Population Research Center of the University of Maryland (Oct. 10, 2014) http://jpscanlan.com/images/MPRC_Workshop_Oct._10,_2014_.pdf

     3. "Can We Actually Measure Health Disparities?," Chance (Spring 2006) http://www.jpscanlan.com/images/Can_We_Actually_Measure_Health_Disparities.pdf



    ------------------------------
    James Scanlan
    James P. Scanlan Attorney At Law
    ------------------------------