Discussion: View Thread

  • 1.  A lesson on Political Survey Sampling: The 2022 Kansas Abortion Amendment Survey

    Posted 06-06-2023 09:52

    Hi all:

    I submitted this article to a journal. It was initially accepted with some revisions. I sent in a revised version that addressed some issues, but I reached an impasse with one reviewer over differing philosophies on the validity of election survey science, not on the science behind this paper.  If you are fortunate enough to be able to decide for yourself whether or not to participate in a consulting project, this post will be of special interest to you.  I put this into Courier font, but the original was in Times New Roman.

    Is the Kansas Abortion Amendment Survey a Wake-up Call on Election Survey Science?

    In this essay, Jonathan Shuster asks statisticians to weigh in on the titled question. Rather than advocating an answer one way or the other, he proposes it as a debate topic in which the statistical community needs to become engaged. The 2022 Kansas state constitutional amendment vote will be used as a case study.

    The main reasons for keeping the statistical community involved include the following: (1) The public has a substantial interest in voter preference on a wide variety of political issues. If statisticians put themselves on the sidelines, less quantitatively trained analysts will fill the void, increasing the risk of flawed data analysis; (2) Statistician involvement can contribute to optimal survey design, getting the most from limited resources; (3) Statisticians are well suited to interpret the data and to make informed judgments about the scope of bias in a survey; (4) Many election surveys are conducted by groups with a vested interest in the outcome. When results go against those interests, they can easily go unreported. Statisticians have long championed the reporting of non-significant results and are well positioned to act against such reporting bias.

    But there are also important reasons for staying on the sidelines, including: (1) Political polls have notoriously low participation rates (often below 20%); see this link: Phone survey response rates decline again | Pew Research Center. Obtaining a true sampling frame is difficult. For example, even with voter registration rolls, you never know whether a registered voter will actually cast a ballot; (2) Can a survey in and of itself influence election results? As Cernat and Keusch [1] note, survey results can have an impact on behavior. For example, if a landslide prediction is publicly reported, will that in and of itself depress voter turnout compared to a too-close-to-call prediction? (3) We have all seen numerous situations where the survey result was completely inconsistent with the actual result. This link, Election polls are 95% confident but only 60% accurate, Berkeley Haas study finds | Haas News | Berkeley Haas, gives us insight into how close to pure guesswork election surveys may be; (4) As a consulting statistician, if you believe the science behind a request to participate in a particular project is faulty, you are within your rights to respectfully opt out.

    The 2022 Kansas Amendment Vote on Abortion Rights

    As an important case study, we shall compare the results of a survey (conducted two weeks before the official vote) with the actual results of the vote on an August 2, 2022 state constitutional amendment in the US state of Kansas. A Yes vote would essentially have removed abortion rights in the state.

    After the results were announced, this author was asked why the "Co/Efficient" poll of 1,500 likely voters seemed so divergent from the actual vote.  Of the 922,321 actual voters, 378,466 (41.0%) voted Yes (favoring removal of the right-to-choose language from the state constitution) and 543,855 (59.0%) voted No (favoring keeping the language in the constitution), while in the random poll of likely voters, 47% favored Yes, 43% favored No, and 10% voiced no opinion.

    The retrospective question is: were the poll results consistent with a true random sample of Kansas voters on this issue? By pessimistically projecting all unknown elements in the survey to favor survey responses of "Yes", we shall demonstrate with very high certainty that, had the survey been a truly random sample of the actual amendment voters, its results would have been implausible.

    We make two necessarily biased assumptions favoring a Yes answer to this question: (1) all individuals who would have voiced no opinion in the survey actually voted No in the election; (2) because the poll results were rounded to whole percentages, we take the actual survey percentages to be as close as possible to a No outcome: 46.50% Yes, 43.50% No, and 10.00% no opinion. Any other configuration of these two issues makes the actual survey results even more implausible.

    Here are the data from the actual vote, cross-tabulated by conservative assumption (1) as to how voters would have responded to the survey: 

    Table 1: Conservative tabulation of actual voting percentages

                                        Voted Yes             Voted No
                                        (favor eliminating    (favor retaining
                                        right to choose)      right to choose)
    Would respond to question               41.0%                 49.0%
    Refuse to answer (no opinion)            0.0%                 10.0%
    Total                                   41.0%                 59.0%

    Of the population who would have responded to the poll, 49.0%/90.0% = 54.44% would have reported a No vote. In the Co/Efficient poll of 1,500 voters, among the estimated 1,350 who gave an opinion, 43.5/90 = 48.33% favored No, a discrepancy of 6.11 percentage points. The standard error of this estimate is conservatively 0.5/sqrt(1350) = 1.36%, so the discrepancy is 6.11/1.36 = 4.49 standard errors. The probability that a standard normal random variable exceeds 4.49 in absolute value is 7.1 in a million (0.0000071).
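    For readers who want to check the arithmetic, here is a minimal sketch (mine, not part of the original analysis) that reproduces the calculation above using only the Python standard library; the figures come directly from Table 1 and the poll as described.

    ```python
    # Minimal sketch reproducing the back-of-the-envelope test above;
    # figures come from Table 1 and the Co/Efficient poll as described.
    from math import sqrt, erfc

    p_no_actual = 0.49 / 0.90        # 54.44%: No share among would-be respondents
    p_no_survey = 0.435 / 0.90       # 48.33%: No share among poll opinion-givers
    n_opinion = 1350                 # the estimated 90% of 1500 who gave an opinion

    se = 0.5 / sqrt(n_opinion)       # conservative (maximal) binomial standard error
    z = (p_no_actual - p_no_survey) / se
    p_two_sided = erfc(z / sqrt(2))  # P(|Z| > z) for a standard normal Z

    print(f"z = {z:.2f}, two-sided p = {p_two_sided:.1e}")
    # prints: z = 4.49, two-sided p = 7.1e-06
    ```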

    Beyond a reasonable doubt, the survey results were not representative of the actual results.

    Discussion

    One question is whether the survey results influenced the final vote. Could it be that apparently being behind, but within striking distance, made the No-leaning side more motivated to vote than the Yes-leaning side? Did the closeness of the survey results stimulate greater campaigning to bridge the deficit on the No-leaning side, while the Yes-leaning leadership became complacent?

    Survey results reported to the public should carry a reasonable number of significant digits. In a binomial survey of 1,500, the standard error is at most 0.5/sqrt(1500) = 1.29%, so results should have been reported with three to four significant digits, not two.  It was fortunate that the discrepancy between the poll and the election was robust enough to overcome the uncertainty caused by roundoff. A numerical illustration of the roundoff sensitivity follows.
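    As an illustration (mine, not from the original report), the sketch below reruns the same test with the poll numbers as reported (47/43/10) versus under the conservative rounding used above (46.5/43.5/10); whole-percent rounding alone moves the z-statistic noticeably.

    ```python
    # Sketch: sensitivity of the z-statistic to whole-percent rounding.
    from math import sqrt

    def z_stat(p_yes, p_no, n=1500):
        answered = p_yes + p_no           # share of the poll giving an opinion
        p_no_survey = p_no / answered     # No share among opinion-givers
        p_no_actual = 0.49 / 0.90         # No share implied by Table 1
        se = 0.5 / sqrt(n * answered)     # conservative standard error
        return (p_no_actual - p_no_survey) / se

    print(z_stat(0.47, 0.43))    # as reported: z ~ 4.90
    print(z_stat(0.465, 0.435))  # conservative rounding: z ~ 4.49
    ```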

    In non-controversial settings, survey sampling can be very effective, provided that the investigators have a sampling frame and can retain identifiers so that non-responders can be followed up with second or even third contacts.  This essay's scope is restricted to pre-election polling, including exit polling.

     

    Disclosure statement

    The author is self-funded and has no competing interests.

    Reference

    1.  Cernat, A. and Keusch, F. (2022). Do surveys change behaviours? Significance, 19(4), 10-11.

    Jonathan Shuster is Professor Emeritus, College of Medicine, University of Florida.  Homepage: https://hobi.med.ufl.edu/profile/shuster-jonathan/



    ------------------------------
    Jonathan Shuster
    ------------------------------


  • 2.  RE: A lesson on Political Survey Sampling: The 2022 Kansas Abortion Amendment Survey

    Posted 06-06-2023 10:24

    Hi Jonathan,

    Thanks for the interesting post. I would encourage you to share this with the Survey Research Methods Section (SRMS), where this will almost certainly generate a lot of discussion.

    -Brady



    ------------------------------
    Brady T. West
    2023 Chair, SRMS
    Institute for Social Research
    University of Michigan-Ann Arbor
    bwest@umich.edu
    ------------------------------



  • 3.  RE: A lesson on Political Survey Sampling: The 2022 Kansas Abortion Amendment Survey

    Posted 06-07-2023 09:19

    There are a number of methodological problems with your analysis, and it omits major resources concerning election polling.

    Any properly done survey should come with analysis weights (at a minimum; other complex design features may include stratification and clustering in in-person surveys; a sample from the voter registration file is likely stratified by geography and possibly party ID, although nonresponse generally washes out the benefits of stratification like that). There is no indication in your analysis of whether the weights were used, but judging from your use of sqrt(n) in the denominator of the standard errors, they weren't. Standard errors are computed differently for surveys.
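    To illustrate the point about weights, here is a hedged sketch of mine (with made-up weights, not any actual poll's methodology): with analysis weights, the estimate is a weighted proportion, and even the rough Kish approximation for the design effect inflates the standard error well above the naive 0.5/sqrt(n) bound.

    ```python
    # Hedged illustration with fabricated weights: a weighted proportion and a
    # Kish-approximation design effect, versus the naive 0.5/sqrt(n) bound.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 1500
    y = rng.integers(0, 2, size=n)          # hypothetical 0/1 responses
    w = rng.uniform(0.5, 2.5, size=n)       # hypothetical analysis weights

    p_hat = np.sum(w * y) / np.sum(w)       # weighted proportion estimate
    deff = n * np.sum(w**2) / np.sum(w)**2  # Kish design effect from weight variation
    se_weighted = np.sqrt(p_hat * (1 - p_hat) / n * deff)

    print(f"weighted p = {p_hat:.3f}, Kish deff = {deff:.2f}")
    print(f"SE with deff {se_weighted:.4f} vs naive bound {0.5/np.sqrt(n):.4f}")
    ```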

    Survey methodologists think in terms of the total survey error (see at least Groves and Lyberg (2010) and Biemer (2010) in this POQ special issue -- https://academic.oup.com/poq/issue/74/5). It conceptually accounts for all the other sources of error -- measurement, coverage, nonresponse -- although these are several orders of magnitude harder to quantify than the sampling error.

    There are a number of suggested elements for describing surveys -- AAPOR lists the items required for immediate disclosure under its Transparency Initiative (https://aapor.org/standards-and-ethics/transparency-initiative/), and a list of recommended reporting items for surveys has just been published by the Journal of Survey Statistics and Methodology (open access: https://academic.oup.com/jssam/advance-article/doi/10.1093/jssam/smac040/7136601). Without a proper description of the methodology, it is really impossible to make any decent judgement about the quality of a poll.

    After every presidential election, AAPOR performs an autopsy of where the polling errors were made; see the 2020 version at https://aapor.org/wp-content/uploads/2022/11/AAPOR-Task-Force-on-2020-Pre-Election-Polling_Report-FNL.pdf. The 2020 report points out deficiencies in deeply structured weighting as one of the more obvious technical faults of some polls.

    As a side note: statisticians are heavily involved in survey design where appropriate. That is my daily job description. Suggesting that statisticians get involved more is a call for the academic part of the profession to produce more survey statisticians -- we in the survey profession (SRMS is the third largest section of the ASA) hire all of them as soon as they graduate. There are about three places left that regularly do a good job of training survey statisticians -- Michigan, Maryland, and Iowa State. We contact all <10 other isolated survey statisticians to ask if they have any students this year -- and those 10 people combined produce maybe another 3-5 graduate students. Most universities don't have a proper sampling statistician either among the tenure-track/tenured professors who teach or among the consulting/practice staff. Statisticians have (nearly) abandoned this work, so it gets picked up by political scientists.



    ------------------------------
    Stanislav Kolenikov
    Principal Statistician
    NORC at The University of Chicago
    ------------------------------



  • 4.  RE: A lesson on Political Survey Sampling: The 2022 Kansas Abortion Amendment Survey

    Posted 06-07-2023 15:35

    Perhaps I missed it. Is there a breakdown of the vote by gender (female/male)?



    ------------------------------
    Chris Barker, Ph.D.
    2023 Chair Statistical Consulting Section
    Consultant and
    Adjunct Associate Professor of Biostatistics
    www.barkerstats.com


    ---
    "In composition you have all the time you want to decide what to say in 15 seconds, in improvisation you have 15 seconds."
    -Steve Lacy
    ------------------------------