ASA Connect

  • 1.  The pesky Not Applicable item

    Posted 01-05-2018 19:17
    Hi everyone,

    This is one of those questions where it seems hard to find a clear-cut answer. :)

    I have an instrument covering several domains, and I need to compute a total score for each domain as well as a total score across domains. All domains include multiple questions whose answers are of the form:

     1    2   3   4   Not Applicable

    where 1, 2, 3 and 4 reflect increasing degrees of ability of the person being rated, and Not Applicable indicates that the question was asked but the rater indicated it was not applicable.  (There are multiple raters who answered the questions on two different occasions, and the overall intent is to assess intra- and inter-rater reliability.)

    From my readings, it seems that coding Not Applicable as 0 (say) is a possibility, but it may (?) produce biased score estimates. 

    Some people suggest that the Not Applicable answers should be treated as missing completely at random, which would justify the use of multiple imputation.  This doesn't seem quite right to me - if the question is not applicable, why should we assume it is and assign it a rating of 1, 2, 3 or 4? 

    Is there a principled way to deal with the Not Applicable answers in order to obtain reliable total scores?

    Thanks very much,

    Isabella

    ------------------------------
    Isabella Ghement
    Ghement Statistical Consulting Company Ltd.
    ------------------------------


  • 2.  RE: The pesky Not Applicable item

    Posted 01-08-2018 02:24
    For scoring: average; for reliability: use unweighted kappa rather than imputing.

    ------------------------------
    Reinhard Vonthein
    Universitaet zu Luebeck
    ------------------------------



  • 3.  RE: The pesky Not Applicable item

    Posted 01-08-2018 02:36
    To clarify: the averaging was meant to use the applicable items only. Sorry for the brevity; I like to have the whole message visible in the listing of contributions, like a subject line. Thanks, Isabella, for bringing up this interesting topic.
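
    As a minimal sketch of both steps - Python here, with pandas and scikit-learn assumed available, and the ratings entirely illustrative rather than from the thread:

        import numpy as np
        import pandas as pd
        from sklearn.metrics import cohen_kappa_score

        # Ratings for one domain: rows = subjects, columns = items,
        # NaN marks a Not Applicable response.
        ratings = pd.DataFrame({
            "item1": [1, 2, np.nan, 4],
            "item2": [2, 2, 3, np.nan],
            "item3": [3, np.nan, 3, 4],
        })

        # Domain score: mean over the applicable items only (NaN is skipped).
        domain_score = ratings.mean(axis=1, skipna=True)

        # Reliability on one item: unweighted Cohen's kappa, restricted to
        # subjects for whom both raters found the item applicable.
        rater_a = pd.Series([1, 2, np.nan, 4, 3])
        rater_b = pd.Series([1, 3, 2, np.nan, 3])
        both = rater_a.notna() & rater_b.notna()
        kappa = cohen_kappa_score(rater_a[both], rater_b[both])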

    ------------------------------
    Reinhard Vonthein
    Universitaet zu Luebeck
    ------------------------------



  • 4.  RE: The pesky Not Applicable item

    Posted 01-08-2018 09:01
    The only "principled way" to deal with it is to completely separate "NA" from applicable answers. Of course, it is not missing values problem and, of course, you cannot put 0 instead NA. If, say, the question about shaving is applied only to men, but other questions to both men and women - women's (non-existent) "opinion" about shaving is completely irrelevant, and their presence in any form of common treatment is a nonsense. Technically, you either take off NA people all the times they appear, leaving only "common part" - but it can bring your sample to zero sizes very soon. Or you make the analysis of any type for different relevant subsets, what changes sample size all the times, but allows to capture an as big number of commoners as possible. Or leave NA as blanks and use any procedures, which ignore (not fill) the blanks (like, for example, the correlation calculation, etc., or, possibly, the recommended kappa).  I do not see any other way to resolve problem correctly. 
    Igor

    ------------------------------
    Igor Mandel
    Telmar, Inc.
    ------------------------------



  • 5.  RE: The pesky Not Applicable item

    Posted 01-08-2018 09:46
    Hello,

    While this answer does not really address the issue of creating domain or total scores, for the purposes of assessing the intra- and inter-rater reliability I do not think you need to worry about it.  Simply approach that task as a nominal-level analysis.  From this point of view, the inter-rater agreement (or lack thereof) on the NA category will itself be interesting, as information about the raters' knowledge of the subjects.
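
    A minimal sketch of that nominal treatment, assuming two raters and scikit-learn; the labels are illustrative:

        import pandas as pd
        from sklearn.metrics import cohen_kappa_score

        # Keep "NA" as its own nominal category rather than dropping it,
        # so agreement on applicability itself enters the estimate.
        rater_a = pd.Series(["1", "2", "NA", "4", "3"])
        rater_b = pd.Series(["1", "2", "NA", "NA", "3"])
        kappa = cohen_kappa_score(rater_a, rater_b)

    With more than two raters per subject, a multi-rater statistic such as Fleiss' kappa would be the analogous choice.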

    Regarding domain and total scoring, I would postpone a decision on that matter until the reliability analysis is completed.  How many raters are there for each subject?

    ------------------------------
    David Mangen
    ------------------------------



  • 6.  RE: The pesky Not Applicable item

    Posted 01-08-2018 11:04
    Dear Isabella,

    If the people who developed the instrument do not cover this situation in their scoring manual (or if their suggestion seems off), or if the instrument is still under development, my suggestion is similar to the one you mention (code Not Applicable as 0), but with a rescaling to mitigate the bias. For example: 5 items, 4 with scores, 1 Not Applicable. Total the 4 scored items and multiply by 5/4 = 1.25. Could that work in your situation?
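
    A minimal sketch of that proration in Python (the item scores are illustrative):

        import numpy as np
        import pandas as pd

        items = pd.Series([3, 2, 4, 1, np.nan])  # 5 items, one Not Applicable

        n_items = len(items)
        n_scored = items.notna().sum()

        # Sum the scored items, then scale back up to the full item count:
        # (3 + 2 + 4 + 1) * 5/4 = 12.5.
        prorated_total = items.sum(skipna=True) * n_items / n_scored

    Note that this is algebraically the same as averaging the applicable items and multiplying by the number of items, so it agrees with the averaging suggestion earlier in the thread.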

    ------------------------------
    Alicia Toledano
    President
    Biostatistics Consulting, LLC
    ------------------------------



  • 7.  RE: The pesky Not Applicable item

    Posted 01-09-2018 02:29
    A common approach to this is to exclude the “not applicable” cases from both the numerator and denominator, and in reporting results, say that the results reflect the opinion of people who thought the question applicable.
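
    For a proportion-type summary, a minimal sketch of that exclusion in Python (the answers are illustrative):

        import numpy as np
        import pandas as pd

        answers = pd.Series([1, 2, np.nan, 4, 4, np.nan, 3])

        # Exclude "not applicable" from numerator and denominator alike.
        applicable = answers.dropna()

        # Report over the applicable subset only, and say so explicitly:
        # "among raters who considered the question applicable".
        n_applicable = len(applicable)
        share_high = (applicable >= 3).mean()  # proportion rating 3 or 4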

    In other words, instead of trying to use a questionable method to impute the results to cover the population you originally wanted to cover, define the population that the unimputed results unquestionably do cover and report the results as applicable to that population. This leaves your audience to their own devices to decide whether or not to assume the excluded people are similar to the included people.

    This is something of a dodge and often far from satisfactory, because the full, unrestricted population is often the real one of interest, and clients often want us to advise them on what inferences we can make and what we can say about it. But it is a principled approach. And when the truth is that we don’t really know what we can say about the “not applicable” patients, explicitly acknowledging this fact, and explicitly not saying anything about them by limiting the set we talk about, is sometimes the only principled position we can take.

    Jonathan Siegel
    Associate Director Clinical Statistics
