ASA Connect

  • 1.  Discussion content: Do population outcomes exist independent of measurement?

    Posted 30 days ago

    It has been a while since this forum has had a serious discussion on a foundational topic. It's been mostly used to post announcements, get help on various topics, etc. So this post may now seem inappropriate for this forum, but I will essay it and see what people think.

    In the long review cycle for our recent paper, we encountered the review comment "Population outcomes exist independent of measurement," made in the course of an objection to some of our material. This is of course a key assumption of both classical and causal inference. But is it true?

    One of W. Edwards Deming's differences with the statistical theory of the previous century was his rejection of exactly this assumption. It isn't true at all in quantum mechanics, where it's clear that measurement arises from an interaction between a measurement process and the thing measured. At so micro a scale, you can only measure something by throwing something at it, e.g. a photon, which will interact with and alter the thing measured, making position and momentum, among other quantities, uncertain. And as a result, as Malley and Hornstein noted ("Quantum Statistical Inference," Statist. Sci. 8(4): 433-457, November 1993, DOI: 10.1214/ss/1177010787), inference is different. Quantum events correspond to closed subspaces of a Hilbert space, and these do not form a Boolean algebra. Joint distributions may not exist. Malley and Hornstein questioned whether frequentist theory has foundations in this context; they advocated a Bayesian approach. Myron Tribus, a leading Deming student of the last century, had reached a similar conclusion regarding the appropriate way to formalize Deming's approach.
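
    To make the "no joint distributions" point concrete, here is a small Python sketch (my own illustration, not from Malley and Hornstein): two spin observables, represented by Pauli matrices, fail to commute, and that non-commutativity is exactly the obstruction to giving them a classical joint distribution.

    ```python
    import numpy as np

    # Pauli matrices: standard representations of two incompatible
    # spin-1/2 observables (spin along x and spin along z).
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    # Classical random variables always commute, so any pair of them has
    # a joint distribution. These observables do not commute:
    commutator = X @ Z - Z @ X
    print(commutator)                   # nonzero matrix
    print(np.allclose(commutator, 0))   # False: incompatible observables

    # Consequence: no single probability space assigns joint outcomes to
    # X and Z; measuring one disturbs the distribution of the other.
    ```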

    Deming, a physicist by training who used quantum mechanics analogies in the technical parts of his arguments more generally, suggested that the outcome independence assumption isn't true in general. He posited a general issue of participant-observer interaction, that is, interaction between the measurement process and what is measured, particularly in complex systems such as biological ones, and indeed in pretty much anything having to do with humans. He suggested that participant-observer effects may be rampant in social inquiry. He noted, for example, that people may give different answers to a survey depending on whether they get a male or a female interviewer, and Hawthorne-type effects may change the behavior of patients in a study or clinical trial. Deming proposed an approach that does not assume any such independence. He emphasized operational definitions: changing the method of measurement changes the outcome. He emphasized limiting statistical approaches to situations where the process has first been shown to behave approximately randomly, i.e. to be in statistical control. And like Poincaré before him, he emphasized qualitative approaches where quantitative ones have questionable validity or require too much computational complexity.
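
    Deming's "first check for statistical control" prescription has a standard operational form in Shewhart's control charts. A minimal sketch (simulated data, and an individuals chart with the usual moving-range estimate of sigma; the details are my own illustration, not Deming's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated measurements: a stable baseline, then a shifted stretch.
    x = rng.normal(10.0, 1.0, 60)
    x[40:] += 3.0  # a real process change the chart should flag

    # Individuals (X) chart: limits from the in-control baseline only,
    # with sigma estimated from the average moving range (d2 = 1.128).
    baseline, monitored = x[:40], x[40:]
    center = baseline.mean()
    sigma_hat = np.abs(np.diff(baseline)).mean() / 1.128
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

    flags = np.where((monitored > ucl) | (monitored < lcl))[0] + 40
    print(f"center={center:.2f}, LCL={lcl:.2f}, UCL={ucl:.2f}")
    print("out-of-control signals at indices:", flags)

    # Deming's prescription: only when a chart like this shows no signals
    # (the process behaves approximately randomly about its mean) is it
    # safe to treat the data as a sample from a single population.
    ```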

    More fundamentally, Deming conceived of the kind of statistics that is actually useful in human affairs as generally requiring the study of dynamic processes, not static populations. The kind of questions people want to ask are generally analytical, not enumerative, in character, about predicting the future, not simply documenting the present. The conception of statistics as fundamentally being a science of processes, as distinct from being a science of data, has lost some of its former vogue in recent years.  But one potential advantage of looking at things this way is that while static population outcomes have to be assumed to be independent of measurement, process outcomes do not.

    Whether one agrees with this approach or not, one interesting observation from the process of having the paper reviewed was that members of the statistics community today are still inclined to posit the assumptions that make their particular inference theory work as being facts about the world. This, I suspect, comes from conceiving of statistics as a branch of mathematics, which arrives at truths by starting with unshakably true foundational axioms and deriving theorems by a process of deductive logic, rather than as fundamentally a branch of science, which arrives at truths inductively, by a process of generalizing from observation, and whose principles can have no unshakably firm or certain foundations. If what we observe in the world is different from what theory assumes, it is the theory that has to bend.



    ------------------------------
    Jonathan Siegel
    Director Oncology Statistical Sciences
    Bayer US Pharmaceuticals
    ------------------------------


  • 2.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 29 days ago

    I appreciate the move to start a foundational discussion in this forum! And I'm glad to see a topic that I often think about. My background is also in physics (I'm now in collaborative statistics), so these quantum mechanical metaphors have always hit home for me.

    I generally agree that outcomes are not (cannot be?) independent of measurement. And the points about the methods of measurement influencing the outcome are well taken.

    I'd just like to add that if we take a step even further back, we can consider that which outcomes we decide to measure and how they are conceptualized (i.e. how we operationalize our constructs) are also part of the measurement process. From this view, it seems to me that the outcomes are conditional not only on the method of measurement, but also on the worldview of the researcher (as mediated through the choice and conceptualization of what the outcomes even are).

     The angle or worldview or value system from which we choose to view a phenomenon (i.e. process) affects what we see as worth measuring and therefore affects what and how we choose to measure it. 

    Through validity testing we can make sure our measures are reasonably stable, reliable, etc., and therefore we do not completely err in our conceptualizations. But there is no guarantee that it is the only valid view of the phenomenon. 

    At best we can get agreement that given our values and priorities, our view is good enough for the inferences we would like to make. But there is always the chance that an outcome that appeared valid is rendered invalid by a reevaluation of what really matters. I think that health disparities research is a good example of this. If our outcomes ignore the possibility of disparities, we could see one set of results (for example, on average positive outcomes). And if the issue of disparities is included in our conceptualization of the healthcare process, we could get a different set of results (maybe outcomes are only positive for some groups). 
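
    To put toy numbers on the disparities example (purely hypothetical figures), the same data can show a positive average effect while one group is actually harmed; a quick sketch:

    ```python
    import numpy as np

    # Hypothetical outcome changes for two patient groups (n = 80 and 20).
    group_a = np.full(80, +2.0)   # group A improves
    group_b = np.full(20, -1.0)   # group B gets slightly worse

    combined = np.concatenate([group_a, group_b])
    print(f"overall mean change: {combined.mean():+.2f}")  # +1.40, "positive on average"
    print(f"group A mean change: {group_a.mean():+.2f}")   # +2.00
    print(f"group B mean change: {group_b.mean():+.2f}")   # -1.00

    # Same data, two conceptualizations: an aggregate outcome reports
    # success; a disparity-aware outcome reports harm to group B.
    ```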

    Although there is nothing statistically incompatible between these two sets of results, the inferences (and decisions) made from them could be radically different. So inference is not independent of our approach to conceptualizing and measuring phenomena. I think this points to a (maybe more radical?) version of the observer-participancy that Jonathan mentioned: the worldview and values of the observer are inextricably tied to the measurement process, and thus to the outcomes.

    ------------------------------
    Alex E. Clain
    Postdoctoral Fellow
    Communication Sciences and Disorders
    Northwestern University
    ------------------------------



  • 3.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 28 days ago

    Alex, re your observation that "The angle or worldview or value system from which we choose to view a phenomenon (i.e. process) affects what we see as worth measuring and therefore affects what and how we choose to measure it.":

    I was struck by a similar insight once, when I first became involved in a risk analysis study. For example, what is "the risk" to agricultural workers if they are asked to spray certain pesticides? Chemical companies would likely calculate based on controlled-experiment-type models (e.g., how little pesticide gets through to the workers if, at all times, they wear all the properly fitted protective equipment, etc.). But an alternative approach acknowledges that the equipment is hot and uncomfortable to wear, etc., so in real life it is not always worn up to spec; the risk to those real-world workers should also be included in the calculations.
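
    The two models differ in a single assumption, but that assumption drives the answer. A toy calculation (all numbers hypothetical):

    ```python
    # Hypothetical per-season probabilities of a harmful exposure event.
    risk_with_ppe = 0.001      # properly fitted protection, worn always
    risk_without_ppe = 0.050   # no (or improperly worn) equipment

    # Controlled-experiment model: assume 100% compliance.
    risk_ideal = risk_with_ppe

    # Real-world model: the equipment is hot and uncomfortable, so suppose
    # it is actually worn to spec only 60% of the time (hypothetical figure).
    compliance = 0.60
    risk_real = compliance * risk_with_ppe + (1 - compliance) * risk_without_ppe

    print(f"ideal-compliance risk: {risk_ideal:.4f}")  # 0.0010
    print(f"real-world risk      : {risk_real:.4f}")   # 0.0206, roughly 20x higher

    # "The risk" is not a single fact about the pesticide; it depends on
    # which process of use we choose to model.
    ```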



    ------------------------------
    William (Bill) Goodman
    Professor (Retired) and Adjunct Professor, Faculty of Business and Information Technology
    Ontario Tech University
    ------------------------------



  • 4.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 28 days ago

    Jonathan,

    Thank you for raising this interesting line of thought. My only contribution will be to relay a conversation I had with Dr. Deming in the 1980s at a presentation at Ford for their quality consultants.  It addresses your comment:

    "More fundamentally, Deming conceived of the kind of statistics that is actually useful in human affairs as generally requiring the study of dynamic processes, not static populations. The kind of questions people want to ask are generally analytical, not enumerative"

    Deming made a stronger statement at the time, that "all statistical analyses should be analytical, not enumerative." I pointed out that he had worked at the U.S. Census Bureau and knew the many vitally important applications that require enumerative information on the U.S. population. He agreed and said he shouldn't be saying "all". It is important to remember that we often do need to know "how many", for example to determine whether an observed rate indicates likely racial bias when compared to the overall relevant population.
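
    As a sketch of that enumerative use (the numbers are made up, and scipy's binomtest is just one convenient tool), the comparison might look like this:

    ```python
    from scipy.stats import binomtest

    # Hypothetical enumerative question: 14 of 120 people stopped belong
    # to a group that makes up 25% of the relevant population. Is 14/120
    # consistent with selection proportional to the population share?
    result = binomtest(k=14, n=120, p=0.25, alternative="two-sided")
    print(f"observed rate: {14 / 120:.3f} vs population share: 0.250")
    print(f"p-value: {result.pvalue:.4f}")

    # The enumerative step matters: the test is meaningless unless the
    # "how many" question about the reference population is answered first.
    ```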

    Having said that, your point (and Deming's) that in many situations we are interested in analytic questions that can only be answered by observing populations that are in statistical control is really important. In so many applications observations are made from out-of-control populations, and then people are surprised when the results are not reproducible.



    ------------------------------
    David Marker
    Senior Statistician
    Retired
    ------------------------------



  • 5.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 29 days ago

    Jonathan,  I enjoyed reading your post, and believe your points are insightful and helpful.

    Interestingly, I think they address, from a different angle, an issue I've been thinking of as  a distinction between testing with statistics--implying a formal comparison against a fixed, ideal image of the population--versus applying statistics--in the sense of identifying how (to the best you can observe and measure it) a population seems to actually distribute and vary.  They're not the same process. 

    This comes up, for example, in many popularized "tests" for the "conformance" of populations to Benford's Law, which is an interesting, apparent property of the distributions of the first digits of numbers in certain populations. It's one thing to observe that such patterns seem to occur (roughly) in certain empirical contexts, to speculate on why, and even to draw some information from that knowledge. But it's fallacious to test against the law in "gotcha" articles, asking whether some sample does or does not conform to it. Doing so assumes that the sample of interest is (or ought to have been, somehow) drawn, quasi-randomly, from a pure, theoretically defined population. As you point out, the samples we actually encounter do not arise as Platonic instances of ideally conceived populations, and this affects how and what we can test.
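
    For readers who haven't met it: Benford's Law says the leading digit d should occur with frequency log10(1 + 1/d). A minimal Python sketch of the kind of mechanical "conformance test" being warned about here (simulated data; all details are my own illustration):

    ```python
    import numpy as np
    from scipy.stats import chisquare

    # Benford's Law: P(first digit = d) = log10(1 + 1/d), d = 1..9.
    digits = np.arange(1, 10)
    benford = np.log10(1 + 1 / digits)

    # Simulated "amounts": log-normal data, which tend to track Benford.
    rng = np.random.default_rng(1)
    amounts = rng.lognormal(mean=8, sigma=2, size=5000)

    # First significant digit, via scaling each value into [1, 10).
    first = (amounts / 10.0 ** np.floor(np.log10(amounts))).astype(int)
    observed = np.array([(first == d).sum() for d in digits])

    stat, p = chisquare(observed, f_exp=benford * observed.sum())
    print(f"chi-square = {stat:.1f}, p = {p:.3f}")

    # The mechanics are trivial; the fallacy is treating a small p-value
    # as a "gotcha," as if the sample were obliged to come from an ideal
    # Benford population in the first place.
    ```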

    Good luck publishing your paper!



    ------------------------------
    William (Bill) Goodman
    Professor (Retired) and Adjunct Professor, Faculty of Business and Information Technology
    Ontario Tech University
    ------------------------------



  • 6.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 28 days ago

    I find this a fascinating topic; it looks to me like the main advantage of Bayesian inference in the setting described is its ability to model interactions and processes as part of a hierarchical model. Is that the reason for Malley and Hornstein's advocating that approach?

    I don't agree with the statement that statistics is not a branch of mathematics--an essential part of statistical theory's foundation is probability theory, along with linear algebra, functional analysis, and other subfields of math. Frequently, practicing statisticians don't even know that techniques they use were borrowed from math; e.g., kernel smoothing (an older and less used technique now) has its roots in Dirac sequences, a technique borrowed from real and complex analysis. And this gets to the heart of my objection: the foundations for statistics ought to be based in mathematics because you want techniques with general application to data, subject to your assumptions about how the data were generated. Bayesian analysis doesn't avoid this--it's based on math, too (conditional probability, Bayes' Theorem, MCMC). I think the problem you point out about statisticians (sometimes) viewing their inferential theory as the only viable theory is a cultural/social problem, and definitely not unique to stats--you find it in many subjects, e.g., literature, cognition, philosophy.

    It's true there are no unshakable assumptions about how data are generated, because the variety of ways data can be generated is infinite. I don't see this as a failure caused by developing statistical theory as part of math; it's just a fact about the universe we have to deal with. Statistics is a science about data and data generation. Apart from all the techniques borrowed from math, statistical theory is developed mathematically because assumptions need to be stated clearly, so we have an idea of when techniques apply and what assumptions our inferences (generalizations from data) are based on. I think that's a good thing.



    ------------------------------
    Paul Louisell
    MS Statistics
    MS Mathematics
    Currently Semiretired
    ------------------------------



  • 7.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 27 days ago

    Very happy to see statistical physicists weighing in on this important topic. As a statistical physicist myself, I would like to expand on the useful remarks made by David Marker. 

    As Dr. Marker has described, interaction between the observer and the thing being measured introduces an element of randomness into the measurement. There is a view in quantum mechanics, known as the Copenhagen Interpretation, that this randomness is intrinsic. This approach would argue that population outcomes do not exist until observed. As I am one of the few physicists who reject the Copenhagen Interpretation, my colleagues here may wish to reject my comments ab ovo.

    A slightly different view is that the population outcomes really do exist - we just don't know precisely what they are, due to human interaction with the thing being measured. Lack of knowledge about a thing is not an argument against ontology. This view regards randomness not as intrinsic, but as still containing an element that is irreducible, even with the best measurements.

    For example: I presently work for an electrical utility, using physics-informed AI to convert weather forecasts into a prediction for the number of power outages in a given geographic area in a period of time. The fact that we don't know specifically which wires will break due to high winds does not mean the outcome doesn't exist, as customers losing power will be glad to point out to us. However, there exists an irreducible uncertainty in the process that must remain even with the best measurements.
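
    A stripped-down stand-in for that kind of model (all numbers hypothetical, and a bare Poisson in place of the physics-informed machinery) makes the irreducibility point: even with the rate known exactly, the realized count still varies.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical link from forecast wind speed (m/s) to expected outages
    # in an area: a toy stand-in for a physics-informed model.
    def expected_outages(wind_speed):
        return 0.5 * np.exp(0.15 * wind_speed)

    lam = expected_outages(25.0)  # suppose forecast and model are perfect
    print(f"expected outages: {lam:.1f}")

    # Even then, realized counts vary: this spread is the irreducible part.
    counts = rng.poisson(lam, size=10)
    print("ten equally likely realizations:", counts)

    # Knowing lambda exactly does not tell us which wires break; better
    # measurement shrinks our error in lambda, not the Poisson scatter.
    ```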

    Irreducible randomness is an important concept for us as statisticians. It's important to keep in mind that some of the irreducible randomness in the systems we study can be introduced by human interaction - for example, when a person asks a question in a survey or takes a medical measurement. However, some irreducible randomness occurs without any human interaction. Electrical wires will break in high winds with some degree of randomness whether they are observed or not, and the act of observing them does not introduce additional randomness into our understanding of the outcome. As statisticians, our interest is in the irreducible randomness whether introduced by human observation or not.



    ------------------------------
    David J Corliss, PhD
    Director, Peace-Work www.peace-work.org
    davidjcorliss@peace-work.org
    ------------------------------



  • 8.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 23 days ago

    A basic tenet of science is that if something is done again in exactly the same way, the outcome will be the same. Unfortunately, things are never the same, if for no other reason than that time intervenes. We may think that those intervening factors have negligible effect on outcomes, especially in highly controlled environments, but that is surely not true for many systems, including those in agriculture and biology, where numerous unexpected and uncontrollable factors can affect the data. Even in laboratory experiments, such things as unclear protocols, variations in the level of training of technicians, or not having that morning cup of coffee can affect outcomes in ways we may not realize. So what does this tell us about statistical inference? It is not just about the population from which we sample. It's about the effect of all the conditions that prevail at the time the data are taken. When these conditions change, as they often do, reproducibility cannot be expected.



    ------------------------------
    James Higgins
    Kansas State University
    ------------------------------



  • 9.  RE: Discussion content: Do population outcomes exist independent of measurement?

    Posted 13 days ago

    I appreciate people's thoughts. A few comments:

    1. For Alex Clain and William Goodman (1), Walter Shewhart and W. Edwards Deming followed the epistemological pragmatism approach of C. I. Lewis. From this perspective, we assess propositions by whether relying on them enables us to succeed in a goal or not. C.I. Lewis gave an example where a thirsty traveler seeing a mirage of an oasis would characterize the vision as false because he would ultimately be unable to drink. But an artist seeking to paint a picture would have no reason to doubt his subject's veracity. The goal of estimation is to seek the minimum-loss estimate. But our losses are subjective and value-based. This means different people with different goals will have different loss functions. And that means they should choose different estimation methods minimizing those different loss functions, resulting in different estimates. 

    2. For David Marker: Since Deming was at Ford at the time, perhaps he was intending to talk to managers about management. Deming repeatedly said that management is fundamentally about prediction. (He said "management is prediction.") So for purposes of management, useful inference is (almost) always analytical in nature.

    3. For William Goodman (2), Deming said that randomness is an abstract ideal, like a perfect circle, that is hardly ever actualized in the physical universe we observe. Distributions never fit perfectly. He opposed goodness-of-fit tests in part because, given a large enough sample size, they will (almost) always show deviation from any theoretical distribution. It's a common fallacy in statistical analysis to assume that if the data aren't strictly random, either one's theory has been proved or something is wrong. From this point of view, nothing in life (at least nothing we can observe) is ever actually strictly random. If nothing else, imperfections in our observational apparatus will result in small deviations from randomness that will show up with enough observations. As an example of participant-observer effect, roll a die enough times, and small flaws will eventually appear from wear that will create slight biases that will show up on a goodness-of-fit test. So a finding that something isn't strictly random, by itself, tells us nothing. (A small simulation sketch at the end of this post illustrates the sample-size point.)

    4. Paul Louisell, at one point Euclidean geometry and Aristotelian logic were thought to be foundational to physics, but 20th century developments shattered both foundations. Probability is no different. Deming said the only reason we use it is that it appears to work. I think there is an important difference from physics that might perhaps be an accident of history. Riemannian and differential geometry, and intuitionist logic, were developed before the theory of relativity and quantum mechanics respectively, so when the classical mathematics broke down, there were ready-made modern mathematical alternatives that fit the evidence to replace them. Statistics has had no such luck. Half a century ago we thought everything was either deterministic or probabilistic. We are now starting to become aware that many if not most phenomena around us are in fact complex and chaotic, functioning in a state in between. We have developed toy-model mathematics to describe these states. But we don't have a really good theory of inference for them. And because we don't have any ready-to-go replacement at hand, as we did when the old physics paradigm broke down, we have generally clung to the existing statistics paradigm notwithstanding its flaws. Shewhart and Deming's stop-gap solution was to try to identify when probability theory would work and when it wouldn't. But a stop-gap solution is exactly that. The moment analogs of differential geometry and intuitionist logic get developed and shown to work, we shouldn't hesitate to abandon probability theory for something that can tackle intermediate states.

      Personal note: I think at this point in my career perhaps I can now confess that decades ago I once went to graduate school with visions dancing in my head of finding that solution. And failed really miserably. Unfortunately, I'm nowhere near that smart. 

    5. David Corliss, I think the reason Myron Tribus preferred Bayesian statistics is much simpler and more fundamental. Frequentist statistical theory aims to model how the external world works. But, as noted in my original post, the world often doesn't work according to probability theory; it works that way only under stable conditions and/or as an approximation. Bayesian statistical theory, on the other hand, aims to model how people think. It models their subjective beliefs. I think this is why Myron Tribus proposed Bayesian statistics: he found it a sufficiently useful model of how people think that it can be used to make workable statements about beliefs, even when what people are thinking about doesn't itself work completely probabilistically. Also, as an aside, I've found Couder et al.'s experiments with bouncing oil drops in a vibrating fluid bath, exhibiting macroscopic hydrodynamic analogs of certain de Broglie-Bohm pilot-wave quantum properties, fascinating. If they can get it to work reproducibly, I personally think a theory tractable to the intellect, one that lets you construct a model you can see working right in front of you, is really the simpler theory, better satisfying Occam's Razor, and is to be preferred to a theory that you can't comprehend or model, even if the tractable, modellable theory involves more components than the incomprehensible one. But I think Deming generally assumed the Copenhagen interpretation, just as he generally assumed frequentist statistics.

    6. James Higgins, re your point "A basic tenet of science is that if something is done again in exactly the same way, the outcome will be the same": I think one of the purposes of Deming's Red Beads experiment, where he made a big show of people doing things exactly the same way yet getting highly variable outcomes, was not just to demonstrably disprove this "tenet," but to show the great damage and heavy losses that believing in this fallacy does to people and organizations. I think one of his fundamental viewpoints was that the modern world has shown a number of "basic tenets of science" to just not reflect the way the world around us works. If the evidence shows they just aren't so, we have to be willing to abandon those "tenets" and go with what works; this particular "tenet" is one Deming particularly focused on, as the Red Beads experiment illustrates.

    As I mentioned in my original post, I think Deming would completely agree with your point, and go further: a workable theory of statistics relevant to most of the things we deal with in our lives has to be about, and take into account, dynamic processes embedded in systems.

    A recent paper on estimands I was involved in didn't take a particularly Deming perspective. But it did touch on the point that clinical trials are not as controlled as lab experiments but are embedded in processes outside the trial designers' control, for example having subjects with minds of their own who can "vote with their feet." The estimands guidance requires recognizing and taking the resulting "intercurrent events" into account more than has been done in the past.
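
    Returning to point 3 above, here is a small simulation sketch (a toy, not real dice data) of how the same slightly worn die fares on a goodness-of-fit test at two sample sizes:

    ```python
    import numpy as np
    from scipy.stats import chisquare

    rng = np.random.default_rng(7)

    # A die with a tiny wear bias: face 1 slightly disfavored, face 6 favored.
    eps = 0.006
    p_true = np.full(6, 1 / 6)
    p_true[0] -= eps
    p_true[5] += eps

    for n in (100, 1_000_000):
        rolls = rng.choice(6, size=n, p=p_true)
        observed = np.bincount(rolls, minlength=6)
        stat, pval = chisquare(observed, f_exp=np.full(6, n / 6))
        print(f"n={n:>9,}: goodness-of-fit p-value = {pval:.2e}")

    # Typically the small sample "passes" (large p) and the large sample
    # "fails" (tiny p). The die didn't change; only the power to detect an
    # inevitably imperfect fit did. Rejection alone tells us nothing useful.
    ```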



    ------------------------------
    Jonathan Siegel
    Director Statistical Sciences
    Bayer US Pharmaceuticals
    ------------------------------