ASA Connect

 View Only
  • 1.  SEMs and research policy guidance

    Posted 04-14-2021 08:08
The recent American Educational Research Association (AERA) conference included a panel on assessing the validity and reliability of researchers' SEM models.  This included using external (perhaps simulated) data to assess how robust the pathways are, and the impacts on inferences if the paths are strengthened or weakened (e.g., due to sampling issues, measurement error, etc.).

    This left me wondering about using the same general approach to model interventions (e.g., in educational research) and predict the effects of strengthening factors such as the intensity of the intervention, the quality of instruction, or adding components to the intervention; in other words, modeling the impacts of strengthening or weakening various pathways.

    In principle, the modeling would allow estimation of the expected impacts of altering the strength of various paths (e.g., duration, quality of instruction, adding mentors...).  The results of such modeling "simulations" could be suggestive in identifying paths to improvement--and in identifying specific targets for future research to assess whether the model results hold up in "the real world".
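To make the idea concrete, here is a rough sketch of what such a "what if" simulation could look like for the simplest possible case: a single-mediator path model X -> M -> Y with a direct path X -> Y. Everything here is invented for illustration (the variable roles and the path coefficients are my own assumptions, not from any published model); the point is just that the total effect a*b + c changes predictably when you strengthen path a.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # large sample so the estimated slopes are stable

def total_effect(a, b, c):
    """Simulate a single-mediator path model and return the estimated
    total effect of X on Y (the slope of Y regressed on X).
    Paths: X -> M (a), M -> Y (b), and a direct path X -> Y (c)."""
    x = rng.normal(size=n)                  # e.g. intervention intensity
    m = a * x + rng.normal(size=n)          # e.g. quality of instruction
    y = b * m + c * x + rng.normal(size=n)  # e.g. student outcome
    return np.polyfit(x, y, 1)[0]

baseline = total_effect(a=0.4, b=0.5, c=0.2)      # theory: 0.4*0.5 + 0.2 = 0.40
strengthened = total_effect(a=0.8, b=0.5, c=0.2)  # theory: 0.8*0.5 + 0.2 = 0.60
print(baseline, strengthened)
```

A real SEM would of course have many latent variables and correlated errors, but the same exercise (re-simulate under altered path strengths, compare predicted outcomes) scales up in dedicated software such as lavaan.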

    There must be an extensive literature on such policy-related modeling of the impacts of possible policy variations.   I'd appreciate any suggestions for readings on this topic, caveats about such an approach to identifying targets for additional research, etc.

    Thanks in advance for any suggestions.

    Best regards,
    --Jeff Rodamar
       Protection of Human Subjects in Research Coordinator
       US Department of Education
         email:  JeffRodamar@gmaill.com

    PS This is a personal professional query as an ASA member.   It does not constitute a request by the US Department of Education, etc. etc. etc.

    ------------------------------
    Jeffery Rodamar
    Protection of Human Subjects in Research Coordinator
    U.S. Dept. of Education
    ------------------------------


  • 2.  RE: SEMs and research policy guidance

    Posted 04-15-2021 12:07
    Following.  I'm interested in SEM for health policy research, but have similar questions.

    ------------------------------
    Adrienne Ohler
    University of Missouri
    ------------------------------



  • 3.  RE: SEMs and research policy guidance

    Posted 04-15-2021 17:50
    A few years ago, I was working at Henry Ford College in the Institutional Research dept. My job was to find ways to "improve student outcomes". I spoke with several of their educational researchers and went to several local organizational meetings on the topic.  In general, I was in awe of how bad a lot of the metrics were, and ashamed of how many of my colleagues and meeting attendees psychologized about why the results were what they were.

    For example, we looked at how Anatomy and Physiology 2 students did based upon their A&P 1 final grade. I don't remember the exact numbers, but let's say students with an "A" in AP1 passed AP2 75% of the time. "A-" students in AP1 passed AP2 78% of the time. "B+" AP1 students passed AP2 79% of the time. We spent many minutes discussing why "B+" students did better than "A" students. One person suggested, and many more concurred, that the B+ students realized how tough it was going to be and had better study skills because of it. The "A" students thought it would be easy, and it wasn't.


    After many minutes of this nonsense, I asked, "What are the confidence intervals for our estimates?" It turned out the estimate for "A" students was based upon 1,000+ students, while the "A-" and "B+" estimates were each based upon 60-80 students. I concluded we were highly confident that "A" students passed about 75% of the time. For the "A-" and "B+" students, the CI was about +/- 10%. So we had little to no clue how robust those numbers really were.
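For anyone who wants to reproduce that contrast in precision, a 95% Wilson score interval makes it obvious. The counts below are hypothetical, chosen only to echo the anecdote (about 75% of ~1,000 "A" students vs. about 79% of ~70 "B+" students):

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo_a, hi_a = wilson_ci(750, 1000)  # "A" students: a tight interval
lo_b, hi_b = wilson_ci(55, 70)     # "B+" students: roughly +/- 10%
print(f"A:  [{lo_a:.3f}, {hi_a:.3f}]")
print(f"B+: [{lo_b:.3f}, {hi_b:.3f}]")
```

The "B+" interval is several times wider than the "A" interval and overlaps it entirely, so there was never any real evidence that B+ students outperformed A students.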

    When we looked at how students did the 1st time they took the class versus their 1st AP2 grade, the number of students in each group dropped. We realized many of those students had taken the class 2+ times, and a few 10+ times, to get a final grade of A to B+.

    Going back and redoing the "success" metrics, we got a useful and robust model that works on students from different departments as well as different colleges!

    We realized that of the students in our WIFD category (Withdraw, Incomplete, Fail, D-range grade) the first time they took a class, over half stopped trying. This changed our focus to getting students to succeed the 1st time, NOT eventually.

    To minimize the pointless and misleading psychologizing about students, I grouped students into Pass with an A, Pass with a B, Pass with a C, Pass (overall), and WIFD, based upon their 1st-attempt grade. This led to the realization that in most curricula, students who passed with a C were usually less than 50% likely to pass a follow-up class. Usually, this "pass rate" was in the 30%-45% range. Sometimes even LOWER!

    The factor that had an even bigger impact on student performance in a particular class was the prof they had in THAT class. The prof they had in their previous class had little to no effect on performance in the "current" class. Depending upon the prof, anywhere between 20% and 80% of that prof's students would be in the WIFD category.

    With the data I had from HFCC, the 3 biggest predictors of success in the current class were, in order of importance: Pell Grant eligibility, current prof, and, with the smallest effect, pre-req grade. Pell Grant eligible students were 80%+ likely to be in the WIFD category. Combine that with an experienced prof (one who has taught 100+, usually 200+, unique students) who has a WIFD rate of 80%, and most of those students don't stand a chance. Since the majority of HFCC's Black and Latino students were in the Pell Grant eligible category, we had a HUGE attrition rate among those students. (They can barely afford the class, then they have to pay for it again, which leads to dropping out.) Thus Black and Latino students don't attain college degrees at the rate of white students. However, Pell-eligible white students WIFD at the same rate as Pell-eligible Black and Latino students, AND non-Pell-eligible students all passed at the same rate. Race was not a factor.
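The confounding pattern described above (a raw group gap that disappears once you condition on Pell eligibility) is easy to demonstrate on synthetic data. Every rate below is invented for illustration; the data-generating process simply makes WIFD risk depend only on Pell eligibility while the groups differ in how common that eligibility is:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Synthetic enrollment records. WIFD risk depends ONLY on Pell eligibility,
# but Pell eligibility rates differ by group (all rates here are invented).
group = rng.choice(["white", "black_latino"], size=n, p=[0.6, 0.4])
pell = rng.random(n) < np.where(group == "black_latino", 0.7, 0.3)
wifd = rng.random(n) < np.where(pell, 0.8, 0.2)

def wifd_rate(mask):
    return wifd[mask].mean()

# Raw rates differ sharply by group...
raw_gap = wifd_rate(group == "black_latino") - wifd_rate(group == "white")
# ...but within Pell-eligible students, the gap essentially vanishes.
pell_gap = (wifd_rate((group == "black_latino") & pell)
            - wifd_rate((group == "white") & pell))
print(raw_gap, pell_gap)
```

The raw gap here is large purely because of the differing Pell rates, which is exactly why looking only at marginal rates by race would have pointed policy at the wrong lever.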

    At Oakland University, Oakland Community College, and Eastern Michigan University, current prof and pre-req grade were the dominant factors. (Because of FERPA laws, I couldn't get data on Pell eligibility, race, or gender.)

    If we want to talk about modeling student achievement: if we were taking bets on whether a student would pass a class, my first question would be, "Who are they taking it with?" My second would be, "What is their pre-req grade?"

    In many cases, a student who passed the pre-req class with a C is FAR MORE likely to pass their current class than a student with an "A" in the pre-req class. It all depends on the current prof!

    So, when I see your comment about SEM, I have to ask, "Does your response metric actually mean anything useful, in reality?" I don't mean for that to sound disparaging. I'm just wondering if it is a type 3 or type 4 error (coming to the right conclusion about the wrong metric, or coming to the wrong conclusion about the wrong metric, respectively).

    ------------------------------
    Andrew Ekstrom

    Statistician, Chemist, HPC Abuser;-)
    ------------------------------



  • 4.  RE: SEMs and research policy guidance

    Posted 04-16-2021 09:32
    I think Dave MacKinnon has interests in this area.

    MacKinnon, D. P., Valente, M. J., & Wurpts, I. C. (2018). Benchmark validation of statistical models: Application to mediation analysis of imagery and memory. Psychological Methods, 23(4), 654.


    ------------------------------
    Daniel Coven
    Graduate Statistics Consultant
    Daniel.Coven@asu.edu
    ------------------------------