May 6, 2005 Conference

THE AMERICAN STATISTICAL ASSOCIATION CHICAGO CHAPTER PRESENTS*

MAY I ASK A QUESTION, PLEASE?

PSYCHOMETRIC ISSUES AND IMPACTS.

Friday, May 6, 2005, 8:15 a.m. – 4:30 p.m.

Rubloff Auditorium, Loyola Univ. of Chicago, Water Tower Campus

25 E. Pearson St. Chicago IL USA

IN BRIEF:

Surveys are commonplace in social, policy, and market research. Indeed, their ubiquity and familiarity make it seem intuitive that “anyone can design a questionnaire.”

However, gaining real, unbiased insight from survey research and applying that insight to

practical decision making in education, public policy, social science or marketing takes unique

skills and tools.

This conference, offered by the Chicago Chapter of the American Statistical Association, will provide perspective on the tools, approaches, and methods currently used by experts to maximize the value of their survey and testing instruments.

WHO SHOULD ATTEND?

• Survey researchers addressing social policy

• Commercial market researchers

• Social scientists

• Policy analysts

• Managers and administrators who make decisions based on survey research

WHAT YOU WILL LEARN:

• How to choose among and use the many models of item response theory to better understand

your survey / assessment data and maximize your instrument’s effectiveness

• Understand the issues that must be resolved before current assessment tools can make a real impact on educational achievement, a crucial public policy arena

• Understand how assessment tools can be designed, and delivered through electronic media to

support the professions

• Understand how culture can be identified and analyzed through survey means

• See how emerging Bayesian tools advance the feasibility of advanced analyses

• Understand how the form and content of questions can give rise to biased responses, the impact of this bias, and how it can be corrected

ADDITIONAL INFORMATION

• Conference agenda, speakers and abstracts

• Registration form

See below

CONFERENCE PROGRAM

8:15 a.m. - 8:45 a.m. Registration

8:45 a.m. - 8:55 a.m. Conference Welcome. Mary Morrissey, VP Conferences, Chicago

Chapter ASA.

8:55 a.m. - 9:55 a.m. Cindy M Walker, Ph.D. University of Wisconsin – Milwaukee

The Many Models of Item Response Theory.

In situations where survey questions or test items are used to estimate parameters and discriminate between underlying groups, questions invariably arise about how the items relate to one another. For example: are there certain items that better discriminate among groups?

A wide variety of tools have been developed to address this question. These tools can be very

useful in both refining survey and testing instruments, and in understanding the structure of the

respondents’ data. This talk will compare and contrast a variety of tools, covering varying

degrees of sophistication.

Take-away: Come and learn how to use these tools to better understand your survey and test

instruments, and maximize their effectiveness.
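As a hedged illustration of the kind of model the talk surveys (a minimal sketch of the standard two-parameter logistic model, not material from the conference itself), the 2PL item response function gives the probability of a correct response as a function of respondent ability, item discrimination, and item difficulty, and its Fisher information shows why high-discrimination items are the ones that separate groups:

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) item response function: probability
    that a respondent of ability theta answers correctly an item with
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta; it peaks
    near theta == b and rises with discrimination a."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

# A high-discrimination item (a=2) separates abilities around its
# difficulty (b=0) far more sharply than a weak one (a=0.5).
sharp = p_correct(0.5, 2.0, 0.0) - p_correct(-0.5, 2.0, 0.0)
flat = p_correct(0.5, 0.5, 0.0) - p_correct(-0.5, 0.5, 0.0)
```

Other members of the IRT family (1PL/Rasch, 3PL with a guessing parameter, graded-response models for Likert items) vary this same building block.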

9:55 am – 10:05 am Coffee Break

10:05 a.m. – 11:05 a.m. Howard Wainer, Ph.D. National Board of Medical Examiners, The

Wharton School

Value-Added Assessment and Three Challenges to Its Practicality.

Over the past decade there has been a growing desire among educational policy makers to

measure the extent to which the performance of students has been transformed by the educational

process. It was felt that the indirect approach of looking at yearly average performance provided

by most assessments was insufficient and that a more direct assessment of individual student

change could prove helpful in assessing the efficacy of various sorts of educational programs.

Toward this end, states began to use some form of longitudinal measurement. Currently four states have such programs in place, five have pilot or roll-out plans, and many more are contemplating value-added assessment.

While this approach is clearly gaining momentum, several issues remain. In this talk, the speaker will briefly describe what value-added assessment is and discuss three problems that must be overcome before it can be used for the purposes its developers envisioned.

Take-away: The existence of these issues, and their potential solutions, highlights an important way for statisticians and psychometricians to contribute to the public policy debate on this national issue.

11:10 a.m. - 12:10 p.m. Thomas O’Neill, PhD. National Council of State Boards of Nursing

Computer Adaptive Testing.

Administering a test by computer allows it to be customized to the examinee in order to better probe ability. A standard test that is too hard or too easy does not yield as much information as a test matched to the examinee’s ability.

This presentation will provide an overview of adaptive testing: what an adaptive test is, how adaptive tests work, and the relative advantages and disadvantages of using one. Using the NCLEX (National Council Licensure Examination) as an example, the process of estimating a candidate's ability and targeting items to that ability will be illustrated. The use of various stopping rules, the Rasch model's structural expectations of the data, and issues related to reporting results will also be discussed.

Take-away: Learn how computer adaptive testing may improve your testing instruments.
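The adaptive loop described above can be sketched in a few lines (illustrative only; this toy uses a fixed-step ability update and a deterministic response rule, whereas operational programs such as the NCLEX use calibrated IRT item banks with maximum-likelihood or Bayesian updates):

```python
def simulate_cat(item_bank, true_ability, max_items=10, step=0.5):
    """Toy computer-adaptive test. Each round administers the unused
    item whose difficulty is closest to the current ability estimate,
    then nudges the estimate up or down by a fixed step."""
    theta = 0.0                      # start at the population mean
    remaining = list(item_bank)
    administered = []
    for _ in range(min(max_items, len(remaining))):
        # Select the item best matched to the current estimate.
        item = min(remaining, key=lambda b: abs(b - theta))
        remaining.remove(item)
        administered.append(item)
        # Deterministic toy response rule: correct iff the examinee's
        # ability exceeds the item's difficulty.
        correct = true_ability > item
        theta += step if correct else -step
    return theta, administered

# Nine-item bank; the estimate homes in on the true ability of 1.2.
est, used = simulate_cat([-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2],
                         true_ability=1.2, max_items=6)
```

The first item administered is the mid-difficulty one, and each response steers the test toward items near the examinee's level, which is exactly why a matched test extracts more information than a fixed form.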

12:10 p.m. – 1:10 p.m. Lunch

1:15 p.m. - 2:15 p.m. Werner Wothke, Ph.D. CTB/McGraw-Hill.

I May Not Leave This Child Behind - Large Scale Educational Assessment.

Accountability under the federal No Child Left Behind (NCLB) legislation drives much of the K-12 testing effort today. NCLB-related testing programs have created many logistical challenges where statisticians and measurement researchers can help. Examples include the comparability of performance standards, testing accommodations for language and disability, the adequacy of yearly growth in test scores, and the equivalency of multiple test forms.

The presentation will walk through the operational steps of student assessment, featuring the statistical/psychometric services and decision points leading to the production of student scale scores. The path includes test blueprint, item construction, field testing, item parameter estimation, test form construction, standard setting, operational testing, item calibration, form equating, and scaling.

Take-away: Learn about issues and solutions in creating large scale test instruments.

2:15 p.m. – 2:25 p.m. Coffee Break

2:25 p.m. - 3:25 p.m. George Karabatsos, Ph.D. University of Illinois, Chicago

Bayesian Cultural Consensus Theory

Can you learn the beliefs of a cultural group of respondents through questionnaire data? Defining

“culture” broadly, this issue has applications to both social science and marketing.

If such groups, their beliefs, and their response patterns can be identified, the results can be used for both methodological improvement [e.g., removing scale bias] and substantive learning [e.g., How do cultures’ beliefs differ? How many distinct belief systems exist in a population?]

One approach to these issues comes from Cultural Consensus Theory (CCT) models, which provide a means to infer the beliefs of a cultural group of respondents from questionnaire data, using answer-key parameters that describe the culturally correct response to each questionnaire item, and parameters that describe differences between respondents according to ability.

This presentation will discuss a general Bayesian approach for performing statistical inference

with Cultural Consensus Theory (CCT) models, which includes methods of model estimation,

model testing, and model selection (Karabatsos & Batchelder, 2003, Psychometrika). This Bayesian framework is illustrated through analyses of real data sets.

Take-away: See how the rapidly emerging tools of Bayesian inference are used in real

applications to address this question.
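A stripped-down flavor of the inference involved (a conjugate toy under strong assumptions, not the full CCT machinery of the talk, which also estimates per-respondent ability): given True/False answers and an assumed shared competence, Bayes' rule yields the posterior probability of each culturally correct answer.

```python
def posterior_key(responses, competence=0.7, prior=0.5):
    """Posterior probability that the culturally correct answer to one
    True/False item is 'True', given each respondent's answer.
    'competence' is the assumed probability that a respondent answers
    in line with the cultural consensus; the prior on the answer key
    is flat (50/50)."""
    like_true, like_false = prior, 1.0 - prior
    for r in responses:
        like_true *= competence if r else (1.0 - competence)
        like_false *= (1.0 - competence) if r else competence
    return like_true / (like_true + like_false)

# Three of four respondents answering 'True' tilts the inferred
# answer key toward 'True', but not with certainty.
p_mostly_true = posterior_key([True, True, True, False])
```

Full CCT models jointly estimate the answer key and each respondent's ability, typically by MCMC; the Bayesian framework of the talk also supplies model testing and model selection on top of estimation.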

3:30 p.m. – 4:30 p.m. Jill Glathar, Ph.D., and Eric Wendler, Ph.D. Opinion Research

Understanding, anticipating and dealing with systematic bias in survey data.

Measurement of attitudes and opinions, and the proper interpretation of such measurements,

requires that sources of systematic bias be understood, anticipated and dealt with.

By "systematic bias" we mean patterns of responding to questions that do not reflect what individuals actually believe, but rather are artifacts of how individuals respond to questions; these patterns interfere with valid interpretation and with the comparison of responses across persons or groups of persons. Such response bias is sometimes noted at a cultural or country level, but can also be seen at an individual level, and it appears to greater or lesser degrees with different question formats and different question objects. Attitude and opinion survey data collected across many countries are used to illustrate these patterns, and as the starting point for demonstrating the kinds of corrections and accommodations that can be made to deal with response bias, both before data collection and after the data have been collected.

Take-away: Understand the basic idea of response bias, and how it relates to question formats,

contexts, and individual vs. cultural effects. Observe some of the most common effects of

response bias, and the effect of simple corrections.
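One common after-the-fact correction for individual-level scale-use bias (one of many; a sketch, not necessarily the speakers' method) is within-person standardization, which removes each respondent's own mean and spread before responses are compared across people:

```python
import statistics

def within_person_standardize(ratings):
    """Center and scale one respondent's ratings by that respondent's
    own mean and standard deviation, removing individual scale-use
    bias such as acquiescence (the habit of rating everything high)."""
    mu = statistics.mean(ratings)
    sd = statistics.pstdev(ratings)
    if sd == 0:
        return [0.0 for _ in ratings]  # no variation to standardize
    return [(r - mu) / sd for r in ratings]

# Two respondents with the same relative opinions but different
# scale use end up with identical standardized profiles.
high_scorer = within_person_standardize([5, 4, 3])
low_scorer = within_person_standardize([3, 2, 1])
```

The cost of such corrections is that absolute level information is discarded, which is why the choice between pre-collection design fixes and post-collection adjustments is a substantive one.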

* Co-sponsored by Loyola University’s Mathematics and Statistics Department

REGISTRATION FOR CONFERENCE

Name ____________________________________________ Title___________________________

Business/ School Affiliation _________________________________________________________

Address _________________________________________________________________________

City _______________________________________ State __________ Zip Code _____________

Phone # _________________________________________________________________________

email address ____________________________________________________________________

Registration Fee (Select One):

NEW! Early Bird (postmarked by April 15, 2005) / After April 15, 2005

Non Chicago Chapter ASA Member _____ $200 _____ $225

Chicago Chapter ASA Member _____ $185 _____ $210

Student non Chicago Chapter Member _____ $90 _____ $105

Student Chicago Chapter Member _____ $83 _____ $98

Payment Information:

[ ] Check or Money Order (please make payable to Chicago Chapter ASA)

[ ] Credit Card (Visa and Mastercard)

Name on Card: ______________________________ Card Number: ______________________

Billing Address (if different from above): _______________________________________

_______________________________________

Expiration date: __________

Cardholder Signature: _______________________________________________

Please mail form and payment to:

Chicago ASA Conference

c/o Jerry Enenstein

222 Main Street

Evanston, IL 60202

For payment questions, please contact Jerry Enenstein at (847) 475-4403 or at JEResearch@ameritech.net

For additional conference information, please check the Chicago Chapter Website at:

www.chicagoasa.org or contact Mary Morrissey at mmorriss@rush.edu