If you are already an AAPOR member with an account on the AAPOR website, please let Brady West (email@example.com) know that you would like to access the past webinars, and he will let the AAPOR staff know to upgrade your account privileges.
Once you are logged in with the credentials you received, expand the “Education/Resources” pull down menu from the top, expand the “Online Education/Webinars” submenu, and then click on “My Webinars.” You will be able to view any 2016-2022 webinars.
Previous SRMS Webinars
INNOVATIONS IN SPATIAL SAMPLING
Yves Tillé, Professor, University of Neuchatel, Switzerland
DATE AND TIME
Thursday, March 16, 2023 12:00 p.m. – 1:00 p.m. Eastern Time
Spatial sampling is not limited to the selection of units within a territory. One can also define a space with auxiliary variables and define distances between units. Two nearby units are more likely to be similar than two distant units. A sample well spread out in space avoids redundant selection and is therefore often more accurate. One can even interpret sample spreading techniques as a smoothed and multivariate stratification. We will review the current state of research on spatial sampling and present a set of new efficient methods.
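As a toy illustration of the "well spread" idea (not one of the methods the talk presents, such as the local pivotal method), a greedy farthest-point rule spreads a sample over a coordinate space. All data here are simulated:

```python
import numpy as np

def spread_sample(coords, n, seed=0):
    """Greedy farthest-point selection: a crude stand-in for a 'well spread'
    sample in spatial or auxiliary coordinates (illustrative only)."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(coords)))]      # start from a random unit
    for _ in range(n - 1):
        # distance from each unit to its nearest already-chosen unit
        d = np.min(
            np.linalg.norm(coords[:, None, :] - coords[chosen][None, :, :], axis=2),
            axis=1,
        )
        chosen.append(int(np.argmax(d)))           # take the most isolated unit
    return sorted(chosen)

pop = np.random.default_rng(1).uniform(size=(200, 2))  # 200 units on a square territory
sample = spread_sample(pop, n=10)
```

Because each new unit is chosen as far as possible from those already selected, nearby (and hence similar) units are rarely selected together, which is the intuition behind the accuracy gains the abstract mentions.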
BAYESIAN DEPENDENT DATA MODELING FOR OFFICIAL STATISTICS AND SURVEY METHODOLOGY
Jonathan R. Bradley, Florida State University and Scott H. Holan, University of Missouri
DATE AND TIME
Friday, January 21, 2022 12:00 p.m. – 2:00 p.m. Eastern Time
Model-based statistics have experienced tremendous growth in official statistics and survey methodology due to their utility across a wide range of applications. Recently, various hierarchical Bayesian dependent data models have been proposed. These models take advantage of different dependencies that are inherent in the data, often arising because of the way the data are collected. That is, there are often spatial, spatio-temporal, cross-spatial resolution, and/or multivariate relationships that arise. This presentation reviews basic Bayesian hierarchical modeling strategies and highlights some recent advances made in this area. Importantly, we consider both unit-level and area-level models and in both the Gaussian and non-Gaussian settings. Finally, we illustrate the various methods through several applications and detail the computational challenges that arise.
MAKING INFERENCES FROM NON-PROBABILITY SAMPLES THROUGH DATA INTEGRATION
Jean-François Beaumont, Statistics Canada
DATE AND TIME
Tuesday, September 28, 2021, 12:00 p.m. – 2:00 p.m. Eastern Time
For several decades, national statistical agencies around the world have been using probability surveys as their preferred tool to meet information needs about a population of interest. In the last few years, there has been a wind of change and other data sources are being increasingly explored. Five key factors are behind this trend: the decline in response rates in probability surveys, the high cost of data collection, the increased burden on respondents, the desire for access to “real-time” statistics, and the proliferation of non-probability data sources. In this presentation, I review some data integration approaches that take advantage of both probability and non-probability data sources. Some of these approaches rely on the validity of model assumptions, which contrasts with approaches based solely on the probability sampling design. These design-based approaches are generally not as efficient; yet, they are not subject to the risk of bias due to model misspecification.
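One family of approaches alluded to above estimates the participation propensity of non-probability units using the probability sample as a reference, then weights by the inverse propensity. A minimal numerical sketch follows, with entirely made-up data and a deliberately simplified, design-weighted pseudo-likelihood fit (a rough sketch in the spirit of such estimators, not any specific published method):

```python
import numpy as np

rng = np.random.default_rng(0)
# y is observed only in the non-probability sample; covariate x is in both.
x_np = rng.normal(0.5, 1, 800)                 # non-prob sample over-represents large x
y_np = 2 + 3 * x_np + rng.normal(0, 1, 800)    # population mean of y is 2
x_ref = rng.normal(0, 1, 500)                  # probability reference sample
d_ref = np.full(500, 20.0)                     # its design weights (N = 10,000)

# logistic model for participation, fitted by gradient ascent on a
# design-weighted pseudo-likelihood
X = np.concatenate([x_np, x_ref])
z = np.concatenate([np.ones(800), np.zeros(500)])   # 1 = non-prob unit
w = np.concatenate([np.ones(800), d_ref])
b0 = b1 = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(b0 + b1 * X)))
    b0 += 0.2 * np.sum(w * (z - p)) / w.sum()
    b1 += 0.2 * np.sum(w * (z - p) * X) / w.sum()

p_np = 1.0 / (1.0 + np.exp(-(b0 + b1 * x_np)))      # fitted propensities
ipw_mean = np.sum(y_np / p_np) / np.sum(1.0 / p_np) # propensity-weighted mean of y
naive_mean = y_np.mean()                            # ignores selection (biased upward)
```

The weighted estimate pulls the non-probability mean back toward the population value, but, as the abstract notes, its validity rests on the propensity model being correctly specified.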
WEIGHTING METHODS IN SURVEYS
David Haziza, Department of Mathematics and Statistics, University of Ottawa
DATE AND TIME
This was a four-part webinar series, 1:00 – 3:00 p.m. Eastern time, on four Wednesdays in November 2020: the 4th, 11th, 18th, and 25th.
Data collected by surveys are typically stored in a rectangular data file, each row corresponding to a sample unit (e.g., a business, a household, an individual) and each column corresponding to a survey variable (age, gender, income, etc.). Made available on the data file is a column of final weights. This set of weights constitutes a weighting system. The idea is to construct a single weighting system that may be applied to all the survey variables. The typical process leading to the final weights involves three major stages. At the first stage, each unit is assigned a base weight, which is generally defined as the inverse of its inclusion probability. The base weights are then modified to account for unit nonresponse. At this stage, survey statisticians aim to reduce the nonresponse bias. Finally, the weights adjusted for nonresponse are further modified to ensure consistency between survey estimates and known population totals, a process often referred to as calibration. In some cases, the weights undergo a last modification through weight trimming or weight smoothing procedures in order to improve the efficiency of survey estimates. This webinar series will provide the participants with an overview of the various stages.
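The three weighting stages described above can be sketched numerically. The inclusion probabilities, response mechanism, adjustment classes, and population total below are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
pi = rng.uniform(0.05, 0.5, n)              # hypothetical inclusion probabilities
w_base = 1.0 / pi                           # stage 1: base weights

cls = rng.integers(0, 4, n)                 # assumed nonresponse adjustment classes
resp = rng.random(n) < 0.7                  # simulated respondent indicator
w_nr = w_base.copy()
for c in range(4):
    in_c = cls == c                         # redistribute each class's base weight
    w_nr[in_c & resp] *= w_base[in_c].sum() / w_base[in_c & resp].sum()
w_nr = w_nr[resp]                           # stage 2: only respondents keep weights

N_pop = 10_000                              # assumed known population size
w_final = w_nr * N_pop / w_nr.sum()         # stage 3: calibration to the total
```

After the final stage the weights reproduce the known population total exactly, which is the "consistency with known totals" property that calibration is designed to enforce.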
MULTILEVEL REGRESSION AND POSTSTRATIFICATION
DATE AND TIME
Thursday, October 10, 2019, 1:00 p.m. – 3:00 p.m. Eastern time
Multilevel regression and poststratification (MRP), a method originally applied to political polls, has become increasingly popular with applications to demography, epidemiology, and many other areas. Adapted from hierarchical models, MRP is an approach to modeling survey or other nonrepresentative sample data that has the potential to adjust for complex design features and nonresponse bias while performing small area estimation. The seminar covers the statistical concepts and practical issues in implementing MRP with real-life application examples. We will introduce the assumptions and properties of MRP, connecting with calibration methods. Recent developments of MRP for survey weighting and inference of probability/non-probability samples will be covered. We will conclude with some cautions and challenging issues in the application of MRP.
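A deliberately crude sketch of the MRP idea follows: partially pool cell-level estimates, then weight them by known population cell shares. Here an ad hoc shrinkage toward the grand mean stands in for a fitted multilevel regression, and all data and shares are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical sample: outcome y and a single poststratification cell id
cells = rng.integers(0, 5, 300)
y = rng.normal(loc=cells * 0.1, scale=1.0, size=300)

# "multilevel regression" stand-in: shrink each cell mean toward the grand
# mean (an empirical-Bayes flavour of partial pooling; k is an assumed constant)
grand = y.mean()
k = 20.0
cell_est = np.empty(5)
for c in range(5):
    yc = y[cells == c]
    cell_est[c] = (yc.sum() + k * grand) / (len(yc) + k)

# poststratification: weight cell estimates by assumed population shares
pop_share = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
mrp_estimate = float(cell_est @ pop_share)
```

The final estimate is a convex combination of the pooled cell estimates, so sparse cells are stabilized by the model while the population shares correct for a nonrepresentative sample composition.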
EXPLORING THE CENSUS BUREAU'S RESPONSE OUTREACH AREA MAPPER (ROAM): A TOOL FOR PLANNING SURVEYS AND CENSUS
Nancy Bates and Suzanne McArdle, U.S. Census Bureau
DATE AND TIME
Wednesday, September 18, 2019, 1:00 p.m. to 2:00 p.m. Eastern time
The presenters will demonstrate how to use the Response Outreach Area Mapper (ROAM). ROAM makes it easier to identify hard-to-count areas and provides a socioeconomic and demographic characteristic profile of these areas using American Community Survey (ACS) estimates available in the Planning Database (PDB). Learning about each hard-to-count area allows the U.S. Census Bureau to create a tailored communication and partnership campaign, and to plan for field resources including hiring staff with language skills. It is also helping external stakeholders do the same. Complete Count Committees, tribal, state, and local governments, as well as community groups, are conducting outreach, education, and engagement across the country. These and other efforts can improve response rates. ROAM's thematic map shows the Low Response Score (a projected propensity to self-respond to the 2020 Census) by census tract to help identify hard-to-count areas. Among other variables, ROAM includes census tract-level characteristics such as poverty status, educational attainment, median household income, housing unit vacancy rates, race, Hispanic origin, and languages spoken at home. The latest ROAM release also includes several variables related to internet access such as broadband subscription and the availability of computers and smartphones in households. Finally, the 2020 Census Audience Segmentation data are available as a new thematic map layer. The eight-category Audience Segmentation provides an overarching framework for understanding areas of the country by bringing together behavioral, demographic, attitudinal, and media usage data to help plan and develop messaging, advertising, partnership activities, and other communications. ROAM provides direct access to the strategic summary profiles developed for each audience segment. ROAM has many potential uses including:
Identifying geographic areas for special outreach and promotional efforts.
Examining expected self-completion rates in local areas.
Identifying geographic areas that meet a particular set of characteristics using Data Table tools.
Adding other open source spatial data layers into the map.
Websites: Planning Database (PDB) www.census.gov/topics/research/guidance/... Response Outreach Area Mapper (ROAM) www.census.gov/roam
UNEQUAL PROBABILITY, HIGH ENTROPY, AND BALANCED SAMPLING DESIGNS
DATE AND TIME
POLITICAL POLLS AND THE PREDICTION OF ELECTION OUTCOMES
DATE AND TIME
Tuesday, October 25, 2016
ASA is offering this webinar in the Professional Development series to improve statisticians' understanding of political polls and to make it easier for us to communicate with the general public (including students, colleagues, members of the press, friends, neighbors, and bloggers) about political data.
This webinar will introduce participants to the history and methodology of pre-election polls, as well as several other sources of information for predicting the results of elections. The focus will be on the symbiotic relationship between pollsters and news organizations and the ways in which that affects applied survey methods. Another historical stream will look at the ways that technology has impacted data collection and analysis.
Topically, the webinar will look at different ways of estimating vote intent and predicting election outcomes, beginning with polls and surveys – covering sampling issues and questionnaire design, as well as likely voter models. It will address issues of how the likely voters are identified, and how the concept is applied to the estimation of election outcomes. Different modes of data collection will be discussed from the perspective of the ways in which they might affect estimation of the election outcome. Several other prediction strategies will be discussed, including election forecasts, data aggregation, economic markets, and statistical modelling; and their strengths and weaknesses will be evaluated.
The primary learning outcome of this webinar is to enable participants to make more informed judgements about the quality of the survey data underlying reported poll results and the timeliness of polls relative to political events during the campaign, and to inform discussion of the statistical issues involved in polling.
ENHANCING THE VALUE OF QUALITATIVE RESEARCH USING THE TOTAL QUALITY FRAMEWORK (TQF)
Margaret R. Roller and Paul J. Lavrakas
DATE AND TIME
Thursday, June 9, 2016, 1:00 p.m. - 3:00 p.m. Eastern time
Oftentimes a research question cannot be answered well through the exclusive use of quantitative approaches. For example, quantitative survey data may leave a researcher with unanswered questions about the reasons that underlie the responses or the particular contexts in which respondents framed their answers. That is why statisticians and other quantitative researchers are on occasion involved in conceptualizing, conducting, interpreting, and/or reviewing research projects that include the use of qualitative research methods.
Qualitative research goes beyond the expedient to gain a richer, more intricate appreciation of the research issue. Deriving these complex and contextual data, however, presents unique challenges to researchers who attempt to combine the essence of qualitative research with reliable and valid approaches that maximize the usefulness of their research. It may be because of these challenges that quality-design issues related to qualitative research - such as coverage, sample selection, nonresponse (including missing data), and researcher bias - have heretofore received relatively modest consideration by the qualitative research community.
In this presentation, we introduce a new approach that brings greater rigor to qualitative research. That approach is the Total Quality Framework (TQF) (Roller & Lavrakas, 2015). The TQF provides researchers with a systematic yet highly flexible way to (a) give explicit attention to reliability and validity issues in qualitative research, (b) critically examine the possible sources of bias and inconsistency in qualitative methods, (c) incorporate features into qualitative research designs that try to mitigate these effects, (d) acknowledge and take their implications into consideration during analysis, and (e) thereby maximize the value of the research outcomes.
Our presentation: 1) presents a brief overview of what makes qualitative research uniquely different from quantitative; 2) explains the TQF and its value for conceptualizing, implementing, interpreting, and reviewing qualitative research; and 3) illustrates the application of the TQF by way of two qualitative methods, in-depth interviews and focus group discussions. It is intended that our presentation will help quantitative researchers think more critically and confidently about the value that qualitative methods can bring to their studies.
HOW TO CREATE PRESENTATIONS THAT PEOPLE WILL ACTUALLY REMEMBER
DATE AND TIME
Thursday, February 25, 2016
You might think that choosing the most salient pieces of information and copying them into a presentation is all that is required to communicate research findings to an audience. That is false. In order to engage an audience and help them follow and believe in your argument, you must follow some basic presentation techniques. And that doesn't mean adding clip-art to every other page. In this presentation, you will learn how to tailor your presentation for conferences versus other types of events. You will learn how to set up slides so that audiences can quickly and easily identify the most important information, as well as learn techniques to help keep audience members engaged and able to follow your train of thought. This webinar is a must for anyone who wants to spark serious connections at conferences such as AAPOR, WAPOR, and ASA.
Presentation Slides
A TOTAL SURVEY ERROR APPROACH TO MANAGING THE DATA QUALITY OF STATISTICAL PRODUCTS
Paul P. Biemer
DATE AND TIME
Thursday, September 10, 2015
This webinar will describe a general framework for improving the quality of statistical programs in organizations that provide a continual flow of statistical products to users and stakeholders. The work stems from the system the author and his colleagues developed for Statistics Sweden to improve data quality for their key statistical products (see Biemer, Trewin, Bergdahl and Japec, 2014). This system, called ASPIRE, is built upon an array of quality indicators or metrics for tracking developments and changes in product quality and for achieving continual improvements in survey quality over time. ASPIRE works by reducing the risks of error across all major error sources with each review iteration. The webinar will provide some of the theoretical underpinnings of ASPIRE and how it is supported by the four pillars of the TSE paradigm – design, implementation, evaluation and analysis. It will then describe the components of ASPIRE and demonstrate how it was applied to a number of products at Statistics Sweden including demographic surveys, business surveys and survey frames or registers. Key results and lessons learned will be summarized and the implications of this work for monitoring and evaluating product quality in statistical organizations more generally will be discussed.
Presentation Slides
DESIGN, WEIGHTING AND VARIANCE ESTIMATION FOR POPULATION-BASED EVALUATION STUDIES
DATE AND TIME
Thursday, March 19, 2015
In most years, a few very large population-based evaluation studies of federal programs designed to improve the economic well-being and health of disadvantaged domestic populations are under way. They are typically sponsored by evaluation divisions of the Departments of Labor, Agriculture, Education, and Health and Human Services. One of the largest in U.S. history is now being conducted by the Social Security Administration on ways of encouraging disabled adults to return to the labor force. These evaluations often involve true experimental designs, but may also involve quasi-experimental designs and regression discontinuity designs. Sometimes the studies rely solely on administrative data or on follow-up survey data to measure outcomes, but often both are used. Usually some degree of clustering is employed in the design – possibly to make collection of outcome data more efficient, but more often because of resource constraints for treatment delivery or monitoring of treatment delivery. Probabilities of treatment assignment often drift over time in response to local treatment capacities. The combination of clustering, differential treatment assignment probabilities, follow-up survey nonresponse, and linked administrative data makes for an interesting set of challenges very similar to those encountered in the design and analysis of descriptive population surveys. In addition, if only administrative data are used, sample sizes can be very large, approaching survey sample sizes otherwise seen only in the American Community Survey. These sample sizes imply data processing challenges for resampling-based variance estimation and multiple-comparison adjustment procedures. This course will present solutions to many of these more interesting challenges that are aligned with survey methods issues.
Presentation Slides
RESPONSIVE DESIGN FOR COMPUTER ASSISTED TELEPHONE INTERVIEWS (CATI) SURVEYS
DATE AND TIME
Tuesday, June 17, 2014, 1:00 p.m. – 3:00 p.m. Eastern Time
Over the past few years, paradata research has focused on gaining a better understanding of data collection processes, leading to the identification of strategic improvement opportunities that could be operationally viable and lead to improvements in cost efficiency or quality. For Computer-Assisted Telephone Interview (CATI) surveys, research findings have indicated that the same data collection approach does not work effectively throughout an entire data collection cycle, stressing the need to develop a more flexible and efficient data collection strategy. To that end, Statistics Canada has developed, implemented and tested a Responsive Collection Design (RCD) strategy for CATI social surveys. RCD is an adaptive approach to survey data collection that uses information available prior to and during data collection to adjust the collection strategy for the remaining cases. In practice, the RCD approach monitors and analyses collection progress against a pre-determined set of indicators for two purposes: to identify critical data collection milestones that require significant changes to the collection approach and to adjust collection strategies to make the most efficient use of remaining available resources. In the RCD context, control of the data collection process is not determined solely by a desire to maximize the response rate or reduce costs. Numerous other considerations come into play when determining which aspects of data collection to adjust and how to adjust them. These considerations include quality, productivity, response propensity of in-progress cases, the collection mode and competition from other surveys for collection resources. This seminar describes the RCD strategy for CATI social surveys and presents the active management and decision making tools used. The highlights and lessons learned are also described, along with current and future RCD research plans and activities.
Presentation Slides
USING ADMINISTRATIVE DATA: STRENGTHS AND WEAKNESSES
Joe Sakshaug, University of Michigan
DATE AND TIME
Monday, May 12, 2014, 1:00 p.m. - 3:00 p.m. Eastern time
This webinar will provide a detailed overview of administrative data: their possible uses, strengths, and limitations. Real applications of administrative data used in a social context will be presented from projects conducted at the Institute for Employment Research in Nuremberg, Germany.
Presentation Slides
VARIANCE ESTIMATION IN COMPLEX SAMPLE SURVEYS
Richard Valliant, University of Maryland
DATE AND TIME
Wednesday, April 23, 2014, 1:00 p.m. - 3:00 p.m. Eastern time
This webinar will provide an overview of methods for variance estimation in complex sample survey data. Two approaches, linearization and replication, will be compared and contrasted. Software options will be examined for different types of estimates.
Presentation Slides
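The two approaches can be illustrated on a ratio estimator under simple random sampling; the data are simulated and the formulas are the textbook versions, not necessarily the ones the webinar covers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(1, 5, n)
y = 2.0 * x + rng.normal(0, 0.5, n)
r = y.sum() / x.sum()                       # ratio estimator (true ratio is 2.0)

# linearization: variance of the linearized values z_i = (y_i - r*x_i) / x_bar
z = (y - r * x) / x.mean()
var_lin = z.var(ddof=1) / n

# delete-one jackknife replication: recompute the ratio with each unit removed
reps = (y.sum() - y) / (x.sum() - x)
var_jk = (n - 1) / n * np.sum((reps - reps.mean()) ** 2)
```

For a smooth statistic like a ratio, the two variance estimators agree closely at moderate sample sizes; the practical differences (software support, handling of weights and strata, nonsmooth statistics) are what the comparison in the webinar is about.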
THE CALIBRATED BAYES APPROACH TO SAMPLE SURVEY INFERENCE
Tuesday, May 21, 2012
Presentation Slides
MODERN METHODS FOR MISSING DATA
Tuesday, May 11, 2012
Presentation Slides
PRACTICAL TOOLS FOR NONRESPONSE BIAS ANALYSIS
Kristen Olson and Jill M. Montaquila
DATE AND TIME
Tuesday, April 24, 2012, 1-3 p.m. Eastern time
This webinar will give an overview of methods that may be used to help in addressing the OMB guidelines for conducting nonresponse bias studies when response rates in surveys are less than 80 percent or there is reason to suspect that estimates are biased due to nonresponse. Practical tools are described and examples are used to illustrate these methods. The advantages and disadvantages of these methods are presented, and the value of having multiple approaches is highlighted. The need to devise strategies for nonresponse and for its analysis in the planning stage, prior to completing the survey, is emphasized.
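One common practical tool, comparing respondents with the full sample on a variable known for everyone on the frame, can be sketched as follows (all data simulated; the response mechanism is invented so that the bias is visible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
frame_var = rng.normal(50, 10, n)                  # variable known for the full sample
p_resp = 1 / (1 + np.exp(-(frame_var - 50) / 10))  # response propensity depends on it
resp = rng.random(n) < p_resp                      # simulated respondent indicator

resp_rate = resp.mean()
# respondent mean minus full-sample mean: an estimate of nonresponse bias
# for this frame variable (and a warning sign for correlated survey variables)
est_bias = frame_var[resp].mean() - frame_var.mean()
```

A nonzero gap on a frame variable does not prove the survey estimates are biased, but it flags variables on which respondents differ from the full sample, which is exactly the kind of evidence the OMB-style analyses assemble.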
Kristen Olson is an Assistant Professor of Survey Research and Methodology and Sociology at the University of Nebraska-Lincoln. She has been at UNL since 2007. Her areas of research include nonresponse bias and nonresponse adjustments, the relationship between nonresponse and measurement errors, and interviewer effects. Kristen's research has appeared in Public Opinion Quarterly, the Journal of the Royal Statistical Society Series A, Sociological Methods and Research, Field Methods, Social Science Research, and Survey Research Methods. She is currently serving as Conference Chair for MAPOR, and has taught short courses on nonresponse bias studies for AAPOR, DC-AAPOR, SAPOR, and JPSM. Kristen is also editor of the Research Synthesis section of Public Opinion Quarterly. She earned her B.A. in Mathematical Methods in the Social Sciences and Sociology from Northwestern University, her M.S. in Survey Methodology from the Joint Program in Survey Methodology at the University of Maryland, and her Ph.D. in Survey Methodology from the University of Michigan.
Jill Montaquila is an Associate Director of the Statistical Staff and Senior Statistician at Westat, and a Research Associate Professor in the Joint Program in Survey Methodology at the University of Maryland. She is a Fellow of the American Statistical Association. Her research interests include various methods for evaluation of nonresponse bias, random digit dialing survey methodology, and address-based sampling. Jill has given short courses on approaches for nonresponse bias analysis for DC-AAPOR, SAPOR, and JPSM. She has served as President of the Washington Statistical Society and is Chair-Elect of the Survey Research Methods Section of the ASA.
Presentation Slides
RECONSIDERING MAIL SURVEY METHODS IN AN INTERNET WORLD
Don A. Dillman, Washington State University
DATE AND TIME
Wednesday, April 13, 2011, 1-3pm Eastern time
Coverage and response rate concerns make telephone surveys unacceptable for some uses. However, switching to the Internet is limited by the inability to use email contact for some populations and the reluctance of certain people to respond over the web. In this perplexing environment, the development of postal addressed-based sampling, which now provides better coverage than either telephone or the Internet, has generated renewed interest in mail survey methods.
Mail can be used effectively as a stand-alone data-collection mode. In fact, research has shown that mail-only surveys may now produce response rates higher than can be achieved by any other survey mode. Alternatively, mail can be used to encourage response over the Internet in mixed-mode surveys. Research also has shown that using mail to encourage response in certain surveys that use email contacts, but only allow Internet responses, may produce dramatic improvements in response rates.
Achieving positive results with mail requires thinking about it differently than in the past. Dillman will discuss why mail contact methods are effective in today's survey environment and provide examples of how they can be used in situations in which email contact is not feasible (e.g., household surveys of addressed-based samples) or a prior relationship exists (e.g., clients or students). Emphasis will be on recent tests of these new implementation concepts for mail-only and mail+web surveys. In addition, research questions in need of answers will be articulated.
Dillman is a regents professor and the Thomas S. Foley Distinguished Professor of Government and Public Policy in the departments of sociology and community and rural sociology at Washington State University. He also serves as deputy director for research and development in the Social and Economic Sciences Research Center. Dillman is recognized internationally as a major contributor to the development of modern mail, telephone, and Internet survey methods.
INTRODUCTION TO SAMPLING FOR NON-STATISTICIANS
Safaa R. Amer
Senior Statistician, NORC
DATE AND TIME
Tuesday, February 8, 2011, 1-3pm EST
ABSTRACT:
Many researchers, journalists, policy makers, and educators encounter sample surveys in their research, work, reading, or everyday experience. This course will uncover the logic behind sampling. It will give an explanation of the different types of samples and the terminology used by statisticians and survey researchers. It will outline and illustrate the steps needed before, during, and after selecting a sample. It will describe the types of errors faced when conducting a survey and whether they are sampling related or not. The goal of the course is to expose non-statisticians to sampling so that they are able to read and understand articles or documents describing sampling designs and communicate with statisticians about their research needs. The course may even motivate participants to design and select simple samples to illustrate concepts and procedures. The webinar will also be of interest to students taking introductory statistics courses and their instructors who want to learn more about sample surveys. Some references for easy reading will be provided. The content of the course will include the difference between a sample and a census, probability versus non-probability sampling methods, the meaning of a sampling frame or list, illustrations of sampling versus non-sampling errors, random sampling techniques, sample size considerations, and post-sampling steps.
INSTRUCTOR BIO
Safaa Amer is a multi-lingual Senior Statistician and Project Director at NORC with wide-ranging experience in data analysis, survey sampling, missing data, and data mining. She has been involved in survey design; analyzing survey operations problems; conducting literature reviews and research to adapt surveys to international contexts; developing new sampling techniques and definitions for multi-cultural settings; developing and refining training material; and training and building international survey capacity. She has offered consulting to researchers from different fields on complex sampling problems, providing practical information on the types of analyses, limitations of the data, and strengths/weaknesses of various sampling strategies.
In addition, Dr. Amer has held several academic positions, most recently on the faculty of the Survey Design & Data Analysis Graduate Certificate program at George Washington University. She has offered statistics and survey research lectures in Arabic, French, and English. Dr. Amer has an Economic and Political Sciences background with a special interest in international work, human rights, and geographic information systems. Dr. Amer is a member of several national and international statistical associations. She has refereed several papers for international journals and contributed to several graduate-level theses.
ADDRESS BASED SAMPLING: WHAT DO WE KNOW SO FAR?
Michael W. Link, Ph.D
Chief Methodologist/VP for Research Methods Center of Excellence at The Nielsen Company
DATE AND TIME
Tuesday, November 30, 2010, 1-3pm EST
ABSTRACT:
Address Based Sampling (ABS), the use of a comprehensive address database for sampling residential listings, has been the subject of intensive research efforts in recent years. The promise of ABS is that it provides high coverage of residential homes using a nearly complete sampling frame based on the U.S. Postal Service Delivery Sequence File. Because the frame is based on addresses and not landline telephone numbers, cell-phone-only households are included in the frame in proportion to their penetration within the sampled geography. Additionally, telephone numbers and other sample frame indicators – such as geocoded information from Census block groups or commercial databases – can be appended to the frame, providing more information for sample stratification and targeted sample treatments. While ABS provides a sample frame with high coverage, it does present other issues and challenges for researchers – some methodological, others operational. This webinar will provide participants with background on the ABS frame and potential survey design considerations that accompany its use; highlight areas where research has been conducted and where it is needed; and provide an initial assessment of potential best practices when using an ABS approach. The course draws upon both the growing body of research in this area and many of the operational lessons learned from utilizing ABS survey designs.
INSTRUCTOR BIO
Michael W. Link, Ph.D. is Chief Methodologist/VP for Research Methods Center of Excellence at The Nielsen Company. He has a broad base of experience in survey research, having worked in academia (University of South Carolina, 1989-1999), not-for-profit research (RTI International, 1999-2004), and government (Centers for Disease Control and Prevention, 2004-2007) before joining Nielsen. Dr. Link's research efforts focus on developing methodologies for confronting the most pressing issues facing measurement science, including improving participation and data quality, use of multiple modes in data collection, obtaining participation from hard-to-survey populations, and developing electronic measurement methodologies to supplement or replace self-reports. His numerous research articles have appeared in leading scientific journals, such as Public Opinion Quarterly, International Journal of Public Opinion Research, and Journal of Official Statistics.
Responses to Questions
SMALL AREA ESTIMATION
Partha Lahiri, PhD
Joint Program in Survey Methodology (JPSM) at the University of Maryland
DATE AND TIME
Tuesday, October 19, 2010, 1-3pm EST
ABSTRACT:
Direct survey estimates of various socio-economic, agriculture, and health statistics for small geographic areas and small domains are generally highly imprecise due to small sample sizes in the areas. To improve on the precision of the direct survey estimates, small area estimation techniques are often employed to borrow strength from related information that can be extracted from one or more existing administrative and/or census databases. In this talk, I will first discuss the main concepts and issues in small area estimation and then illustrate the effectiveness of small area estimation techniques in different applications. The talk will be presented at a level appropriate for individuals who are new to small area estimation, but will also include discussion of research topics of interest to more experienced researchers.
INSTRUCTOR BIO
Partha Lahiri is a Professor in the Joint Program in Survey Methodology (JPSM) at the University of Maryland, College Park, and an Adjunct Research Professor at the Institute for Social Research, University of Michigan, Ann Arbor. Professor Lahiri's research on small-area estimation has been widely published in leading journals such as Biometrika, the Journal of the American Statistical Association, the Annals of Statistics, and Survey Methodology. Professor Lahiri has served as a member, advisor, or consultant to many organizations, including the U.S. Census Advisory Committee, a National Academy of Sciences panel, the United Nations, the World Bank, and the Gallup Organization. He has served on the editorial boards of many international journals, including the Journal of the American Statistical Association and Survey Methodology. Dr. Lahiri has been honored as a Fellow of the American Statistical Association and the Institute of Mathematical Statistics and as an elected member of the International Statistical Institute.
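As a rough illustration of the "borrowing strength" idea described in the small area estimation abstract above, the following toy sketch combines an imprecise direct survey estimate with a model-based (synthetic) estimate in the spirit of an area-level shrinkage estimator. The function name and all numbers are hypothetical and are not taken from the webinar.

```python
# Toy area-level shrinkage: combine a noisy direct survey estimate with
# a model-based (synthetic) estimate, weighting each by its relative
# precision. All inputs below are made-up illustrative values.

def shrinkage_estimate(direct, var_direct, synthetic, var_model):
    """Weighted combination of a direct and a synthetic estimate.

    The shrinkage factor gamma gives more weight to the direct
    estimate when its sampling variance is small, and pulls the
    combined estimate toward the synthetic one when it is large.
    """
    gamma = var_model / (var_model + var_direct)
    return gamma * direct + (1 - gamma) * synthetic

# A small area with a tiny sample: the direct estimate (0.42) is
# imprecise, so the combined estimate is pulled toward the synthetic
# estimate (0.30).
est = shrinkage_estimate(direct=0.42, var_direct=0.04,
                         synthetic=0.30, var_model=0.01)
```

The payoff is a lower mean squared error for areas with little data, at the cost of introducing model dependence; choosing and validating the model linking areas is where most of the real work in small area estimation lies.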
Human Resources in Science and Technology: Surveys, Data, and Indicators from the National Science Foundation
DATE AND TIME
Tuesday, April 6, 2010, 1:00 PM - 3:00 PM Eastern time
The Division of Science Resources Statistics (SRS) is a federal statistical agency housed at the National Science Foundation (NSF). SRS's role within NSF is to "provide a central clearinghouse for the collection, interpretation, and analysis of data on scientific and engineering resources, and to provide a source of information for policy formulation by other agencies of the Federal Government..." Within this mandate SRS is involved in collecting and disseminating information on R&D expenditures and activities and on human capital issues. The United States is unique among major industrialized nations in that it has directly invested in collecting detailed data from a variety of sources on the entire science and engineering pipeline. Each of the data sources came about from U.S. federal administrative needs. The sources have evolved into important elements for the study of higher education and the scientific workforce. In this webinar, these surveys and data sources are described. Key indicators regarding trends in U.S. science and engineering degree production, enrollments, and workforce are defined and described. The Science and Engineering Indicators: 2010 and Women, Minorities and Persons with Disabilities in Science and Engineering reports will be used as examples for these indicators. At the end of the webinar participants should be aware of data sources and how data are collected, indicators and reports from the NSF, and where to find more information from the NSF.
Dr. Nirmala Kannankutty is a senior analyst in the Division of Science Resources Statistics at the National Science Foundation. During her tenure at NSF, she was responsible for the coordination of NSF's three science and engineering workforce surveys, collectively known as SESTAT (Scientists and Engineers Statistical Data System). Also while at NSF, she has completed special projects at the White House Office of Science and Technology Policy on scientific workforce issues, and the White House Office of Management and Budget on the federal R&D budget. Her areas of expertise include S&T workforce, graduate education, and S&T research and development, with extensive experience in survey research techniques and the use of survey results for policy analysis. She is currently Senior Social Scientist/Senior Advisor in SRS, with responsibility for outreach and dissemination. Dr. Kannankutty earned a doctorate in Engineering and Policy from Washington University in St. Louis in 1996.
The Psychology of Survey Response
Roger Tourangeau
DATE AND TIME
Tuesday, February 9, 2010, 1:00 PM - 3:00 PM Eastern time
This two-hour course examines survey questions from a psychological perspective. It covers the basics of how respondents answer survey questions and how problems in this response process can produce reporting errors. The class will focus on behavioral questions. The course is intended as an introduction for researchers who develop survey questionnaires or who use the data from surveys and want to understand some of the potential problems with survey data. It describes the major psychological components of the response process, including comprehension of the questions, retrieval of information from memory, combining and supplementing information from memory through judgment and inference, and the reporting of an answer. The course has no specific prerequisites, though familiarity with survey methodology or questionnaire design would be helpful.
Roger Tourangeau is a Research Professor at the University of Michigan's Survey Research Center and the Director of the Joint Program in Survey Methodology (JPSM) at the University of Maryland. He has been a survey methodologist for nearly 30 years, with extensive experience in a wide range of surveys. Tourangeau is well known for his methodological research on the impact of different modes of data collection and on the cognitive processes underlying survey responses. He is the lead author of a book on this last topic (The Psychology of Survey Response, co-authored with Lance Rips and Kenneth Rasinski and published by Cambridge University Press in 2000); this book received the 2006 Book Award from the American Association for Public Opinion Research (AAPOR). He is also one of the co-editors of a collection of papers (Cognition and Survey Research, published by Wiley in 1999) from a conference on cognitive aspects of survey response. In addition, he has published a number of papers on mode effects (including a very widely cited paper on audio-CASI with Tom Smith) and on forgetting and telescoping in surveys.
In 2002, Tourangeau received the Helen Dinerman Award, the highest honor given by the World Association for Public Opinion Research, for his work on the cognitive aspects of survey methodology. In 2005, he received the AAPOR Innovators Award (along with Tom Jabine, Miron Straf, and Judy Tanur). He was elected a Fellow of the American Statistical Association in 1999 for his work on survey measurement error and his contributions to federal surveys as a sampling statistician. In 2006, he served as the chair of the Survey Research Methods Section of the American Statistical Association. He has a Ph.D. in Psychology from Yale University.
Webinar Q&A Answers
Presentation slides in black and white
Dual Frame Theory Applied to Landline and Cell Phone Surveys
J. Michael Brick
DATE AND TIME
Tuesday, November 10, 2009, 1:00 PM - 3:00 PM Eastern time
As the number of households that have only cell phones has increased dramatically over the past 5 years, telephone surveys have addressed this problem by sampling from both landline and cell phone numbers. One of the issues emerging from these dual frame surveys is that the theoretical foundation for these surveys largely ignores nonsampling errors. Because these errors may be large and result in biases, they must be considered in dual frame telephone surveys. This Webinar begins with a review of dual frame theory with particular attention to surveys that sample landline and cell phone numbers. It then examines the effect of nonsampling errors when surveys are conducted without considering these errors. In particular, we describe the potential effect of nonresponse and measurement error using data from surveys of landlines and cell phone numbers. We discuss both practical sample design issues such as whether to screen for cell-only households, and weighting methods to reduce the effects of the errors. The advantages and disadvantages of different sample designs and estimation methods are discussed. The examples are from actual dual frame telephone surveys.
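Setting nonsampling errors aside, the basic dual frame estimator discussed in this abstract can be sketched with a classical composite (Hartley-type) form: units covered by only one frame contribute through that frame, while the overlap domain (households reachable by both landline and cell) is split between the two samples with a mixing parameter. The function and all numbers below are illustrative assumptions, not material from the webinar.

```python
# Toy Hartley-type dual frame composite estimator for a population
# total. Each argument is an estimated (survey-weighted) domain total;
# lam controls how the overlap domain is split between the two frames.

def dual_frame_total(y_landline_only, y_cell_only,
                     y_overlap_from_landline, y_overlap_from_cell,
                     lam=0.5):
    """Combine domain totals from a landline frame and a cell frame.

    The overlap domain is estimated from both frames, so its two
    estimates are averaged with weights lam and (1 - lam) to avoid
    double counting.
    """
    overlap = lam * y_overlap_from_landline + (1 - lam) * y_overlap_from_cell
    return y_landline_only + y_cell_only + overlap

# Made-up weighted totals: landline-only 1000, cell-only 400, and two
# estimates of the overlap domain (800 from landline, 760 from cell).
total = dual_frame_total(1000.0, 400.0, 800.0, 760.0, lam=0.5)
```

In practice lam can be chosen to minimize variance, and, as the abstract emphasizes, nonresponse and measurement error differ by frame, so weighting and design choices (such as screening for cell-only households) matter as much as this algebraic form.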
Dr. J. Michael Brick is a Vice President and Director of the Survey Methods Unit at Westat. He is also a research professor in the Joint Program in Survey Methodology at the University of Maryland and an adjunct research professor at the University of Michigan. Dr. Brick has over 30 years of experience in sample design and estimation for large surveys, survey quality control, nonresponse and bias evaluation, and survey methodology. He has a Ph.D. in Statistics from American University, is a Fellow of the American Statistical Association, and is an elected member of the International Statistical Institute.
Webinar Q&A Answers
Presentation slides in color
Presentation slides in black and white