Third Meeting of the Commission on Evidence-Based Policymaking

By Amy Nussbaum posted 11-08-2016 08:44


On November 4, members of the Commission on Evidence-Based Policymaking gathered at the Brookings Institution for their third meeting. The meeting featured panels representing perspectives on program evaluation from federal and local government, the social sector, and academic researchers. Evaluation capacity considerations, data and capacity needs for government evaluation, and the capacity to support public good activities were all discussed at length, with time dedicated to questions from the commission.

Demetra Nightingale, Chief Evaluation Officer at the U.S. Department of Labor, kicked off the meeting by speaking on evaluation capacity considerations in her department. She first mentioned the general need for more longitudinal data sets, data linkage, unique identifiers, and appropriate outcome variables and covariates in program evaluation. She then presented key concerns in the Department of Labor, including priority data systems issues, timely access, expert human capital, and data security. She concluded by giving the commission a wish list: greater access to earnings data, firm identifiers, a reformed Paperwork Reduction Act, and streamlined interagency agreements to allow increased cooperation among government agencies.

The next panel featured Katherine O’Regan, Assistant Secretary for Policy Development and Research at the U.S. Department of Housing and Urban Development; Evelyn Kappeler, Director of the Office of Adolescent Health in the U.S. Department of Health and Human Services; and Matthew Klein, Executive Director of the New York City Center for Economic Opportunity. O’Regan emphasized increasing government capacity for in-house research through cross-agency data linkage. Two recent examples are linking HUD data with Department of Education data to explore the trajectories of students living in assisted housing, and linking HUD longitudinal data with National Center for Health Statistics data to investigate the connections between assisted housing and health care, chronic disease, morbidity, and mortality. O’Regan also advocated planning for data sharing at the beginning of a project, noting that failing to do so can make it difficult for academics to access data and collaborate with one another. Kappeler also gave specific examples, describing the Teen Pregnancy Prevention Program in the Office of Adolescent Health, and recommended using funding both to replicate findings from existing programs and to support research and demonstration projects for new and innovative approaches, building a body of standards. In addition, she recommended that government offices establish and promote uniform evidence standards, provide evaluation training and technical assistance, collect and use data to make continuous improvements to programs, and ensure transparency in the dissemination of results. Klein, whose office sits within the Office of the Mayor of New York City, gave a local government perspective and urged relaxing federal laws to ease data sharing among different local sources, that is, citywide data integration.

Speakers representing the social sector were Tanya Beer, Associate Director of the Center for Evaluation Innovation; James Sullivan, Co-Founder of the Lab for Economic Opportunities at the University of Notre Dame; Adam Gamoran, President of the William T. Grant Foundation; and Kelly Fitzsimmons, Director of Innovation and Policy Planning at the Edna McConnell Clark Foundation. Beer spoke of various trends in program evaluation, including growing demand for evaluation to support decision-making, a need for systems-level data and new approaches to evidence building, opportunities for cross-sector collaboration on evaluation, limited bandwidth for understanding and building evidence, and experimentation with staffing and processes to support better use of evidence. Some of these trends are cautionary tales; Beer recommended paying close attention to the disproportionate focus on performance metrics at the expense of the whole body of evidence, as well as to limited agency capacity. Sullivan opened his portion of the discussion with some shocking figures: the government spends $800 billion on social programs, yet only 1 percent of those programs are backed by evidence. Obstacles to building evidence include funding, data on outcomes, and conflicting best practices; his recommendations for overcoming them include incentivizing impact measurement and making administrative data more accessible. Gamoran defined research evidence as that derived from applying systematic methods and analyses to predefined questions and hypotheses, and he gave examples of studies using state data that went awry. Finally, Fitzsimmons offered a funder’s perspective, including both bright spots and challenges, and recommended that evaluation, currently the “caboose of the evidence-building train,” become the engine instead.

Naomi Goldstein, co-chair of the Interagency Working Group on Evidence Policy and Deputy Assistant Secretary for Planning, Research, and Evaluation in the Administration for Children and Families at the U.S. Department of Health and Human Services, brought the meeting to a close with seven main points: (1) data are necessary but not sufficient to create evidence; sound analysis is also key; (2) administrative data and ongoing surveys are important resources, but specialized data collected for specific evaluations will continue to be important as well; (3) easier access to administrative data would greatly streamline evaluation activities; (4) implementation and descriptive studies are just as important as impact or outcome evaluations, and relevance is just as important as rigor, irrelevant data being “elegant, but useless”; (5) prerequisites for federal evaluation include statutory authority, funding, a skilled federal workforce, and a robust private sector; (6) bureaucratic challenges in procurement, information technology and security, and information collection pose substantial barriers to federal evaluation efforts; and (7) the federal evaluation enterprise lacks many elements of the infrastructure that supports and protects the federal statistical system.

The next CEP meeting will be held December 12 and will focus on data infrastructure and management. In the meantime, read about the first and second meetings, as well as the recent public hearing, and keep checking the Policy Director's Blog and Twitter for updates!
