Research Paper

Improving Education Through Assessment, Innovation, and Evaluation

Authors
Henry Braun, Anil Kanjee, Eric Bettinger, and Michael Kremer
American Academy of Arts & Sciences

How is progress toward educational goals, both local and global, measured? Although assessment is most often seen as a tool to measure the progress of a single student, it also allows individuals, communities, and countries to track the quality of schools and educational systems. In theory, publicly available data enable policymakers to craft effective policies and students and parents to better choose among educational options. As Henry Braun and Anil Kanjee note, however, the potential benefits of assessment are not easy to capture: realizing them requires overcoming significant implementation challenges as well as political and financial obstacles. The authors review promising national and international efforts and offer recommendations for creating and implementing assessments in developing countries.

Testing offers a means to track the outcomes of schools and educational systems. But how can education reformers identify the practices that led to improved or worsened outcomes? There are countless and complex factors at work even within a single classroom. Deciding whether an educational innovation is responsible for a change in student outcomes is difficult at best, yet essential for efficiently implementing the most effective educational programs.

As Eric Bettinger and Michael Kremer each discuss, one reliable means of evaluating the effects of a program or intervention, randomized controlled experimentation, is now finding use in education. These experiments make possible valid comparisons among pedagogical techniques and systems of management because randomization establishes equivalent participant and non-participant groups for comparison. Randomized controlled experiments can therefore produce the most credible evaluations of programs, including of their cost-effectiveness. Bettinger explains why such experiments, like the one used to study a school-based health program, remain underutilized even though they provide highly credible results. Kremer reviews findings from randomized evaluations to identify low-cost means of increasing school enrollment. As the research of these authors makes clear, with more reliable information from such experiments, education reformers can focus efforts and resources on the programs found to be most effective.
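To make the logic of randomized comparison concrete, the sketch below is a minimal, purely illustrative simulation; it is not drawn from any of the studies discussed above, and every number in it is invented. Students are randomly assigned to a hypothetical program, and the program's effect is estimated as the simple difference in mean outcomes between treated and untreated groups, which is unbiased precisely because assignment was random.

```python
import random
import statistics

# Illustrative simulation of a randomized evaluation.
# All values (effect size, sample size, score scale) are hypothetical.
random.seed(0)

NUM_STUDENTS = 1000
TRUE_EFFECT = 5.0  # hypothetical boost in test score from the program

# Baseline ability varies across students; randomization, not matching,
# balances it in expectation between the two groups.
baseline = [random.gauss(50, 10) for _ in range(NUM_STUDENTS)]

# Randomly assign half the students to the program (treatment) group.
assignment = [True] * (NUM_STUDENTS // 2) + [False] * (NUM_STUDENTS // 2)
random.shuffle(assignment)

# Observed outcome = baseline + program effect (if treated) + noise.
outcomes = [
    b + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 5)
    for b, treated in zip(baseline, assignment)
]

treated_mean = statistics.mean(y for y, t in zip(outcomes, assignment) if t)
control_mean = statistics.mean(y for y, t in zip(outcomes, assignment) if not t)

# With random assignment, the difference in means estimates the program's effect.
print(f"Estimated effect: {treated_mean - control_mean:.2f} (true effect: {TRUE_EFFECT})")
```

In practice, of course, the evaluations discussed by Bettinger and Kremer involve real schools, imperfect compliance, and attrition, but the core identification idea is the one this toy example shows: random assignment makes the comparison group a valid counterfactual.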

Contributors

Eric Bettinger is an assistant professor in the department of economics at Case Western Reserve University. He is also a faculty research fellow at the National Bureau of Economic Research. From 2002 to 2003, he was a visiting scholar at the American Academy of Arts and Sciences. His work focuses on determinants of student success in primary and secondary school. He has written several papers on the effects of educational vouchers on student outcomes in Colombia, as well as on the academic and non-academic effects of educational vouchers in the United States. His most recent work focuses on the determinants of college dropout and the effectiveness of remediation in reducing dropout behavior.

Henry Braun is a distinguished presidential appointee at the Educational Testing Service (ETS) and served as vice president for research management at ETS from 1990 to 1999. He has published in the areas of mathematical statistics and stochastic modeling, the analysis of large-scale assessment data, test design, expert systems, and assessment technology. His current interests include the interplay of testing and education policy. He has investigated such issues as the structure of the Black-White achievement gap, the relationship between state education policies and state education outputs, and the effectiveness of charter schools. He is a co-winner of the Palmer O. Johnson Award from the American Educational Research Association (1986) and a co-winner of the National Council on Measurement in Education award for Outstanding Technical Contributions to the Field of Educational Measurement (1999).

Anil Kanjee is an executive director at the Human Sciences Research Council (HSRC), South Africa. He is head of the HSRC Education Quality Improvement Initiative, which aims to support government and other key role-players in the implementation of evidence-based policies and practices to improve education quality. His research interests include education change and school reform in developing countries, the use of assessment to improve learning, the application of Item Response Theory for test development and score reporting, and the impact of globalization on knowledge creation and utilization. He also works on an initiative to establish and strengthen links among researchers in Africa and other developing nations for the purpose of sharing expertise and experience in improving education quality.

Michael Kremer is Gates Professor of Developing Societies at Harvard University, senior fellow at the Brookings Institution, and a non-resident fellow at the Center for Global Development. He founded and was the first executive director (1986–1989) of WorldTeach, a non-profit organization that places two hundred volunteer teachers annually in developing countries. He previously served as a teacher in Kenya. A Fellow of the American Academy of Arts and Sciences, Kremer received a MacArthur Fellowship in 1997. His research interests include AIDS and infectious diseases in developing countries, economics of developing countries, education and development, and mechanisms for encouraging research and development.