Connecting Research to Practice: Knowing Who Is Proficient Isn't Always Sufficient

During the past decade, the percentage of proficient students (PPS) has become the primary indicator of school performance. Educators use the PPS to monitor changes in performance over time, compare performance across groups, and assess trends in achievement gaps. The PPS is relatively new, first used with the National Assessment of Educational Progress in the 1990s. Although the PPS seems to be a straightforward indicator of student performance, can a single, simple indicator be trusted to provide essential information about a very complex system? Can the PPS support the inferences that educators and policymakers want to make from it?
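For concreteness, the short sketch below shows how a PPS is typically computed: the share of tested students scoring at or above a proficiency cut score. The eight scores and the cut score of 250 are hypothetical illustrations, since the brief does not tie the PPS to any particular scale.

    # Hypothetical scale scores for eight students and a hypothetical
    # proficiency cut score; neither comes from the brief itself.
    scores = [212, 237, 251, 263, 248, 255, 270, 241]
    cut = 250

    proficient = sum(score >= cut for score in scores)
    pps = 100.0 * proficient / len(scores)
    print(f"{proficient} of {len(scores)} students proficient; PPS = {pps:.1f}%")
    # -> 4 of 8 students proficient; PPS = 50.0%

Note how much information the PPS discards: it records only whether each student cleared the cut, not how far above or below it each student scored.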

A recent Connecting Research to Practice conference hosted by Regional Educational Laboratory (REL) Midwest addressed these questions. The conference, titled “Interpreting Test Score Trends and Gaps,” took place in May 2009 in Rosemont, Illinois. In his keynote address, Andrew Ho, Ph.D., argued that the PPS, when used as the sole summary statistic for measuring the performance of a school, district, or state, distorts nearly every important large-scale, test-driven inference. Such distortions can lead educators and policymakers to misinterpret gaps, trends, and trends in gaps in populations at all levels of the education system.
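To illustrate the kind of distortion Ho described, the sketch below compares two simulated schools whose students all improve by exactly five points. The distributions (and the cut score of 250) are assumptions for illustration, not data from the conference or the brief; NumPy is assumed to be available. Because the PPS counts only students who cross the cut score, the school whose scores happen to cluster just below the cut registers a far larger PPS gain, even though both schools improved by the same amount.

    import numpy as np

    rng = np.random.default_rng(0)

    def pps(scores, cut):
        # Percentage of students scoring at or above the proficiency cut.
        return 100.0 * np.mean(scores >= cut)

    CUT = 250
    # Hypothetical score distributions: School A is clustered just below
    # the cut; School B sits well below it.
    schools = {
        "A": rng.normal(248, 10, 10_000),
        "B": rng.normal(230, 10, 10_000),
    }

    for name, before in schools.items():
        after = before + 5  # every student gains exactly 5 points
        print(f"School {name}: PPS {pps(before, CUT):.1f}% -> {pps(after, CUT):.1f}%")
    # School A's PPS jumps roughly 20 percentage points; School B's
    # moves only a few, despite identical underlying improvement.

The particular numbers depend entirely on the assumed distributions; the point is that identical improvement can look dramatically different through the PPS lens depending on where the cut score falls relative to the students being tested.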

Rather than relying on a single tool to represent school performance, educators should use a set of complementary statistical procedures, each of which provides a necessary perspective on the complex school-performance picture. The Joint Committee on Standards for Educational and Psychological Testing of the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education (1999) likewise strongly recommends the use of multiple measures and discourages reliance on any single measure.

This brief presents three components to help educators and policymakers understand the limitations of relying on a single measurement tool and the value of continually seeking multiple perspectives on student performance data. First, the executive summary provides an overview of the brief’s argument and explains how the recommended tools are insufficient on their own but robust when used together. The second section explains in more detail the particular limitations of the PPS, many of which educators may not be aware of. The final section describes how each tool functions, what it does well, and what its limitations are. Together, these sections allow educators and policymakers to quickly determine what they know, what they need to know, and how to monitor, and thereby improve, a school’s performance more effectively.