Yi Shang, Ph.D.

States across the U.S. rely on standardized tests to measure the performance of elementary, middle, and high school students — and, increasingly, to evaluate the effectiveness of their schools and teachers. But test scores alone are not a foolproof way to make these assessments, according to John Carroll University education professor Yi Shang, Ph.D.

Evaluations of student growth, and of teacher and school effectiveness, that are based on standardized tests are problematic, she explains. In part, that is because the measurement error in test scores is not always taken into account in the analyses. A recent study by Shang offers a new approach to reduce these biases, providing a more accurate picture of student growth and school performance.

Shang examined the student growth percentile model, which has been approved by the U.S. Department of Education and is currently used by more than 30 states to evaluate and project student growth. Her research suggests that, when testing data are analyzed with the student growth percentile model (and, most likely, with other popular models as well), the results contain systematic biases rather than merely random errors.

Random errors are generally considered acceptable in statistical modeling of test scores because they do not single out any particular group of students. Systematic biases, however, do raise fairness concerns.

In her study, Shang found that the student growth percentile model tends to overestimate student growth in schools that already have many higher-performing students and to underestimate it in schools that already have many lower-performing students.

To correct the systematic bias in the student growth percentile model, Shang applied a simulation-extrapolation (SIMEX) method. SIMEX has been used in biostatistics and medical research, but not previously in educational research.
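The article describes SIMEX only in general terms. As a rough illustration of the simulation-extrapolation idea, and not of Shang's actual implementation, the sketch below applies SIMEX to a toy regression in which a predictor is observed with a known amount of measurement error; all function names, parameters, and data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_slope(w, y):
    # Ordinary least-squares slope of y on w; attenuated toward 0 when w is noisy.
    w_c, y_c = w - w.mean(), y - y.mean()
    return (w_c @ y_c) / (w_c @ w_c)

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), n_boot=200):
    # Simulation step: add extra noise at each level lambda and record how the
    # naive estimate degrades. Extrapolation step: fit a quadratic in lambda
    # and evaluate it at lambda = -1, i.e. the hypothetical error-free case.
    lam_grid = [0.0] + list(lambdas)
    estimates = []
    for lam in lam_grid:
        if lam == 0.0:
            estimates.append(naive_slope(w, y))
            continue
        reps = [naive_slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(len(w)), y)
                for _ in range(n_boot)]
        estimates.append(np.mean(reps))
    coeffs = np.polyfit(lam_grid, estimates, deg=2)
    return np.polyval(coeffs, -1.0)

# Toy data: the true slope is 1.0, but the predictor is observed with error
# (sd = 0.8), so the naive slope is biased downward; SIMEX moves it back toward 1.
x = rng.normal(size=2000)
y = x + 0.3 * rng.standard_normal(2000)
w = x + 0.8 * rng.standard_normal(2000)
print("naive:", naive_slope(w, y), "SIMEX:", simex_slope(w, y, sigma_u=0.8))
```

The key design point is that SIMEX never tries to remove the existing error directly; it deliberately adds more error at known levels, observes how the estimate degrades, and extrapolates that trend back to the hypothetical case of no measurement error.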

Shang’s study found that the SIMEX method provided a more accurate estimate of academic growth for students at all levels of achievement. In other words, SIMEX reduced the bias that systematically inflated or deflated estimates of student growth.

Shang’s study—titled “Measurement Error Adjustment Using the SIMEX Method: An Application to Student Growth Percentiles”—appears in the winter 2012 issue of the Journal of Educational Measurement.

A faculty member at JCU since 2009, Shang holds a master’s degree in mathematical statistics from Boston University, a master’s degree in counseling psychology and a Ph.D. in educational research, measurement, and evaluation from Boston College.

Posted on March 19, 2013