Study: Carefully Crafted Evaluations Improve Teacher Performance, Student Scores

September 10, 2012

Ashley Bateman (bateman.ae@googlemail.com) writes from Alexandria, Virginia.

As school districts across the country revitalize and redesign aging evaluation systems in an effort to improve teaching and net federal dollars, researchers have found that at least one district’s plan is improving student learning.

In their study, “Can Teacher Evaluation Improve Teaching?” researchers Eric Taylor and John Tyler found that Cincinnati’s rigorous Teacher Evaluation System (TES) improves midcareer teachers’ performance, as reflected in student test scores.

By comparing teacher performance before and after the 2000-2001 school year, when TES began, the authors found students gained an average of 4.5 percentile points in math, and a similar amount in reading, after teachers completed the yearlong evaluation. They first noted the increase during the evaluation year itself; the evaluation is repeated four years later and then at five-year intervals.

As of summer 2011, 18 state legislatures had altered tenure or continuing contract policies, which rely heavily on evaluation determinations. Twelve states further amended such laws this year.

“There’s been tremendous legislative activity,” said Kathy Christie, chief of staff for the Education Commission of the States. “To get [Race to the Top] money a lot of the states had to change evaluations. Most of those are still in their infancy because they’re phasing in or developing the tools or putting together a task force to develop the rubrics.”

Peer Evaluators Drive Improvements
Peer evaluations make up 75 percent of TES scoring. Unlike the traditional “principal walk-through” evaluation many districts use, in Cincinnati an administrator contributes only one-quarter of a teacher’s evaluation score.

The study found that while the system’s overall scores tend toward grade inflation, the rubrics and feedback individual evaluators provided were less lenient, leading the authors to suggest that “Cincinnati’s evaluation program provides feedback on teaching skills that are associated with larger gains in student achievement.”

“Teachers have notoriously been afraid of getting principals who don’t know what they’re talking about in evaluating,” Christie said. “If they can, [districts] should crystallize the use of peers or independent evaluators.”

Although “peer evaluation is woven in quite a few” current state policies, many of those tools are still under development, she said.

Costly, But Worth It?
Training evaluators is essential to productive outcomes, Christie said. In Cincinnati, evaluators undergo intensive training, but at a cost: the district spends $1.8 million to $2.1 million on TES every year, averaging roughly $7,500 per evaluation.

According to Taylor and Tyler, the student gains outweigh the spending: “since each peer evaluator evaluates 10 to 15 teachers, those gains are occurring in multiple teachers’ classrooms for a number of years.”

The authors also note that while many policymakers and researchers often claim midcareer teachers cannot be improved, the Cincinnati results show that “experienced teachers provided with unusually detailed information on their performance improved substantially.”

“An important thing to note is that this is just one study,” Taylor said. “In general, in social science or general study we want to see replication. That’s an important step for the future.”

