Posted at 11:01 AM ET, 01/13/2011

New analysis challenges Gates study on value-added measures

By Valerie Strauss

Last month, the Gates Foundation released a study that was presented as evidence for the validity of “value-added” measures, which evaluate teacher effectiveness using students’ standardized test scores. But a new analysis concludes that the report’s substance doesn’t support its conclusions.

The report, “Learning About Teaching: Initial Findings from the Measures of Effective Teaching Project,” was written by Bill & Melinda Gates Foundation officials Thomas J. Kane and Steven Cantrell.

Using data from six major urban school districts, they examined correlations between student survey responses and value-added scores computed both from state tests and from higher-order tests of conceptual understanding. Kane and Cantrell concluded that the evidence suggests value-added measures can be constructed to be valid; others described the report as strong evidence for this approach.

But Jesse Rothstein, an economics professor at the University of California at Berkeley, reviewed the Kane-Cantrell report and said that its analyses serve to “undermine rather than validate” value-added-based measures of teacher evaluation.

The review by Rothstein, who in 2009-10 served as senior economist for the Council of Economic Advisers and as chief economist at the U.S. Department of Labor, is being published today by the National Education Policy Center, housed at the University of Colorado at Boulder School of Education.

Among other things, the MET report compares two different value-added scores for each teacher: one computed from official state tests, and another from a test designed to measure higher-order, conceptual understanding. Because neither test maps perfectly to the curriculum, substantially divergent results from the two would suggest that neither is likely capturing a teacher’s true effectiveness across the whole intended curriculum.

By contrast, if value-added scores from the two tests line up closely with each other, that would increase our confidence that a third test, aligned with the full curriculum teachers are meant to cover, would also yield similar results.
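To make that logic concrete, here is a minimal, hypothetical Python simulation (the variable names, the 10,000-teacher population and the assumed signal strength are illustrative assumptions of mine, not the MET study’s data or methods). Each test mixes a shared latent teacher effect with test-specific noise; the stronger the shared signal, the more closely the two value-added scores agree:

    import numpy as np

    rng = np.random.default_rng(0)
    n_teachers = 10_000

    # Latent "true effectiveness" that both tests partially measure.
    true_effect = rng.normal(size=n_teachers)

    # Assumed signal strength: the share of each score driven by the
    # latent effect rather than by test-specific noise.
    signal = 0.5
    noise = np.sqrt(1 - signal**2)

    vam_state = signal * true_effect + noise * rng.normal(size=n_teachers)
    vam_conceptual = signal * true_effect + noise * rng.normal(size=n_teachers)

    # With signal = 0.5, the two scores correlate at about 0.25.
    print(np.corrcoef(vam_state, vam_conceptual)[0, 1])

Under these assumptions, a low correlation between the two scores signals that test-specific noise and content, not the shared effect the scores are supposed to capture, dominate the rankings.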

The MET report considered this exact issue and concluded that “Teachers with high value-added on state tests tend to promote deeper conceptual understanding as well.” But what does “tend to” really mean?

Rothstein’s reanalysis of the MET report’s results found that over 40 percent of teachers whose state-exam value-added scores place them in the bottom quarter of effectiveness land in the top half on the alternative assessment.

“In other words,” he said in a statement, “teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. This result, underplayed in the MET report, reinforces a number of serious concerns that have been raised about the use of VAMs for teacher evaluations.”

Put another way, “many teachers whose value-added for one test is low are in fact quite effective when judged by the other,” indicating “that a teacher’s value-added for state tests does a poor job of identifying teachers who are effective in a broader sense,” Rothstein wrote.

“A teacher who focuses on important, demanding skills and knowledge that are not tested may be misidentified as ineffective, while a fairly weak teacher who narrows her focus to the state test may be erroneously praised as effective.”
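Rothstein’s coin-toss comparison can be checked with simple simulated arithmetic. In the hypothetical sketch below (again, simulated data and an assumed correlation of 0.25, not the MET estimates), among teachers in the bottom quarter on one score, roughly 37 percent land in the top half on the other, against the 50 percent a pure coin toss would produce; that is the ballpark of the “over 40 percent” figure above:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    rho = 0.25  # assumed correlation between the two value-added scores

    state = rng.normal(size=n)
    conceptual = rho * state + np.sqrt(1 - rho**2) * rng.normal(size=n)

    # Teachers in the bottom quarter on the state-test score...
    bottom_quarter = state <= np.quantile(state, 0.25)
    # ...who nonetheless rank in the top half on the other score.
    top_half = conceptual >= np.median(conceptual)

    share = top_half[bottom_quarter].mean()
    print(f"bottom quarter on one test, top half on the other: {share:.0%}")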

If those value-added results were to be used for teacher retention decisions, students would be deprived of some of their most effective teachers, Rothstein concluded.

-0-


Follow my blog every day by bookmarking washingtonpost.com/answersheet. And for admissions advice, college news and links to campus papers, please check out our Higher Education page at washingtonpost.com/higher-ed. Bookmark it!

By Valerie Strauss  | January 13, 2011; 11:01 AM ET
Categories:  Research, Teacher assessment  | Tags:  bill gates, gates foundation, gates foundation study, gates-funded research, how to evaluate teachers, teacher assessment, value-added, value-added measures  

Comments

"“teacher evaluations based on observed state test outcomes are only slightly better than coin tosses at identifying teachers whose students perform unusually well or badly on assessments of conceptual understanding. "

Well, Bill has a lot of quarters.

Posted by: edlharris | January 13, 2011 12:38 PM | Report abuse

I have said more than once that the curriculum could not match the test; otherwise we would hear less fussing about tests. Having said that, why are so many students passing the test in spite of the disconnect? Certainly makes me wonder.

Posted by: jbeeler | January 13, 2011 1:00 PM | Report abuse

In other words, you would have to destroy the careers of two effective teachers to fire three ineffective ones. It's nice to know that there would be so much cannon fodder, I mean new teachers, willing to replace those five.

Posted by: johnt4853 | January 13, 2011 3:21 PM | Report abuse

Bill Gates is foisting an ineffective method of teacher effectiveness upon the world just as he foisted an inferior operating system on the world.

Posted by: educationlover54 | January 13, 2011 5:58 PM | Report abuse

jbeeler- They're passing the tests because they are trained in the skills needed to take a multiple choice test. For example, many students can figure out how to back into a correct answer on a math test that offers four answer choices, but are hopelessly lost when asked to figure out how to design a solution to an open ended problem.

Posted by: aed3 | January 13, 2011 10:02 PM | Report abuse

Therein lies the problem with social policy findings. For every study that proves one theory, "researchers" can develop/skew studies to seemingly prove the opposite.

Another in the long line of flaws with education and the teaching profession - the blatant lack of validity/reliability.

Posted by: phoss1 | January 14, 2011 7:41 AM | Report abuse
