A weird accountability system in Texas
In the category of “data will set your school system free -- unless, of course, the data don’t tell the literal truth,” we have the Texas Projection Measure. It’s part of an accountability system that awards schools credit for students who have actually failed state-mandated tests but who are expected to pass sometime in the future.
According to newspapers in Texas, the Texas Education Agency is considering reforming the “projection measure” or eliminating it altogether. Critics say so many districts saw so much improvement in state test scores this year that the measure may be distorting the picture of real achievement growth.
Do you think?
Texas state Rep. Scott Hochberg, a Houston Democrat and chairman of an appropriations subcommittee overseeing the education budget, was quoted by the American-Statesman as calling the measure “invalid,” and urged education officials to “start from scratch and develop a real measure of the progress students make in schools.”
Last week, Education Commissioner Robert Scott sent a letter to district administrators saying that he will review the projection measure, and mentioned several options, including suspending it or continuing to use it.
Here’s how it works, according to the Star-Telegram: “When a student fails a test but shows enough gain to make it statistically likely that he or she is on track to pass during the next high-stakes testing period,” the Texas Projection Measure “counts that student as having passed for school accountability purposes.”
The state created the measure and began implementing it after years of complaints from schools that they were declared failures by the slimmest of margins.
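The logic the Star-Telegram describes can be sketched in a few lines. This is a deliberately simplified illustration with invented scores and a made-up cutoff, not the TEA’s actual statistical model (which used a regression across subjects and grades); it just shows how a failing student can be counted as passing:

```python
PASSING_SCORE = 2100  # hypothetical cutoff on an invented scale


def counts_as_passing(prior_score, current_score, passing=PASSING_SCORE):
    """Projection-measure logic, roughly sketched: a student who failed
    but whose gain, extended one more year, projects above the cutoff
    is credited as passing for school accountability purposes."""
    if current_score >= passing:
        return True  # actually passed
    # Naive linear projection: assume next year's gain matches this year's.
    projected_next = current_score + (current_score - prior_score)
    return projected_next >= passing  # failed, but "on track" to pass


# A student who failed (2000 < 2100) but gained 150 points:
print(counts_as_passing(1850, 2000))  # True: projected 2150 >= 2100
# A flat-lining student who also failed:
print(counts_as_passing(2000, 2000))  # False: projected 2000 < 2100
```

The point of the sketch is the asymmetry it creates: for accountability purposes, the first student’s failure simply disappears from the district’s numbers.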
At last week’s subcommittee meeting, Hochberg said that 43 school districts were rated “exemplary” in 2008. After the projection measure was put into effect, the number of “exemplary” school districts jumped to 117 in 2009, and all but one of them were tied to advantages conferred by the projection measure, according to Hochberg.
The lesson here goes way beyond Texas.
Today’s breed of school reformers can’t stop talking about the importance of “data” as the fuel to drive school reform. If something isn’t “data-driven,” it isn’t any good. President Obama’s blueprint for education reform itself says that “districts will be required to implement data-driven interventions to support those students who are farthest behind and close the achievement gap.”
But data are tricky.
In Texas, the “data” show kids who failed a test as actually having passed.
School systems today are moving toward paying teachers using “data” from student standardized test scores that are highly suspect as a measure of student achievement.
Colleges and universities often make admissions decisions based on narrow “data” that are not compiled the same way from one state to another, from one school district to another.
An obsession with “data” gives the illusion of precision, but it is only an illusion.
Here are the 32 Principles of Data Interpretation authored by the late Gerald Bracey, an educational psychologist who was an expert on education research. They reveal the many ways data can be misused.
1. Do the arithmetic.
2. Show me the data.
3. Look for and beware of selectivity in groups.
4. When comparing groups, make sure the groups are comparable.
5. Be sure the rhetoric and the numbers match.
6. Beware of convenient claims that, whatever the calamity, public schools are to blame.
7. Beware of simple explanations for complex phenomena.
8. Make certain you know what statistic is being used when someone is talking about the "average."
9. Be aware of whether you are dealing with rates or numbers. Similarly, be aware of whether you are dealing with rates or scores.
10. When comparing rates or scores over time, make sure the groups remain comparable as the years go by.
11. Be aware of whether you are dealing with ranks or scores.
12. Watch for Simpson’s paradox.
13. Do not confuse statistical significance and practical significance.
14. Make no causal inferences from correlation coefficients.
15. Any two variables can be correlated. The resultant correlation coefficient might or might not be meaningful.
16. Learn to “see through” graphs to determine what information they actually contain.
17. Make certain that any test aligned with a standard comprehensively tests the material called for by the standard.
18. On a norm-referenced test, nationally, 50 percent of students score below average, by definition.
19. A norm-referenced standardized achievement test must test only material that all children have had an opportunity to learn.
20. Standardized norm-referenced tests will ignore and obscure anything that is unique about a school.
21. Scores from standardized tests are meaningful only to the extent that we know that all children have had a chance to learn the material that the test tests.
22. Any attempt to set a passing score or a cut score on a test will be arbitrary. Ensure that it is arbitrary in the sense of arbitration, not in the sense of being capricious.
23. If a situation really is as alleged, ask, "So what?"
24. Achievement and ability tests differ mostly in what we know about how students learned the tested skills.
25. Rising test scores do not necessarily mean rising achievement.
26. The law of WYTIWYG applies: What you test is what you get.
27. Any tests offered by a publisher should present adequate evidence of both reliability and validity.
28. Make certain that descriptions of data do not include improper statements about the type of scale being used. For example: “The gain in math is twice as large as the gain in reading.”
29. Do not use a test for a purpose other than the one it was designed for without taking care to ensure it is appropriate for the other purpose.
30. Do not make important decisions about individuals or groups on the basis of a single test.
31. In analyzing test results, make certain that no students were improperly excluded from the testing.
32. In evaluating a testing program, look for negative or positive outcomes that were not part of the program. For example, are subjects not tested being neglected? Are scores on other tests showing gains or losses?
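Some of these principles, Simpson’s paradox (No. 12) in particular, are easiest to appreciate with numbers in hand. Here is a toy example with invented pass counts: District A outscores District B within every student group, yet trails B overall, purely because the two districts test very different mixes of students.

```python
def rate(passed, tested):
    """Pass rate as a fraction of students tested."""
    return passed / tested


# Invented (passed, tested) counts for two student groups.
district_a = {"group_1": (81, 87), "group_2": (192, 263)}
district_b = {"group_1": (234, 270), "group_2": (55, 80)}

# District A has the higher pass rate in BOTH groups...
for g in ("group_1", "group_2"):
    assert rate(*district_a[g]) > rate(*district_b[g])

# ...yet District B has the higher pass rate overall, because A's
# tested population is dominated by the lower-scoring group.
total_a = rate(81 + 192, 87 + 263)   # 273/350, about 78 percent
total_b = rate(234 + 55, 270 + 80)   # 289/350, about 83 percent
assert total_b > total_a
```

Which district is “better”? The aggregate number and the subgroup numbers give opposite answers, which is exactly why Bracey warned that rates, counts, and group composition have to be checked before a comparison means anything.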
| July 14, 2010; 4:29 PM ET