Study shows how dumb we can be

A little-noticed but unusually detailed study of teaching practices, reported by Robert Rothman in the November/December issue of the Harvard Education Letter, delivers a depressing message you should keep in mind whenever you read anything about raising school achievement. I don’t care if it’s by an education school dean, or a state governor, or the U.S. secretary of education, or even me. If this new study is true then none of us really knows what we are talking about.

Consider all the ink and electrons my Post colleagues and I, plus the leaders of our local schools and commentators far and wide, expend on how each of our public schools has performed on the annual tests. These assessments are required under No Child Left Behind. They are wired into the culture now. They will continue in some form no matter how Congress changes that law.

Some of us freak out over even relatively small differences between schools, like a ten percentage point gap in proficiency rates. Districts have changed principals and curriculums based on such results. Give that low-performing elementary school more phonics. Get some reading coaches into that middle school where the scores dipped.

But the Study of Instructional Improvement, described in Rothman’s article, rips a big hole in the idea that changes in those schools’ reading programs will have much effect on what is going on in their classrooms.

The study, led by Brian Rowan of the University of Michigan, found extraordinary differences in what teachers in adjoining classrooms were doing, even in schools supposedly ruled by comprehensive reform models that dictated how everyone used every hour of the day.

“For instance,” Rothman reported, “the study showed that a fifth-grade teacher might teach reading comprehension anywhere from 52 days a year to as many as 140 days a year. Similarly, first-grade teachers spent as little as 15 percent to as much as 80 percent of their time on word analysis. Thus, the study found, students in some classrooms may spend the majority of their classroom time on relatively low-level content and skills, while their peers in the class next door are spending much more time on higher level content.”

Okay, you say, that’s easy to fix. Just watch the teachers more closely. Make sure they are all using the higher level content. Guess again, said Rowan’s colleague Jenny DeMonte. She discovered the student gains from highest level practices, such as examining literary techniques and sharing writing with others, were no better than those produced by low-level practices, like asking questions that have answers at the back of the textbook chapter and summarizing story details.

I know. It’s just one study. But few others have tried so hard to gauge educator choices. Teachers at 112 schools kept daily logs of the amount of time they spent on reading and language arts, what they emphasized in particular topics, and what content and methods they used.

I am excited about a new Web tool devised by Friends of Choice in Urban Schools, a non-profit organization that promotes D.C. charter schools. Go to the site and watch it display changes in proficiency rates of individual D.C. schools, both regular and charter, over the last three years. This is an obsession for those of us following the efforts to turn the school district around.

Does the Michigan study mean that group’s work was wasted? Many experts don’t think so. The Michigan data show that on average some reform models did better than others. We have long known that teacher practices vary widely. We can work at that, and study some of the team-oriented schools that have closed those gaps. We can try to figure out why high-end methods don’t work better than low-end.

But we ought to resist what history shows is our instinct to forget inconvenient results and keep doing what we are doing. Ignoring hard truths is not the best way to help our kids.


By Jay Mathews  | January 10, 2010; 6:00 PM ET
Categories:  Metro Monday  | Tags:  teacher variability; myths on raising achievement; school reform models; Robert Rothman; Brian Rowan; Harvard Education Letter;  


"I don’t care if it’s by an education school dean, or a state governor, or the U.S. secretary of education, or even me. If this new study is true then none of us really knows what we are talking about."

Yes, but at least you admit it. Not so with the other characters you mention whose livelihoods and offices depend on fooling gullible publics into believing they have failsafe solutions to often manufactured problems.

Posted by: dz159 | January 10, 2010 10:15 PM | Report abuse

Should we also be concerned about the assessments? Could it be that some assessments aren't sensitive enough to detect the effects of the "highest level practices?" Might they at least contribute to the problem?

Thanks for alerting us to this study, which I'll read today.

Posted by: ClausvonZastrow | January 11, 2010 7:54 AM | Report abuse

How anyone could think the current approach is the key to future success is beyond me. Just look at the data (which has somehow become the magic bullet): it demonstrates that even our so-called high-achieving kids are merely proficient, which doesn't mean very much at all. As we stick our heads in the sand and search ever further into the darkness, other countries are increasing their competitive edge. That is for three simple reasons. First, it is a matter of values. In most countries the student is expected to come to school with a serious mind-set, to work hard, and to make learning a priority even if they have very few resources at home. Meanwhile, we are trying to get kids to put away their cell phones and actually read, and we spend precious hours deterring parents who think their kids should get A's no matter what. Second, the emphasis abroad is on inquiry, problem solving and critical thinking, not rote memorization. Third, those who are not college bound still walk away with an excellent education and a skill. How can we hope to compete? We do not trust or value our educators, while other countries hold them in great regard. Our teachers have to worry about poor behavior, following scripted lessons, collecting data, constant dialogue with parents, and punishment for poor test scores. One must ask why anyone in their right mind would pursue such a highly controlled, overworked, and undervalued profession. Since No Child Left Behind, there has been little to no progress in higher-level thinking skills, the dropout rate is astronomical, teachers work under threats and intimidation, and tyros make and judge policy decisions. It's a mess. Too bad someone with a little common sense (not an axe to grind or a big paycheck to pursue) isn't given authority to revitalize education. The liberals make excuses for kids and underestimate them while the conservatives try to control the curriculum and purse strings.
Meanwhile, we manufacture nothing but arms and poor entertainment. In a nation where intellectualism and scholarship are seen as somehow subversive, is it any wonder that we have the peanut gallery commenting on and guiding something it has little to no knowledge of?

Posted by: lk11 | January 11, 2010 8:04 AM | Report abuse

Claus is right about the "assessments" (really, standardized tests), though I would put it more strongly. The tests this team has been using are set up to rank students, not to assess higher-order thinking skills. So when the study reports no difference, it is certainly likely that the tests cannot pick up key differences. If we want to know whether students are learning complex thinking skills, deeper analysis, synthesis, evaluation, application, never mind creativity, we have to use something other than the narrow and antiquated tools researchers seem stuck with. This matters anytime we want to know about more than memorization, recall, rote application, or low-level analytic skills. If some teachers are trying to teach those deeper skills, it may not lead to improvement in the rote skills that are tested and still lead to meaningful improvements in unmeasured areas. If you do not search for it, you will not find it.

Back in the 1990s, Fred Newmann and his colleagues studied aspects of teaching and learning in Chicago. Among other things, they found that some teachers, using less didactic approaches, got students to engage in deeper learning. The results were visible in the kinds of actual work the students did, such as their classroom writing. In this case, the students also got somewhat better standardized test scores, but that is not always the case. The Queensland, Australia, Rich Tasks project had students engaged in extensive tasks. These students clearly could do better in complex thinking and application, but their standardized routine test scores were not much different. (The Chicago Consortium on School Research printed Newmann et al.'s reports; go to and search for Queensland, and you can find links to a wealth of data on the Rich Tasks project.)

Monty Neill

Posted by: montyneill | January 11, 2010 8:41 AM | Report abuse

RE: Fair Test

Monty asserts:

These students clearly could do better in complex thinking and application

If the improvement was so "clear," how did these folks know? What measurements or assessments did they use? Did each teacher determine for him- or herself the level of improvement in "complex thinking and application"?

I often fear that Monty's "Fair Test" really means NO Test (and no other even comparable, much less objective, measures).

Jay uses percentage of students taking at least one AP or IB test, regardless of whether they demonstrate any proficiency in their answers. One can certainly debate how meaningful that assessment is -- but at least it is an attempt at an assessment. I always appreciate Monty's comments -- but on occasion I'd appreciate a little indication of what he believes is a "Fair Test" and how those using or taking that test are doing.

Posted by: mct210 | January 11, 2010 8:59 AM | Report abuse

Can we find out how DC-CAS scores vary from class to class within a school? If it’s a lot, then that could be some evidence that what and how teachers are teaching does make a difference. If not, then ???

I’m guessing that there’s not a wide variance – that the school as a whole is a good model – and it’s more something about the kids who populate a school than something about the teachers who work there that makes the difference in the scores. This is not rocket science, as they say. It’s also not a popular conclusion here in DC, under Michelle Rhee, or in the NCLB world, where it was decided, based on no research (I bet) that changing the teachers and principals would “turn” the schools. It hasn’t worked. When, indeed, will we start relying on knowledge instead of speculation?

Jay, have you read “Drive” the new book out by Dan Pink? It sounds like you have. He talks about how we’ve been ignoring decades of scientific research about what motivates people. I wish he could get a meeting with Arne Duncan and urge him to call off this whole “Race to the Top” business based on the findings of real scientific research that says the carrot and stick approach doesn’t work well in many fields (e.g., education), which involve intrinsic motivation. Maybe you could facilitate the meeting. What Pink is saying isn’t new, untested or flashy in any way – he just says it well, at a time when it really needs to be heard. Here’s a link to the transcript of an NPR interview with him

Also, here’s a link to an audio of the Kojo interview with Pink:
It’s more directly relevant, as Kojo relates Pink’s work to Chancellor Rhee’s DCPS reforms.

PS. This is a re-post of a disappeared comment I made last night when this piece first came up online in article (vs blog) format.

Posted by: efavorite | January 11, 2010 9:09 AM | Report abuse

Mr. Mathews shows the visions of a hack. He's a waste of money and deserves to be among the next set of pink slips at the Post. Maybe he will be able to land a job writing up maintenance contracts for the Metro, where he would fit right in.

Posted by: AppDev | January 11, 2010 10:28 AM | Report abuse

The fact that even teachers in schools with a standard curriculum are doing things very differently is no surprise. Whether principals are too busy answering to district officials or are simply too lazy, there is not sufficient evaluation of what's happening in classrooms.
As to the study's finding that even students in classrooms where teachers are using high-level practices don't show significantly different gains, I wonder whether that has anything to do with inconsistency across grades. If a fifth-grade teacher is using high-level practices, is that enough to counteract the low-level practices students have seen from K-4? I doubt it.
There must be an effort to align not only curriculum, but the actual practices of individual teachers.

Posted by: jdsena | January 11, 2010 10:34 AM | Report abuse

Good post, Jay, and exactly what our study of successful urban charter schools showed (see Chapter 10).

However, do pay attention to a prior post noting the work of Fred Newmann on variation across classrooms in schools as well as the evidence that higher level tasks produce higher levels of understanding. Good reminder for everyone that this student performance stuff is not just formulaic.

Posted by: katherinemerseth | January 11, 2010 10:51 AM | Report abuse

Hey, Jay - Here’s a study that could be done fairly easily, I think, with already existing data:

Do a tracking study of the variance in SAT and/or AP scores in selective private and public schools, e.g., Sidwell Friends, GDS, Thomas Jefferson, Banneker. What kinds of patterns are there? Any years when the scores vary pretty dramatically?

Then follow up the data analysis by finding out how educators have reacted to these patterns (if at all). For instance, are teachers called on the carpet if scores decline one year? Are they exhorted to change their teaching methods; are they fired or threatened with dismissal if they don’t get the scores up? In other words, are teachers of high-performing students held accountable in the same way that teachers of low-performing students are? Do teachers get a raise or bonus if scores increase considerably? Is that then rescinded if the scores go down the next year? What do teachers have to say about yearly fluctuations? Do they take full responsibility for the results, good or bad, or do they think differences in kids’ preparedness, changes in the test or other factors play a role?

Of course, this presumes that we respect teacher input and think that teacher quality may affect high-performing students in ways similar to how it affects low-performing students. As far as I can tell, education leaders have determined that teachers have a huge effect on the general learning and standardized test scores of low-performing students and are strangely incurious about teachers’ effects on other students. It also presumes that we are interested in achievement overall and in the value of careful and comprehensive academic research.

Posted by: efavorite | January 11, 2010 10:56 AM | Report abuse

How anyone could think the current approach is the key to future success is beyond me. Just look at the data (which has become somehow the magic bullet) that demonstrates that even our so called high achieving kids are merely proficient.. not meaning very much at all.

Posted by: lk11 | January 11, 2010 8:04 AM
What a bitter view of public education in this country.

Unfortunately this view is correct in a nation where educational policy is based upon the view of politicians, via No Child Left Behind, that every child should meet the standards of the least common denominator. Imagine the benefits if the medical schools in this nation simply adopted the policy that every student for a medical degree should meet the standards of the least common denominator. There would be no shortage of doctors.

Think how great public education in this nation will be in 2014 when all students will be proficient in reading by the end of the third grade as mandated by NCLB.

Think how great it would have been if the politicians had mandated that in 2014 all Americans have a job or that all Americans are healthy.

Posted by: bsallamack | January 11, 2010 2:39 PM | Report abuse

lk11--- don't be deceived by the jargon. There is often an advanced category in these stats, with lots of kids there and doing work at high international standards, but we usually don't report that, because lack of many proficient kids in the cities is considered more newsworthy.

Posted by: Jay Mathews | January 11, 2010 2:46 PM | Report abuse

Monty is right about the inadequacy of the tests to measure higher-order skills. Also, I didn't have space to point out that the researchers thought one problem with the higher-order-skills teachers was that they were introducing twice as many topics as the middle-skills teachers (who had the best results) and the lower-order-skills teachers. Less is more, some say.

Posted by: Jay Mathews | January 11, 2010 2:50 PM | Report abuse

I don't know a nice way to put it, but systemic cheating on standardized testing (by the institution or teacher) is not exactly hard. I don't want to publish a how-to list, at least under my own name, but I see it. NEVER trust the standardized testing of a teacher or school trying to use it as proof of improvement, especially a radical short-term gain.

Posted by: mamoore1 | January 11, 2010 4:29 PM | Report abuse

Spineless university presidents' and chancellors' costs are borne by their students: $3 million in extravagant, arrogant spending by UC President Yudof for UC Berkeley Chancellor Birgeneau to hire consultants, when the work can be done internally and impartially.

These days, every dollar in higher education counts. Contact Chairwoman Budget Sub-committee on Education Finance Assemblywoman Carter 916.319.2062 and tell her to stop the $3,000,000 spending by Chancellor Birgeneau for consultants.

Do the work internally at no additional costs with UCB Academic Senate Leadership (C. Kutz/F. Doyle), the world – class professional UCB faculty/ staff, & the UCB Chancellor’s bloated staff (G. Breslauer, N. Brostrom, F. Yeary, P. Hoffman, C. Holmes etc) & President Yudof.

President Yudof’s UCB Chancellor should do the highly paid work he is paid for instead of hiring expensive East Coast consultants to do the work of his job. ‘World-class’ smart executives like Chancellor Birgeneau need to do the hard analysis and make the tough-minded, difficult decisions to identify inefficiencies.

Where do the $3,000,000 consultants get their recommendations?
From interviewing the UCB senior management that hired them and approves their monthly consultant fees and expense reports. Remember the nationally known auditing firm that said the right things, submitted recommendations that senior management wanted to hear, and fooled the public and state and federal agencies?

$3 million impartial consultants never bite the hands (Chancellor Birgeneau/ Chancellor Yeary) that feed them!

Mr. Birgeneau's accountabilities include "inspiring innovation, leading change." This involves "defining outcomes, energizing others at all levels and ensuring continuing commitment." Instead of deploying his leadership and setting a good example by doing the work of his Chancellor’s job, Mr. Birgeneau outsourced his work to the $3,000,000 consultants. Why doesn't he engage UC and UC Berkeley people at all levels to examine inefficiencies and recommend $150 million of trims? Hasn't he talked to Cornell and the University of North Carolina — which also hired the consultants — about best practices and recommendations that will eliminate inefficiencies?
No wonder the faculty, staff, students, Senate & Assembly are angry and suspicious.

In today’s recession economy, three million dollars is an irresponsible price to pay when a knowledgeable ‘world-class’ UCB Chancellor and his bloated staff do not do the work of their jobs.

Take action: use the phone. Together, we will make a difference: save $3 million for students!

Posted by: Moravec36 | January 11, 2010 5:18 PM | Report abuse

If it's not longitudinal data, we should just completely ignore it for effectiveness studies. Even then, these data are not useful for comparing classes or schools, since you can't control for student variation.

I love all of these super-geniuses who think that nothing meaningful can be tested. Did you all go to school where there were no tests? If it can't be tested, how do you determine if anyone learned what you taught them or if they just went home and had Mom or Dad do it for them?

Posted by: staticvars | January 11, 2010 5:29 PM | Report abuse

Open your eyes...
this data comes straight from the Department of Education

U.S. fourth-grade students' scores on the combined reading literacy scale did not measurably differ between 2001 and 2006. Average scores for the literary and informational subscales in 2006 also did not measurably differ from the average scores in 2001.

Five years of NCLB, and what do we have to show for it? Further data: 67% of 4th-grade students read at Basic levels or below, 20% are reading at Proficient or above, and 13% are Advanced.

The dropout rate is horrendous, as is the rate of incarceration. Sure, we had tests as kids, but for me they did nothing but discourage my love of learning. Spitting back facts really turned me off. Real-world practice, writing, reading and communication, arts-based learning, and inquiry all develop higher-level thinking skills and prepare folks for college and/or the real world. If you have ever traveled, you would know that the average cab driver in a foreign country knows more about our history and politics than we do. The facts speak for themselves.

Posted by: lk11 | January 11, 2010 6:17 PM | Report abuse

Jay, you write, "But we ought to resist what history shows is our instinct to forget inconvenient results and keep doing what we are doing. Ignoring hard truths is not the best way to help our kids."

But you seem a bit defensive about this new study. Seems you are having trouble with the idea these "inconvenient results" might be accurate.

Posted by: gafCO | January 11, 2010 8:36 PM | Report abuse

Educated persons know that "content" refers to the substance or subject matter, not to the material or media.

Thus "low-level content" is a mistake, as is "higher level content"; the substance cannot be characterized as of a certain level.

Perhaps you mean basic skills, or simple material, or rote memorization. Perhaps you could say whatever it is you do mean.

Or perhaps we have moved a step closer to being dumb, accepting the current fallacy that "content" means "what is contained", as in the common redundancy "the content contained herein".

Posted by: jpk1 | January 11, 2010 9:12 PM | Report abuse

Good insight, gafCO. I like to think that Jay is becoming more reflective about the influence inherent in his position and is looking at the unfolding reform story more responsibly, as something more important than another book opportunity for himself.

Posted by: efavorite | January 11, 2010 9:18 PM | Report abuse

Very confusing: are you saying the study found that teaching techniques don't matter? How is that different from saying teachers don't matter?

What *are* the central factors, then, in successful student performance?

Having two parents (in the home) who actually care about education? Wouldn't that pretty much rule out success for most DCPS kids?

Posted by: RealityCheckerInEffect | January 16, 2010 11:37 PM | Report abuse



© 2010 The Washington Post Company