
Capital Gains short of cash

Chancellor Michelle A. Rhee says that she's ready to continue the District's two-year experiment with paying middle schoolers as much as $100 a month for good grades, behavior and attendance, but only if she can find the money.

Data from the first year (2008-09) of the Capital Gains program, a joint venture with Harvard economist Roland Fryer, showed higher reading test scores for Hispanics, boys and students with behavior problems. Rhee said in April that she wanted to see 2009-10 results before making a decision on renewing the initiative, which cost the District about $1.2 million this past year. But now she says she's ready to continue on the basis of year-one results, and that even if the second-year data is "flat or not moving the ball forward," the program is worth pursuing.

"The biggest issue is money," Rhee said, adding that DCPS is looking for private grants to fill in the funding gap.

The District has some time to line up financing, she said, because the program, which involves about 3,000 middle school students at 15 schools, usually doesn't get up and running until early to mid-October.


By Bill Turque  |  June 25, 2010; 1:40 PM ET


It’s time to repeat my original comment on this project, made on 4/10/10

I read the 4,328-word Time article twice and found that, with one exception, no numbers were used to describe the purported success of the program: no percentages, no statistics, no dollar amounts.

Numbers WERE used to describe the study design, but not the results.

Fascinating. Try to think of one other news report of a statistical study of any kind where this is the case. Also, releases of survey results are typically accompanied by links to the data so people can check and interpret for themselves. No such link appears here. So far, all the interpretation is done by a journalist who provides practically no data. The one instance in which a number is used to describe an outcome is this:

“…according to Fryer's results, kids [in DC] with a history of serious behavioral problems saw the biggest gains in test scores overall. Their reading scores shot up 0.4 standard deviations, which is roughly the equivalent of five additional months of schooling.”

Interestingly, this finding was prefaced two paragraphs earlier by a caveat on sample size: “Because of the small size of the school system, the Washington sample was less well balanced than those in the other cities.”

In other words, we can’t be confident about the one piece of positive (or any) data presented.

This looks less like an explanation of a serious educational study and more like the rollout of a public relations campaign.

And here’s Brandenburg’s comment, with a link to his data:
Fryer’s study: a complicated-looking crock.
It can be found at the link that Turque gave. I've skimmed it, and I must say that it is a masterpiece of obfuscation. He makes fairly simple stuff seem like things that only someone with a PhD in statistics and economics could possibly understand.

He and Rhee then cherry-pick out a couple of groups in which there was a little bit of growth in the experimental group as opposed to the control group, ignore all of the cases where the exact opposite occurred, and then declare victory.

Sure, if you make up enough variables, and roll the dice enough times, you are probably going to get a positive outcome every so often. That’s what happened.

It’s a big, complicated-looking crock.
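The "roll the dice enough times" point above can be made concrete with a short simulation. (This is a hypothetical illustration, not Fryer's actual data: the 40-test count and the 0.05 significance threshold are assumptions chosen for the example.) If you run 40 subgroup comparisons where no real effect exists anywhere, you should still expect about two "significant" results, and the odds of getting at least one are roughly 87 percent:

```python
# Hypothetical illustration of the multiple-comparisons problem:
# run many null-effect subgroup tests and count spurious "hits".
# The 40-test count and 0.05 threshold are assumptions for the
# example, not figures taken from Fryer's study.
import random

random.seed(1)

ALPHA = 0.05    # conventional significance threshold
TESTS = 40      # number of subgroup comparisons (assumed)
TRIALS = 100_000

# Under the null hypothesis, each test's p-value is uniform on [0, 1],
# so a "significant" result occurs with probability ALPHA per test.
at_least_one = sum(
    1 for _ in range(TRIALS)
    if any(random.random() < ALPHA for _ in range(TESTS))
)

print(f"Expected false positives per study: {ALPHA * TESTS:.1f}")
print(f"P(at least one spurious 'hit'): {at_least_one / TRIALS:.3f}")
# Analytically: 1 - 0.95**40 ≈ 0.871
```

In other words, one or two "significant" subgroups out of 40 comparisons is exactly what a program with zero effect would be expected to produce.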

Quoting Turque:

'Overall, the awards showed only a “marginally significant” effect on standardized reading test scores. Effects on math test scores were not statistically significant.'

Read my blog for more details:

Posted by: efavorite | June 25, 2010 2:25 PM | Report abuse

I think this one is obvious. Of course it is going to work to some extent. It is a great idea and I hope she gets the money.

I also think small class size works.

I also think it would help if she gave the teachers some credit.

Posted by: celestun100 | June 25, 2010 3:53 PM | Report abuse

Is the program available in ALL DCPS middle schools? If not, why not? What criteria were used for selecting the schools if the program is not in existence in all middle schools?

Posted by: vscribe | June 25, 2010 4:27 PM | Report abuse

The program was a failure. There were 40-odd groups of kids looked at and only one showed a statistically significant improvement, which is about what you would expect by random chance alone.

Some of Fryer's tables show DCCAS scores of 640 with a standard deviation of 80 or so. This makes no sense, since all 6th graders must have a score of 6XX, 7th graders 7XX, and so on. Fryer (or his grad assistant) clearly has no idea how the DCCAS is set up or graded.
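The score-scale objection can be checked with simple arithmetic (the numbers below are illustrative, not drawn from Fryer's tables): for values confined to a single grade band of width 100 points (e.g. 600-699), the standard deviation can never exceed half the width of the range, so an SD of about 80 is only possible if different grade levels were pooled on one scale:

```python
# Sanity check on the DCCAS scale objection (illustrative values,
# not Fryer's data): scores confined to one grade band (6XX, i.e.
# 600-699) cannot have a standard deviation near 80.
from statistics import pstdev

# Worst case within one band: half the students at 600, half at 699.
# This maximizes the spread, yet the SD only reaches 49.5.
extreme = [600] * 50 + [699] * 50
print(f"Max possible SD within one grade band: {pstdev(extreme):.1f}")  # 49.5

# Pooling two grade bands, by contrast, easily produces a large SD.
pooled = [620] * 50 + [750] * 50
print(f"SD when 6th and 7th grades are pooled: {pstdev(pooled):.1f}")  # 65.0
```

So a reported SD of ~80 around a mean of 640 implies the analysis mixed scores across grade levels, consistent with the objection above.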

This completely ignores the unbalanced sample sizes and so forth that also ruined the study.

Posted by: Wyrm1 | June 25, 2010 6:55 PM | Report abuse



© 2010 The Washington Post Company