
Government on the scale, Part III

By Ezra Klein

An academic involved in some efforts to evaluate government programs says that both Sen. Kent Conrad and this reader are right:

I suspect Senator Conrad and the anonymous letter writer use the term “metrics” to denote different things. There is indeed a profusion of program activity metrics in many federally funded programs. Many academics--myself included--have participated in efforts to produce these numbers. GPRA measures are quite valuable to document that you have done the work, and to characterize the populations an intervention serves. These measures do not provide the kind of program evaluation information policymakers really need to understand which programs are most effective or cost-effective. Funders generally require too many numbers and reports, which have a way of being collected in a nice binder that sits on a shelf, pleasantly undisturbed.

By Ezra Klein  | January 3, 2011; 5:21 PM ET
Categories:  Government  
Previous: File under 'can't win'
Next: Reconciliation

Comments

Umm...organizations like MDRC have been doing rigorous evaluations of job training and education programs for about 40 years now (type mdrc.org into the google to see some examples). The standard for such evaluations is very high these days, with randomized trials more and more the model (just like in medical experiments). NBER also has a slew of econometric studies on ed and JT programs. With JT and education programs, causality is complex but not impossible to sort out. Tools such as propensity score matching, instrumental variables, and regression discontinuity designs have brought us much closer to isolating causality in these cases.
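For the curious, here is a toy sketch of one of those tools, propensity score matching, on synthetic data (Python; the covariates, selection rule, and effect size are all hypothetical, made up for illustration). The idea: estimate each unit's probability of entering the program from observed characteristics, then compare each participant to the non-participant with the nearest estimated probability, so that like is compared with like.

# Toy sketch of propensity score matching. The data-generating process,
# variable names, and effect size below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000

# Synthetic covariates, e.g. prior earnings and years of schooling.
x = rng.normal(size=(n, 2))

# Selection into the (hypothetical) training program depends on the
# covariates, so a naive treated-vs-untreated comparison is biased.
p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
treated = rng.random(n) < p_treat

# Outcomes: the true program effect is +2.0, plus covariate effects and noise.
y = 2.0 * treated + 1.5 * x[:, 0] + 1.0 * x[:, 1] + rng.normal(size=n)
print("naive difference in means:", y[treated].mean() - y[~treated].mean())

# Step 1: estimate each unit's propensity score, P(treated | x).
ps = LogisticRegression().fit(x, treated).predict_proba(x)[:, 1]

# Step 2: match each treated unit to the control with the nearest score.
controls = np.flatnonzero(~treated)
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched = controls[idx.ravel()]

# Step 3: average the treated-minus-matched-control differences.
print("matched estimate:", (y[treated] - y[matched]).mean())

On this synthetic data the naive comparison is biased by self-selection, while the matched comparison lands near the true effect of +2.0.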

What Conrad is doing is spreading disinformation and he needs to be called on it. Someone like Krugman or another Senator has to explain to him (and the American people) the difference between good research and bad research. It's not that hard if you are patient and explain it right.

Posted by: nyusocofed | January 3, 2011 6:02 PM | Report abuse

I would agree with the original writer and this one too... less so with Sen. Conrad. There is a lot of valuable academic and on-the-ground evaluation work that indicates effectiveness and impact. In both the service and evaluation communities, a lot of value is placed on quality measurement in order to maximize program effectiveness - "accountability" and increasingly scarce public and private grant dollars are driving this.

However, what grantees are often required to submit to federal agencies is just numbers for numbers' sake that do not contribute to good program or policy design and, in fact, can drive program implementation - or at least the individuals responsible for reporting - off the rails. So the numbers that are filtering up to Congress may be of limited value, but that doesn't mean good numbers don't exist.

Posted by: kcar1 | January 3, 2011 6:31 PM | Report abuse

Federal agencies are required to evaluate their performance, programmatically and financially, every year under the Federal Managers' Financial Integrity Act (FMFIA). In my own research I found that federal agencies, like the EPA, were not taking this requirement seriously. Inspectors General (IGs) are also tasked with ensuring that their respective agencies operate with optimum efficiency, effectiveness, and economy. GAO and OMB (through PART reviews) also do their part in this regard. A copy of a report I participated in while with the EPA IG, regarding EPA's implementation (or lack thereof) of FMFIA, is available at http://www.epa.gov/oig/reports/2009/20090806-09-P-0203.pdf

Posted by: bholtrop1962 | January 3, 2011 11:05 PM | Report abuse

The problem with 'metrics' is twofold:
1. As the other reader noted,
"In some programs, future funding gets tied to performance, creating an unhealthy obsession with the metrics."
In this case the metrics distort reality - all efforts are directed at getting good numbers on the metrics, rather than at actual good performance. This is a sub-pathology of
2. "These measures do not provide the kind of program evaluation information policymakers really need to understand which programs are most effective or cost-effective."
Of course they don't. All models are false but some are useful (George E.P. Box). The metrics never correspond with reality; the trick is to know this and manage appropriately.

One of the symptoms of creeping managerialism is an obsession with what are known as KPIs (Key Performance Indicators), a pernicious form of 'metrics'. An organization that implements KPIs tied to incentives is doomed to rediscover that any KPI can be maximized in an unproductive way.
See
http://www.joelonsoftware.com/news/20020715.html
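To make that concrete, here is a hypothetical toy simulation (Python; the scenario and all numbers are invented): a help desk scores agents on tickets closed per day. An agent who games the KPI closes tickets fast without fixing anything, so the KPI soars while the outcome anyone actually cares about falls.

# Hypothetical toy: optimizing a proxy KPI can hurt the real objective.
import random

random.seed(1)

def work_day(strategy):
    # Returns (tickets_closed, problems_actually_resolved) for one 8-hour day.
    closed = resolved = 0
    hours = 8.0
    while hours > 0:
        if strategy == "careful":
            hours -= 2.0              # take the time to really fix the problem
            closed += 1
            resolved += 1
        else:                         # "game_the_kpi"
            hours -= 0.5              # close fast; most fixes don't stick
            closed += 1
            resolved += 1 if random.random() < 0.2 else 0
    return closed, resolved

for strategy in ("careful", "game_the_kpi"):
    closed, resolved = work_day(strategy)
    print(strategy, "-> KPI (tickets closed):", closed,
          "| actually resolved:", resolved)

The gaming strategy wins on the KPI by a factor of four while resolving fewer problems, which is exactly the pathology described above.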

For a particularly horrifying KPI story, see the current hell of UK higher education 'reforms':
http://www.nybooks.com/articles/archives/2011/jan/13/grim-threat-british-universities/

Posted by: DougK1 | January 4, 2011 2:53 PM | Report abuse
