Posted at 11:00 AM ET, 02/15/2010

Willingham: In defense of measurement

By Valerie Strauss

My guest is cognitive scientist Daniel Willingham, professor at the University of Virginia and author of "Why Don't Students Like School?"

By Daniel Willingham
I have recently written about the problems in trying to use student achievement data to measure teachers’ effectiveness.

But that doesn’t mean that I think teachers’ effectiveness should not be measured.

Indeed, I think it’s essential that it is.

People focus on just one of the uses to which measurement of teachers could be put: rewarding the successful and firing the unsuccessful. But if you’re interested in improving the practice of teaching, you must have a method of measuring teachers’ effectiveness.

That conclusion is part and parcel of the nature of education. Education is not a natural science. It’s an applied science.

Natural sciences describe the world as it is. Physics describes the nature of matter. Biology describes living systems. Cognitive science describes the workings of the mind.

In contrast, applied sciences do not describe the world as it is. They make the world more like it ought to be. To do so, applied scientists create artifacts. For example, civil engineers build bridges and dams. Aeronautical engineers build airplanes. Urban planners design city infrastructure and transportation systems. Architects design buildings.

Education is an applied science. Educators create artifacts--curricula and lesson plans--in an effort to make the world more like it ought to be.

Applied sciences are inevitably goal-driven enterprises. Biologists take the world as it comes and try to describe it. But an engineer designing a computer chip must define a goal. Is she trying to make it faster, more reliable, more energy efficient? Some combination thereof?

Crucially, the engineer cannot evaluate whether or not she is moving towards her goal unless she can measure the speed of the chip, the power demands, or whatever else is pertinent to her goal.

So it is with education. We must define what the goal of the teacher is, and have measures to tell whether or not a teacher is making progress toward the goal. Absent those measures, when we make changes in the classroom we have no way of knowing whether the changes have made things better or worse.

I would argue that right now, of the measures that can be administered broadly, we have reasonable ones for a narrow spectrum of student outcomes. The NAEP [National Assessment of Educational Progress] tests do a good job of testing factual knowledge within disciplines.

The ability to measure broadly (that is, to measure lots of kids) is important because education interventions are notoriously difficult to scale up.

I haven’t met many people who think that the NAEP tests capture the full range of outcomes we hope for in education. The consequence is that we can’t get a research toehold on improving educational practice for any of the other outcomes we might care about. We can make changes, but we have no way of knowing whether or not they work.

We’re like an engineer who hopes to design a faster, more reliable, cool-running chip but can only measure temperature.

It is discouraging to contemplate doing this very basic work. But until we can adequately measure the things we care about, we can’t improve them. And more likely, we will overemphasize the things that we can measure, instead of the things that we have determined, based on broader goals and values, are important to emphasize in education. In other words, we’ll end up with a slow, unreliable, but really cold computer chip.

In point of fact, this discussion about measures ought to follow a discussion of goals. After all, the measures must be determined by the goals. Yet there is remarkably little discussion of educational goals in this country.

More on that to come.

-0-

Follow my blog all day, every day by bookmarking http://voices.washingtonpost.com/answer-sheet/

And for admissions advice, college news and links to campus papers, please check out our new Higher Education page at washingtonpost.com/higher-ed. Bookmark it!



By Valerie Strauss  | February 15, 2010; 11:00 AM ET
Tags:  accountability, teacher assessment  

Comments

Mr Willingham,
Thanks for a series of serious and thought-provoking topics.
You say it is "essential" that (public) school teachers' effectiveness be measured.
You teach at University of Virginia, Mr. Jefferson's highly reputable institution of advanced education. What means have UVA and other universities across the country been using to evaluate their faculty's "effectiveness"?
Are you observed in the classroom 2-3 times a year every couple years? Does a dean privately advise you and hand out an evaluation of your performance? Is it being proposed that your salary be tied to your students' test scores? Do politicians say professors are the main problem with learning in colleges?
Please "put the shoe on the other foot". Tell us how this would work at the college level. This is in fairness to your counterparts in elementary, middle and high schools.

Posted by: 1bnthrdntht | February 15, 2010 1:12 PM | Report abuse

Good points, 1bnthrdntht - but I'll guess that the professor would say that higher education is different. The students there are already considered qualified; they are already adults, and learning is their responsibility, not the professors’.

I do think there is a good comparison, though, to high school – and that is the case of Advanced Placement courses, where students are taking a college course and all are being assessed by one national measure.

I’ll wager that even proven AP teachers, with several years of experience, good qualifications and good reputations, have varying numbers of passing scores over the years. Should a teacher get a pay increase if her percentage of 5s (the highest score) increases significantly one year? What if that same teacher gets a much lower percentage of 5s the following year? Did the teacher get dumber? Work less? Should her pay be reduced that year, and should she be in danger of getting fired if she doesn’t get her numbers up? In making an assessment, you have to look at all the factors - changes in the test, in the students, in the schools, etc. It’s ridiculous to assume that the teacher is completely responsible for all the variables involved in student achievement, but that’s the system here in DC, and I fear the US Secretary of Education is going in the same direction.

Posted by: efavorite | February 15, 2010 3:00 PM | Report abuse

1bnthrdntht and efavorite: I would think it wouldn't matter where you teach. As Willingham wrote in the article, we have to first figure out our goals as an education system and as a country, then compare the performance to those goals. Doesn't matter if you're teaching elementary or college. The goals might be different but you should still follow the same procedure in determining the goals and measuring effectiveness against them.


Mr. Willingham: I am very much looking forward to further articles on the goals of K-12 education. I have tried to have this conversation with many people because, to me, you can't really talk about curriculum, teaching methods, success, etc. until you've defined what the goal is.

I'm interested to hear your thoughts about those goals.

Posted by: ms108 | February 15, 2010 3:20 PM | Report abuse

@1bnthrdntht & ms108: UVa and other universities have done nothing (that I know of) to seriously evaluate the effectiveness of teaching practices. Therefore, research on what might make college instruction more effective is piecemeal. At many (but not all) higher ed. spots, all of the evaluation is by students, and it is for the purpose of rating how well the teacher is teaching, not for the research purpose that I wrote about here.
@efavorite: I think the issue is exactly the same in higher ed as in K-12. Again, the point is evaluation of effectiveness for the purpose of research—how well are college professors meeting the goals set by the institution? UVa (or any other school) should be able to say “If you enter the College of Arts & Sciences, here’s what sort of outcomes we’re shooting for. Here’s what we’re trying to do for you.” Then, we should be able to evaluate the extent to which we’re delivering, which is a prerequisite for figuring out *how we can do it better.* This last point was what I was trying to get at in this blog piece. (And efavorite, if you read the Boston Globe op-ed linked to in this blog, you’ll see that I agree with you regarding using student scores to evaluate individual teachers.)

Posted by: DanielTWillingham | February 15, 2010 3:33 PM | Report abuse

I agree that the vast majority of colleges and universities do nothing to seriously evaluate the effectiveness of teaching practices, and that what little we do is based on student evaluations (either by class, retrospective surveys of engagement, or general statistics such as retention).
However, I think that it is a mistake to lump all institutions together by saying that no one has done a sufficient test of teacher effectiveness in terms of the goals of their institutions. This is essentially saying that the goals and planning processes are equivalently lacking across all of higher education.
Some colleges make more of an effort to say "here will be your outcomes" than others, and some departments or majors have more explicitly stated and testable outcomes than others. For anyone who does a premed college education, a good score on the MCAT is certainly one desired outcome.
I think a lot of colleges do a lot of planning for their population and for their institutional goals. These goals are then supposed to be guiding forces for curriculum redesigns, departmental reviews, and, in some cases, tenure reviews.
I absolutely agree that we aren't there yet, but I think that just because we don't have data that gives us general laws of teaching effectiveness doesn't mean that some places aren't setting goals and evaluating them with better data than others.

Posted by: formerDCPSstudent | February 15, 2010 4:02 PM | Report abuse

But I totally agree that there should be a discussion of goals before we do measurement. That said, it seems like the discussion of goals in the recent NYTimes piece on the Texas Board of Education shows how even an "open" discussion of goals can result in far less agreement than we might think, and how the setting of goals in one place can drive textbook standards in many others.

I was just arguing that some places in higher education have considered their goals (and how to evaluate them) more than others.

Posted by: formerDCPSstudent | February 15, 2010 4:07 PM | Report abuse

Prof Willingham -Thanks for directing me to the Boston Globe piece.

Now - what can we do to get President Obama to stop this foolishness before it gets too far?

Posted by: efavorite | February 15, 2010 4:17 PM | Report abuse

"We’re like an engineer who is hoping to design a faster, more reliable, cool-running chip who can only measure temperature."

Not exactly. The engineer is focusing on the artifact--on the means of accomplishing a specified aspiration--not on the "aim." El-hi measurement is currently focused on "hope/expectations/aims", termed "standards." The protocols and products of instruction are a black box between "standards" and "standardized tests" that shed no light on the instruction that has transpired.

Many "things we care about" in education can be observed directly, but we've been conned to believe that that the arcane psychometrics of standardized achievement testing is the only acceptable "measurement."

There is much consideration of "instructional goals," but little consideration of the artifacts for reliably achieving the goals.

Posted by: DickSchutz | February 15, 2010 4:48 PM | Report abuse

As with much else, whatever discussion of goals there has been over the past 20-odd years has been largely driven by the business community, which wants workers who are literate and numerate. Most parents want schools to teach their children to read with understanding and to calculate, too, but these are not the only things. Richard Rothstein's "Grading Education" contains a serious discussion about what the goals of education should be and proposes a system for holding schools accountable for achieving these goals. It is a symptom of our narrowed educational culture that proposals like Rothstein's receive no airplay in the statehouses and in DC, where policies are set.

Posted by: dz159 | February 15, 2010 9:37 PM | Report abuse

@formerDCPSstudent: yes, I think you’re right on all this.
@DickSchutz: right, the chip metaphor probably isn’t exactly spot on because the engineer measures the outcome directly whereas in schooling we’re measuring an outcome that is the product of many factors, only one of which is the artifact. The larger point is, I think, still valid. If we measure only one outcome, we can’t expect that anyone will pay much attention to the others.

Posted by: DanielTWillingham | February 16, 2010 9:53 AM | Report abuse

Engineering is totally an applied science. Voltage = Current x Resistance

As a current educator and former engineer, I miss the manufacturing sector's clearly defined goals -- "We need to ship the units by Tuesday."

Education is not an applied science and is therefore much tougher to measure than units (in my case, circuit boards and microcircuits) because products do not have to be motivated to do their homework and do not experience peak performance issues when taking exams.

Educational goals are based on standards, and standards are a state-by-state hodgepodge. The main educational goal for my 10-year-old dyslexic son -- passing the GED: only 60% of graduating high school seniors would pass the GED Tests on their first attempt.

Source: http://www.acenet.edu/Content/NavigationMenu/ged/pubs/GED_Testing_Program_FactSheet_20092.pdf

Robin Schwartz
www.mathconfidence.com

Posted by: mathconfidence | February 16, 2010 9:53 PM | Report abuse
