Posted at 12:26 PM ET, 05/17/2010

Is education research all dreck? -- Willingham

By Valerie Strauss

My guest is cognitive scientist Daniel Willingham, a psychology professor at the University of Virginia and author of "Why Don’t Students Like School?"

By Daniel Willingham
Sharon Begley, science editor at Newsweek, doesn’t have anything nice to say about education research. In a recent article, she refers to it as "second-class science" and "so flimsy as to be a national scandal."

I agree that there is a problem, but I don’t think she’s diagnosed it correctly.

There is a lot of excellent research in education. I spend most of my time reading basic scientific work and trying to understand what it means for classrooms and for policy, and much of what I draw on is education research.

There is, however, also a good deal of dreck.

There is a certain amount of poor science in other fields as well. Go to the psychology section of a large bookstore and you’ll see plenty of nonsense: books with crazy suggestions on dieting, love, self-actualization, and so on.

The difference between psychology and education is that psychology, as a field, is more vigilant in its self-regulation, particularly through its professional societies.

Suppose that I’m a legislator and I want to know what the latest research says about repression of childhood memories and whether recovered memories should be admissible as court testimony. Naturally, a legislator is not going to dig through the research literature himself or herself. There ought to be a national organization to which policy makers can turn for answers. Such an organization would stand ready to provide the best information available on a particular topic.

Within psychology there are two--and only two--such organizations. Both are deeply committed to scientific rigor, are run by scientists, and publish very high-quality research.

If I’m a legislator--or teacher--interested in a question such as, "What’s the latest research on how kids learn to read and how they respond to different ways of teaching reading?" where do I turn?

The American Educational Research Association (AERA) ought to be the logical place, but it has not shown much interest in taking on the job.

I think a large part of the reason for this is that it is an enormous organization that includes scholars from very different disciplines: psychology, economics, political science, critical theory, history, feminist studies, etc. These different fields not only have different criteria by which evidence is evaluated, they have different definitions of what it means to "know" something. Small wonder, then, that AERA is seldom ready to make a flat statement on a research issue.

This reluctance leaves a vacuum into which opportunists are happy to leap.

People ask me: "If it’s really true that there is no evidence supporting learning styles, how can there be professional development activities on them, and books published about them, and all the rest?" Because the people who do research on this sort of thing don’t speak with one voice to say: "We’ve looked into this and there doesn’t seem to be much to it."

And, on occasion, when a high-profile research article clarifying the lack of a research base is published, organizations like ASCD stand ready to publish outlandish defenses of discredited theories--for example, that even though a clear prediction of the theory is not supported, that doesn’t matter because teachers don’t want to use that prediction anyway.

Education researchers ought to care about this issue. It’s not enough to shake our heads sadly at how misinformed Sharon Begley is. Letting commercial interests and frank snake-oil salesmen influence or even take control of the national conversation on research issues damages the field, and ultimately harms students.

A start would be for the AERA membership to decide how to come to agreement on important issues. The easiest place to start would be deciding which issues within education are amenable to scientific analysis. There is precedent in other branches of science for how to reach agreement on complex questions.

Education researchers frequently lament policy makers’ cherry-picking of research findings to support positions that they advocate for non-research-based reasons. Until researchers get their act together, we will continue to invite them to do so.




Comments

Although I agree that the quality of education research is abysmal, I disagree with Willingham's example of learning styles.

The response he tries to discredit states: "Pashler and his team conclude that using learning-style assessments in schools is 'unwise' because they found minimal evidence that test performance improved when students' styles were diagnosed and matched to corresponding instructional treatments. Although we don't endorse this kind of matching, we disagree with the contention that test scores are all that matter in determining whether matching—or any intervention—works."

The crucial point is that any belief that test scores represent a valid measure upon which to base decisions has been discredited repeatedly by experts in testing.

The April 23, 2010, issue of Science magazine has a special section on science education, and several authors lament that the focus on test scores is destroying science because it rewards memorization and discourages reasoning.

One should start with the premise that test scores are dreck in educational research because, as testing expert Popham once stated, they are like trying to measure weight with a spoon.

Anyone with any knowledge of testing knows that research going back to Coleman has found repeatedly that test scores measure socioeconomic status (SES) and little more.

Posted by: zoniedude | May 17, 2010 2:55 PM | Report abuse

@zoniedude: The learning styles argument does not hinge on test scores (and this is part of the confusion that the ASCD people cause in their "rebuttal").
If learning styles exist, then you should be able to find some test (any test) where matching instruction to the style improves everyone's scores. The learning styles people have not found a test (or content) for which this is true.
What Pashler and colleagues conclude is not that test scores are the only thing that matters, but rather that we should restrict the "research-based" seal of approval to those concepts which have at least been documented in some form in the laboratory.
As Dan Willingham says, a clear prediction of learning-styles theory is that matching style with instruction should help. If a wide variety of instructional strategies helps in the classroom, you need not invoke learning styles to explain it; most students like a change of pace every now and then. To show that learning styles exist, you have to show that matching style with instruction works better for everyone than simply varying instruction.

But I agree with your point that test scores are misused. I don't think Pashler or Willingham (see his last column) would disagree that the emphasis on what is narrowly and conveniently tested through standardized reading and math tests does not serve our students well. And I think Willingham's proposal here, of an institution for educational research consensus, would do more to make that clear than the seemingly disorganized (but absolutely right) authors from Science whom you cite, or just about any real testing expert you can get your hands on.

Posted by: formerDCPSstudent | May 17, 2010 9:35 PM | Report abuse

Education research often lacks rigor because it is political in nature. It is truly one of those fields where the given set of variables and outcomes is not agreed upon by those doing the research, those putting it into practice, and the unwitting parents and children who think there is some agreement on how their child should be taught. The reason the professional groups fail is that they themselves are either afraid of wading into the politics or have their own political agenda.

Posted by: Brooklander | May 18, 2010 6:56 AM | Report abuse

ASCD is a nonprofit, nonpartisan membership association specializing in expert and innovative solutions in professional development. We are an association made up of thousands of practitioners, not researchers. As an organization, we believe in healthy debate, and we’d like to clarify a few things for the readers.

Our monthly newsletter, Education Update, has a recurring “responding to the research” column that allows respected experts to do just that—respond to current research. We choose people who we think can talk critically about new research, not just people who may align with one viewpoint or another. We hope to engage readers to think critically about what education research says, analyze it for bias and perspective, and make their own judgments. Harvey Silver and Matthew Perini, like many of the experts we publish, have spent years in the classroom and in schools. Silver and Perini disagreed with the study because of its limitations, so we hardly feel that deserves the label of “outlandish.” The following is a passage from their article:

“We want to clarify that the authors [Pashler et al.] only reviewed one approach to learning-styles-based instruction: grouping students by style of instruction. Differentiated instruction proponents, ourselves included, do not advocate this kind of matching as the optimal form of instruction … Distinguishing between our view of learning styles and the view examined in the study is important because the authors’ conclusion about the use of style assessments in schools depends on a view of style matching that not everyone shares. And reported lack of evidence for matching is being misinterpreted as a criticism of style-based instruction in general.”

Please visit us at ASCD Inservice [http://ascd.typepad.com/blog/2010/05/silver-perini-respond-to-learning-styles-critics.html] for continued conversation on learning styles.

Posted by: lmberry1 | May 18, 2010 11:45 AM | Report abuse

Too many, perhaps most, education researchers are poorly educated themselves. Every field has researchers who are sloppy or who don't care about rigorous standards, but in education, most don't even know the difference. They routinely cherry-pick evidence that supports their pet theories and toss everything else. They rarely have K-12 teaching experience and discount any input from experienced teachers who disagree with them (if they bother to ask). And they tend to have little interest in exploring current research in other fields, like psychology, that could enlarge their understanding.

When the researcher has multiple degrees in education, it generally means the research is based on self-perpetuating, academically incestuous ideas and methods that would be laughable in any other field. They spend years on badly designed experiments and studies that spawn pages of self-important jargon that says nothing useful.

I saw all of this happen up close when I took an assistantship in an ed psych department in graduate school. I was getting an MA in another field, but was very interested in working with people in education, thinking there should be collaboration.

What a mistake! I was shocked at the garbage being spawned. That was when I understood why education departments are considered to be academic bottom feeders. Sadly, some of the same research ended up being seminal in developing many of the reading programs pushed by NCLB.

I became a teacher years later through a career switcher program, but my experience at that ed psych department taught me to be extra careful when reviewing any kind of education research. There is some very good research these days, but I also see a lot of the same questionable methodology and limited scope that I saw so many years ago.

Posted by: aed3 | May 18, 2010 12:48 PM | Report abuse

Dan,

If both AERA and ASCD currently do not value evidence-based instruction, what would possibly motivate them to change their perspective? Certainly, those organizations already feel that they uphold professional standards. Just telling them that their standards are too low seems unlikely to change their outlook and direction.

Additionally, there are substantial differences between basic and translational research. While there may be tremendous amounts of high quality basic educational research, there is, as Sharon Begley rightly points out, a dearth of translational educational research. (So the basic research literature tells us that content matters in reading. But what content? When? What techniques should be used to engage students in that content? etc.) What translational research is out there to answer those essential questions?

Even if the professional organizations got into the act of evaluating research, what data is out there to compare instructional approaches and curricula in real classrooms? These types of studies are few and far between. (And the few that have been done produced results so counter to current edu-thought that they have been ignored or dismissed, e.g., Project Follow Through.)

In medicine, it is not the AMA that evaluates the efficacy of a new therapy but the FDA. That is, the professional organizations are great at upholding a standard of conduct, but the evaluation is done by a different organization, one that sets rigorous bars and requires evidence of efficacy, not just a professional opinion.

Posted by: erin_m_johnson | May 18, 2010 3:52 PM | Report abuse

To the extent that any formal test or assessment is used in educational research, the well-known AERA/APA/NCME "Standards for Educational and Psychological Testing" (1999) would offer relevant guidance.

Less well known, but available for free, is an analogous document that AERA has issued on educational research: "Standards for Reporting Empirical Social Science" (2006). Here is the link:

http://www.aera.net/uploadedFiles/Opportunities/StandardsforReportingEmpiricalSocialScience_PDF.pdf

As the introduction cautions, these standards focus on reporting rather than on conducting such research, but the two activities are clearly connected.

Also, this APA article on the same subject is helpful:

http://www.apa.org/pubs/authors/jars.pdf

Posted by: NoraO | May 20, 2010 1:53 PM | Report abuse
