English teacher: Data can drive us down wrong road
Roxanna Elden is the author of See Me After Class: Advice for Teachers by Teachers. She teaches high-school English in Miami, Florida, and is a National Board Certified Teacher. This piece first appeared on the website of the journal Education Next.
By Roxanna Elden
While reviewing a practice passage called “The Night Hunters” for last year’s 9th-grade Florida Comprehensive Assessment Test (FCAT), I had to peek at the teachers’ guide to check my answer to this question: Which of the owls’ names is the most misleading?
I was stuck between (F) the screech owl, because its call rarely approximates a screech, and (I) the long-eared owl, because its real ears are behind its eyes and covered by feathers.
The passage explains that owls hear through holes behind their eyes, so the term long-eared owl seemed misleading. Then again, a screech owl that rarely screeches? That is pretty misleading, too.
According to the FCAT creators, each question on the practice tests corresponds to a specific reading skill or benchmark. Teachers are supposed to discuss test results in after-school “data chats” and then review weak skills in class.
Here is a sample conversation from a data chat, as imagined by promoters of this idea:
First Teacher: Well, it looks like my students need some extra work on benchmark LA.910.6.2.2: The student will organize, synthesize, analyze, and evaluate the validity and reliability of information from multiple sources (including primary and secondary sources) to draw conclusions using a variety of techniques, and correctly use standardized citations.
Second Teacher: Mine, too! Now let’s work as a team to help students better understand this benchmark in time for next month’s assessment.
Third Teacher: I am glad we are having this “chat.”
Here is a conversation from an actual data chat:
First Teacher: My students’ lowest area was supposedly synthesizing information, but that benchmark was only tested by two questions. One was the last question on the test, and a lot of my students didn’t have time to finish. The other question was that one about the screech owl having the misleading name, and I thought it was kind of confusing.
Second Teacher: We read that question in class and most of my students didn’t know what approximates meant, so it really became more of a vocabulary question.
Third Teacher: Wait.... I thought the long-eared owl was the one with the misleading name.
At this point, data chats often turn into non-data-related gripe sessions.
When I interviewed teachers for See Me After Class, the unintended consequences of high-stakes tests came up most often among language arts teachers.
They know that answering comprehension questions correctly does not rest on just one benchmark. Separating complex skills into individual benchmarks may well work in math class. Symmetry and place value, for example, can be taught independently of one another, and benchmark-based data may indicate which of these skills needs work.
Reading is different. After students have mastered basics like decoding, reading cannot be taught through repeated practice of isolated skills. Students must understand enough of a passage to draw on all the intricately linked skills that together make up comprehension.
The owl question, for example, tests skills not learned from isolated reading practice but from processing information on the varying characteristics of animal species. (The correct answer, by the way, is the screech owl.)
Unfortunately, strict adherence to data-driven instruction can lead schools to push aside science and social studies to drill students on isolated reading benchmarks.
Compare and contrast, for example, is covered year after year in creative lessons using Venn diagrams. The result is students who can produce Venn diagrams comparing cans of soda, and act out Venn diagrams with Hula-Hoops, but are still lost a few paragraphs into a passage about owls. When they do poorly on reading assessments, we again pull them from the subjects that build content knowledge for more review of Venn diagrams. Many students learn to associate reading with failure and boredom.
It is difficult to teach kids to read well if they don’t learn to enjoy reading. It is impossible to teach kids to read well while denying them the knowledge they need to make sense of complex material. Following the data often forces teachers to do just that.
October 21, 2010; 6:00 AM ET