Parsing the Polls on the Ohio Senate Race
Two polls came out this week in the hotly contested Ohio Senate race. The first, conducted by Mason-Dixon Polling & Research, showed Sen. Mike DeWine (R) leading Rep. Sherrod Brown (D), 47 percent to 36 percent. The second, conducted by Brown campaign pollster Diane Feldman, showed the Democrat with a 45 percent to 44 percent edge.
The surveys were in the field at nearly the same time (April 24-26 for Mason-Dixon, April 24-27 for Feldman) and had similar sample sizes (625 likely voters for Mason-Dixon, 800 likely voters for Feldman).
So, what gives?
This week's Parsing the Polls is dedicated to trying to answer that question.
Let's start with an apples-to-apples comparison of how these surveys were conducted and a bit of background on the two survey firms.
First, the background. Mason-Dixon has been one of the nation's leading independent polling organizations for several decades. According to Mason-Dixon's Web site, it has conducted survey research for more than 250 news organizations and has regularly polled in every state since 1983.
Larry Harris of Mason-Dixon said the firm has worked in Ohio for decades and pointed to its polling in the 2004 presidential race in the state as evidence of its accuracy. In its final poll in this central presidential battleground, Mason-Dixon showed President George W. Bush with a 48 percent to 46 percent edge over John Kerry; after the votes were counted, Bush won, 51 percent to 49 percent. (A number of other independent pollsters had Kerry leading Bush by several points in Ohio heading into Election Day.)
Feldman, for her part, is handling the polling not just for Brown's Senate race but also for Rep. Ted Strickland's (D) gubernatorial campaign. When television personality Jerry Springer was weighing a challenge to Sen. George Voinovich (R) in 2003, he hired Feldman (along with pollster Paul Maslin) to test the viability of his candidacy. Feldman was also part of the Democratic National Committee's "Ohio Election Task Force," a group convened following the 2004 election to examine potential voting irregularities in the Buckeye State.
In short, both firms have sterling credentials in the state.
Now, on to the methodology the two firms used to arrive at their results. (Stay with me, because this goes deep into the weeds of survey research.)
Feldman used the voter file to compile her polling sample; Mason-Dixon used random digit dialing.
Each method has its pros and cons. Using the voter file ensures that the people interviewed have actually voted in past elections and gives a pollster a good chance of developing a tight screen of who will vote in traditionally low turnout races. But not every person in the voter file has a listed phone number, nor is every person in the voter file guaranteed to vote again -- some may have died or moved from the state since the file was last updated.
The major advantage of random digit dialing is that it allows a pollster to reach a wider swath of people, including those with unlisted phone numbers. But random digit dialing makes it harder to determine who actually answers the phone in a household -- whether that person is of voting age, is a registered voter, or is being honest about his or her voting history.
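For readers who want to go even deeper into the weeds, here is a minimal sketch of what random digit dialing looks like in code. The area code and exchange pairs below are placeholders (hypothetical "555" exchanges); a real firm would draw its working blocks from a commercial telephone database.

```python
import random

# Hypothetical Ohio area-code/exchange pairs; a real firm would draw
# these "working blocks" from a commercial telephone database.
OHIO_EXCHANGES = [("216", "555"), ("614", "555"), ("513", "555")]

def random_digit_dial(n_numbers, seed=None):
    """Generate phone numbers by appending random final digits to
    known working exchanges, so unlisted numbers can be reached."""
    rng = random.Random(seed)
    numbers = []
    for _ in range(n_numbers):
        area, exchange = rng.choice(OHIO_EXCHANGES)
        line = rng.randrange(10000)  # random last four digits
        numbers.append(f"({area}) {exchange}-{line:04d}")
    return numbers

print(random_digit_dial(5, seed=1))
```

Because the last four digits are generated at random rather than pulled from a directory, the sample can land on unlisted households -- the whole point of the method.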
Another major difference between the two surveys is how the firms ensure that their sample is reflective of the geography and past voting patterns in the state. Feldman uses a technique called "cluster sampling" to ensure proper racial and ethnic diversity. In cluster sampling, voters who live in a specific area are broken into groups of 30 or so. The person making the calls must reach one (and only one) person in each cluster; the goal is to contact hard-to-reach voters (especially the poor) to get the most representative sample possible.
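Here is a rough sketch of that cluster approach, using a made-up voter file: sort voters geographically, cut the file into clusters of about 30, and interview exactly one person per cluster. The helper function and data are illustrative, not Feldman's actual procedure.

```python
import random

def cluster_sample(voter_file, cluster_size=30, seed=None):
    """Split a geographically sorted voter file into clusters of
    roughly cluster_size and pick exactly one respondent per cluster."""
    rng = random.Random(seed)
    clusters = [voter_file[i:i + cluster_size]
                for i in range(0, len(voter_file), cluster_size)]
    return [rng.choice(cluster) for cluster in clusters]

# Toy voter file: 300 voters sorted by precinct yields 10 interviews.
voters = [f"voter_{i}" for i in range(300)]
sample = cluster_sample(voters, seed=42)
print(len(sample))  # 10 respondents, one per cluster
```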
Mason-Dixon relies on its past work in the state to determine the racial and geographic mix of its sample. As Harris puts it, "If I don't have a certain minority population from Cuyahoga County [Cleveland], I am going to be so wrong it's embarrassing. We know what that number should be."
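One common way to hit those known targets is to weight the completed interviews so each group matches its expected share of the electorate. The sketch below (with invented numbers) shows the basic arithmetic; it illustrates the general technique, not Mason-Dixon's proprietary method.

```python
def poststratify(sample_counts, population_shares):
    """Compute a weight for each group so the sample matches the
    known population mix: weight = target share / sample share."""
    total = sum(sample_counts.values())
    return {group: population_shares[group] / (count / total)
            for group, count in sample_counts.items()}

# Illustrative numbers only: suppose black voters are 11% of the
# expected electorate but only 8% of completed interviews.
sample_counts = {"black": 50, "white": 540, "other": 35}  # n = 625
population_shares = {"black": 0.11, "white": 0.83, "other": 0.06}
print(poststratify(sample_counts, population_shares))
# black respondents get a weight of about 1.4, whites about 0.96
```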
What do these differences in methodology mean? They point to the fact that political polling is both an art and a science. The science end is easily understandable: a certain number of people are polled, producing results and a margin of error within which those results fall. The trick is determining who should be surveyed and how many of those people make the cut.
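The science part really is simple arithmetic. For a proportion near 50 percent, the standard 95 percent margin of error is about 1.96 times the square root of 0.25/n. Applied to these two polls:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n,
    at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

for name, n in [("Mason-Dixon", 625), ("Feldman", 800)]:
    print(f"{name}: +/- {100 * margin_of_error(n):.1f} points")
# Mason-Dixon: +/- 3.9 points
# Feldman: +/- 3.5 points
```

Note that sampling error alone cannot account for an 11-point swing between two surveys in the field at virtually the same time; the gap has to come from the art side -- who got sampled and who got screened in.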
In 2004 -- when Bush and Kerry waged an all-out campaign for Ohio -- 5.7 million people turned out to vote. Two years earlier, in the first midterm election of Bush's presidency, just 3.2 million Ohioans voted. Most observers agree that turnout in 2006 will be higher than 2002 but lower than 2004. But how much higher? Or how much lower? The art of polling is figuring out the answer to that question.
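To see why that answer matters so much, consider a toy example -- the group sizes and candidate preferences below are invented. The very same pool of respondents produces different toplines depending on how tight the likely-voter screen is.

```python
# Toy illustration of how the likely-voter screen moves toplines.
# Each tuple: (group, share of registered voters, Dem %, Rep %,
#              turnout probability in a low-turnout midterm).
groups = [
    ("habitual voters",   0.55, 0.44, 0.50, 0.95),
    ("occasional voters", 0.45, 0.52, 0.40, 0.40),
]

def topline(groups, midterm=True):
    """Weight each group by its size (and, for a midterm screen,
    its turnout probability), then return the weighted vote shares."""
    dem = rep = total = 0.0
    for _, share, d, r, turnout in groups:
        w = share * (turnout if midterm else 1.0)
        dem, rep, total = dem + w * d, rep + w * r, total + w
    return dem / total, rep / total

print("presidential-style turnout:", topline(groups, midterm=False))
print("tight midterm screen:      ", topline(groups, midterm=True))
```

In this made-up example, a loose screen shows the Democrat narrowly ahead while a tight midterm screen flips the race to the Republican -- exactly the kind of divergence the two Ohio polls show.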
The results on Election Day will let us grade which pollsters had it right. Until then, we wait and speculate. Speaking of which, speculate away about the Ohio race and the two polls we are comparing in the comments section below.
One other note: Some political pros are dismissive of surveys conducted by partisan pollsters. After all, wouldn't the pollster charged with electing Brown to the Senate have a vested interest in releasing numbers that show him ahead? Yes and no.
Partisan pollsters are working to elect candidates, but a major part of that equation is producing accurate numbers to help guide campaigns to victory. It doesn't serve their interests to cook up numbers that are entirely misleading to their clients and could hamper their ability to attract top-tier races in future cycles.
We'll explore this subject more deeply in a future post.
May 3, 2006; 10:50 AM ET
Categories: Parsing the Polls, Senate