ROBERT SIEGEL, host:
I'm Robert Siegel.
MELISSA BLOCK, host:
I'm Melissa Block, and this is All Things Considered from NPR News.
SIEGEL: We won't have another round of House and Senate elections for another two years, and we won't have another presidential election for four years, which means that by the next election campaign, unless you are a political professional or an obsessive, you've probably forgotten the granular detail of the last election. For example, when you read all those polls next time, which ones can possibly claim to have gotten it right the last time? Well, Mark Blumenthal knows. He runs pollster.com, a website that does for political polling what mlb.com does for baseball statistics. Welcome to the program.
Mr. MARK BLUMENTHAL (Editor and Publisher, pollster.com): Great to be here.
SIEGEL: Mark Blumenthal, let's start with the national popular vote. It was 52 percent for Obama, 46 percent for McCain. Which pre-election-day poll or couple of polls came the closest?
Mr. BLUMENTHAL: Let me answer two ways. If the margin ends up being six points, there were two national surveys that had that exactly right: on one extreme, the one from the Pew Research Center, a highly regarded traditional polling method, and on the other extreme, the Rasmussen Reports automated survey.
SIEGEL: It's a robo-poll.
Mr. BLUMENTHAL: Some call it robo-poll. But I have to say that there's a sort of element of lottery to this. There is random error in surveys, and there's a margin of error. And most of the national surveys got to within sampling error of the result, so most polls were pretty good this way.
SIEGEL: What does it say if two of the very best polls, the ones getting it right at the end, are, on the one hand, as you say, the Pew Research Center, a gold-plated poll that we cite very often on this program, and on the other hand, the Rasmussen automated telephone surveys, which people might look at very skeptically. Are there questions to be raised here about methodology?
Mr. BLUMENTHAL: If you talk to some of my colleagues, they're going to grumble and say, you know, it's pretty easy to get it right at the end because you can look in the rear view mirror and sort of follow the leader. I think it says that we have been maybe a little too hard on the researchers who don't use a live interviewer. I think the automated surveys have proved themselves to give us as good a picture of the horse race result at the end as those that use live interviewers.
SIEGEL: Are there polls that came out in the last week or so which the pollsters would probably most like to see buried and never heard of again?
Mr. BLUMENTHAL: There may have been a couple. I think the one thing that has me feeling a little cynical is that, if we were to turn the clock back to some of the polls we saw seven, 10 days ago, we saw a lot more variation. And there was this impressive convergence over the last week. And I think that's something that when we look back at this, we need to keep in mind, it's not just the last poll. It's all those other surveys during the course of the campaign that we need to think about.
SIEGEL: Among pollsters, there was a lot of discussion this year about what a likely voter is. Could you apply traditional measures of who should be included in a poll sample, or was there so much more enthusiasm among young voters or black voters that you should add more of them to the sample and count people who, in previous elections, might have been considered unlikely voters?
Mr. BLUMENTHAL: If you look at just the net result, we clearly did not have a huge miss in terms of the number of young people and African-Americans and others. And the results were basically on in terms of getting the national margin right. So I think they did reasonably well.
SIEGEL: Is there any big take-away lesson for pollsters from the 2008 election so far?
Mr. BLUMENTHAL: I'm sure there are many. If I had to take away a lesson from the way we covered polling, which is a little different, I would say we may have spent a little bit too much time looking backward at what polls got wrong 20 years ago and the Bradley effect, rather than looking at the evidence we had sitting right in front of us.
Just in the last couple of weeks, there wasn't much evidence that the interviewer or race had anything to do with who people supported. There was no evidence of a hidden vote when we actually looked at the undecideds. And yet, we spent a lot of time chasing theories and spin that didn't really work out.
SIEGEL: So you say the Bradley effect can be consigned to the ashbin of history, right?
Mr. BLUMENTHAL: It certainly was not a factor yesterday, so it does look as if that is probably something we're not going to see much of again.
SIEGEL: Mark Blumenthal of pollster.com, thank you very much.
Mr. BLUMENTHAL: Thank you.
NPR transcripts are created on a rush deadline by Verb8tm, Inc., an NPR contractor, and produced using a proprietary transcription process developed with NPR. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.