As we draw closer to Election Day, we’re going to see more and more public opinion surveys on the state of the presidential race. But some polls are better than others – and sometimes the most important numbers aren’t the ones you see on the top of the page.

So when you see a public opinion poll, what should you be looking for? How much stock should you put in the results? And how should you interpret news coverage of those surveys?

Chapel Hill resident Tom Jensen is the director of Public Policy Polling, a highly reputed national survey firm based in Raleigh. He spoke recently with 97.9 The Hill’s Aaron Keck about the art and the science of polling.

Click here to listen to their full conversation. The transcript below has been lightly edited for clarity.


Aaron Keck: What should folks know about how polls are conducted? What goes into the process of reaching out to folks (and) making sure that those numbers come back accurate?

Tom Jensen: One thing that’s interesting in the polling industry right now is (that) there are more ways that polls are reaching possible respondents than there have ever been at any time in the past. It used to be that everybody did their polling the same way: you had a call center, and you called everybody on their landlines and asked them what they thought about things, and that was your poll. Now you have all sorts of different methodologies: you still have live calls to landlines or cell phones, but now you have lots of polls being conducted by text. You have lots of polls being conducted online.

And there are lots of different ways in which that’s done. Some polls have a panel of people who they’ve signed up to participate, so they keep polling the same people over and over again. Others use web ads to recruit people, so they have a different pool of people responding every time. And increasingly, many polling companies use a mixture of those different kinds of methodologies to reach people. So there’s a lot of diversity in how polling’s being done. And a simple reality is that no matter how you’re doing the polling, you’re lucky if 2 percent of the people who you contact for the poll answer it. So even though polling has been pretty accurate for the most part in recent history, you still have to take into account that this is something where more than 95 percent of people who get contacted for a poll don’t answer it.

Keck: Is there any research suggesting that (a particular) method of conducting a survey is more accurate?

Jensen: I think it’s increasingly become clear that there is no sort of gold standard. The prestige media in Washington used to say that (polls) with a live caller were the gold standard – (but) I think people (now) understand that there isn’t necessarily a single gold standard.

(And) it’s not just how you’re contacting people, but then what you do with that data on the back end. Another reality about polling is that not everybody in the population is equally likely to respond to a poll. Older people are more likely to answer polls than younger people. Highly educated people are more likely to answer polls than less well educated people. White people are more likely to answer polls than people of color are. So it’s not just collecting that data, but also making sure the people you polled are representative of the population that’s going to vote. So a lot of what determines the accuracy of polling is how good a job you do with figuring out those sorts of dynamics.
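
(To make that concrete, here is a minimal sketch in Python of the kind of demographic weighting Jensen describes, often called post-stratification. The respondents, category names, and population targets below are invented for illustration; they are not drawn from any actual PPP survey.)

    # Minimal sketch of post-stratification weighting: give each respondent a weight
    # so the weighted sample matches known population shares. All figures are hypothetical.
    from collections import Counter

    respondents = ["college"] * 600 + ["non_college"] * 400    # hypothetical raw sample
    population_share = {"college": 0.40, "non_college": 0.60}  # hypothetical electorate

    sample_share = {g: n / len(respondents) for g, n in Counter(respondents).items()}

    # Weight = population share / sample share, so overrepresented groups count for less.
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)  # roughly {'college': 0.67, 'non_college': 1.5}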

Keck: And how do you do that? I know we’ve talked about this in the past: a lot of the adjustments that pollsters make from one election to another are based on, “oh, we anticipated this turnout in 2020 and it didn’t quite work out that way, so we’ll make an adjustment for 2022.” But if 2022 is different from 2020, then the numbers aren’t going to be exactly right either. So what do you do to be proactive and look ahead to what turnout is likely to be in a few months?

Jensen: You’re absolutely right that there is no silver bullet solution to that. And if there’s something wrong that (pollsters) are doing right now, we probably won’t know that until after the election. In 2016, when the polls showed Hillary Clinton doing a lot better than she ended up doing, a big part of the dynamic that year was that pollsters weren’t taking education into account when they were doing their polls. That hadn’t been a big deal in the past, because prior to Trump coming on the scene, there wasn’t a big difference between how well-educated and less well-educated voters voted. Since Trump became the Republican standard bearer, we’ve had this huge trend where well-educated voters have gotten more Democratic (and) less well-educated voters have gotten more Republican. And in 2016, there weren’t enough of those less well-educated voters included in the surveys. So there were a lot of corrections made to that for 2020…

Then of course in 2020 the polls underestimated Trump again. But I think 2020 was a really unique case. If you look at the polls from (that) spring, they came pretty close to what the election result ended up being. If you look at the polls from October, they significantly overestimated Joe Biden. Well, think about what was going on in 2020. Obviously that was the peak of COVID – (and) those spring polls came when everybody was staying at home, and maybe equally likely to respond to polls. The polls in the fall, which were too good for Democrats, came at a time when Democrats were still taking COVID really seriously and staying home – more likely to be there to answer a poll. More conservative voters were a lot more likely to have gone back to their normal lives – and be out, living their best lives, instead of being at home answering polls. I think that’s why Biden got overestimated in 2020. There’s not some systematic bias against Trump – just some specific dynamics from ‘16 and ‘20 that led to that polling error. And those dynamics aren’t around this time.

For the second consecutive presidential election, Joe Biden and Donald Trump are the Democratic and Republican nominees, which offers a rare chance to compare polling across election cycles. (Photo via AP Photo.)

Click here to listen to the full archive of “What We’re Thinking,” a bi-weekly series of conversations with Tom Jensen on This Morning with Aaron Keck.

Keck: As we get closer to Election Day, we’re going to hear more and more reporting about public opinion surveys. So when folks are following the news and they see a poll, and they’re looking at the numbers, what should we be looking for to determine, “okay, this is the thing that I should take from this survey?”

Jensen: Well, I think one of the most important things for everyone to keep in mind when consuming polls is that even in a good year for polls, they’re off by an average of three or four points. Margin of error is a real thing, (and) you would expect the final results of an election to be a little different from what the polls say, given that margin of error. So let’s say it’s a good year for the polls and they’re only off by three points – well, you talk about all these swing states right now where Donald Trump is up by one or two points. If (there’s) a three-point error that’s underestimating the Democrat, that’s the difference between Biden getting reelected or not.
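
(For a rough sense of where that three-or-four-point figure comes from: the textbook 95 percent margin of error for a simple random sample is about 1.96 * sqrt(p(1-p)/n), and it only covers random sampling error, not the response-rate and weighting issues discussed above. The sketch below uses hypothetical sample sizes and is illustrative, not a description of any particular pollster’s math.)

    # Rough 95% margin of error for a simple random sample, assuming p = 0.5 (worst case).
    # Note: the margin on the *gap* between two candidates is roughly twice this figure.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (500, 1000, 1500):  # hypothetical poll sample sizes
        print(n, round(100 * margin_of_error(n), 1))  # -> 4.4, 3.1, 2.5 points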

So I think my biggest point of emphasis is not to get too worked up – especially about polls where either Trump or Biden is up by just a little bit. Instead of trying to take that and say, “oh, Biden’s gonna win,” or “oh, Trump’s gonna win,” I just take that as a toss-up. If (a poll) is within a couple points, there’s no way to really know who’s going to win based on that.

And (another) thing – and sometimes I’ve had to talk some progressive voters off the ledge about the polling so far – in each of the last three elections where a president was running for reelection (Bush in 2004, Obama in 2012, and Trump in 2020), the incumbent got underestimated in the polls. Trump did better than the polls said, Obama did better than the polls said, Bush did better than the polls said. And if that same kind of thing happens again, that’s the difference between Biden losing Michigan, Pennsylvania, (and) Wisconsin by one or two, or winning Michigan, Pennsylvania, (and) Wisconsin by one or two.

Keck: And I’ll add one more thing from the news side of things. This is something that I like to drive home as much as possible, when you’re following the news: the news reports primarily on what’s unusual. So if there’s a big headline about a poll, it’s (often) an unusual result. Which probably means it’s an outlier. Which probably means you should take it with a grain of salt. It’s a little bit odd: when the news covers a poll more, that often means you should pay attention to it less. But that’s the case.

Jensen: Yep. If a poll comes out today that has Biden up by 10 or Trump up by 10 nationally, that’s what you’re going to hear about all day, and it’s not remotely correct.

