Democrats may close out the summer celebrating polls that show their party in better shape for November, but the numbers could conceal lingering midterm danger.
Pollsters infamously missed the extent of support for Donald Trump in the 2016 election and nearly repeated the mistake when their surveys again undercounted the former president's support during his 2020 reelection bid.
Those polling errors, which in some cases also inflated expectations about how Democrats would perform down the ballot, could be distorting the picture of where Democrats stand eight weeks from Election Day.
“It is reasonable to begin to think, at this point, that there’s something systematic going on which makes Republicans, relative to the polling data, overperform,” Grant Reeher, a political science professor at Syracuse University, told the Washington Examiner.
“There seems to be some patterns state by state on that, in terms of if the Democrats are being shown to overperform,” he said.
The reasons why pollsters captured more support for Democrats in 2016 and 2020 than election results reflected remain a subject of debate.
Berwood Yost, director of the Center for Opinion Research and the Floyd Institute for Public Policy at Franklin & Marshall College, said the driving force behind the 2016 discrepancy likely differs from what caused the 2020 polls to suffer a similar fate.
“I think what happened in 2016 was pretty clear: You had a sizable portion of voters who were undecided heading into the last week of the election, and you had a sizable portion of voters that didn’t like either candidate,” Yost told the Washington Examiner. “And at the end of the day, the undecideds sort of broke in a much larger fashion for Trump.”
But 2020 was a markedly different election, and its polls were even more inaccurate in measuring Senate and gubernatorial races.
“For senatorial and gubernatorial races combined, polls on average were 6.0 percentage points too favorable for Democratic candidates relative to the certified vote margin,” the American Association for Public Opinion Research found in an analysis it commissioned after the results of the 2020 election.
That discrepancy caused the pollsters who conducted the analysis to conclude that the errors likely weren’t specific to Trump.
One early and popular theory about the polling disaster in 2016 held that some voters were simply too embarrassed to admit to pollsters they planned to vote for Trump, resulting in an undercount of his support. If that alone explained the errors, the new analysis concluded, pollsters would not have seen such a discrepancy in gubernatorial and Senate races in which he was not on the ballot.
Still, the polling errors have not been constant since 2016. In 2018, poll numbers got closer to reflecting the actual results of the midterm elections, further clouding the picture of what ails polling.
Close margins in some top Senate races have fueled a narrative over the past month that a long-predicted GOP wave in November may not materialize, especially given how much some GOP candidates have struggled to gain traction.
However, a New York Times analysis this week of polling data and actual results from 2016 and 2020, as well as poll numbers from the current 2022 cycle, suggested Democrats may not be as far ahead as they think in key Senate races.
The analysis looked at how errors in 2016 polling were predictive of errors in presidential polling four years later.
For example, polls in 2020 indicated Joe Biden was poised to win Florida by 2 percentage points. When those numbers were adjusted for the type of polling error that occurred in 2016, Biden was projected to win Florida by a much slimmer margin: less than 1 percentage point.
But Trump in 2020 ended up carrying Florida by 3 points. In all the states analyzed, the final result was much closer to a polling average that had been adjusted to account for a “2016-like poll error” than what the 2020 polls suggested at face value.
Today’s surveys, the analysis warned, show Democratic advantages in some states that closely mirror the flawed numbers of 2016 and 2020.
Three Senate races that currently show an average polling advantage for Democrats — Wisconsin, North Carolina, and Ohio — would actually end in a Republican victory if that average were recalculated to account for a “2020-like poll error.”
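The adjustment described above amounts to simple arithmetic: subtract a prior cycle's state-level polling error from the current polling average. The sketch below uses the Florida figures cited earlier (polls showed Biden +2; the certified result was Trump +3); the current-cycle margin of D+4 is a hypothetical value for illustration, not a figure from the Times analysis.

```python
def adjust_for_prior_error(polled_margin_dem: float, prior_error_dem: float) -> float:
    """Subtract a prior cycle's pro-Democratic polling error
    (in percentage points) from the current Democratic polling margin.
    Positive values favor the Democrat, negative the Republican."""
    return polled_margin_dem - prior_error_dem

# Florida 2020, per the article: polls showed Biden +2, but Trump won by 3,
# so the polls overstated the Democratic margin by 5 points.
error_fl_2020 = 2 - (-3)  # = 5 points too favorable to Democrats

# A hypothetical current polling average of D+4, recalculated to account
# for a "2020-like poll error," implies a narrow Republican edge instead.
adjusted = adjust_for_prior_error(4.0, error_fl_2020)
print(adjusted)  # -1.0, i.e., R+1
```

The same subtraction, applied with each state's own historical error, is how a Democratic polling lead in a state like Wisconsin or Ohio can flip to a projected Republican win.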
Yost said one possible reason polls have, in some cases, underreported Republican support is that the type of person who hangs up the phone on a pollster these days is different from the type of person who won’t.
“There’s always the issue that perhaps the people who are willing to talk to pollsters are so different in some way that we can’t adjust for it,” he said.
Reeher noted that the same dynamics fueling GOP mistrust in the media and government could also drive a refusal to answer questions for surveys.
“There is a greater suspicion of government and this whole process on the part of Republicans, and therefore, they may be less likely to participate,” he said. “And so you undersample them, or undersample the likely Republican voter, to be more specific.”
But polls serve more functions than just predicting the outcome of elections, and measuring their precision can be difficult.
Some analyses of how close polls came to predicting results look only at surveys conducted just days before the election; others look at the average of polls conducted up to several weeks before, when some voters still had time to change their minds before casting ballots.
“I think we should always be cautious about polls that try to tell us winners and losers,” Yost said. “The polls can be really helpful in a lot of ways, but if you just look at the horse race, they’re not going to be helpful.”
Polls are meant to show attitudes, trends, and shifts within the broader context of an election, Yost said. Devoid of that context, such as what issues voters care most about and how they feel about the direction of the country at the moment, polls can mislead.