[This is the second in a series on polling challenges and changes in 2024; the first was about what went wrong in 2020.]
Public pollsters ended up with egg on their face after the last two presidential elections.
It’s possible they will after this election, too. But if you think nothing about polling has changed since then, you’re mistaken.
The changes pollsters have made this cycle may or may not yield a more accurate result this November. That’s impossible to predict. But many of these changes are substantial, and, on balance, they probably do reduce the risk of another high-profile polling misfire — even if there are no guarantees.
Over the last four years, there have been four basic kinds of changes in the polling world:
- Pollsters adopted different methods of data collection, like surveys taken by text or mail, rather than by phone.
- Pollsters adopted new methods of “weighting,” where they made statistical adjustments to try to better account for underrepresented groups.
- Pollsters didn’t change their practices, but benefited from new data that improved longstanding methods.
- The makeup of the pollsters changed, as some pollsters left the game and others joined the fray.
These four categories don’t capture every change pollsters have made, of course. Pollsters tinker constantly with their procedures, even when they make no wholesale changes at all. Still, below are many of the big ones. Here’s how each might — ever so slightly — reduce the risk of another big miss.
Data collection: The mail
Fifteen years ago, nearly every major political poll was conducted by randomly dialing telephone numbers, a technique known as random-digit dialing. Gallup, ABC/Washington Post, CBS/New York Times, Pew Research, you name it — this was the kind of poll they conducted. Today, only one prolific pollster, Quinnipiac, is still doing it this way.
Instead, pollsters are using many different methods to find respondents and conduct surveys, including sending text messages and recruiting survey-takers online. But perhaps a more surprising change has been many pollsters’ move toward using mail, a method known as address-based sampling.
Why the mail? This method adheres to the principle of random sampling: It’s possible to send mail to addresses at random, much as pollsters once dialed random telephone numbers. But while most people never pick up their phone, most people do open the mail — and when they do, pollsters can try to get them to participate in their surveys, usually by offering a financial incentive.
The move to address-based polling is playing out on two fronts.
First, more pollsters are using so-called online probability panels, in which pollsters recruit people by mail to take polls online. CNN/SSRS, ABC News/Ipsos and the University of New Hampshire are all examples of pollsters that moved from random-digit dialing to mail-recruited probability panels over the last four years.
It’s not clear so far that probability panels are any better at finding Trump voters than phone polls. (The Quinnipiac poll, using random-digit dialing, has tended to show similar or even stronger results for Donald J. Trump.) They also face a distinct challenge: panel attrition, in which the people who gradually drop out of a panel can leave behind a group that is no longer representative of those originally recruited. Many of these pollsters produced some of the least accurate results of the 2020 campaign, and they’ve made major changes in response.
Second, there’s the rise of so-called benchmark surveys. Here, a pollster conducts a one-off mail survey, usually with a large financial incentive. It then uses the results of that survey — which typically has a much higher response rate, thanks to those incentives — to ensure its subsequent and lower-quality polls have the same number of Democrats and Republicans as the higher-quality survey.
The most prominent example is the Pew NPORS study, a high-incentive mail survey with around a 30 percent response rate. Pew’s probability panel, in comparison, effectively has a 3 percent response rate — similar to most other high-quality polls. The Pew NPORS study is being used to determine the partisan makeup of many polls you see nowadays, including the Pew American Trends Panel, Reuters/Ipsos, ABC/Ipsos, CNN/SSRS, KFF and Marquette Law/SSRS polls.
This year, the Pew NPORS study found a two-point Republican advantage on leaned party identification. Realistically, many or even all of these surveys would find more Democratic results without the Pew NPORS.
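Mechanically, benchmarking is ordinary survey weighting on a single variable: each respondent’s weight is scaled so the poll’s leaned-party split matches the benchmark’s. Here is a minimal sketch in Python; the respondent counts are invented, and only the R+2 leaned-party target echoes the NPORS figure above.

```python
def party_weights(sample, benchmark_shares):
    """Scale each respondent's weight so the weighted share of each
    leaned-party group matches the benchmark survey's shares."""
    totals = {}
    for r in sample:
        totals[r["party"]] = totals.get(r["party"], 0.0) + r["weight"]
    grand = sum(totals.values())
    # One scaling factor per party: target share over observed share.
    factors = {p: share / (totals[p] / grand)
               for p, share in benchmark_shares.items()}
    return [dict(r, weight=r["weight"] * factors[r["party"]]) for r in sample]

# A toy poll sample that came back too Democratic relative to the benchmark.
sample = (
    [{"party": "D", "weight": 1.0}] * 55
    + [{"party": "R", "weight": 1.0}] * 45
)
# Benchmark-style target: Republicans ahead by 2 points on leaned party ID.
benchmark = {"D": 0.49, "R": 0.51}
reweighted = party_weights(sample, benchmark)
```

After reweighting, the poll’s weighted partisan makeup matches the benchmark, which is why a Republican-leaning NPORS target pushes all the downstream polls’ results rightward.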
Weighting: Recall vote
This cycle, many pollsters have started to weight their polls on a new metric: “recalled vote.”
This is a technique in which a pollster weights the number of self-reported Biden 2020 and Trump 2020 voters to match the result of the last election.
We’ve written a lot about this measure — it’s a big controversy among survey researchers this cycle. In prior cycles, almost none of the traditional “gold standard” pollsters even touched it. Now CNN/SSRS, The Washington Post and Monmouth University, among many others, employ the technique. Around two-thirds of the polls this cycle are weighted using recalled vote.
Whatever its merit, recall-vote weighting today clearly reduces the risk that the polls will underestimate Mr. Trump this cycle, for three reasons.
First, voters have typically been likelier to recall voting for the winner of the last election — in this case, President Biden. In practice, this means weighting on recall vote gives more weight to the respondents who say they voted for the loser — in this case Mr. Trump. As a result, weighting on recall vote would tend to boost Mr. Trump’s numbers in 2024.
Second, it may be especially well suited to counteract the biases of highly engaged voters. One of the most prominent theories of survey error is that the polls get too many highly engaged voters, and that these voters lean Democratic. These voters are also among the likeliest to report sticking with the candidate they supported in the last election. As a consequence, weighting by past vote can move a group of highly engaged, Democratic-leaning respondents neatly into alignment with the result of the last election.
Third, many polls in 2020 were off by so much that they absolutely would have benefited from weighting on recall vote, if only because it makes it harder to produce double-digit outliers. The infamous ABC/Post poll in Wisconsin, which found Mr. Biden up 17 points, would have been helped by this measure.
With recall vote in the back pocket of pollsters, a result like Harris +17 is much less likely to happen this cycle. And in fact there hasn’t been one.
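The mechanics above can be sketched with a toy example. The respondent counts below are invented; the 52/48 target roughly reflects the two-party split of the 2020 popular vote. The raw sample has too many recalled Biden voters, and reweighting them down moves the topline toward Mr. Trump.

```python
def recall_weighted_margin(respondents, target):
    """Reweight by recalled 2020 vote to match the target shares, then
    return the weighted 2024 margin (Trump share minus Harris share)."""
    counts = {}
    for r in respondents:
        counts[r["recall_2020"]] = counts.get(r["recall_2020"], 0) + 1
    n = len(respondents)
    # Scaling factor per recalled-vote group: target share over raw share.
    factor = {k: target[k] / (counts[k] / n) for k in target}
    trump = sum(factor[r["recall_2020"]]
                for r in respondents if r["vote_2024"] == "Trump")
    harris = sum(factor[r["recall_2020"]]
                 for r in respondents if r["vote_2024"] == "Harris")
    total = sum(factor[r["recall_2020"]] for r in respondents)
    return (trump - harris) / total

# Hypothetical raw sample: 56 percent recall voting for Mr. Biden
# (overrepresented), and recalled vote strongly predicts the 2024 choice.
sample = (
    [{"recall_2020": "Biden", "vote_2024": "Harris"}] * 53
    + [{"recall_2020": "Biden", "vote_2024": "Trump"}] * 3
    + [{"recall_2020": "Trump", "vote_2024": "Trump"}] * 42
    + [{"recall_2020": "Trump", "vote_2024": "Harris"}] * 2
)
target = {"Biden": 0.52, "Trump": 0.48}  # approximate two-party 2020 result
margin = recall_weighted_margin(sample, target)
```

In this toy sample the raw margin is Harris +10; after recall-vote weighting it is roughly Harris +2.8, a shift of about seven points toward Mr. Trump, which is exactly why the technique caps double-digit outliers.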
New data: More Trump-era information
Some pollsters haven’t necessarily made any major changes since 2020, and that includes many of the state-based pollsters using voter file data.
Will these pollsters fare as poorly as they did four years ago? They very well might, but they have one reason for optimism: data that wasn’t available back in 2020.
- Who voted in 2020. The last presidential election was the highest-turnout modern race, and there are many indications that Mr. Trump fared best among lower-turnout voters. Today, pollsters know who those new 2020 voters are; they didn’t know four years ago.
- Who voted by mail, early in person or on Election Day. In 2020, Biden voters were much more likely to vote by mail, while Trump supporters were more likely to vote in person on Election Day. Importantly, this pattern often cut across party lines: The Republicans who voted by mail, for instance, weren’t very likely to support Mr. Trump; the Democrats who voted in person, on the other hand, were often relatively supportive of Mr. Trump. Today, this information is a powerful tool to help make sure a pollster doesn’t just have the right number of Democrats and Republicans, but also the right kind of Democrats or Republicans.
- Four years of party registration changes. Over the last few years, Republicans made significant gains in party registration, in no small part because many longtime Trump supporters have gradually been registering as Republicans. To the extent some Trump voters were “hidden” from pollsters, many have come out of the woodwork by re-registering as Republicans.
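For a voter-file pollster, the first item, knowing who the new 2020 voters are, amounts to a lookup against vote-history flags on the file. A minimal sketch, with invented field names and records:

```python
def is_new_2020_voter(record):
    """True if the file shows a 2020 general-election vote but no vote
    in the earlier presidential races the file covers."""
    return record["voted_2020"] and not (record["voted_2016"] or record["voted_2012"])

# Toy voter-file records (all fields hypothetical).
voters = [
    {"id": 1, "voted_2020": True,  "voted_2016": True,  "voted_2012": True},
    {"id": 2, "voted_2020": True,  "voted_2016": False, "voted_2012": False},
    {"id": 3, "voted_2020": False, "voted_2016": False, "voted_2012": False},
]
new_2020 = [v["id"] for v in voters if is_new_2020_voter(v)]  # → [2]
```

With these lower-propensity 2020 voters identified, a pollster can sample them directly or weight them to their known share of the electorate, something that was impossible before the 2020 vote history existed.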
A new cast of pollsters
Imagine, for a moment, that none of these changes helped at all, and that these pollsters fared just as poorly as they did four or eight years ago.
The polls still might be a bit more accurate than they were in 2020.
How can that be? Because the makeup of the pollsters has changed.
Monmouth — the nonpartisan pollster that fared worst in 2020, in 538’s reckoning — has since stopped asking voters how they will vote in the presidential race.
SurveyMonkey and Swayable were the two most prolific pollsters of the 2020 campaign, with a combined 126 polls over the final stretch. They’ve all but disappeared this cycle.
Three Democratic firms — Change Research, Data for Progress and Public Policy Polling — produced a combined 117 polls over the final three weeks of the 2020 campaign. They’re still around, but producing numbers at only a fraction of the old pace.
At the same time, many firms that had more balanced or even Republican-leaning results have kept producing new polls or even increased their pace. For the 2022 midterms, there were so many Republican-leaning polls — and so few traditional, nonpartisan pollsters — that the polling averages of several key states wound up underestimating Democrats.
This cycle, there are many more nonpartisan pollsters conducting surveys. As a result, the Republican pollsters haven’t succeeded in moving the averages, as they did two years ago. Nonetheless, the balance of pollsters has shifted to the right. In 2020, the polls underestimated Mr. Trump by about 4.2 points. This year’s cast of pollsters, based on those that conducted polls over the last month, underestimated Mr. Trump by 3.2 points in 2020.
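That 4.2-versus-3.2 comparison is, in effect, a poll-volume-weighted average of each active pollster’s 2020 error. A sketch with invented firms and numbers:

```python
def mix_bias(pollsters):
    """Poll-volume-weighted mean of each currently active pollster's 2020
    error, where positive values mean the firm underestimated Mr. Trump."""
    active = [p for p in pollsters if p["polls_this_cycle"] > 0]
    total = sum(p["polls_this_cycle"] for p in active)
    return sum(p["bias_2020"] * p["polls_this_cycle"] for p in active) / total

# Hypothetical firms: one badly biased 2020 pollster has left the field.
pollsters = [
    {"name": "Firm A", "bias_2020": 6.0, "polls_this_cycle": 0},
    {"name": "Firm B", "bias_2020": 4.0, "polls_this_cycle": 10},
    {"name": "Firm C", "bias_2020": 2.0, "polls_this_cycle": 10},
]
bias = mix_bias(pollsters)  # → 3.0
```

Firm A’s exit lowers the mix’s historical anti-Trump bias even though no surviving firm changed its methods, which is the article’s point: the average can improve simply because the cast changed.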
It’s just one point, but it’s one point less of bias against Mr. Trump. Taken together with the other changes, pollsters hope it adds up to a big difference in November.
The post How Polls Have Changed to Try to Avoid a 2020 Repeat appeared first on New York Times.