Consumetrix

Questions About Questions

Can you trust surveys?

Are surveys useless for predicting what people will do because people can't be trusted to tell the truth?

It's a common and recurring sentiment.  Jason Oke, then a planner at Leo Burnett, blogged about it way back in 2007.  Faris Yakob threw a bomb of a blog post in 2010. Richard Shotton wrote about it in 2017.

The concerns about respondents' biases are as legitimate as they are well known. The problem even has a name: "can't say / won't say."  How to ask questions in a way that produces reliable data is quite literally a science. It has its own experiments, discoveries, and textbooks.  

Many factors affect the way people respond to surveys. The book Asking Questions outlines a few of them:

  • comprehension of survey questions
  • recall of relevant facts and beliefs
  • estimation and inferential processes people use to answer survey questions
  • sources of the apparent instability of public opinion
  • difficulties in getting responses into the required format.

Philip Graves, "a consumer behaviour consultant, author and speaker,"  takes a dim view of market research surveys in his book Consumerology.  Among other things, Graves writes that "attempts to use market research as a forecasting tool are notoriously unreliable, and yet the practice continues."

He then uses political polling as an example of an unreliable forecasting tool. He does not elaborate beyond this one paragraph:

Opinion polls give politicians and the media plenty of ammunition for debate, but nothing they would attach any importance to if they considered their hopeless inaccuracy when compared with the real data of election results.

It's worth taking a closer look at political polls for two reasons.

First, horse race polls ask exactly the forward-looking "what will you do" kind of question that people, presumably, should not be able to answer in any reliable way.  Here's how these questions usually look; this example is from Pew's 2012 questionnaire (pdf, methodology):

If the presidential election were being held TODAY, would you vote for
- the Republican ticket of Mitt Romney and Paul Ryan
- the Democratic ticket of Barack Obama and Joe Biden
- the Libertarian Party ticket headed by Gary Johnson
- the Green Party ticket headed by Jill Stein
- other candidate
- don’t know
- refused

Second, in election polling, there's nowhere to hide. The data and the forecasts are out there, and so, eventually, are the actual results.

And so, every two and four years, we all get a chance to gauge how good surveys are at forecasting people's future decisions.

How Accurate Are Political Polls?

Here's what we know.

1. On average, the polls conducted in the presidential elections between 1968 and 2012 were off by about 2 percentage points, whether because the race moved in the final days or because the polls were simply wrong (538). The 2016 polls were, on average, within that historical 2-point error (WaPo). (A sketch of how this error metric is computed follows the list.)

2. On average, 81% of polls correctly pick the winner (538).

3. The closer to Election Day a poll is conducted, the more accurate it is (NYTimes).

4. "The story of 2016 is not one of poll failure. It is a story of interpretive failure and a media environment that made it almost taboo to even suggest that Donald Trump had a real chance to win the election." (RealClearPolitics)
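To make "off by about 2 percentage points" concrete, here is a minimal sketch in Python of how such an error metric is typically computed: the absolute difference between each final poll's projected margin and the actual election margin, averaged across races. The numbers are hypothetical, purely for illustration, not actual polling data.

```python
# Average polling error: the mean absolute difference between each
# poll's projected margin and the actual election margin.
# All numbers are hypothetical, for illustration only.

poll_vs_actual = [
    # (projected margin, actual margin) in percentage points;
    # positive values mean candidate A leads
    (4.0, 7.1),
    (-2.0, -0.5),
    (1.5, 3.9),
]

errors = [abs(projected - actual) for projected, actual in poll_vs_actual]
print(f"Average absolute error: {sum(errors) / len(errors):.1f} points")
```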

In an experiment conducted by The Upshot, four teams of analysts looked at the same polling data from Florida.

"The pollsters made different decisions in adjusting the sample and identifying likely voters. The result was four different electorates, and four different results."  In other words, a failure to interpret the data correctly."


The problem, then, is not with the surveys.

Ilya Vedrashko