Consumetrix

Questions About Questions

When should you test?

Rough drafts are a staple of the creative business. Art directors sketch and storyboard. Interaction designers mock and prototype.

How soon should you put it in front of people to get their feedback?

On the one hand, an IDEO designer says sooner rather than later:

Instead of starting with our usual process where we gather inspiration, we thought: What if we share a prototype we are really excited about with the client right away?

On the other hand, Alan Cooper cautions in a Twitter thread:

Prototyping and testing is not interaction design. It helps, but it isn’t user centered. It’s designer centered. Prototyping & testing has one huge virtue: it makes management happy because it never rocks the boat. It never requires big changes. It always keeps the designer in command. It’s very ego gratifying.

When you put an artifact in front of a user, you instantly shut down an infinity of good ideas, avenues of thought, opportunities to create. 

In other words, the first prototype you show sets the direction for everything that follows. It becomes the reference point. Put the first button in the wrong hole, and you'll mess up the rest.

This anchoring risk deserves far more consideration than it gets in practice.

Ilya Vedrashko
Can you trust surveys?

Are surveys useless for predicting what people will do because people can't be trusted to tell the truth?

It's a recurring sentiment. Jason Oke, then a planner at Leo Burnett, blogged about it way back in 2007. Faris Yakob threw a bomb of a blog post in 2010. Richard Shotton wrote about it in 2017.

The concerns about respondents' biases are as legitimate as they are well known. The problem even has a name: "can't say / won't say."  How to ask questions in a way that produces reliable data is quite literally a science. It has its own experiments, discoveries, and textbooks.  

Many factors affect the way people respond to surveys. The book Asking Questions outlines a few of them:

  • comprehension of survey questions
  • recall of relevant facts and beliefs
  • estimation and inferential processes people use to answer survey questions
  • sources of the apparent instability of public opinion
  • difficulties in getting responses into the required format

Philip Graves, "a consumer behaviour consultant, author and speaker,"  takes a dim view of market research surveys in his book Consumerology.  Among other things, Graves writes that "attempts to use market research as a forecasting tool are notoriously unreliable, and yet the practice continues."

He then uses political polling as an example of an unreliable forecasting tool. He does not elaborate beyond this one paragraph:

Opinion polls give politicians and the media plenty of ammunition for debate, but nothing they would attach any importance to if they considered their hopeless inaccuracy when compared with the real data of election results.

It's worth taking a closer look at political polls for two reasons.

First, horse race polls ask exactly the forward-looking "what will you do" kind of question that people, presumably, should not be able to answer in any reliable way.  Here's how these questions usually look; this example is from Pew's 2012 questionnaire (pdf, methodology):

If the presidential election were being held TODAY, would you vote for
- the Republican ticket of Mitt Romney and Paul Ryan
- the Democratic ticket of Barack Obama and Joe Biden
- the Libertarian Party ticket headed by Gary Johnson
- the Green Party ticket headed by Jill Stein
- other candidate
- don’t know
- refused

Second, in election polling, there's nowhere to hide. The data and the forecasts are out there, and so, eventually, are the actual results.

And so, every two and four years, we all get a chance to gauge how good surveys are at forecasting people's future decisions.

How Accurate Are Political Polls?

Here's what we know.

1. On average, the polls conducted in presidential elections between 1968 and 2012 were off by 2 percentage points, whether because the race moved in the final days or because the polls were simply wrong (538); a sketch of how such error is measured follows this list. The 2016 polls were, on average, within the historical 2-percentage-point error (WaPo).

2. On average, you can expect 81% of all polls to pick the winner correctly (538).

3. The closer to election day polls are conducted, the more accurate they are. (NYTimes)

4. "The story of 2016 is not one of poll failure. It is a story of interpretive failure and a media environment that made it almost taboo to even suggest that Donald Trump had a real chance to win the election." (RealClearPolitics)
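
Here's the promised sketch: a minimal illustration, assuming poll error is measured as the absolute difference between a poll's predicted margin and the actual margin, and "picking the winner" means the two margins share a sign. The numbers are placeholders, not real polling data:

    # Poll error as the absolute difference between the predicted margin
    # (candidate A minus candidate B, in percentage points) and the actual
    # margin. All numbers are illustrative placeholders.
    polls = [
        {"poll_margin": 3.0, "actual_margin": 1.5},
        {"poll_margin": -2.0, "actual_margin": -4.5},
        {"poll_margin": 5.5, "actual_margin": 5.0},
    ]

    errors = [abs(p["poll_margin"] - p["actual_margin"]) for p in polls]
    print(f"average absolute error: {sum(errors) / len(errors):.1f} points")

    # A poll "picks the winner" when its margin has the same sign as the result.
    correct = sum(1 for p in polls
                  if (p["poll_margin"] > 0) == (p["actual_margin"] > 0))
    print(f"picked the winner: {correct} of {len(polls)}")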

In an experiment conducted by The Upshot, four teams of analysts looked at the same polling data from Florida.

"The pollsters made different decisions in adjusting the sample and identifying likely voters. The result was four different electorates, and four different results."  In other words, a failure to interpret the data correctly."

The problem, then, is not with surveys but with how their results are interpreted.

Ilya Vedrashko
How do you do paper surveys without pencils?

Sometimes you need to collect responses using paper forms, but handing out pencils is not practical. What do you do?

CinemaScore conducts exit polls in theaters by asking moviegoers to pull back tabs on a ballot whose design has remained mostly the same over the past 35 years.

CinemaScore tabulates the results and reports each movie's letter grade, which is not a simple average. Only 19 movies in the company's history have received an F. (This article in Vulture explains how the scoring works.)

The results are used to estimate word of mouth and multiples, a movie's overall gross in relation to its opening weekend (a worked example follows).
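
For concreteness, the multiple is just total gross divided by opening-weekend gross; the figures here are made up:

    # A movie's "multiple": total gross divided by opening-weekend gross.
    # Both figures are illustrative, not real box-office data. A higher
    # multiple suggests stronger word of mouth after the opening.
    opening_weekend = 50_000_000   # dollars
    total_gross = 175_000_000      # dollars
    print(f"multiple: {total_gross / opening_weekend:.1f}x")  # -> 3.5x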

Ilya Vedrashko