How Bad Surveys Can Turn Respondents Off


by Michaela Mora


I recently received a survey from a magazine I subscribe to. I read it every month, cover to cover; I'm familiar with its style and sections, and I always come away with some nuggets of wisdom about life and people. In more than 10 years as a subscriber, this was the first survey I had ever received, so I assumed it must concern something important about the magazine. I was ready to contribute.

To my disappointment, the survey held me for almost 10 minutes, grid after grid (with a few typos), asking about my involvement in car purchase decisions. Just as I was about to drop out of the survey, which was making me very angry, I started getting questions about how car companies address women's needs and what type of car-related articles I wanted to read in the magazine. To top it off, the survey crashed and I couldn't finish it.

This survey was a really good example of how bad survey design can turn off respondents, increase drop-out rates, and yield bad data. Critical mistakes committed in this survey included:

  • Ignoring the relationship with their audience. The survey clearly came from the magazine: its logo was on every page, and the invitation said the purpose was to help make the magazine better. So I kept asking myself, over and over, why they were asking so many questions about my car usage. I couldn't see the point, other than that they were using their subscriber base to gather data on behalf of a car sponsor and try to sell me stuff. They were losing my trust and pissing me off.
  • Wrong question order. If what they really wanted to know was how interested I was in reading car-related articles, that set of questions should have come first. It would have been more congruent with my expectations as a subscriber who had never been asked to participate in research for them. By putting car-related behavioral questions up front, with no clear connection to the magazine, they abused my goodwill, and by the time I got to the questions that mattered, I was in a really bad mood.
  • Asking attitudinal questions on an agreement/disagreement scale without a neutral or “Don’t Know” option. This really upset me because it forced me to lie. On certain items I genuinely had no opinion and could neither agree nor disagree. For example, they asked whether I agreed or disagreed with the statement “Car companies don't pay enough attention to women when marketing and selling their cars.” I really have no idea. I hardly watch any advertising these days (thanks to my DVR), I have purchased only one car in my life, and it was a fairly neutral experience, so I have not formed an opinion about what car companies do for women one way or the other. There was no way out of the question, so I had to select a random point on the scale. To avoid bad data with this type of question, a neutral point and a “Don’t Know” option should be included (see the sketch after this list). For more on this, check the post Is It Right to Include a Neutral Point in Rating Questions?
  • Ignoring the recency of events. At the beginning of the survey they asked when I had last purchased or leased a car. For me, that was more than 5 years ago. They then proceeded to ask which features influenced my decision and what was more important to me versus my significant other. I understand what they were trying to get at, but they should have filtered these questions and asked them only of people who purchased a car recently (in the last month, or the last 6 months) and still have fresh memories of their purchase decision process (the sketch after this list also shows this kind of screener). I can't really remember much about a purchase that occurred more than 5 years ago.
  • Using rating scales without labels in drop-down menus. Rating questions in a drop-down menu format demand a lot of work from respondents and produce more measurement error: pulling down the menu and finding the right scale point takes extra time and clicks. In this case, the endpoints of the 1-to-10 scale were not labeled, so it was easy to reverse their meaning, particularly when evaluating a long list of items. Halfway through the list, I suddenly wasn't sure whether 1 or 10 was the positive end of the scale, and I had to scroll up to check the instructions. Even with fully labeled scales, some respondents reverse the scale because they don't pay much attention to the labels, so I suspect many did so here.
  • Wrong question format. Whoever created this survey was either enamored with rating scales in grid format or doesn't know when it is appropriate to use them. The question below is an example of when a rating scale is not appropriate. From the question, I deduce that they want to know in which format, and how frequently, I'm interested in receiving car-related articles. The problem is that with a rating scale I can select the same answer for all the options, and the magazine wouldn't be able to make a decision on which to go with. From my perspective as a reader, these are exclusive options: I'd like to see either a monthly column, an occasional article, or an occasional supplement. The most appropriate format for this question would have been single choice.

    Wrong question format

  • No progress bar. A progress bar sets expectations about how long respondents will spend on a survey. It is a sign of respect for their time.
  • No contact information. The survey crashed, and there was no email address or phone number I could use to report the issue. Providing contact information is important not only to build trust with respondents, but also to catch problems that occur with the survey while it is in the field.

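For anyone programming a survey, here is a minimal sketch of the two fixes referenced above: a fully labeled agreement scale with a neutral midpoint and a “Don’t Know” escape, and a recency screener that routes respondents past memory-dependent questions. The scale labels, question-block names, and cutoff are hypothetical illustrations, not tied to any particular survey platform.

```python
# Fix 1: a fully labeled agreement scale with a neutral midpoint and an
# explicit "Don't know" option, so respondents are never forced to lie.
AGREEMENT_SCALE = [
    "Strongly disagree",
    "Somewhat disagree",
    "Neither agree nor disagree",  # neutral midpoint
    "Somewhat agree",
    "Strongly agree",
    "Don't know / No opinion",     # escape for respondents with no opinion
]

# Fix 2: a recency screener that sends respondents whose last purchase is
# too old past the purchase-detail grids. The 6-month cutoff is illustrative.
RECENCY_CUTOFF_MONTHS = 6

def next_question_block(months_since_purchase: int) -> str:
    """Route the respondent based on how recent their car purchase was."""
    if months_since_purchase <= RECENCY_CUTOFF_MONTHS:
        return "purchase_decision_details"   # memories are still fresh
    return "magazine_content_preferences"    # skip the detail grids

if __name__ == "__main__":
    # A respondent who bought a car five years ago skips the detail grids.
    print(next_question_block(60))  # -> magazine_content_preferences
    print(next_question_block(3))   # -> purchase_decision_details
```
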
I hope this bad example helps you create surveys that don't discourage respondents, damage their perception of your brand, or yield bad data.

 


