How Bad Surveys Can Turn Respondents Off

Summary: Bad surveys are an epidemic. They can turn respondents off, leading to high abandonment rates, and in the worst cases they can yield poor-quality data.

5 minutes to read. By Michaela Mora on August 21, 2012
Topics: Market Research, Survey Design


Bad surveys are everywhere. I recently received a survey from a magazine I subscribe to. I read it every month cover to cover. I’m familiar with the style, the sections, and I always get some nuggets of wisdom about life and people. In more than 10 years of being a subscriber, this is the first survey I have ever received, so I thought it must be something important about the magazine. I was ready to contribute.

To my disappointment, the survey kept me for almost 10 minutes, grid after grid (with a few typos), asking about my involvement in car purchase decisions.

Then, just when I was about to drop out of the survey, I started getting questions about how car companies address women’s needs and what type of car-related articles I wanted to read in the magazine. To top it off, the survey crashed and I couldn’t finish it.

This survey was a good example of how bad surveys can turn off respondents, increase drop-out rates, and yield bad data. The critical mistakes in this survey included the following.

1. Ignoring the Relationship With the Audience

First, the survey clearly came from the magazine, which had its logo on every page. Second, the survey invitation said the purpose was to help make the magazine better. Consequently, I kept asking myself why they were asking so many questions about my car usage.

I couldn’t see the point, other than that they were using their subscriber base to gather data on behalf of a car sponsor and try to sell me stuff. They started losing my trust and pissing me off.

2. Wrong Question Order

If what they actually wanted to know was how interested I was in reading car-related articles, this set of questions should have come first. It would have been more congruent with my expectations as a subscriber who had never been asked to participate in research for them.

But by putting car-related behavioral questions up front with no clear connection to the magazine, they abused my goodwill. By the time I got to the questions that mattered, I was in a bad mood, cursing bad surveys.

3. Asking Rating Questions Without a Neutral or “Don’t Know” Option

This really upset me because they were forcing me to lie. On certain items, I genuinely didn’t have an opinion and could neither agree nor disagree.

For example, they asked whether I agreed or disagreed with the following statement: “Car companies don’t pay enough attention to women when marketing and selling their cars.” I really had no idea.

I hardly watch any advertising these days (thanks to my DVR). I have only purchased one car in my life. Moreover, it was kind of a neutral experience, so I have not formed an opinion about what car companies do for women one way or another.

Unfortunately, there was no way out of the question, so I had to select a random point on the scale. To avoid bad data with these types of questions, a neutral point and a “Don’t Know” option should be included.
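For readers who work with the resulting data, here is a minimal sketch in Python, with made-up numbers rather than data from this survey, of what forced answers do to a rating average: respondents with no opinion pick a random scale point, and that noise gets averaged in with the real opinions.

    # Minimal sketch with hypothetical numbers: what forcing an answer does to
    # a rating average when some respondents have no opinion at all.
    import random

    random.seed(42)

    SCALE = range(1, 6)               # 5-point agree/disagree scale
    opinions = [4, 4, 5, 3, 4, 5, 4]  # respondents with a genuine opinion
    no_opinion_count = 5              # respondents like me, with no opinion

    # Forced choice: the no-opinion group answers at random, adding pure noise.
    forced = opinions + [random.choice(SCALE) for _ in range(no_opinion_count)]

    # With a "Don't Know" option: the no-opinion group opts out and is excluded.
    with_dont_know = opinions

    print(f"Forced-choice mean:  {sum(forced) / len(forced):.2f}")
    print(f"With 'Don't Know':   {sum(with_dont_know) / len(with_dont_know):.2f}")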

4. Ignoring the Recency of Events

At the beginning of the survey, they asked when I had last purchased or leased a car. For me, that was more than 5 years ago. Then they proceeded to ask what features influenced my decision and what was more important to me versus my significant other. I understand what they were trying to get at.

However, they should have filtered these questions out. They were meant for people who had purchased a car more recently (in the last month or the last 6 months) and had fresher memories of their purchase decision process. I can’t really remember much about a purchase that occurred more than 5 years ago. A simple screening rule, sketched below, would have routed me past them.
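Here is a minimal sketch of that kind of skip logic in Python. The question wording and the 6-month cutoff are my own assumptions, not the survey’s: the point is simply that the recency question should screen respondents before the detailed follow-ups appear.

    def route(months_since_purchase):
        # Hypothetical skip logic: the recency question screens respondents
        # before the detailed purchase-decision questions are shown.
        RECENT_CUTOFF_MONTHS = 6  # assumed cutoff; pick whatever fits the study
        questions = ["When did you last purchase or lease a car?"]
        if months_since_purchase <= RECENT_CUTOFF_MONTHS:
            questions += [
                "Which features influenced your decision?",
                "Which features mattered more to you than to your significant other?",
            ]
        return questions

    print(route(2))   # recent buyer: gets the follow-up questions
    print(route(60))  # bought 5 years ago: follow-ups are skipped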

5. Using Rating Scales Without Labels in Drop-Down Menus

Rating questions in a drop-down menu format require a lot of work from respondents and produce more measurement error. Pulling down the menu and finding the right scale point takes extra time and clicks. In this case, the end points of the scale (1 to 10) were not labeled, so it was easy to reverse their meaning, particularly when evaluating a long list of items.

Halfway through the list, I suddenly wasn’t sure whether 1 or 10 was the positive end of the scale, and I had to scroll up to check the instructions. Even with fully labeled scales, some respondents reverse the scale because they don’t pay much attention to the labels, so I suspect many did so here.

6. Wrong Question Format

Whoever created this survey was either enamored with rating scales in grid format or didn’t know when it is appropriate to use them. The question below is an example of when a rating scale is not appropriate. From the question, I deduced that they wanted to know in which format, and how frequently, I was interested in receiving car-related articles.

The problem is that with a rating scale I could select the same answer for all the options, leaving the magazine unable to decide which one to go with.

From my perspective as a reader, these are mutually exclusive options: I would like to see either a monthly column, an occasional article, or an occasional supplement. The most appropriate format for this question would have been single choice (see the sketch below).

 

[Image: wrong question format]
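A minimal sketch in Python, with invented responses rather than the survey’s real data, of why the grid can’t settle this decision while a single-choice question can:

    from collections import Counter

    formats = ["monthly column", "occasional article", "occasional supplement"]

    # Grid format: each respondent rates every format, so ties are possible.
    grid_responses = [
        {"monthly column": 5, "occasional article": 5, "occasional supplement": 5},
        {"monthly column": 4, "occasional article": 4, "occasional supplement": 4},
    ]
    averages = {
        f: sum(r[f] for r in grid_responses) / len(grid_responses) for f in formats
    }
    print("Grid averages:", averages)  # all tied: no basis for a decision

    # Single-choice format: each respondent must pick exactly one option.
    choices = ["monthly column", "occasional article", "monthly column"]
    print("Single-choice counts:", Counter(choices))  # a clear winner emerges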

 

7. No Progress Bar

This survey had none. A progress bar sets expectations about how much time respondents will spend on a survey, and including one is a sign of respect for their time.

8. No Contact Information

My survey crashed, and there was no email address or phone number for reporting the issue. Having contact information is important not only to build trust with respondents but also to catch any problems that occur while the survey is in the field.

I hope this example of bad surveys helps you create surveys that don’t discourage respondents, hurt their perception of your brand, or yield bad data.