How To Improve Online Survey Response Rates

Summary: Response rates are based on the number of respondents who attempt to participate in a survey, even if they are disqualified after screening questions.

4 minutes to read. By Michaela Mora on June 9, 2010
Topics: Market Research, Survey Design


I recently got an inquiry from someone asking what response rate he could expect from a particular online survey tool. Fortunately for any online survey tool, response rates to online surveys don’t depend on the survey tool you use.

First, let’s distinguish between response rates, incidence rates, completion rates, and non-response. They are related but not the same, and some clients use these concepts interchangeably, which leads to confusion in sample size and cost estimations.

Response rates are usually calculated based on the number of respondents who attempt to participate in a survey, even if they are disqualified after they have been screened with certain questions. If we send a survey invitation to a sample of 100 people and only 5 attempt to take the survey, then the response rate is 5%. Response rates have been used for years as indicators of data accuracy; however, recent research has indicated that lower response rates don’t necessarily mean lower-quality data.
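The calculation from the example above can be sketched in a few lines of Python (the function name and counts here are illustrative, not part of any survey tool):

```python
def response_rate(attempted: int, invited: int) -> float:
    """Percentage of invited people who attempted the survey,
    including those later disqualified by screening questions."""
    return attempted / invited * 100

# From the example: 100 invitations sent, 5 people attempt the survey
print(response_rate(5, 100))  # 5.0 -> a 5% response rate
```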

Response rates are affected by:

  • Survey topic relevancy: People will not dedicate time to participate in surveys that are perceived as irrelevant.
  • Incentives: Sometimes an incentive is needed to motivate respondents, but careful consideration needs to be given to this. Incentives are a tricky subject since we may attract only certain types of respondents and insert selection bias in the sample.
  • Survey invitation: Survey invitations should be personalized and provide compelling reasons to participate in the survey. A poorly written invitation can drive respondents away or fail to catch their attention. Use appealing subject lines and make the invitation short, clear, and persuasive.
  • Type of relationship with target survey audience: Depending on the level of relationship respondents have with the brand, organization, or company sponsoring the project, they will be more or less motivated to participate. For example, customer surveys tend to have higher response rates than surveys targeted at non-customers.
  • Privacy protection concerns: People are not comfortable sharing information if they don’t know how it is going to be used. Communication about privacy policy and data security should be clear.
  • Reminders: These may be needed to reach busy people or those not available within a certain time frame when the first invitation is sent out.

Incidence rates are based on the number of respondents who qualify for a study based on certain screening criteria. For example, if we need a sample of females from the general population without any other requirements, the incidence rate is expected to be about 50% since roughly half of the population is female. Incidence rates will vary depending on who we are targeting with the study.

Response rates are often used to indicate the number of completed surveys, but I think it is worth making the distinction between response rates and completion rates since this has methodological and cost implications (e.g., when we need to purchase sample from online panel providers).

Completion rates indicate how many people who qualified for the study completed the survey. If they enter the survey, answer some questions, and then abandon the survey, they will be counted as incompletes and are usually excluded from the final data. The number of incompletes increases when:

  1. The survey is too long
  2. Survey flow is confusing
  3. There are skip logic errors that show irrelevant questions to respondents who can’t answer them
  4. Questions are poorly worded and instructions are unclear
  5. Questions are complex and require a lot of mental effort from the respondent
  6. The respondent is not rewarded appropriately for the survey length and amount of effort required
  7. The topic and survey format can’t hold the respondent’s interest
  8. Privacy protection is unclear or lacking
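To keep the three rates straight, here is a minimal sketch with hypothetical counts (the numbers and function names are illustrative assumptions, not data from a real study):

```python
def incidence_rate(qualified: int, screened: int) -> float:
    """Percentage of screened respondents who meet the study's criteria."""
    return qualified / screened * 100

def completion_rate(completed: int, qualified: int) -> float:
    """Percentage of qualified respondents who finished the survey;
    the rest are counted as incompletes and usually excluded."""
    return completed / qualified * 100

# Hypothetical counts: 200 respondents screened, 100 qualify
# (e.g., females in the general population), and 80 finish the survey
print(incidence_rate(100, 200))   # 50.0 -> 50% incidence
print(completion_rate(80, 100))   # 80.0 -> 80% completion
```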

Non-response occurs when we fail to get a response from part of the total sample, either because respondents refuse to participate in the survey or because they start but never complete it. If non-responses follow a pattern that systematically excludes a particular segment of the sample, they introduce what is called selection bias, which prevents us from getting a representative sample of opinions from the population of interest. Nonrespondents are often different from respondents, so their absence from the final sample can make it difficult to generalize the results to the overall target population.

In short, regardless of the survey tool you use, you can improve response rates and completion rates if you avoid most of the problems mentioned above.