Writing short surveys is an uphill battle with many clients. Whenever word gets out that a survey will be conducted, everybody close to the subject, be it the product team, senior management, or operations, wants to add questions. The thinking is, "since we are doing a survey, let's get as much as possible out of it."
Unfortunately, the only thing you get out of very long surveys is bad-quality data. Why?
NON-RESPONSE & ABANDONMENT
As survey length increases, so do non-response bias and the abandonment rate. Simply put, respondents won't spend long answering questions. Many won't even start if they know the survey's length (it is a best practice to announce the length of the survey in the invitation).
For those who think they can get away with it by not announcing how long the survey will be, think again. Respondents can figure out the length from the progress bar, and even when no progress bar is shown, they will drop out mid-survey if they perceive it as too long. High abandonment and non-response rates hurt sample representativeness.
In an experiment conducted by Galesic and Bosnjak (2003) to test this point, 3,472 respondents were divided into three groups, each assigned a version of an online survey with a different announced length (10, 20, or 30 minutes). The chart above shows how the number of respondents who started and completed the survey declined as the survey length increased.
SATISFICING

Respondents who are willing to endure a long survey are at high risk of experiencing a heavy response burden and becoming "satisficers."
Satisficing occurs when respondents select answer options without giving them much thought. They settle for the least mental effort that satisfies the question's requirements rather than working to find the answers that best represent their opinions. Respondents may start selecting the first choice in every question, straight-lining grid questions (selecting the same answer across all rows), or simply picking choices at random. This type of behavior renders the data worthless.
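For analysts who want to screen for this behavior, straight-lining is easy to detect: a respondent who gives the same answer to every item in a grid has zero variance across that block. Here is a minimal Python sketch, assuming survey responses in a pandas DataFrame; the column names and data are hypothetical, purely for illustration:

```python
import pandas as pd

# Hypothetical responses to a 5-item grid question (1-5 scale).
df = pd.DataFrame({
    "q1_a": [4, 3, 5, 2],
    "q1_b": [4, 1, 5, 3],
    "q1_c": [4, 2, 5, 4],
    "q1_d": [4, 5, 5, 2],
    "q1_e": [4, 4, 5, 3],
})

grid_cols = ["q1_a", "q1_b", "q1_c", "q1_d", "q1_e"]

# A straight-liner answers every grid item identically,
# so the standard deviation across the block is zero.
df["straight_liner"] = df[grid_cols].std(axis=1) == 0

print(df)
# Respondents 0 and 2 are flagged; their answers never vary.
```

In practice you would apply this across several grids and treat only consistent flags as evidence of satisficing, since a respondent can legitimately hold the same opinion on every row of a single grid.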
The same experiment by Galesic and Bosnjak also tested the impact of survey length on data quality, measured with a variety of indicators including response times, item response rate, length of answers to open-ended questions, and variability of answers to grid questions.
Of all the indicators, item response rate (defined as the percentage of questions answered out of all questions presented in a block) was the only one that seemed unaffected by survey length; however, it is unclear whether the survey was programmed to force respondents to answer before moving forward. For the other indicators, the results strongly suggest that survey length degrades quality.
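As a rough illustration of the metric, item response rate can be computed per respondent as answered items divided by items presented. A sketch under the same assumptions as above (hypothetical column names, skipped items stored as NaN):

```python
import numpy as np
import pandas as pd

# Hypothetical block of 4 questions; NaN marks a skipped item.
block = pd.DataFrame({
    "q1": [3, 2, np.nan],
    "q2": [1, np.nan, np.nan],
    "q3": [5, 4, 2],
    "q4": [2, np.nan, np.nan],
})

# Item response rate: share of presented questions each respondent answered.
item_response_rate = block.notna().mean(axis=1)

print(item_response_rate)
# Respondent 0 answered 4/4 = 1.00, respondent 1 answered 2/4 = 0.50, ...
```

Note that if the survey software forces an answer before advancing, this metric will sit near 100% regardless of length, which may explain why it appeared unaffected in the experiment.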
There are powerful reasons that push clients and research vendors to launch long surveys: budget, time constraints, and competing agendas from internal groups, among others. However, when surveys start getting too long, clients and research vendors should take a minute to think about the implications. After all, if we end up with bad data, we have wasted the little time and money we started with.