Whether to use dichotomous grid questions (dual-column grid format) or multi-response questions has been debated and researched for some time, as survey researchers try to combat respondent acquiescence bias: the tendency to agree with all questions or statements in a survey, regardless of their content.
Multi-response questions, in which respondents are asked to select all the options that apply, are a common survey format that tends to elicit fewer selections than dichotomous grid questions; when forced to choose between a positive and a negative answer (e.g. Yes/No, Agree/Disagree), respondents select more positive answers (e.g. Yes, Agree) (Smyth, Dillman, Christian & Stern, 2006; Thomas & Klein, 2006). In 2015, Callegaro, Murakami, Tempman & Henderson suggested that the larger number of positive responses in the dichotomous grid format could be explained by acquiescence bias.
However, research conducted by GfK and the University of Nebraska–Lincoln (Thomas, Barlas, Buttermore & Smyth, 2017), presented in a poster (“Acquiescence Bias in Yes-No Grids? The Survey Says… No.”) at the 2017 AAPOR conference, was unable to confirm that acquiescence bias was the key driver of the differences between the dichotomous grid and multi-response formats. Their hypothesis was that the differences were more likely due to how the items stand out to, or resonate with, respondents (the salience hypothesis).
Initial research by the same group, presented at the 2016 AAPOR conference, compared dichotomous “Yes/No” and “Describe/Do Not Describe” grids to multi-response questions and found support for the salience hypothesis, but critics objected that these grids were still prone to explicit or implicit acquiescence bias.
In their most recent research, the researchers expanded the experiment to include construct-specific dichotomous grid questions, with column labels tailored to the specific question asked (e.g. Like, Do Not Like), so that the response options involve no explicit or implicit agreement.
This time, the research showed that the average number of positive responses was similar across the grid scales, and higher than that for multi-response questions.
The interesting result was that lower-salience items also received more positive answers in the dichotomous grids than in the multi-response format. This is most likely because respondents are forced to consider each item before selecting an answer, drawing their attention to items that might be overlooked in a list of select-all-that-apply options.
To validate their results, the researchers asked a follow-up question about how frequently respondents engaged in the behaviors presented in the multi-response and dichotomous grids, and examined how well responses from each question format predicted the reported frequency. If the responses to the dichotomous grids were a result of acquiescence bias, low correlations between the grid answers and the frequency measure would be expected. However, the analysis found comparable correlations between the frequency question and all question formats tested, supporting the salience hypothesis.
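The logic of this validation check can be sketched with a few lines of Python. The data below are entirely invented for illustration (the original study's data are not reproduced here): if grid endorsements were mere acquiescence, they should correlate weakly with reported behavior frequency, whereas comparable correlations across formats point toward salience.

```python
import numpy as np

# Hypothetical data for 8 respondents (all values invented for illustration).
# 1 = endorsed the item, 0 = did not.
grid_yes = np.array([1, 1, 0, 1, 0, 1, 1, 0])      # dichotomous Yes/No grid
multi_select = np.array([1, 0, 0, 1, 0, 1, 1, 0])  # select-all-that-apply
frequency = np.array([5, 4, 1, 6, 2, 5, 4, 1])     # reported times per month

def point_biserial(binary, continuous):
    """Pearson correlation between a binary indicator and a numeric measure."""
    return float(np.corrcoef(binary, continuous)[0, 1])

r_grid = point_biserial(grid_yes, frequency)
r_multi = point_biserial(multi_select, frequency)

# Under the acquiescence hypothesis, r_grid would be notably lower than
# r_multi; comparable values are consistent with the salience hypothesis.
print(f"grid r = {r_grid:.2f}, multi r = {r_multi:.2f}")
```

With this toy data both correlations come out high and similar, which is the pattern the researchers reported for their real data.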
Based on these results, the researchers recommend using dichotomous grids, particularly in screeners, as they elicit greater item consideration.
Although these results are encouraging and offer a promising practical way to improve response quality, grids have problems of their own. They take longer to answer, can be tiring, and are susceptible to straightlining. As the authors acknowledge, the grid format can also yield too many false positives, so subsequent quality controls are needed to filter out bad responses.
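One such quality control is a straightlining check. A minimal sketch, using a made-up response matrix, flags respondents who gave the identical answer to every row of a grid:

```python
import numpy as np

# Hypothetical grid responses: rows = respondents, columns = grid items,
# 1 = Yes, 0 = No (all values invented for illustration).
responses = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 1, 1, 1, 1, 1],  # straightliner: "Yes" to every item
    [0, 1, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],  # straightliner: "No" to every item
])

def flag_straightliners(grid: np.ndarray) -> np.ndarray:
    """Return a boolean mask of respondents whose answer is identical
    on every item (compare each row against its own first column)."""
    return (grid == grid[:, [0]]).all(axis=1)

mask = flag_straightliners(responses)
print(mask)  # [False  True False  True]
```

In practice one would combine such a flag with other signals (e.g. completion time) rather than discard respondents on this basis alone, since a uniform answer can occasionally be genuine.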
In short, dichotomous grids seem better at encouraging thoughtful answers about items that might be ignored in a long multi-response list, but we should use them with caution and add controls to validate the answers.