By Michaela Mora on May 6, 2019. Topics: Analysis Techniques, Market Research, Survey Design
Should we include a neutral point in rating questions? Clients often ask this question when we design surveys that include rating questions (e.g., Very Satisfied, Satisfied, Neither Satisfied nor Dissatisfied, Dissatisfied, Very Dissatisfied).
A lot of research has been conducted in this realm, particularly by psychologists concerned with scale development, yet we are still looking for a definitive answer. Some studies support excluding the neutral point while others support including it, depending on the subject, audience, and type of question. Hence, the debate continues.
Those against a neutral point argue that by including it, we give respondents an easy way out to avoid taking a position on a particular issue. There is also an argument about wasting research dollars. Data from the neutral point tend to be discarded. Some don’t find much value in it or are afraid it would distort the results.
This camp advocates for avoiding the use of a neutral point and forcing respondents to tell us on which side of the issue they are.
The fact is that as consumers, we make decisions all day long and many times we find ourselves idling in neutral.
A neutral point can reflect any of these scenarios:
1. We feel ambivalent about the issue and could go either way
2. We don't have an opinion about the issue due to a lack of knowledge or experience
3. We never developed an opinion about the issue because we find it irrelevant
4. We don't want to give our real opinion if it is not considered socially desirable
5. We don't remember a particular experience related to the issue that is being rated
By forcing respondents to take a stand when they don’t have a formed opinion about something, we introduce measurement error in the data. In other words, since we are not capturing a plausible psychological scenario in which respondents may find themselves, we get bad responses.
If the goal of the question is to understand different opinions, we should not only use a neutral point but also a “Not sure/Don’t Know/Not Applicable” option. This would allow respondents in scenarios 2 and 3 to provide an answer that is true to their experience.
For example, I received a customer satisfaction survey from my mobile phone provider after I called their support desk. The survey asked me to rate the representative who took my call on different aspects.
One of the rating items was “Timely Updates: Regular status updates were provided regarding your service request.” I wouldn’t know how to answer this since the issue I called for didn’t require regular updates. Luckily, they had a “Not applicable” option. If not, I would have been forced to lie or toss a coin. One side of the scale would have been as good as the other.
An increase in non-response and survey abandonment can also result from respondents who don’t want to air their opinion because of perceptions of low social desirability. If we give them the “Not sure/Don’t Know/Not Applicable” option, they are more likely to use it than the neutral point. This would be preferable since we could exclude the data from the analysis for a particular question. Above all, we would not lose the information on other questions.
However, an even better alternative is to provide a "Prefer not to answer" option if the question touches on particularly sensitive issues.
The best antidote against respondents gravitating toward the neutral point is to make sure we show questions only to those who can really answer them.
Finally, with the help of skip logic, we can design surveys that filter out respondents with no experience, knowledge or interest in the subject.
In my mobile phone example, they could have asked me first if my request needed regular updates. If that was the case, then they could ask me to rate my satisfaction with it.
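The branching described above can be sketched in a few lines of code. This is a minimal illustration, not the logic of any real survey platform; the function name, the screener question, and the "Not applicable" handling are all hypothetical, chosen to mirror the mobile phone example.

```python
# Hypothetical sketch of skip logic for the "Timely Updates" rating item.
# Respondents whose request did not require regular updates are never
# shown the rating scale; their answer is recorded as "Not applicable"
# instead of forcing them to pick a side (or the neutral point) at random.

NEUTRAL_SCALE = [
    "Very satisfied",
    "Satisfied",
    "Neither satisfied nor dissatisfied",
    "Dissatisfied",
    "Very dissatisfied",
]

def timely_updates_response(needed_updates, rating=None):
    """Route the respondent based on the screener question
    'Did your request require regular status updates?'"""
    if not needed_updates:
        # Skip the rating item entirely for this respondent.
        return "Not applicable"
    if rating not in NEUTRAL_SCALE:
        raise ValueError("Unexpected rating: %r" % (rating,))
    return rating

# A respondent whose issue required no updates never sees the scale:
print(timely_updates_response(needed_updates=False))  # Not applicable

# One who did is asked to rate the experience as usual:
print(timely_updates_response(needed_updates=True, rating="Satisfied"))
```

Filtering first and rating second keeps the "Not applicable" answers out of the satisfaction data while still recording that the item did not apply.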
Most likely, the researcher who designed this survey was trying to make the survey shorter. Nonetheless, I still could have introduced measurement error. I almost missed the “Not Applicable” option at the end of the scale.
You may have guessed by now in which camp I am. Survey questions should be as close as possible to the way respondents would naturally answer them in real life. Sometimes we need to get there in several steps by filtering out those who can’t answer. At other times, we just have to give them the option to be neutral.
Originally published on 2/4/2010. Revised on 5/6/2019