Is It Right to Include a Neutral Point in Rating Questions?


Should we include a neutral point in rating questions? Clients often ask this question when we design surveys that include rating questions (e.g., Very Satisfied, Satisfied, Neither Satisfied nor Dissatisfied, Dissatisfied, Very Dissatisfied).

 

A lot of research has been conducted in this realm, particularly by psychologists concerned with scale development, yet we are still looking for a definitive answer. Some studies find support for excluding the neutral point, while others find support for including it, depending on the subject, audience, and type of question. Hence, the debate continues.

 

Against the Neutral Point in Rating Questions

 

Those against a neutral point argue that by including it, we give respondents an easy way out to avoid taking a position on a particular issue. There is also an argument about wasting research dollars: data from the neutral point tend to be discarded, either because researchers don’t find much value in them or because they fear the neutral responses would distort the results.

 

This camp advocates avoiding the neutral point and forcing respondents to tell us which side of the issue they are on.

 

In Favor of the Neutral Point in Rating Questions

 

The fact is that as consumers, we make decisions all day long, and many times we find ourselves idling in neutral.

 

A neutral point can reflect any of these scenarios:

 

1. We feel ambivalent about the issue and could go either way
2. We don’t have an opinion about the issue due to lack of knowledge or experience
3. We never developed an opinion about the issue because we find it irrelevant
4. We don’t want to give our real opinion if it is not considered socially desirable
5. We don’t remember a particular experience related to the issue that is being rated

 

By forcing respondents to take a stand when they don’t have a formed opinion about something, we introduce measurement error into the data. In other words, when the scale doesn’t capture the psychological scenario respondents actually find themselves in, we get bad responses.

 

The Need for “Don’t Know”

 

If the goal of the question is to understand different opinions, we should not only use a neutral point but also a “Not sure/Don’t Know/Not Applicable” option. This would allow respondents in scenarios 2 and 3 to provide an answer that is true to their experience.

 

For example, I received a customer satisfaction survey from my mobile phone provider after I made a call to their support desk. The survey had a question asking me to rate the representative who took my call on different aspects.

 

One of the rating items was “Timely Updates: Regular status updates were provided regarding your service request.” I wouldn’t have known how to answer this since the issue I called about didn’t require regular updates. Luckily, they had a “Not applicable” option. If they hadn’t, I would have been forced to lie or toss a coin; one side of the scale would have been as good as the other.

 

An increase in non-responses and survey abandonment can also result from respondents who don’t want to air an opinion they perceive as socially undesirable. If we give them the “Not sure/Don’t Know/Not Applicable” option, they are more likely to use it than the neutral point. This is preferable because we can exclude those answers from the analysis of that particular question without losing the respondents’ information on the other questions.
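
To make the analysis point concrete, here is a minimal sketch in Python, assuming the responses sit in a pandas DataFrame; the column names and answer labels are hypothetical, not taken from any particular survey platform. “Don’t know” answers are recoded as missing for the affected question only, so each respondent’s answers to the other questions are preserved.

```python
import numpy as np
import pandas as pd

# Hypothetical survey data: one row per respondent, one column per rating question.
df = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4],
    "q1_timely_updates": ["Satisfied", "Don't know", "Very Dissatisfied",
                          "Neither Satisfied nor Dissatisfied"],
    "q2_courtesy": ["Very Satisfied", "Satisfied", "Satisfied", "Don't know"],
})

rating_cols = ["q1_timely_updates", "q2_courtesy"]

# Recode "Don't know" as missing per question instead of dropping whole respondents,
# so the answers those respondents gave to the other questions are kept.
df[rating_cols] = df[rating_cols].replace("Don't know", np.nan)

# Question-level summaries then exclude the missing values automatically.
print(df["q1_timely_updates"].value_counts(dropna=True))
```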

 

However, an even better alternative is to provide a “Prefer not to answer” option if the question touches on particularly sensitive issues.

 

How to Avoid Leading Respondents to the Neutral Point

 

The best antidote to respondents gravitating toward the neutral point is to make sure we show the questions only to those who can really answer them.

 

With the help of skip logic, we can design surveys that filter out respondents with no experience, knowledge, or interest in the subject.

 

In my mobile phone example, they could have asked me first whether my request needed regular updates. If so, they could then have asked me to rate my satisfaction with those updates.
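
As a rough illustration only, here is a minimal sketch of that skip logic in Python, assuming a hypothetical ask() callback that displays a question and returns the chosen option; none of the question wording or option labels come from the actual survey.

```python
def rate_timely_updates(ask):
    """Minimal skip-logic sketch: show the rating question only to
    respondents for whom it is relevant."""
    needed_updates = ask(
        "Did your service request require regular status updates?",
        options=["Yes", "No"],
    )
    if needed_updates == "Yes":
        return ask(
            "How satisfied were you with the timeliness of those updates?",
            options=["Very Satisfied", "Satisfied",
                     "Neither Satisfied nor Dissatisfied",
                     "Dissatisfied", "Very Dissatisfied"],
        )
    # Respondents whose request needed no updates skip the rating entirely,
    # so they are never forced to guess or pick a misleading answer.
    return None


# Example run with a stubbed ask() that always picks the first option.
answer = rate_timely_updates(lambda question, options: options[0])
print(answer)  # -> "Very Satisfied" in this stubbed run
```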

 

Most likely, the researcher who designed this survey was trying to keep it short. Nonetheless, I still could have introduced measurement error: I almost missed the “Not Applicable” option at the end of the scale.

 

You may have guessed by now which camp I am in. Survey questions should be as close as possible to the way respondents would naturally answer them in real life. Sometimes we need to get there in several steps by filtering out those who can’t answer. At other times, we just have to give them the option to be neutral.

 

Originally published on 2/4/2010. Revised on 5/6/2019.

 


Comments

blakus Posted: March 10, 2010

Do you have copy writer for so good articles? If so please give me contacts, because this really rocks! 🙂

Michaela Mora Posted: March 11, 2010

Thank you for the kind words. I write these articles myself.

Paula Hill Posted: August 18, 2010

Excellent article. Great food for thought.

Samar Masood Posted: February 14, 2011

A very realistic approach towards a common research dilemma & good suggestions

Aneta Posted: June 10, 2014

This was really good! I designed and administered a questionnaire for my organization. I included the ‘neutral’ option, and was flooded by neutral responses. I was sure that this would significantly skew the data and render the findings and the whole process invalid. Imagine finding this piece on the web! I am now able to use my neutrals in a positive way.

Michaela Mora Posted: June 10, 2014

Hi Aneta,
I’m glad you found this post useful!
Michaela
