Why Customer Satisfaction Surveys and Text Analytics Belong Together



Customer satisfaction surveys are pretty common these days. Every time I make a purchase on Amazon.com or Bestbuy.com I get a customer satisfaction survey. Every time I buy at Kohl’s or Walmart, they hand me a survey invitation printed on the receipt promising a chance to win something if I fill it out. Nowadays many retailers have some form of enterprise feedback system aimed at assessing the customer experience during specific transactions. That’s terrific.

 

Common questions in satisfaction surveys include:

  1. Overall satisfaction
  2. Likelihood to recommend
  3. Likelihood to buy again from the retailer

 

They are often analyzed as a composite (the 3M model) because they seem to capture different dimensions of satisfaction. Asking only about overall satisfaction doesn’t provide a complete picture of how the customer feels and is likely to act. Customers may not be completely satisfied with a company, but they may keep buying its products out of habit (inertia) or because there is no other alternative. It is often the case that we do have other alternatives, but we keep patronizing a brand because, although we may not be happy with certain aspects, we still like others.
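
To make the composite idea concrete, here is a minimal sketch of one way the three ratings might be combined. Treating them as an unweighted mean on a shared 1-to-5 scale is an assumption made for illustration; actual 3M implementations may weight the questions differently.

```python
# Minimal sketch: a composite "3M" score as the unweighted mean of the
# three ratings, all assumed to be on the same 1-5 scale. Equal weighting
# is an assumption made for illustration, not a prescribed formula.
def composite_3m(satisfaction: int, recommend: int, repurchase: int) -> float:
    """Average the three satisfaction metrics into one composite score."""
    return (satisfaction + recommend + repurchase) / 3

# A customer who is lukewarm overall but loyal in practice:
print(composite_3m(satisfaction=3, recommend=5, repurchase=4))  # 4.0
```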

 

Likelihood to recommend, which is sometimes used alone to predict satisfaction or purchase behavior, doesn’t tell the whole story either. Recommending a product or a brand doesn’t always translate into purchase behavior. I may recommend a product to someone else because I find it fitting for that person, but not necessarily a good fit for me. On the other hand, a consumer might recommend a product for a particular use even if she is not entirely satisfied with it.

 

Finally, likelihood to buy again is not a very reliable metric. There are too many factors at play when it is time to make a purchase. Our financial situation, the competition, or our own irrational consumer selves can cancel any intentions we may have claimed to have when answering this question in a survey.

 

“OVERALL” SATISFACTION METRICS DON’T TELL THE WHOLE TRUTH

 

Over the years I have been buying laptops from a particular brand. Overall, they have worked pretty well. On the latest model I bought, though, I really don’t like what they have done to the buttons on the touchpad. They feel too hard, and the finger-guided pointer has a life of its own. The fact is that I hate this feature, so if you ask me about my satisfaction on a scale from 1 (dissatisfied) to 5 (satisfied), I give my laptop a 3.

 

On the other hand, if you ask how likely it is that I would buy another laptop from this brand or recommend it to others, I would say a 4 (somewhat likely) or a 5 (very likely). Why? I’m satisfied with other attributes that are more important to me. I was able to customize it to all my requirements at a reasonable price, and it does the job. I mostly use a mouse, so the hard click buttons only bother me when I travel with my laptop. However, since you asked about my “overall” satisfaction and nudged me to take everything about my laptop into account, I feel pushed to the middle of the scale, trying to balance out the good and the bad.

 

I feel the same way when I get an overall satisfaction question after a call to any brand’s customer service in which the actual experience during the call was good, but my problem was not solved for whatever reason. The call gets a 5, but the brand may get a 2. If my problem doesn’t get solved, no amount of courtesy from a customer service rep will attenuate my frustration with the unresolved issue.

 

What I have seen over the years doing customer satisfaction research is that asking these questions without context produces answers that are too sterile and disjointed to be meaningful. This is why they often fail to predict how customers will behave.

 

We need to dig deeper and ask why a customer gives a particular answer. This is especially true if we use rating questions, because scale points mean different things to different people. See the example below about a product review. If we average the number of stars it gets, the overall rating is very high, as most reviewers give it 4 stars and one gives it 5. However, if you actually read what the reviewers meant when rating the product, you see that 4 stars are associated with both positive and negative reviews.

 

[Image: product rating example showing reviewer comments alongside their star ratings]
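
Since the rating example above is an image, here is a small hypothetical stand-in in code. The reviews and scores below are invented, but they show how the average alone looks uniformly positive while the text reveals that one “4” is really a complaint.

```python
from statistics import mean

# Hypothetical reviews: the same 4-star rating carries opposite meanings.
reviews = [
    (5, "Perfect."),
    (4, "Great product, works exactly as advertised."),
    (4, "Love it, would buy again."),
    (4, "Constantly crashes, but the seller refunded me quickly."),
]

# The average alone looks uniformly positive...
print(mean(score for score, _ in reviews))  # 4.25

# ...but reading the text shows one "4" actually reports a product failure.
for score, text in reviews:
    print(score, text)
```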

 

 

EVALUATING SPECIFIC ATTRIBUTES

 

To get around this problem, customer satisfaction surveys usually include questions asking customers to evaluate the performance of the brand, product, or organization on different aspects (e.g., product performance, customer service, product availability, selection).

 

Companies that go beyond reporting frequencies often run key driver analyses to determine what influences overall satisfaction, or a composite of these three questions (the 3M model). They still have a hard time linking the customer satisfaction survey results to actual sales. Why?

 

The answer is aggregation. When we aggregate the individual answers to these questions, we lose information about what drives the behavior of individual customers, and the prediction error at the individual level increases. In the aggregate, we lose the link between the answers to customer satisfaction questions and individual purchase behavior. We need a model in which the individualized context of an answer is taken into consideration to weight a particular answer up or down.
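
Here is a minimal sketch of the aggregation problem, using made-up data: two customer segments whose satisfaction is driven entirely by opposite attributes. A pooled key driver regression reports both attributes as moderate drivers, hiding the fact that each individual customer is driven completely by just one of them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data: segment A's satisfaction tracks price perception,
# segment B's tracks customer service. Attribute ratings on a 1-5 scale.
price = rng.integers(1, 6, size=n).astype(float)
service = rng.integers(1, 6, size=n).astype(float)
segment_a = np.arange(n) < n // 2

satisfaction = np.where(segment_a, price, service)

# Pooled "key driver" regression over all customers:
X = np.column_stack([np.ones(n), price, service])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(coef[1:])  # both drivers look moderate (~0.5 each)

# Per-segment regressions recover the true individual-level drivers:
for mask, name in [(segment_a, "A"), (~segment_a, "B")]:
    c, *_ = np.linalg.lstsq(X[mask], satisfaction[mask], rcond=None)
    print(name, c[1:])  # segment A: price ~1, service ~0; segment B: reversed
```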

 

WHAT CAN WE DO?

 

To be fair, I have seen many customer satisfaction surveys that attempt to capture context using open-ended questions. Realistically, though, it is almost impossible to code the thousands of responses these questions generate in the transactional customer satisfaction surveys many companies send out.

 

There is no tool yet that can capture all the nuances of human language, particularly when there are typos, half-finished sentences, grammar errors, or the sarcasm we often see in answers to open-ended questions. For now, we have to read customers’ answers, code them, and adjust the results based on the expressed sentiment (which is very, very time consuming and costly).
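
As a toy illustration of that “read, code, and adjust by sentiment” step, the sketch below nudges a rating up or down based on keywords in the open-ended comment. The word lists and the half-point adjustment are hypothetical stand-ins; real text analytics tools rely on trained language models rather than hand-picked keywords.

```python
# Toy sketch: adjust a 1-5 rating by the sentiment of the open-ended
# comment. The word lists are hypothetical stand-ins; production tools
# use trained NLP models, not hand-picked keywords.
POSITIVE = {"great", "love", "excellent", "happy", "fast"}
NEGATIVE = {"broken", "hate", "slow", "rude", "unresolved"}

def sentiment_adjusted(rating: int, comment: str) -> float:
    words = set(comment.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    # Nudge the rating toward the expressed sentiment, capped to the scale.
    return max(1.0, min(5.0, rating + 0.5 * score))

print(sentiment_adjusted(4, "Support was rude and my issue is unresolved"))  # 3.0
```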

 

Nonetheless, I’m hopeful. My hope is in the text analytics field. With the explosion of information thanks to social networking, many companies are racing to develop text analytics tools that will make text analysis and coding easier, faster, and more efficient. They just need to hurry up!

Comments

Tom H. C. Anderson Posted: May 25, 2011

You’re right on point, Michaela.

We have looked at the 3Ms for large companies like Starwood Hotels for many years, and text analytics adds so much more. That’s why we’ve been working on OdinText, especially for research.

I’d add that recent research shows that text is superior to ratings, especially online ratings like those on Amazon, once customers start using them to decide what to buy. Text sentiment actually correlates more strongly with sales dollars (it is a stronger driver) than customer ratings on a 5-point scale!

I plan to blog more about this research soon.
