Why Customer Satisfaction Surveys and Text Analytics Belong Together

Summary: Text analytics, as it improves, can help extract extra insights from the open-ended questions in customer satisfaction surveys conducted with large samples.

6 minutes to read. By Michaela Mora on May 25, 2011
Topics: Analysis Techniques, Customer Experience, Survey Design

Customer Satisfaction Surveys and Text Analytics

Combining customer satisfaction surveys and text analytics can be a winning formula. Customer satisfaction surveys are pretty common these days, but they alone can’t capture the whole customer experience.

Every time I make a purchase on Amazon.com or Bestbuy.com, I get a customer satisfaction survey. Every time I buy at Kohl’s or Walmart, they hand me a survey invitation printed on the receipt, promising a chance to win something if I fill out the survey. Nowadays many retailers have some form of enterprise feedback system aimed at assessing the customer experience during specific transactions. That’s terrific.

Common questions in satisfaction surveys include:

  1. Overall satisfaction
  2. Likelihood to recommend
  3. Likelihood to buy again at the retailer

They are often analyzed as a composite (the 3M model) since they seem to capture different dimensions of satisfaction. Asking only about overall satisfaction doesn’t provide a complete picture of how the customer feels and is likely to act.
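For readers who want to see what a composite of these items can look like in practice, here is a minimal sketch in Python. The column names and the simple unweighted averaging are my own illustrative assumptions, not a description of how the 3M model actually weights the items:

```python
import pandas as pd

# Hypothetical respondent-level answers, all on the same 1-5 scale.
# Column names are illustrative, not taken from any specific survey.
responses = pd.DataFrame({
    "overall_satisfaction": [5, 3, 4, 2],
    "likelihood_recommend": [5, 4, 4, 1],
    "likelihood_buy_again": [4, 4, 5, 2],
})

# One simple composite: the unweighted mean of the three items per respondent.
responses["composite"] = responses.mean(axis=1)
print(responses)
```

A weighted composite (for example, with weights derived from a driver analysis) is also common; the unweighted mean is just the simplest starting point.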

Customers may not be completely satisfied with a company, but they may keep buying its products out of habit (inertia) or because there is no other alternative. Often we do have other alternatives, yet we keep patronizing a brand because, although we may not be happy with certain aspects, we still like others.

The likelihood to recommend, which is sometimes used alone to predict satisfaction or purchase behavior, doesn’t tell the whole story either. Recommending a product or a brand doesn’t always translate into purchase behavior. I may recommend a product to someone else because I find it fitting for that person, but not necessarily a good fit for me. On the other hand, a consumer may recommend a product for a particular use even if she is not entirely satisfied with it.

Finally, the likelihood to buy again is not a very reliable metric. There are too many factors at play when it is time to make a purchase. Our financial situation, competition, or our own irrational consumer-self can cancel any intentions we may have claimed to have when answering this question in a survey.

Overall Satisfaction Metrics Don’t Tell the Whole Story

Over the years I have been buying laptops from a particular brand. Overall they have worked pretty well. In the latest model I bought, I really don’t like what they have done to the buttons on the touchpad. They feel too hard and the finger-guided pointer has a life of its own. The fact is that I hate this feature, so if you ask me about my satisfaction on a scale from 1 (dissatisfied) to 5 (satisfied), I give my laptop a 3.

On the other hand, if you ask how likely it is that I would buy another laptop from this brand or recommend it to others, I would say 4 (somewhat likely) or 5 (very likely).

Why? I’m satisfied with other attributes that are more important to me. I was able to customize it to all my requirements at a reasonable price, and it does the job. I mostly use a mouse, so the hard click buttons only bother me when I travel with my laptop.

However, since you asked about my “overall” satisfaction and nudged me to take everything about my laptop into account, I feel pushed to the middle of the scale, trying to balance out the good and the bad.

I feel the same way when I get an overall satisfaction question after a call to the customer service of any brand in which the actual experience during the call was good, but my problem was not solved for whatever reason. The call gets a 5, but the brand may get a 2. If my problem doesn’t get solved, no amount of courtesy from a customer service rep will attenuate my feeling of frustration with the unresolved issue.

What I have seen over the years doing customer satisfaction research is that asking these questions in customer satisfaction surveys without a context produces answers that are too sterile and disjointed to be meaningful. This is why they often fail to predict how customers will behave.

We need to dig deeper and ask why a customer gives a particular answer. This is especially true if we use rating questions because scale points mean different things to different people.

See the example below of reviews for a product. If we average the number of stars it gets, the overall rating is very high, as most reviewers give it 4 stars and one gives it 5 stars.

However, if you actually read what the reviewers meant when rating the product, you see that 4 stars are associated with both positive and negative reviews.

We need to analyze the actual reviews to understand the meaning behind the ratings. However, imagine the amount of time it would take to read the thousands of comments generated daily in product and service evaluations. This is precisely the issue many companies face when they do customer satisfaction surveys.
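To make that concrete, here is a toy illustration in Python (the ratings and comments are invented, not taken from the review example below): the average looks strong even though one of the 4-star reviews is actually negative.

```python
# Toy example: the same star rating can carry very different meanings.
reviews = [
    (5, "Works perfectly, would buy again."),
    (4, "Great value, only minor quibbles."),
    (4, "Does the job, nothing special."),
    (4, "Stopped working after a week; returning it."),  # negative despite 4 stars
]

average_stars = sum(stars for stars, _ in reviews) / len(reviews)
print(f"Average rating: {average_stars:.2f} stars")  # 4.25 -- looks great

# Only reading the text reveals the dissatisfied customer hiding behind a 4.
for stars, text in reviews:
    print(stars, "-", text)
```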

 

Product Rating

 

Evaluating Specific Attributes

To get around this problem, customer satisfaction surveys usually include questions asking customers to evaluate the performance of the brand, product, or organization on different aspects (e.g., product performance, customer service, product availability, selection, etc.).

Companies that go beyond reporting frequencies often try to run key driver analyses to determine what influences overall satisfaction or a composite of these three questions (the 3M model). Yet they still have a hard time linking customer satisfaction survey results to actual sales. Why?
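As a rough illustration of what a key driver analysis involves, here is a generic least-squares sketch with made-up ratings (not a description of any particular vendor’s method): attribute ratings are regressed on overall satisfaction, and the coefficients are read as relative driver weights.

```python
import numpy as np

# Hypothetical respondent-level attribute ratings (1-5 scale) and overall satisfaction.
# Columns: product performance, customer service, product availability.
X = np.array([
    [5, 4, 3],
    [3, 2, 4],
    [4, 5, 5],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 3],
])
overall = np.array([5, 3, 5, 2, 5, 4])

# Ordinary least squares fit; coefficients are read as relative "driver" weights.
X_with_intercept = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(X_with_intercept, overall, rcond=None)

for name, beta in zip(["intercept", "performance", "service", "availability"], coefs):
    print(f"{name}: {beta:.2f}")
```

In practice, researchers often use techniques that cope better with the strong correlations among attribute ratings (for example, relative weights or Shapley-value decompositions), but the basic idea is the same.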

The answer is aggregation. When we aggregate the individual answers to these questions, we lose information about what drives the behavior of individual customers, and the prediction error at the individual level increases.

In the aggregate, we lose the link between the answers to customer satisfaction questions and individual purchase behavior. We need a model in which the individualized context of an answer is taken into consideration to weight a particular answer up or down.
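A tiny numeric example of what aggregation hides (the scores are invented): two customers with opposite experiences produce a comfortable-looking average, and a prediction based on that average misses both of them badly.

```python
# Two customers with opposite experiences produce a bland aggregate.
individual_scores = [5, 1]   # one delighted customer, one about to defect
aggregate = sum(individual_scores) / len(individual_scores)

# Predicting every customer with the aggregate looks fine "on average"
# but misses each individual by 2 full scale points.
errors = [abs(score - aggregate) for score in individual_scores]
print(aggregate, errors)  # 3.0 [2.0, 2.0]
```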

What Can We Do?

To be fair, I have seen many customer satisfaction surveys that attempt to capture context using open-ended questions. Realistically, though, it is almost impossible to hand-code the thousands of responses many companies get to the transactional customer satisfaction surveys they send out. This is where combining customer satisfaction surveys and text analytics could provide a deeper understanding of the customer experience.

Unfortunately, there are no tools yet that can capture all the nuances of human language, particularly when there are typos, half-finished sentences, grammar errors, or the sarcasm we often see in answers to open-ended questions.

For now, we have to read customers’ answers, code them, and adjust the results based on the expressed sentiment (which is very, very time-consuming and costly).
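To give a sense of what even a crude automated version of that coding step might look like, here is a minimal keyword-based sketch. The themes, keyword lists, and example answer are all invented; real coding frames are much richer and are usually built and validated by human coders.

```python
# Minimal keyword-based coding sketch for open-ended answers.
# The themes, keyword lists, and example answer are purely illustrative.
THEMES = {
    "customer_service": ["rep", "agent", "call", "rude", "helpful"],
    "product_quality": ["broke", "defective", "works great", "sturdy"],
    "price": ["expensive", "cheap", "overpriced", "good value"],
}

NEGATIVE_CUES = ["not", "never", "rude", "broke", "defective", "overpriced"]

def code_response(text: str) -> dict:
    """Assign themes and a crude sentiment flag to one open-ended answer."""
    lowered = text.lower()
    themes = [t for t, words in THEMES.items() if any(w in lowered for w in words)]
    sentiment = "negative" if any(cue in lowered for cue in NEGATIVE_CUES) else "positive"
    return {"themes": themes, "sentiment": sentiment}

print(code_response("The rep was helpful, but the unit broke after a week."))
# -> {'themes': ['customer_service', 'product_quality'], 'sentiment': 'negative'}
```

A sketch like this falls apart quickly on typos, negation, and sarcasm, which is exactly the limitation described above.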

Nonetheless, I’m hopeful. With the explosion of information thanks to social networking, many companies are racing to develop text analytics tools that would make text analysis and coding easier, faster, and more efficient. Again, customer satisfaction surveys and text analytics are a winning combination!