Conjoint analysis results don’t always match the reality as experienced by clients.
Fortunately, we can minimize this problem by having the right information in the experimental design and market simulations.
We also need to correct misconceptions about conjoint analysis, which tend to create expectations beyond what the technique can deliver. For more on this, see the article: Conjoint Analysis Myths.
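For context on what a market simulation does with conjoint output: simulators commonly convert estimated utilities into preference shares with a multinomial-logit share-of-preference rule. Here is a minimal sketch of that rule; the utility numbers are made up purely for illustration, not taken from any real study.

```python
import math

def preference_shares(product_utilities):
    """Multinomial-logit share of preference: share_i = exp(U_i) / sum_j exp(U_j)."""
    exps = [math.exp(u) for u in product_utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical total utilities for three competing product profiles,
# each summed from estimated part-worths (illustrative numbers only).
utilities = [1.2, 0.8, 0.3]
shares = preference_shares(utilities)
print([round(s, 3) for s in shares])  # the three shares sum to 1.0
```

Every factor discussed below can distort the utilities fed into a rule like this, which is why the simulated shares may diverge from real market behavior.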
When Conjoint Analysis Results Don’t Match Reality
Here are some of the variables that affect how well conjoint results match reality:
1. Irrelevant Attributes and Levels
Having relevant attributes and levels seems like a basic consideration. However, issues in this area are often behind the mismatch between conjoint results and reality.
Sometimes we simplify or omit attributes when:
- There are too many attributes and levels
- They are difficult to represent in a conjoint survey setting
At other times, preliminary research, which can pinpoint relevant attributes and levels, is skipped.
For budgetary or other reasons, clients often skip qualitative research or other quantitative research (e.g., MaxDiff) that can help prioritize and reduce the list to the relevant attributes.
As a result, respondents are forced to evaluate products on attributes and levels that they may not consider in the real world.
2. External Factors
A conjoint analysis may well capture preferences accurately based on the right attributes and levels.
However, there are often unknown factors at the time of the study design or analysis such as:
- Brand or product awareness level
- Marketing strategy and tactics
- Advertising of product benefits
- Sales process
- Category inertia
- Competitors’ reaction
Any discrepancy between the research assumptions and what’s happening in the marketplace regarding these factors can lead to inaccurate conjoint analysis results.
3. Sampling Plan
Including the right type of respondents is an important part of the study design. By that, I mean including people who are at least active in the category.
Respondents who are disengaged, due to a lack of knowledge or experience in the category, are not able to provide relevant answers. As a result, you may end up underestimating preference shares.
4. Survey Design
To obtain realistic answers, we always try to design surveys that reflect how consumers think about the products.
However, balancing the amount and type of information provided, respondent burden, and survey length (which has cost implications) is often difficult.
This becomes critical when we evaluate complex products or services with many attributes and levels and competing alternatives.
There are occasions when the list of attributes is reduced to shorten the survey. However, this can leave relevant attributes out.
5. Respondent Priming
Quite often respondents go through conjoint exercises that require a lot of information processing. These exercises may include complex or new information or both. Therefore, to make it easier for them, we “prime” respondents.
In other words, we expose respondents to the attributes and levels through introductory questions before the conjoint section. Thus, they become familiar with product attributes and can make more thoughtful decisions.
However, in the real world, consumers often don’t have the time, or will, to do comprehensive product research. They often make decisions based on limited information.
Unfortunately, “priming” can raise awareness of certain attributes to a level the real market would never achieve. Consequently, it may yield inflated importance for those attributes and over-prediction of a product’s preference shares.
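The over-prediction mechanism can be sketched with a standard logit share-of-preference rule: if priming inflates the part-worth of an attribute our product carries, the simulated share of that product rises. All numbers below are hypothetical, chosen only to illustrate the direction of the effect.

```python
import math

def preference_shares(utilities):
    # Multinomial-logit share of preference.
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical total utilities: our product vs. two competitors
# (illustrative numbers only).
market = [0.5, 0.6, 0.4]        # utilities as they might be without priming
primed = [0.5 + 0.4, 0.6, 0.4]  # priming inflates a part-worth our product carries

share_market = preference_shares(market)[0]
share_primed = preference_shares(primed)[0]
print(f"without priming: {share_market:.3f}, with priming: {share_primed:.3f}")
```

In this toy setup the primed utilities predict a noticeably larger share for our product than the unprimed ones, which is exactly the kind of gap a client may later observe between the simulation and actual sales.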
I believe that techniques like conjoint analysis have many useful applications. Nonetheless, they should be used with caution and full knowledge of their limitations.
In short, you need to consider the information available at the time of design and analysis. This will help manage expectations about how well the results will hold up in the real world.