Concept Testing for UX Researchers

Summary: Watch Michaela Mora, founder of Relevant Insights, at the 2022 Rosenfeld Advancing Research Conference, discuss the concept testing methodology and various approaches available, when to use them, the decisions they can support, and the process to conduct good concept testing with mixed methodologies in mind.

28-minute video. By Michaela Mora, November 28, 2022
Topics: Analysis Techniques, Business Strategy, Concept Testing, Market Research, UX Research

If you work as a UX researcher, you may or may not have heard of concept testing. The fact remains that this family of research techniques is often absent from the UX research toolkit in many companies, mainly because it is based on quantitative research. Since UX research is largely driven by the DIY trend in the insights industry and is often conducted by people without a research background or training in quantitative research, techniques like concept testing are frequently overlooked despite the value they can provide.

Watch this presentation at the 2022 Advancing Research Conference by Rosenfeld Media, in which Michaela Mora, founder of Relevant Insights, discusses various approaches to concept testing, when to use them, the decisions they can support, and the process for conducting good concept testing with mixed methodologies in mind.

What is Concept Testing?


Let’s start with the definition of concept testing. In the simplest terms, concept testing is about testing ideas. These could be ideas about product configurations, features, and benefits before development. We also use concept testing to test ideas for marketing and content to launch products.

The primary goal of concept testing in product development is to confirm there is a need for the product idea, which is probably why it is underutilized in UX research. UX researchers are frequently involved in researching the user experience after a product or a feature idea has already been approved.

Concept testing is most useful before investing time and money in product development. It helps prioritize what we should develop. When we don’t start with the fundamental question of whether users and customers need a product or a feature, we will fail to align product development with both the user experience and the company’s business goals and strategy. This is where concept testing can come to the rescue: this research methodology can help bridge the gap between insights from UX research and desired business outcomes.

Concept Types


Before we start a concept testing project, it is essential to understand the type of concept we are dealing with, because different types have different metrics and answer different business questions. The types are not mutually exclusive, but we may prioritize them at different times.

First, we have the pure concept test, in which we describe a product as a set of features and functions; this can also apply to individual features. This is what we call the WHAT: what does the product or feature do? We use it to determine whether to go forward with one idea or another in the development process and to prioritize when we are dealing with multiple product features.

Second, we have the positioning concept in which we go a layer deeper, beyond descriptions, to understand perceived product benefits, the WHY someone should use or buy the product.

We also study product features here, but as supporting evidence that the product can deliver the promised benefit. We use this not only to prioritize ideas for product development but also to create content that positions and differentiates the product in users’ minds, which helps both the product and the marketing teams. Positioning concept testing usually provides richer insights into what drives user behaviors, and the habits mentioned by Dr. Bach this morning, because people care less about product features than about what those features do for them.

And third, we have advertising concepts, the HOW to communicate those benefits and features. These are the stories that can effectively attract and retain customers. This usually helps the marketing team create content and develop marketing communication tactics as part of the larger strategy. Ideally, before spending resources on the HOW, we should have a clear idea of the WHAT and the WHY.

Concept Anatomy – Intuit QuickBooks Example


And to see the What, the Why, and the How coming together to support a business outcome of attracting new customers, here is a good example from a QuickBooks commercial. Let me see if you can hear it.

“Ouch, it hurts that you’re not using smarter tools to manage your business. You work too hard to work this hard. Collecting receipts? Is it the 80s? Does anybody have a mixtape I can borrow? You should be chasing pets, not chasing payments. QuickBooks gives you a sweet set of business tools that’ll do all the hard work for you. You may groom corgis, but you don’t have to work like a dog. You earned it. We are here to make sure you get it. Time to get yours. QuickBooks backing you.”

I hope you heard that. As you saw, Danny DeVito uses the hurt from physical exercise as a metaphor to send the message that QuickBooks makes managing a business easier and less painful. That’s the benefit, the Why to use it. Through his visits to different small businesses, we are presented with supporting product features such as receipt capture, expense tracking, invoicing, and payments. That’s the What. The How is those visits he makes, with his humorous comments on how QuickBooks would solve each business’s problems. That’s the story created in the advertising concept.

Most likely, a lot of market research, including concept testing, went into finding the Why, the What, and the How when creating this commercial. But when should we call for concept testing?

When To Use Concept Testing


The need for concept testing is greatest when the company is trying to decide whether to invest in the development of a new product and whether to redesign and relaunch existing products.

Before deciding to invest in a new product, we should have evidence that there is a need for it in the market. These may be old needs that have not been met or new needs created by external events and by cultural and technological changes that lead to behavioral change. This is why we need to monitor trends, do foresight research, and imagine the future, as was discussed on the first day of the conference by Devon Powers and Sam Louder, and by Ovetta in the previous presentation.

 At the same time, these changes and the competition can make products irrelevant or turn them into commodities, which are also conditions that call for innovation.

A company may not be able to create a new product or product category, but it can redesign current products and services to meet changing market conditions. In any of these scenarios, we should consider concept testing as the first step to make sure the new offer or the redesigned product is on target with the market needs.

Key Considerations In Designing Concept Tests


So, let’s say we got the nod from above to do concept testing. That’s when the fun starts. We need to start by answering several questions that will determine the study design.

First, are we testing one concept or multiple concepts? And what analytical approaches should we use? I’ll discuss this in a few minutes.

We also need to think about the population included in the study: are we targeting the general population, our product users, or category users who are not using our products but could be potential customers? The population frame has implications for the data collection methods we select, the analysis, and the business decisions regarding channels to reach the target users.

Another critical question is whether this should be a branded or unbranded blind study. Branding can have an impact on multiple test metrics.

We also need to consider whether to show prices as part of the concept. People react differently, and many times act irrationally, as Ovetta pointed out, when a price is present or absent, which has implications for business decisions.

Finally, we need decision rules to determine whether a concept is successful. We can use internal norms from similar concept tests the company has done, or we can use external benchmarks from third parties that do this type of study. The usual issue here is that external norms are rarely specific enough for your product and brand.

Concept Testing Approaches


So, based on the number of concepts, the type of questions we ask, and the analytical approaches we select, there are three major concept testing categories.

First, we have the monadic concept tests in which participants are exposed only to one concept.

Second, we have sequential concept testing, where we test several concepts with the same sample of people, but the concepts are shown one at a time, rotating the order across respondents.

Third, we have trade-off techniques for concept testing where people see several concepts shown side by side.

Monadic Concept Testing

A simple monadic concept test focused on a product or a feature may look like this. Standard metrics often include appeal, perceived uniqueness, usefulness, likelihood to use and to purchase, and probing questions about why. This can apply to services too; the concept doesn’t have to be a physical product. The approach is easy to implement in an online survey tool and produces familiar metrics for stakeholders.
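Appeal and similar rating metrics are often summarized as a top-2-box score, the share of respondents choosing the top two points of the rating scale. A minimal sketch in Python, with made-up ratings on a hypothetical 5-point appeal scale:

```python
# Hypothetical responses to a monadic concept test on a 5-point appeal scale
# (1 = not at all appealing, 5 = extremely appealing). Data is illustrative.
ratings = [5, 4, 3, 2, 5, 4, 4, 1, 5, 3, 4, 2]

def top_two_box(scores, scale_max=5):
    """Share of respondents who chose one of the top two scale points."""
    hits = sum(1 for s in scores if s >= scale_max - 1)
    return hits / len(scores)

print(f"Top-2-box appeal: {top_two_box(ratings):.0%}")  # 7 of 12 -> 58%
```

The same helper works for usefulness, uniqueness, or likelihood-to-purchase questions; the caveats about scale bias discussed below apply to all of them.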

A key drawback of this approach is that it uses rating questions, which are known for poor discrimination and scale bias. It also doesn’t explicitly consider the competing alternatives that our target audience is already using or considering.

Sequential Concept Testing

As mentioned before, several concepts are tested with the same sample of people in sequential concept testing, but they are shown one at a time, rotating the order across respondents. Again, we can test products or features using text and images, and it’s easy to implement in a low-cost survey tool.
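The order rotation described above can be as simple as a cyclic shift per respondent, so that each concept appears in each serial position equally often across the sample. A minimal illustrative sketch (the function name and concept labels are hypothetical):

```python
def rotation_for(respondent_index, concepts):
    """Cyclic rotation: respondent i starts at concept i mod n, so each
    concept appears in each presentation position equally often."""
    n = len(concepts)
    shift = respondent_index % n
    return concepts[shift:] + concepts[:shift]

concepts = ["Concept A", "Concept B", "Concept C"]
for i in range(3):
    print(i, rotation_for(i, concepts))
```

A cyclic shift balances position but not which concept precedes which; when adjacency effects matter, a Latin-square design is a common refinement.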

This is a popular approach, mainly because it saves money, since a smaller sample is required, and it’s useful when we only have access to a limited sample size.

However, the main drawback of this approach is the inevitable interaction effect between concepts, meaning that the perception of the first concept will influence the perception of the next one shown, and this is a problem if we want to compare preferences for different concepts and want to select one. This approach also relies on rating questions with the issues I already mentioned.

Trade-Off Concept Testing Techniques

Finally, trade-off techniques. You may hear the enthusiasm in my voice. These are quite exciting and powerful. These are becoming more and more popular in concept testing. Here people see several concepts, or components of a concept, shown side by side, and are asked to make a choice between them. This is a family of techniques, but I want to highlight two of the most popular ones at this time.


One is MaxDiff. This technique is based on simultaneous pair-wise comparisons happening when we ask participants to select the most and the least preferred or the most important and the least important items. It works very well when we have a long list of products and features that we need to prioritize. MaxDiff […] is a great approach to prioritizing the product feature backlog. This technique has broad applications, and the items in this list could be anything we want to rank and prioritize.

Assume we have a list of 30 features for the Fitbit Charge; we show five at a time and ask for the most and the least preferred. Participants see several screens of this type, each with five items selected by an experimental design running in the background that balances the number of times each item shows up, how often items appear together, and the positions in which they appear. In this way, we can control for order and interaction effects.
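As an illustration of how such best/worst choices are often summarized, here is a minimal sketch of a simple counting analysis (best-minus-worst scores, normalized by exposure). The items and choice records are made up, and real MaxDiff studies typically estimate utilities with more sophisticated models such as Hierarchical Bayes:

```python
from collections import Counter

# Hypothetical MaxDiff choice records: each task shows a subset of items
# and the respondent picks the best and the worst of that subset.
tasks = [
    {"shown": ["GPS", "Sleep", "HR", "NFC", "Music"], "best": "GPS", "worst": "NFC"},
    {"shown": ["Sleep", "HR", "NFC", "Music", "GPS"], "best": "HR", "worst": "Music"},
    {"shown": ["GPS", "HR", "Sleep", "Music", "NFC"], "best": "GPS", "worst": "NFC"},
]

best = Counter(t["best"] for t in tasks)
worst = Counter(t["worst"] for t in tasks)
shown = Counter(item for t in tasks for item in t["shown"])

# Best-minus-worst score per item, divided by how often the item was shown
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {s:+.2f}")
```

Even this simple count already yields a ranked feature list; the experimental design’s balancing is what makes the exposure normalization meaningful.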

Conjoint Analysis

The next trade-off technique is conjoint analysis. Here, levels of product attributes and benefits are combined in different configurations to present the concepts. This is an image of a standard choice-based conjoint exercise.

The product attributes or features are on the left, and we show their levels on the right. These are also selected based on an experimental design, creating product configurations balanced in a way that lets us estimate their main effects and potential interactions on choice.

In this exercise, participants are asked to select one product or none. They will see several screens with variations of the configurations and their selections are then used to develop a Hierarchical Bayes probabilistic model to estimate the impact of each attribute and level on choice and preferences.

In this example, we have a hypothetical study of smartwatches in competition. We could include any feature, and it could be features that don’t exist and see the effects on preference and choice.

There are other variations of this technique like Adaptive Conjoint and Menu-based Conjoint, and each method has its pros and cons, and we select them depending on what ideas we are testing, how complex they are, what attributes they have, etc.

Because people are forced to select, these techniques have greater discriminatory power than rating scales. This is a very intuitive task. People make choices all the time, which makes it very well suited for international studies since we know that scales have different meanings in different countries and cultures.

These techniques also allow us to test many different concepts in a cost-effective way and within a competitive context, particularly if conjoint analysis is used. This is by far the best approach to test the effect of pricing.

Finally, the main output of these studies is a simulator for running what-if scenario analyses, which are pretty fun and help us understand the impact of different levels, products, and benefits on people’s preferences and their likelihood to use and buy the product or service.
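As an illustration of how such a simulator can work, one common approach is the logit rule: each configuration’s share of preference is proportional to the exponential of its total utility. A minimal sketch, with hypothetical total utilities standing in for the sums of estimated attribute-level part-worths:

```python
import math

# Hypothetical total utilities for three smartwatch options in a scenario,
# e.g. each summed from attribute-level part-worths estimated by the model.
utilities = {"Watch A": 1.2, "Watch B": 0.4, "None": 0.0}

def shares_of_preference(utils):
    """Logit rule: preference share is proportional to exp(utility)."""
    expu = {k: math.exp(v) for k, v in utils.items()}
    total = sum(expu.values())
    return {k: v / total for k, v in expu.items()}

for option, share in shares_of_preference(utilities).items():
    print(f"{option}: {share:.1%}")
```

A what-if scenario is then just a change to one attribute level (and thus one utility total) followed by a re-run, which is what makes these simulators so useful for pricing and feature questions.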

On the negative side, these exercises can pose a cognitive burden on participants if we show too many features or products, so we have to be careful in their design.

Both techniques are implemented through a survey format, but more specialized tools are needed to design, program, and analyze these results and a low-cost survey tool won’t work here. The design and analysis have higher complexity than other concept test approaches, so expertise is needed to avoid costly mistakes.

Research Methodology


All […] of these techniques I have presented are in the quantitative research toolkit, but the ideal way to run concept testing is to combine qualitative and quantitative research.

Market researchers are heavy users of qualitative research. We have been doing it since the 1940s. Using qualitative research before and after quantitative concept testing will allow you to design better concepts and derive better insights from the quantitative analysis.

Qualitative Research

Qualitative research by nature is exploratory and can help us understand the range of behaviors, habits, perceptions, motivations, and the context in which these products are used.

Consequently, qualitative research helps us to formulate hypotheses about the relevant features and benefits that should be part of the concept and describe the idea in the language and terms customers and potential customers use, so please whenever possible, do this before any quantitative research. It will save you time and money in the long run.

We love qualitative research, but we are also aware of its limitations. The unstructured data and small samples make it harder to project the results to a larger number of users or potential users.

Quantitative Research

Qualitative research is excellent at discovering problems and needs but not so much at telling us how big a problem is and whether it is worth investing resources to solve it. Quantitative approaches like concept testing can help us test hypotheses about the relevance of features and benefits found during the qualitative research phase.

We can use it to validate final concepts that will guide product development decisions towards solutions with a big enough need in the market, so, if possible, please do this after qualitative research.

 If you still need a better understanding of the results, another round of qualitative research could be helpful to interpret the findings from the quantitative analysis. It is not uncommon for us to do qualitative research after quantitative research.

Other Approaches


I have discussed research techniques used in a somewhat controlled environment, but thanks to technology, we have other approaches for testing the market viability of new product ideas in the wild.

Companies that are heavy users of web analytics often conduct A/B tests to study the impact of new products and features; others use scrappier approaches, setting up ad campaigns on Google and Facebook and monitoring the take rate; still others use crowdfunding campaigns to measure willingness to buy the new product.
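As an illustration of the ad-campaign approach, the take rates of two concept variants can be compared with a standard two-proportion z-test. This sketch uses hypothetical click and impression counts:

```python
import math

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z statistic for comparing take rates of two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical campaign results: concept B's ad converts 165 of 2,000
# impressions vs. 120 of 2,000 for concept A.
z = two_prop_z(120, 2000, 165, 2000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> difference is significant at the 5% level
```

This tells us which variant wins on behavior, but, as noted below, not why, which is exactly the gap these in-the-wild methods leave open.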

The main advantage of these approaches is that they measure actual behavior, and depending on the scale, they can be cost-effective approaches to concept testing.

However, as we know, the same behavior can be motivated by different reasons. These methods can point to market segments we may need to prioritize to maximize revenue or profit, or to save on costs.

Unfortunately, these methods don’t provide insights into the why behind the behavior, into the impact of potential competitive actions, or the effect of the message we use, or how to improve the message or the actual product idea, if that’s one of the research goals.

Also, the channels used for the test may be missing important target markets. Finally, it may not be a good idea to do it in this way due to legal and confidentiality issues.

As I always say, no research method is perfect. We should select approaches that are a good fit for the problem at hand and the research questions we have.

In my own experience, to gather actionable insights we need to use a mix of methods and triangulate results. Sometimes, we are more interested in predicting outcomes, and sometimes we are more interested in explaining them, and some methods are better than others for each objective.

Concept Testing Tips


Today, my goal has been to discuss concept testing as I don’t see it used to its full potential in UX research.

UX researchers can gain a lot from adopting this methodology, connecting the dots between UX discovery research and business outcomes, and making product development more efficient. It can also help UX researchers build a business case to ask for more resources to expand their research operations and show their value.

I firmly believe in the value of qualitative research to uncover needs and formulate hypotheses about what users care about, but UX research should also combine it with quantitative methods like concept testing, because we also need to validate the insights to make go/no-go decisions, and iteration is important here.

If you have never done concept testing and you have limited resources, you can start with a simple monadic test using a low-cost survey tool; as you see its value, I recommend trying trade-off techniques for better insights. You don’t have to do it yourself if you don’t have the expertise; hopefully, you can make a business case for the value of concept testing and get the budget to bring in research partners to help you.

In my prior life as a manager of internal research teams, I found that a hybrid model working with research partners was the best approach for my team, for me as a researcher, and for the company.

Finally, when you start doing concept testing, have a clear idea of what is included in the concept description and which pieces describe the What, the Why, and the How, so you know what you’re measuring and can provide actionable metrics.

So, I want to thank all of you for staying with me until the end. I hope you found some useful nuggets in this talk, and I’ll try to answer questions here if we have time or in the Slack channel.

Q & A


Bria: Ah, a wonderful job. Oh, my goodness, and yes, absolutely worth our time. So let’s dive into your questions. The first one is: can you elaborate on when to use monadic testing with one concept versus when to use multiple concepts?

Michaela: Well, it has to do with the goals. Sometimes we want to test more than one thing; sometimes it’s the timeline and budget, and the priorities of the decisions, so sometimes, to save money, people try to cram several concepts into one study. Monadic is usually good when you have a complex concept that needs attention, one that requires a lot of description that people need to really spend time with, because with several concepts you can run into user fatigue pretty quickly in a survey format, along with order effects. But many times it comes down to the complexity of the idea, the timing, the budget, and sometimes the skills of the team; many times that’s what they know.

Bria: Absolutely, ah such a good answer. We’ve got another question. How much training does it take, if any, to use conjoint analysis?

Michaela: Oh boy. It takes some experience. There are tools out there that give you the impression you can just press a button and go, but there are a lot of judgment calls you have to make to get it right. The design piece, the experimental design, is where you really have to know what you’re doing. And once it’s set up, there are tools that make it easy to estimate the model; it’s what happens after, the interpretation. So yes, some training and experience are required.

Bria: Well, it is good to know that. Thank you. We’ve got another question. Is the goal to differentiate products and features or to identify which product or feature to focus on in a future research when simultaneously testing multiple products and features?

Michaela: All of the above. You can do all of that; it depends on the research project. Sometimes it’s just one step in a larger research project, where you need to do this to get to the next phase, so you prioritize items to move forward. Sometimes the concept has already come through a process, and now the business or the product team needs to decide, okay, which one are we going with, and you have a long list of things. So it depends on where you are in the product development and decision-making process, and the pressure you have internally. I have been on the other side, so I know how it works, but you can do it for either.

Bria: Yeah, another wonderful answer, and we’ll round it out with this final one. Love conjoint or MaxDiff as a method for concept testing, but it can be slow, with a whole bunch of o’s. Any tips, tools, or techniques to speed it up a little?

Michaela: I’m not sure what they mean by slow; it has actually become a very efficient way of doing concept testing. Particularly if you have more than, say, 10 items whose preference you want to check and prioritize, MaxDiff is excellent, and it can go pretty quickly [….] There are variations and adaptations for large sets; you can go to 40, 60, or 100 items if you have a really bogged-down product backlog, so it doesn’t necessarily have to be slow. I’m not sure where the slowness is, in the design, in the field, or after, but it can go pretty quickly.

Bria: Awesome! Thank you so much, Michaela. There are still a lot of questions, but I’m gonna let you grab them in Slack. Thank you so, so much. What a pleasure!