Understanding the Pros and Cons of Mixed-Mode Research


As published in the July 2011 issue of Quirk's Marketing Research Review.


The concept of mixed-mode surveys is nothing new, but it seems to be gaining traction in the research community. Among the issues driving the use of mixed-mode survey designs are the need to reduce coverage bias, increase response rates and lower costs.

Reducing coverage bias
The advent of new technologies, combined with fast-paced demographic and cultural changes, has eroded the ability of traditional and newer data collection methods to cover all segments of the population.

  • In-person interviews. The spread of gated communities and locked apartment buildings, together with safety concerns, has made it very difficult to access people’s homes for in-person surveys.
  • Telephone surveys. The trend toward not listing phone numbers in directories, the increased use of answering machines and caller ID, and the listing of numbers in the Do Not Call Registry have all introduced coverage problems in phone surveys. While survey research is exempt from the Do Not Call Registry, most consumers are unaware of this. Telemarketers have been blamed for much of this change: in the ’90s they began using some of the techniques used to conduct surveys, making people less willing to answer unsolicited calls or keep their phone numbers public.
  • Population coverage. Population coverage in phone surveys has also been affected by the proliferation of multiple phone lines in homes (e.g., fax, Internet), cell phones and voice-over-IP lines. Landline-based samples can no longer guarantee high coverage rates. Research from the National Center for Health Statistics indicates that by the end of 2009, at least 25 percent of adults could only be reached on a cell phone – a proportion that is even higher in certain groups (e.g., young adults ages 18-29 and Hispanics). Analysis of data from Pew Research Center’s dual-frame surveys also shows that only 7 percent of adults in landline samples are ages 18-29, while 65 percent are ages 50 and older.

    [Figure: Wireless-Only Households by Age]

    [Figure: Wireless-Only Households by Ethnicity]

    [Figure: Landline and Cell-Only Samples]

  • Online surveys. Web surveys have grown exponentially in the last 10 years. At the same time, online surveys have always raised concerns about coverage: although a large portion of the population has Internet access, that doesn’t mean everyone is accessible for online surveys. Most primary market research conducted online uses samples sourced either from online panel providers or from clients’ own customer databases.

    Online panel samples are, by definition, convenience samples. Even when their composition resembles the general population, they are made up of people who want to participate in surveys in exchange for incentives. We can select random samples from the panel population, but the fact remains that not all individuals in the target population have the same chance of participating in surveys, as not all belong to an online panel.

    There are always groups that will be underrepresented (e.g., Hispanics, young males, low socioeconomic strata) or overrepresented (e.g., white, educated respondents) in the online panel population. A typical example is Hispanics, who are segmented by acculturation level. Most online panels have acculturated, bilingual Hispanics but very few, if any, unacculturated Hispanics. Consequently, if we want to survey less-acculturated Hispanics we have to reach them by phone or in-person interviews.

Mixed modes can be used to reach respondents in one mode and invite them to take the survey in another. This is the case when postal mail, in-person and phone interviews are used to recruit respondents for online surveys, which can then be taken at a central location or at home if the person has Internet access. By combining different data collection modes – and reaching each segment with the mode that is most effective for it – we can improve coverage.
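
As a toy illustration of this coverage gain, we can treat each mode's frame as a set of reachable individuals; the population size and reachability sets below are invented for the example, not drawn from the studies cited above.

```python
# Toy illustration: coverage of a mixed-mode design is the union of
# the mode-specific frames. All membership sets here are hypothetical.

population = set(range(100))          # 100 hypothetical adults
online_frame = set(range(0, 60))      # reachable via an online panel
phone_frame = set(range(45, 80))      # reachable by phone (landline + cell)
in_person_frame = set(range(75, 95))  # reachable only door-to-door

def coverage(frame):
    """Share of the population covered by a sampling frame."""
    return len(frame & population) / len(population)

print(coverage(online_frame))                                  # 0.60
print(coverage(online_frame | phone_frame))                    # 0.80
print(coverage(online_frame | phone_frame | in_person_frame))  # 0.95
```

Each added mode only helps to the extent its frame reaches people the other frames miss, which is why the gain from adding a mode shrinks as frames overlap.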

Increasing response rates
  • Data collection fit. Survey response rates across all data collection modes have been declining steadily for decades. A mixed-mode approach is often used to increase response rates by offering different ways to participate, depending on respondents’ preferences or resources. Some people may not have access to a computer at home or may not feel skilled with computers; in these instances a phone or in-person interview is more appropriate than an online survey. In studies on sensitive topics with people who can only be reached in person, computer-assisted personal interviews can be mixed with computer-assisted self-administered surveys so that respondents can answer – directly on the computer – questions that feel too embarrassing to answer out loud in front of the interviewer.
  • Survey length. Another factor that usually hurts response rates is long surveys, which unfortunately have become too common. Faced with tight budgets and short deadlines, many research clients cram in as many questions as possible, particularly in online surveys, trying to capture data on every possible issue (current and old) in the few studies they can afford. Research has shown that as survey length increases, response rates decrease and data quality deteriorates. The best way to reduce drop rates related to survey length is to make surveys shorter, although it has been argued that good survey design can also reduce drop rates in long surveys.

    In cases where we inevitably have to ask a large number of questions, a mixed-mode approach may help increase response rates. Examples are studies that require respondents to record activities or media consumption, like the National Consumer Survey conducted by Experian Simmons, in which in-person, phone and mail contacts are used to recruit respondents, remind them and capture data.

    In our effort to increase response rates by using mixed-mode surveys, we should never lose sight of the goal of achieving a representative sample of the target market. It may be easier to increase response rates in certain segments with a particular data collection mode, but we may end up skewing the sample.

Lowering costs
Cost is often one of the factors that weighs most heavily on the decision about which data collection mode to use. Multiple data collection methods can help mitigate the cost of certain research projects, especially if the study requires reaching low-incidence groups or groups not accessible through a given mode.

Currently, online surveys are the most cost-effective of all data collection methods, so they are often combined with more expensive modes like phone or in-person interviews, which are used to reach smaller subsets of the sample and increase coverage. For instance, in tracking studies that require samples from the general population and the Hispanic segment, we may direct the general-population sample and part of the Hispanic sample to online surveys, while the rest of the Hispanic sample is recruited by phone-to-Web, phone-to-central location or phone-to-central location-to-Web. These combinations cost less than conducting only phone or in-person interviews with the whole sample, Hispanics and non-Hispanics. Of course, none of these combinations would cost less than doing online surveys only, but sometimes that is not feasible without introducing coverage bias and nonresponse errors.
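
The cost logic above comes down to simple arithmetic. All per-complete costs and sample sizes in this sketch are hypothetical, chosen only to show how a blended design compares with fielding the entire sample in a single expensive mode.

```python
# Illustrative blended-cost comparison for a mixed-mode design.
# Per-complete costs and sample sizes are hypothetical, not figures
# from the article.

def blended_cost(allocation):
    """Total field cost given {mode: (completes, cost_per_complete)}."""
    return sum(n * cost for n, cost in allocation.values())

mixed = {
    "online (gen pop + part of Hispanic sample)": (900, 8.0),
    "phone-to-Web (less-acculturated Hispanics)": (100, 35.0),
}
phone_only = {"phone (entire sample)": (1000, 30.0)}

print(blended_cost(mixed))       # 900*8 + 100*35 = 10700.0
print(blended_cost(phone_only))  # 1000*30 = 30000.0
```

Even with a steep per-complete premium for the phone-recruited subset, the blend stays far below the single-mode phone design because the expensive mode is reserved for the segment that truly needs it.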

Measurement error
Despite all the benefits of mixed-mode surveys, researchers and clients must be aware of a major limitation of using multiple data collection modes in the same study: different modes introduce measurement error because people answer questions in different ways depending on the collection mode. Three key factors usually influence how people answer.

Question formulation
Question wording and format tend to differ between data collection modes. For instance, in selection questions, while respondents can read the answer options in mail and online surveys, the same questions asked over the phone often require changes in wording so they sound natural when spoken.

In mail and online surveys, respondents tend to put more thought into the items listed early, but as they go down the list and more information is added, it becomes more difficult to consider all the options at the same time (Krosnick and Alwin, 1987). In this case, respondents tend to choose among the first items listed if those agree with what they had in mind (primacy effect). In phone interviews, on the other hand, respondents don’t have enough time to process all the items being read, and the last answer options heard are more likely to be remembered and selected if they confirm what respondents are thinking about the subject in question (recency effect).

To avoid burdening respondents’ memories with too much information, multiple-choice questions are usually asked as a series of forced yes/no questions in phone interviews, while the same questions are commonly asked as multiple selections (i.e., “check all that apply”) in online surveys. It is common practice to treat the results from both question types as equivalent, but in fact they are not. Research has shown that respondents answering forced yes/no questions spend more time processing the question and mark “yes” more often than with the check-all format, in both phone and online surveys (Smyth et al., 2003). This suggests that forced yes/no and check-all formats are not comparable.
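
A small, fabricated example shows why pooling the two formats is risky: the same items can yield different endorsement rates depending on format, and a blank in a check-all question is ambiguous in a way an explicit forced “no” is not.

```python
# Toy comparison of forced yes/no vs. check-all endorsement rates.
# All respondent data are fabricated for illustration only.

items = ["brand_a", "brand_b", "brand_c"]

# Phone format: every item receives an explicit yes/no answer.
forced_yes_no = [
    {"brand_a": True, "brand_b": True, "brand_c": False},
    {"brand_a": True, "brand_b": False, "brand_c": False},
]

# Online check-all format: only checked items are recorded. An
# unchecked item conflates "no" with "skipped the item".
check_all = [
    {"brand_a"},
    set(),
]

forced_rates = {i: sum(r[i] for r in forced_yes_no) / len(forced_yes_no)
                for i in items}
check_rates = {i: sum(i in r for r in check_all) / len(check_all)
               for i in items}

print(forced_rates["brand_a"], check_rates["brand_a"])  # 1.0 0.5
```

Even before any mode effect on respondent behavior, the two data structures are not the same measurement: one records explicit negatives, the other infers them.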

Sense activation
Respondents’ comprehension of a question is greatly influenced by the senses activated by the data collection mode. In phone surveys, respondents have to actively listen to and remember the questions and answer options. As they interact with the interviewer, they can’t avoid being influenced by the meaning conveyed by the interviewer’s tone of voice, intonation, accent and other characteristics. In online surveys, the visual presentation of words and graphic elements such as page layout, colors, font size and images becomes more important and has a greater impact on how respondents answer. In short, there is always a risk that the same question asked in an online survey and in a phone interview will be interpreted in different ways by the same respondent, yielding different answers, unless efforts are made to ensure the same meaning is conveyed in both modes.

Rating questions seem to be particularly sensitive to oral versus visual presentation. Many studies (Dillman et al., 1984; Krysan et al., 1994; Christian, 2007) have shown significant differences in responses to rating questions when asked by phone, IVR, online surveys and self-administered paper surveys. The results across these studies indicate that the same rating questions may produce more positive ratings when the respondent cannot see the rating scale, regardless of scale length, whether labels appear on all rating points or just the end points, or whether respondents rate in one or two steps (Dillman et al., 2009).

Presence of interviewer
Data collection modes such as phone and in-person interviews can’t escape the influence of the interviewer on respondents’ answers, as the activation of social norms is inevitable during the interaction. When interviewers are present, respondents are more likely to acquiesce and give socially desirable answers than when interviewers are absent – even when the questions asked are not particularly sensitive.

Another factor to consider is that interviewers control the delivery of the survey. They ask the questions and can probe further to better understand respondents’ answers, which is not possible in paper or online surveys. Further probing can improve measurement, but it creates more interaction with the interviewer and potentially a higher risk of bias driven by social norms. Acquiescence and social desirability can also be present in online surveys, depending on how the questions are formulated, but the risk is mitigated by the absence of interviewers.

Finding a balance
Whenever it makes sense and is feasible, we should keep a survey within a single data collection mode to minimize the measurement error produced by using multiple methods. Mixed-mode surveys, however, are part of market research’s future.

The solution is to find a balance between making questions work for each specific mode and creating questions that produce comparable results across modes. This was precisely the goal of the survey design guidelines developed to achieve commonality across mail, Web, phone and handheld-computer modes in the 2010 U.S. Decennial Census. The underlying principle for these guidelines was described as:

“Universal Presentation: All respondents should be presented with the same question and response categories, regardless of mode. While one might assume that this principle requires questions, categories, instructions, etc., to be identical across modes, this assumption turns out to be neither feasible nor desirable. Rote repetition would result in awkward and difficult-to-administer instruments that are unlikely to achieve consistent response data. Rather, Universal Presentation says that the meaning and intent of the question and response options must be consistent. In some cases, questions or instructions need to be modified so they can be communicated to, attended to and understood by respondents the same way in different modes. The goal is that instruments collect equivalent information regardless of mode. By equivalent, we mean that the same respondent would give the same substantive answer to a question regardless of the mode of administration.”


Often, differences in how questions are designed are a matter of tradition and of researchers’ specialization in a single data collection mode. Market researchers need to think about designing surveys that work across multiple modes and provide comparable results.

Comments

Jennifer Moretti Posted: August 8, 2011

Great article – very informative with insights that can be applied in the real world. Far too often writings of this nature leave me wondering what I’m supposed to learn or just how the information contained can be used. Your posts are a welcome change!

Michaela Mora Posted: August 8, 2011

Thanks Jennifer. I’m glad you found it helpful!
