How AI Can Further Remove Researchers in Search of Productivity and Lower Costs

Summary: Michaela Mora discusses with Dan Fleetwood, president of QuestionPro, on Live with Dan the challenges of maintaining genuine human connections and meaningful interactions when conducting market research as technology and AI continue to advance.

28-minute video. By Michaela Mora on May 4, 2024
Topics: Artificial Intelligence, Business Strategy, Market Research

Artificial intelligence (AI) is quickly dominating the conversation at every event and encounter market researchers have across the insights industry. Over the years, the market research industry has grown increasingly dependent on technology in the search for productivity, lower costs, and faster research delivery as short-termism began dominating companies’ cultures.

Since ChatGPT was launched in November 2022, the fear of missing out (FOMO) has reached a fever pitch. Qualitative and quantitative researchers, research suppliers and agencies, and corporate teams are jumping on the AI train that seems to speed up with every passing day. Voices trying to bring attention to potential long-term unintended consequences are being drowned out by the noisy train.

Dan Fleetwood, president of QuestionPro, was gracious enough to hand me the microphone and open a conversation about the challenges of maintaining genuine human connections and meaningful interactions when conducting market research as technology and AI continue to advance.

Here is a summary of some highlights of our conversation, edited for conciseness and readability. It is only 28 minutes. Give it a listen!

Challenges to Maintain Human Connections

00:20:11

Dan: Everyone in the industry is talking about generative AI, ChatGPT. I wanted to have you on because you provide good counterpoints and balance. I get excited about all the possibilities and what it can do, and we could put it into the QuestionPro platform. Can you talk a little about some potential challenges you foresee and how we maintain this human connection while having ChatGPT on the back end to help us conduct research?

Michaela: In our effort to increase productivity, lower cost, and deliver fast research, we are inserting AI-powered tools into all phases of the research process, from secondary research to discussion guides and survey design, data collection, data synthesis, and analysis, and reporting.

AI can augment our capabilities in certain stages. I think it’s great for secondary research, and it can speed up certain types of data summaries. However, it seems there is a rush not to augment capabilities but to replace researchers in the stages where we have traditionally relied on our capacity for empathy in human interactions.

We have been adopting remote asynchronous modes of research for years in which we may chat with participants and never talk to them or see them. Still, until now, we have had at least humans involved, designing the questions and probing further based on the response we get.

Now, there is a lot of enthusiasm about letting the tools write the discussion guides and surveys, which we are supposed to edit, but inexperienced researchers may not recognize a bad question even if it hits them in the head.

We also see companies discussing synthetic respondents and developing bots acting as moderators. If we remove moderators, for example, and rely on bots to run interviews and focus groups, we will likely lose empathy and emotional connection.

In online qualitative research, we lose some of the human touch and emotional connection that comes from face-to-face interactions, and the level of interaction and energy is different. We can’t observe body language in the same way. We all experienced it during the pandemic.

The challenge is that AI-powered tools may lack the ability to understand and respond to human emotions, which leads to less empathetic interactions.

AI can process large amounts of data quickly, but it struggles to understand the subtleties, context, and nuance of human interactions, which is why so many people are compiling lists of prompts to make it more specific.

At the same time, this can become formulaic and rigid and limit our ability to understand complex and social psychological scenarios. Then, there are potentially biased interactions. Algorithms are only as good as the data they train on. If the data is biased in any way, it can lead to bad interactions. You saw the case of Tay, the Microsoft AI bot that went rogue in less than 24 hours and spewed many racist and misogynistic remarks.

Let’s not forget the ethical considerations around data privacy, consent, potential misuse of personal information, and intellectual property. That can affect connections not only to research participants but also to clients who own the data.

Acceleration of the Fast-Food Era in Research: Trading Quality for Lower Cost, Speed, and Convenience


Dan: How can researchers strike a balance between embracing this technology and using it in everyday life without losing the empathy, or the soul, we all want in our connections? As researchers, we’re no strangers to that, so where’s the balance here?

Michaela: We all want deeper connections. The pandemic made that very clear and brought to the surface the many mental health issues that isolation caused. We are a social species, and technology interferes with our nature on many levels, solving some problems but creating others. So, the jury is still out on whether the tradeoffs are worth it. I wonder if we are repeating what happened with fast food, where we traded nutrition for convenience and lower cost.

Now, we have been plagued for years with health issues connected to changes in our diets, so we must balance the need for efficiency, the efficiency AI may provide, against the value of human creativity, intuition, and emotional intelligence in research.

Those are the nutrients for our souls, and we need to identify complementary roles instead of outsourcing everything to AI.

AI can, again, process large amounts of data and identify patterns, but data, as we know, is not the same as insights. Humans are great at empathy, creativity, and understanding the complex nuances of human interaction, so we need to recognize the limitations of both.

Technology-based Efficiency Can Backfire


We can use AI to streamline processes, but if this means removing humans and making the process less meaningful for other humans, then the fact that it can be done doesn’t mean it should be done.

Human collaboration is still our secret power for survival. We don’t have to get very complex and advanced. Think of customer service as a simple example. I wonder how often you have gotten a good answer from a chatbot or an IVR phone system where you try to reach someone to ask questions.

That’s usually been a big waste of time for me. It’s efficient for the company but very wasteful for you.

I do qualitative and quantitative research. As a qualitative researcher, I moderate focus groups. For a recent project, I used a facility provider that also did participant recruitment, and the only way to communicate with the recruitment side of the business was via email. The facility did not have a phone number for the recruiters, who are supposedly part of the same company. If you go to the website, no phone number can be found.

As is common with these projects, we had an emergency where we needed to substitute some participants with less than 24 hours until the session. Discussing the problem on the phone would have been faster than writing an email because it was a complicated setup. I sent emails asking for a phone number, but nobody responded. I had to push the facility to get someone high up to contact me, and somehow my screams reached the top. Suddenly, I got a phone number, and the following emails showed a phone number in the signature that had not been there before.

They made it very efficient for them to avoid the cost of phone calls, but not for me as a client.

Think augmentation, not replacement. We can solve certain types of questions and problems via email or chatbots, but you shouldn’t exclude an entire channel just because you cannot foresee all the more advanced and complicated situations in which clients still need to call. I try to avoid calling, but you must give me the option.

I prefer to work with partners with whom I can make a human connection and who are there for me when I need them. Our industry’s bread and butter is talking to people, relating to others, and trying to understand customers; you cannot remove the human element from your interaction with clients or partners. You need humans. There is pressure to cut costs and deliver, but if you just outsource everything to technology, you will worsen the situation, and the customer experience will degrade quickly.

Best Uses for Generative AI?


Dan: Do you think using ChatGPT or generative AI tools for mundane tasks is a good idea to free up your time for more strategic thinking and insights?

Michaela: I have experimented with and tested several tools. At this stage, it is probably suitable for secondary research, though you still need to check your sources. In qualitative research, I was hoping it would solve the time and cost of transcription. Still, I’m disappointed with automatic transcriptions, because that is an area where AI could augment capabilities at a lower cost if done well. The quality is often so poor, particularly if you have a different English accent, that you immediately see the bias. Models that do automatic transcription are trained on people who speak with a certain English accent. If you deviate from that in any way, the tools get a little crazy; they don’t know what you are saying. Some tools don’t even separate my voice from male voices.

I also see the potential power to synthesize large amounts of qualitative data, which I have tested, but I found that I then needed to spend a lot of time checking that I hadn’t missed any big points. You still need to go through the traditional process of reading the raw data, letting your brain search for unexpected insights, and providing input for creativity when you try to connect the dots across the data. So whether AI helps with the mundane depends on the type of mundane activity.

The pendulum has swung to the other extreme. Twenty years ago, when online qualitative tools came to market, there was a lot of resistance. Many qualitative researchers didn’t want to get into it because they worried about losing emotional connection, in-person relations, and all that. It took them a while to realize it was needed; then the pandemic forced those who still hadn’t accepted it to jump in quickly. Now it feels like, “Oh, we better not wait; we might be left behind,” so we go to the other extreme and try to adopt it too quickly.

No extremes are ever good in any category or industry. You will solve one problem, but you may create others and have to build safeguards so things don’t go bad. How much time and money are you really saving if you have to build all this fencing around it?

If you put transcripts in the public version of ChatGPT, you are using your client’s data to train the model; the client will not like that, so you have to buy a service to put a fence around it, which increases your costs.

Potential Benefits, Serious Concerns


Dan: What areas of AI are you excited about, looking forward a year or two into the future?

Michaela: I think there is a lot of potential in terms of optimization, but there is no magic. There was a recent article in The Atlantic about the dark side of these models. We imagine these models are doing things on their own, but there is a cottage industry behind them. A lot of low-paid workers are checking those models and their definitions. They are not well paid, they don’t have the training, and they don’t have a lot of time to do it right. By their own admission, they don’t have time; there is too much information to process, and they don’t have guidelines. So how many biases are being inserted into those models? Companies don’t want to talk about that.

Those models need to be trained by humans; the question is who is training them and how well prepared those trainers are.

The Need to Preserve Critical Thinking


Dan: Let’s say I’m a researcher dabbling in ChatGPT, and I want to know more about it. What resources or training would you recommend for researchers looking to learn more about it, both the good and the bad?

Michaela: Well, right now, there are no official sources. You have to go around, read a lot, and listen to the founders of those tools talk, and you have to listen to both sides. I know there is a lot of talk about this in our industry, but much of it is cheerleading, and every time I try to raise a concern, the answer is, “But it’s going to get better.” I have been in this industry too long not to be a little skeptical. For the new people coming in: be informed and follow both the pros and cons, but don’t fall into the illusion that AI is going to do it all for you. Researchers need to drive the process. They need to be the ones designing the questions, asking the questions, and responding to participants, and they need to work in diverse teams to avoid experiential blindness and lack of empathy for people. No tool is going to solve that.

The research can be well designed and the data accurate, but the researcher can still be biased and can bias the analysis and interpretation. This is sometimes used as an argument to give the upper hand to AI, but AI is not the solution to bias in analysis, because AI models can also be biased, and we wouldn’t even know.

The solution to bias mitigation is to work in a diverse team with people from different backgrounds, demographics, values, and skill sets; you need that to minimize blind spots in both the research design and the analysis. As for the role of AI, every time you evaluate one of those tools, the question to ask is what type of data was used to train the model to handle this type of data specifically, not just any type of data, but the kind you are trying to collect and analyze, because AI right now is a black box for many of us. It’s hard to evaluate how good a job it does in analyzing a topic if we are not experts in it ourselves and able to spot errors and bias.