Posted by Ray Poynter, 28 August 2020
On LinkedIn, Dave Carruthers (Founder and CEO of Voxpopme) asked “Why don’t we do more qual?”, and then clarified his question by asking why the contrast between the number of quant projects and the number of qual projects is so large. This question elicited a wide range of responses and I recommend reading the discussion here.
The ESOMAR Global Market Research Report indicates that, in terms of money spent on research, the latest ratio was 85% quant and 15% qual. There is no reliable data about the volume of research, since that is much harder to define and estimate. When we talk about volume, do we mean the number of projects, the number of interviews, the number of groups and depths, or the number of hours spent, etc.?
I think we can deconstruct Dave’s question into two parts:
- Why is the ratio of quant to qual so much in favour of quant?
- Why isn’t more qual research conducted?
What do we mean by Qual?
Before diving into these two questions, I want to highlight the difficulty of defining what we mean by qual. Some projects are easy to categorise, for example, if we do four focus groups and summarise the findings, that is qual. Or, if we ask 1000 people to fill in a questionnaire where all the questions are closed-answer questions, that is quant.
However, there are studies which are more ambiguous. For example, is a semiotics study qual or quant or something else? At the moment methods such as semiotics, discourse analysis and ethnography are mostly categorised as qual because the analysis is qualitative – but not everybody agrees.
The picture becomes even more complex when we consider projects that have elements of both quant and qual. For example, if we have a questionnaire that is a mixture of closed questions and open-ended questions, it tends to be categorised as quant. Indeed, the analysis of the open-ended questions is often operationalised by coding the comments and treating them like quant data. This is true of even the extreme case where a satisfaction survey might comprise just two questions: 1) the NPS (Net Promoter Score) question, and 2) an open-ended question asking respondents to say why, or to describe the experience. This is almost always described as a quant study, not because the data is quant, but because the analysis tends to be quant.
If we do a user study with (say) 1000 people about flying with a specific airline, where we ask them to fill in a questionnaire and some of them (let’s say 200) capture one or more videos of their journey, then (in my experience) this tends to be described as a quant study, even though the analysis of the video tends to be qualitative. I think the reason, in this case, is that the key reason for the study was quantitative (e.g. have our numbers changed from last year, what proportion of customers used the following features, what is the value of spend by key sub-groups). The video has been added to add depth, but the video was not the reason for the study.
With the growth of video tools like Voxpopme and the rise of text analytics, we will probably find that most ‘quant studies’ will have a growing qual dimension to them. This will mean that much more qual is being done, but it is unlikely to appear in the industry figures, since the projects will probably continue to be labelled quant.
Why is the ratio of quant to qual so much in favour of quant?
Here I think there are broadly three reasons, which I have listed in descending order of importance:
- Clients spend more money on projects that need to be quant than on projects that could be qual or quant
- Quant can often be faster
- Quant can often be cheaper
Clients spend more money on projects that need to be quant
If we look at the ESOMAR Global Market Research Report we see that the largest uses of market research (in terms of dollars spent) are:
- Market Measurement 21% (e.g. how many people are buying how much stuff at what price, when and where?) This is inherently quant and is shifting away from surveys to automated, observational measurement, e.g. passive data and big data.
- Media / Audience Research 12% (e.g. how many people are watching/interacting with what, when and how?). There is a qual element, in terms of optimising content and understanding why, but that is a small part of this category. This research is also shifting from surveys to automated, observational measurement.
- Usage & Attitude Studies 12% (e.g. segmentations, drivers of choice etc). Most of these projects are quant surveys with some degree of qual (e.g. open-ended questions, video capture etc). The core need for most projects is to know how many people fall into various buckets. The qual elements make the results better, but they are not typically the reason for the project.
- Customer Satisfaction 8%. This tends to be quant and often driven by NPS. Most of the customer satisfaction surveys I see contain qualitative elements, for example open-ended questions and increasingly video feedback – but the studies remain categorised as quant and the reason for their funding is often the need to report specific numbers to the C-suite.
The biggest four uses of market research (listed above) account for over 50% of all market research spending and tend to be massively biased in favour of quant. They are also areas where the trend is away from surveys and towards automated, observational measurement.
There are categories of research that are a much better fit with qual, such as UX research (7%), NPD (7%), and ad pre-testing (3%), but these are smaller categories. There are also other categories that are mostly quant, such as Market Modelling (3%) and Omnibus Surveys (6%).
From this data, we can see that there are many categories of research that are inherently more quant than qual, and this goes a long way to explaining why the amount of quant commissioned is so much larger than the amount of qual. However, we can see that in these cases the trend is away from surveys and towards automated, observational measurements. We can also see that in cases where surveys are still being used, there is a tendency to include more qual elements – even though the study remains categorised as quant.
Quant can be faster
Speed matters when businesses need an answer quickly, for example, when an agile team wants to know a key fact or test a key idea. In terms of quant, there are broadly three ways to get answers quickly: a) if you have access to an online community, you script the survey and get responses the next day, b) if you have access to a customer list, you script a survey and get responses the next day, or c) you go to an online access panel and you get responses within a day or two.
In terms of traditional qual (focus groups or depths) the process tends to be slower. You have to appoint a qual researcher, a recruiter needs to set up the sessions, the sessions need to happen sequentially (one interviewer can’t do three focus groups at the same time), and the analysis is not as quick as analysing a simple survey (but it is quicker than analysing a conjoint study).
The newer qual options can be quicker (for example an online focus group or an online discussion), but they are usually not as quick as a simple online survey. If we take an online discussion or video diaries, a typical project will last a week or more for data collection. The data is thick and meaningful, but from design to analysis we are talking a minimum of two weeks in most cases. A quant study will not be as deep, but can be done in two days, and if the quant is ‘good enough’ or if the results are needed in two days, the quant study will be chosen.
Quant can often be cheaper
The panel companies and the survey platforms have done a great job at streamlining quant surveys. Incentives are low (perhaps the quality is also low), the process is automated, and the net result is a low-cost result. By contrast, qual has higher incentive costs (20 people being paid $50 each is $1000, a quant study using a customer list or an online community might be offering a prize draw of just $200). But, the big difference is in the cost of the researcher. A simple quant study might take 5 or 6 hours including design and analysis. A simple qual study (let’s say an online discussion lasting five days with video uploads) is likely to take 20 hours of a qual researcher’s time.
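The cost gap described above can be made concrete with some back-of-envelope arithmetic, using the incentive and researcher-hour figures from the paragraph. The hourly rate below is an assumed, purely illustrative figure, not one taken from the article:

```python
# Rough cost comparison: simple quant survey vs simple qual study
# (a five-day online discussion), using the figures in the text.

def project_cost(researcher_hours, hourly_rate, incentives):
    """Total project cost = researcher time + participant incentives."""
    return researcher_hours * hourly_rate + incentives

HOURLY_RATE = 100  # assumed illustrative researcher rate in dollars

# Quant: ~6 hours of researcher time, a $200 prize draw as the incentive
quant_cost = project_cost(researcher_hours=6,
                          hourly_rate=HOURLY_RATE,
                          incentives=200)

# Qual: ~20 hours of researcher time, 20 participants paid $50 each
qual_cost = project_cost(researcher_hours=20,
                         hourly_rate=HOURLY_RATE,
                         incentives=20 * 50)

print(quant_cost)  # 800
print(qual_cost)   # 3000
```

On these assumptions the qual study costs several times the quant one, and the researcher's time, not the incentives, is the larger driver of the difference.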
Why isn’t more qual conducted?
This is broadly the same question as ‘why isn’t more research conducted?’ I believe the two key reasons are:
- People think it will take too long. With the growth of lean and agile ways of doing business, we need to ensure there are research options (qual and quant) that do not unnecessarily slow the business down. We need to conduct research at the ‘speed of business’.
- The ROI of research is not widely appreciated. We need to show the value of research in ways that the users of research appreciate. Furthermore, we need to use the language of business to show the value, not couch it in research terms (i.e. show money earned or costs avoided, rather than statements about awareness or intentions).
Occam’s Razor and the preference for quant over qual
Another way to think about the choice of research techniques is to look at it using Occam’s Razor, in terms of choosing the simplest solution.
We should normally use the fastest/cheapest solution that answers the problem. In practice this means that if a quant study can answer the problem, it will be chosen. Qual comes to the fore when quant can’t adequately answer the problem. Over the last few years the research industry has increasingly realised the limitations of questionnaires and quant, which is perhaps why qual has held its share of the total research pie, whilst online surveys have lost share to other options.