Five Common Myths About Bias and Market Research

Ray Poynter, 21 September 2021


Bias is something that is all around us and we should be aware of it. Bias can lead to mistakes: for example, biases led the Literary Digest to predict that Alf Landon would beat Roosevelt in the 1936 U.S. presidential election (an election Roosevelt won by a landslide), and led Marty Cooper (the inventor of the mobile phone) to say that “Cellular phones will absolutely not replace local wire systems.”

However, there are many myths about bias – myths which are themselves a form of bias! – and they need addressing. In this post I re-work a blog from a couple of years ago to address five common myths about bias and market research.

Myth 1: Some research is free from bias.

All research suffers from some form of bias. Some people claim that if scales taken from academic research are combined with random probability sampling, the result will be “objective” or bias-free research – but this is wrong.

In reality, bias enters the research process at every step:

  1. When a topic is picked for research, other possible topics are not picked. That is a subjective decision, i.e. bias.
  2. When the research is designed, a host of biases enter the picture. Social desirability bias means questions like “How often do you clean your teeth?” attract overclaim, in that people tend to ‘round up’ their answers. Acquiescence bias is when people tend to agree due to the way the question is asked. Order bias means that people are more likely to pick the first item on a list. Framing effects mean that the competitive set we ask about changes the answers.
  3. The people who take part in the survey are a source of bias. The choice of whom we contact is one bias. Whether they are willing to be interviewed is another. Whether they have been interviewed before (by us or by somebody else) is a third.
  4. When the data is analyzed and the insight is extracted, there are several subjective elements, all introducing bias. Not all of the possible findings from a study can be reported. Researchers create a coherent story from the data, and this is a subjective process. The decisions about what to leave in and what to leave out are biases. For example, confirmation bias means we are more likely to accept data that agrees with our initial idea than data that disagrees.

A great example of the problem is the research disaster that was the U.K.’s 2015 general election. The polls all got it wrong—badly wrong. There were essentially two types of polls: polls using Internet access panels and those where people were phoned using random digit dialing. Both systems were equally wrong. People who were willing to answer polling questions differed (it turns out) from people who were not willing to answer the questions.

The bottom line: There is no gold standard. Every research method has problems and these need to be assessed. Trade-offs need to be made between speed, cost, relative accuracy and type of bias.

Myth 2: All bias is bad.

Well, if we call it bias, it certainly sounds bad, but that language hides a host of key benefits that are associated with what we call bias. Consider the following:

  • When measuring customer satisfaction, researchers normally restrict their sample to customers. Indeed, CX research is normally restricted to customers who have recently used the company’s service or product; this is bias. The alternative is to interview a more representative sample, e.g. non-customers and customers who have not used the services recently. But this more representative sample can’t provide informed answers, and that adds noise to the results.
  • When running a focus group or online discussion, researchers want people who are willing to take part in discussions—people who will actually share views. A “fair” sample of the public would include many people who would tend not to participate meaningfully in the discussions. Researchers compromise on representativeness and thereby get a richer conversation.
  • When choosing researchers for a project, companies normally choose professionals who already know something about the subject. This can lead to confirmation bias, but the alternative of using people who do not already know something about the subject and the techniques used would add noise and be less likely to work.
  • Projects that require effort, such as maintaining a mobile/video diary, mean that researchers exclude the people who can’t be bothered to participate. This bias is unavoidable because researchers can’t force people to take part in their research.
  • Creating an online discussion or community tends to attract people who want to be heard. They may be angry with the brand, love the company, think they have a great idea, want to get something off their chest, or have any of a thousand reasons. However, people who don’t have a view, or are not likely to change their behavior, are less likely to join. Again, focusing on the people who have a view, and who may act on that view, focuses the research on the people who are commercially more relevant.

A purist approach to research would say “ask everybody” and “ask everything,” and then filter out the noise. But this approach is costly, too slow, and is not actually possible. More importantly, the process itself doesn’t get rid of bias completely. In market research, bias can be a way of increasing the signal and reducing the noise.

The bottom line: We can and should use what we already know to increase the signal and decrease the noise, trading off some risk against a lot of cost and delay.

Myth 3: There is nothing we can do about bias.

The first thing you should do about bias is to recognize that it is all around us. It exists at every stage of every project. The next steps are to evaluate the sources of bias in a project and determine the best way of dealing with them.

A good starting point is asking people from outside your comfort zone to challenge you—to play devil’s advocate. Researchers should also look at past mistakes, and check the published literature on things like questionnaire and project design.

Some biases can be addressed directly. For example, questions can be adjusted to reduce bias. The sample can also be checked to see if key groups are missing. For example, you could check whether a certain demographic (like those aged 70 years and older) is under-represented in the research.
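A minimal sketch of such a check, in Python (all proportions below are invented for illustration): compare the sample’s demographic shares against a population benchmark and flag any group that falls well short.

    # Hypothetical sketch: flag age groups that are under-represented
    # in a sample relative to a population benchmark (numbers invented).
    population = {"18-29": 0.20, "30-49": 0.33, "50-69": 0.31, "70+": 0.16}
    sample = {"18-29": 0.24, "30-49": 0.38, "50-69": 0.32, "70+": 0.06}

    THRESHOLD = 0.5  # flag groups at less than half their population share

    for group, pop_share in population.items():
        ratio = sample[group] / pop_share
        if ratio < THRESHOLD:
            print(f"{group}: sample share {sample[group]:.0%} vs "
                  f"population {pop_share:.0%} -- under-represented")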

With other biases, the best treatment is to hold them constant. For example, you could use the same sample source for each study or use the same questions and apply the same analytics.

Finally, keep the bias in mind during analysis and try to use approaches such as triangulation to support or challenge findings.

The bottom line: Bias should be acknowledged, recognized and controlled, wherever possible.

Myth 4: Insight communities are too biased for “real research.”

Ten years ago, this was a common comment in the research industry. But over the last few years, as the number of organizations using insight communities has rocketed, the concern has shifted from whether they should be used at all to whether they can be used for most research purposes. The ESOMAR Answers to Contemporary Research Questions suggests that large, ongoing insight communities are suitable for most types of research projects.

Applying all the points mentioned above, many organizations are using insight communities for almost every type of research. It’s easier to list the types of research they can’t be used for, with the key ones being:

  • Market sizing: Estimating what proportion of the population use which products or services.
  • Media consumption data: For example, estimating what proportion of the population are watching a particular program or downloading a particular song.
  • Research specifically on non-customers: Most communities focus on customers, so if the research needs to focus on non-customers, it is often not a good option.

Two areas where many organizations do not rely solely on their insight communities are brand tracking and customer satisfaction tracking. Organizations that use insight communities for these two areas are typically operating large communities and benchmark changes in their scores rather than assuming that the absolute values are meaningful.
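As an illustrative sketch (the wave scores below are invented), benchmarking changes rather than absolute values simply means reporting wave-over-wave movement in the tracker:

    # Hypothetical sketch: report wave-over-wave change in a tracked score
    # rather than treating the absolute values as meaningful.
    waves = {"2021-Q1": 7.4, "2021-Q2": 7.6, "2021-Q3": 7.1, "2021-Q4": 7.3}

    items = list(waves.items())
    for (prev_wave, prev_score), (wave, score) in zip(items, items[1:]):
        print(f"{wave}: {score:.1f} ({score - prev_score:+.1f} vs {prev_wave})")

Any constant bias in the community sample affects every wave roughly equally, so the changes are more trustworthy than the levels.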

Many users of insight communities also run a small proportion of their research via other channels (for example via online access panels) so that they can get a sense of whether there are differences. They then take those differences into account when analyzing the data.

The bottom line: With care, most forms of research can be conducted via an insight community.

Myth 5: Big data will remove the bias in research.

We know that many forms of questions create bias. For example, the question “How many calories did you eat this week?” tends to elicit answers that are distorted by social desirability bias and by lack of knowledge. This fact has led some to feel that observation will provide a bias-free answer. However, the process of observing is also a source of bias: in the choice of what is observed, in the impact of being observed, and in the interpretation of what is happening.

The choice of sample impacts the answers you get. For example, the aforementioned UK polling disaster was partly caused by sampling too few people aged over 70. This has led some to advocate big data approaches that are closer to being a census, thereby providing a bias-free solution.

Big data will do many interesting things, but it won’t remove bias. The first challenge is that when something is measured, it changes. A great example is what happens to motorists when traffic cameras are erected. When we know our behavior is being monitored, we sometimes change our behavior, resulting in bias.

Big data throws up more spurious correlations than conventional research (because there are more chances for spurious relationships in larger data sets). In many cases, the decision about which are spurious and which are meaningful is subjective, based on our prior assumptions. For example, the U.S. has more guns and more homicides per person than most other countries. However, the interpretation of that data tends to depend almost entirely on one’s beliefs (bias) about guns.
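A quick simulation makes this concrete. The sketch below (Python 3.10+ standard library, purely random invented data) counts how many pairs of completely unrelated variables clear a conventional significance threshold; the count grows as the data set widens.

    # Sketch: count "significant" correlations among purely random variables.
    # With more variables, more pairs cross the threshold by chance alone.
    import itertools
    import random
    import statistics

    random.seed(42)
    N_ROWS = 100      # observations per variable
    THRESHOLD = 0.2   # |r| above this is roughly p < 0.05 when n = 100

    for n_vars in (10, 50, 100):
        data = [[random.gauss(0, 1) for _ in range(N_ROWS)] for _ in range(n_vars)]
        spurious = sum(
            1
            for a, b in itertools.combinations(data, 2)
            if abs(statistics.correlation(a, b)) > THRESHOLD
        )
        print(f"{n_vars:>3} variables -> {spurious} 'significant' pairs by chance")

With 10 variables only a couple of pairs cross the threshold by chance; with 100 variables, hundreds do – and none of them mean anything.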

The bottom line: Using big data doesn’t remove bias and could, in fact, result in spurious conclusions.

Conclusion

Bias is unavoidable. In fact, as I discuss above, bias can often result in more effective and efficient market research practices. Being aware of these myths about bias can help you get more accurate insight when working with insight communities and other types of customer intelligence tools.

Thank you for reading my five myths about biases. Do you have any that you’d add to the list?

9 thoughts on “Five Common Myths About Bias and Market Research”

  1. Great post Ray. Another thought-provoking goodie! Isn’t bias all-pervasive? Jealousy bias, holier-than-thou bias, age bias, weather bias, magician bias – we all have a bias for or against just about every aspect of life – we consciously and subconsciously introduce bias the whole time in order to get our own way, or influence others, etc. But perhaps my biggest question is – is “bias” the right term? It sounds so much kinder as a term than “prejudice”, which is really what “bias” is. Should we call it what it is, and might that jerk us into being more aware of what “prejudices” we are bringing to everything we do? Also, bias can be a really good thing, e.g. in fashion – because cloth cut on the bias tends to hang so much better and create a whole different look and feel – opening up new possibilities! Prejudice is seldom a good thing – I’d argue it is always a dangerous/negative thing. Something to be entirely wary of…

  2. As expected, Ray, a great summary of a subject that can be both complex and simple. You know well I get worked up over certain types of bias (age, technology usage, recruitment promise failure, etc.), which of course, like all of us, highlights my own bias. My own history as a white Australian who has mostly worked across multiple cultures in Asia has been my “offer” and my bias. You’re right to point out we can’t avoid bias, but we can work to limit and recognise it. Well done again.

  3. I think some biases are clearly prejudice (for example gender bias and confirmation bias), but I think there are plenty of biases where describing the effect, rather than ascribing blame to the cause, is more appropriate. For example, many people decline to take part in research; this is a bias, but it is not prejudice on the part of the researcher, and it is probably not helpful to describe the people who decline to take part as prejudiced. Similarly, biases such as loss aversion and errors created by anchoring and framing are features of the human brain and its development of heuristics to enable System 1 to do most of the work, as opposed to prejudice as such. Another illustration of bias in another context is the game of lawn bowls. The game is played in straight lines but the bowls have a bias: they don’t run straight, they carve an arc as they roll along the grass. The players can neutralise this bias by bowling the bowl fast, to make it strike another bowl and knock it out of the way, or they can utilise the bias by bowling the bowl so that it goes around a blocking bowl to get closer to the jack.

  4. When I was studying ethnography, one of the key points made was the complementary benefits that an insider and an outsider can bring. The insider knows more about what to expect and can interpret from their own experience, but the outsider can utilise their naivety to see actions in a fresh way.

  5. Thanks for sharing Ray. One of my go-to ways to overcome bias concerns is with tracking studies. By monitoring trends and changes in responses instead of the snapshot of the responses themselves we essentially remove part of the bias.

  6. Hi Ray, it’s an interesting and comprehensive summary of the common myths about bias. If bias is not recognised and managed, it can seriously compromise the robustness of data.

    I have also come across “lack of objectivity”, which could be linked to your Myth number 5, where respondents’ behaviour is changed. My specific example is based on a 360-degree B2B benchmarking study where trading partners (suppliers and customers) in the FMCG industry assessed the quality of their commercial/business relationships. Customer respondents, in particular, were often seen as deliberately marking down their suppliers to “send the message home”, i.e. to influence the outcome of commercial negotiations and business plans.

    Bias or lack of objectivity can also be a by-product of an emotional reaction when providing responses. For example, when conducting the B2B study referenced above, buyers (working for retailer or wholesaler companies) were sometimes influenced by a recent challenging meeting with a supplier, rather than taking a more balanced view of the supplier’s performance over a longer period of time. This would become apparent in the follow-up interviews, during which they would admit to being unbalanced when completing the survey.

  5. Great post Ray. I’ll add one of the advantages of bias – working off hypotheses (which are a form of bias). I’ve had clients say “I don’t want to tell you my hypotheses because that will bias your work”. Well, yes it would. It would make sure I sharpen my questions to directly address your hypotheses. A good researcher will try to both prove and disprove a hypothesis, but to deprive me of your thinking can result in less insightful and actionable results.

    P.S. your first word is missing an “s”
