The Participant Crisis 2022 – You can help

Posted by Ray Poynter, 6 April 2022


The world of insights is on the cusp of a great future where human-centric approaches dominate services, product design and business. However, this opportunity is at risk because of the participant crisis.

What is the participant crisis?
The two key elements of the participant crisis are:

  1. the insufficient supply of the right sort of research participants
  2. the growth in bad data

Whilst the picture I am talking about is one that I have been aware of for some time, the data I am drawing on most heavily in this post comes from the work of CASE (the Coalition for Advancing Sampling Excellence) and the Insights Association. Please check out the CASE website and the two great Town Hall discussions held by the Insights Association and led by Melanie Courtright (CEO of the Insights Association).

There is lots of other information available, from the likes of ESOMAR, GreenBook and AAPOR – none of us should plead ignorance.

The Bad Data Problem
In the sessions mentioned above, Melanie Courtright summarised the issues as falling into four buckets:

  1. Frauds and duplicates – people trying to scam the system, generally to collect the incentives
  2. Disengaged participants – people doing the survey but not paying as much attention as we need them to
  3. Mistakes – for example, when people mean to type a 9 but hit the adjacent key, i.e. the 0.
  4. Poor survey design – for example, asking a double-barrelled question (such as “How satisfied are you with our price and quality?”), which can’t then be interpreted properly, or a question whose meaning isn’t clear to some participants.

Frauds and duplicates are a massive problem, and there has been amazing growth in the number of tools that the good panel providers use to detect people who are not who they say they are and people who try to do the survey multiple times. (You had better check that your provider uses one of these services.) The most recent CASE study suggested that for a regular, easy-to-recruit sample, the number of frauds and duplicates blocked by the smart tools approaches 20%. The problem is felt to be much worse for B2B and hard-to-recruit samples (because the incentives are higher). However, there is a never-ending battle between the fraud-detection systems and the frauds – and it is noticeable that the different systems catch different people, which means some frauds are slipping through.
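
To make the duplicate side of this concrete, here is a minimal sketch in Python/pandas of the kind of check involved, assuming a hypothetical table of survey starts with columns panelist_id, ip_address and device_fingerprint. It is an illustration only – the real commercial services combine many more signals (geolocation, behavioural data, digital fingerprinting, and so on) and are not described by CASE at this level of detail.

```python
import pandas as pd

def flag_duplicates(starts: pd.DataFrame) -> pd.DataFrame:
    """Flag survey starts whose device fingerprint, or IP + panelist combination, repeats."""
    starts = starts.copy()
    # The first occurrence is kept; later occurrences are treated as potential duplicates.
    starts["dup_fingerprint"] = starts.duplicated("device_fingerprint", keep="first")
    starts["dup_ip_panelist"] = starts.duplicated(["ip_address", "panelist_id"], keep="first")
    starts["flag_duplicate"] = starts["dup_fingerprint"] | starts["dup_ip_panelist"]
    return starts
```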

The next line of defence looks at trap questions, speed, the coherence of the open-ended comments, etc. One example of a trap question is “For quality control purposes, please select the third answer in the list below …”. People who fail to select the third answer are either a) frauds, b) disengaged, or c) people who made a mistake. This leads to a statistic often called the ‘toss rate’: the proportion of completed surveys sent to the agency or the client that, when inspected, are deemed to be bad, unsafe or uncertain. The CASE study suggested this could be in the range 15% to 25% – which means that something like 30% to 40% of the data originally collected might need to be cleaned (the roughly 20% blocked up front plus the 15% to 25% tossed later, combining the automated tools with a manual process of looking at the data). The very strong recommendation from CASE was that BEFORE the end-user starts analysing their data, they should check that the data looks right – and they should have a good dialogue with the providers of the data.
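
As an illustration of this second line of defence, here is a minimal sketch in Python/pandas, assuming a hypothetical DataFrame of delivered completes with columns trap_answer, duration_seconds and open_end. The column names and thresholds are invented for the example; a real cleaning process uses many more checks plus a manual review stage.

```python
import pandas as pd

def quality_checks(completes: pd.DataFrame,
                   correct_trap_answer: str = "Answer 3",
                   min_duration_seconds: int = 180,
                   min_open_end_chars: int = 10) -> pd.DataFrame:
    """Apply simple trap-question, speed and open-end checks to delivered completes."""
    completes = completes.copy()
    completes["failed_trap"] = completes["trap_answer"] != correct_trap_answer
    completes["speeder"] = completes["duration_seconds"] < min_duration_seconds
    completes["thin_open_end"] = completes["open_end"].fillna("").str.len() < min_open_end_chars
    # A complete is tossed if it fails any check; real workflows add manual inspection.
    completes["tossed"] = completes[["failed_trap", "speeder", "thin_open_end"]].any(axis=1)
    return completes

def toss_rate(completes: pd.DataFrame) -> float:
    """Proportion of delivered completes judged bad, unsafe or uncertain."""
    return float(quality_checks(completes)["tossed"].mean())
```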

Another problem identified by CASE was that of hyper-frequent survey takers. The study looked at how many surveys people were taking in a single day. It found that 3% of survey takers were completing more than 20 surveys a day, some of them more than 50 or 100 surveys a day. These 3% account for 19% of all survey completes and are NOT being caught by the current fraud, duplicate and bad-response filters. The consensus is that this is likely to be bad data – but more work is being done, partly to understand how it is even possible to complete 100 surveys in a 24-hour period.
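
As a rough sketch of how a hyper-frequency check could work (assuming completes could even be pooled across suppliers), here is an illustration in Python/pandas using hypothetical columns panelist_id and completed_at. The 20-per-day threshold echoes the CASE figure quoted above; everything else is invented for the example.

```python
import pandas as pd

def hyper_frequent_share(completes: pd.DataFrame, per_day_threshold: int = 20) -> float:
    """Share of all completes coming from people who exceed the daily threshold."""
    completes = completes.copy()
    completes["day"] = pd.to_datetime(completes["completed_at"]).dt.date
    per_person_day = completes.groupby(["panelist_id", "day"]).size()
    hyper_ids = (per_person_day[per_person_day > per_day_threshold]
                 .index.get_level_values("panelist_id").unique())
    return float(completes["panelist_id"].isin(hyper_ids).mean())
```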

Disengaged Participants, Mistakes, and Bad Survey Design
As well as the frauds, there are problems that the researchers and the data collection systems facilitate. Boring surveys, long surveys, surveys that require mental arithmetic, surveys that require people my age to fiddle with tiny fonts on a mobile design, and badly constructed questions all reduce the quality of the data AND they reduce the willingness of people to do more surveys in the future.

This is not new news; I wrote about it in The Handbook of Online and Social Media Research, published in 2010 – but bad surveys still seem to outnumber good ones.

Insufficient Participants
Not everybody wants to do surveys; even people who are happy to do surveys don’t want to do them all the time; when somebody has a bad experience they become less likely to do surveys in the future; and the number of survey invitations is growing. The consequence of this is that too few good participants are being chased by too many surveys. The evidence for that can be seen when panel companies have to outsource part of a project to another vendor (or buy sample from a marketplace), or when an innovative non-panel provider keeps asking for more time to collect the sample.

How many participants are there?
Response rates have declined over the years, which means the proportion of the population willing to take part in surveys has fallen. That proportion depends on the topic, the incentive, the quality of the survey, and the nature of the task. PeoplePulse estimate that for a typical ad hoc market research survey the response rate is likely to fall in the range 10% to 30%. Genroe have found that 50% of NPS surveys have a response rate of less than 20%.

The number of people willing to take part in an ongoing commitment, such as an online panel or an online community, is smaller than the number willing to take a single ad hoc survey. From the sources above, the most common estimate of the proportion of the population (in the developed, researched, Western economies) willing to sign up to an ongoing programme of research is approximately 20%.

Can’t we just pay more?
One suggestion we hear often (and which I have a lot of sympathy with) is the idea that we should pay people more to take surveys. However, if this is done too simply the result is a massive increase in fraud – the people wanting to cheat the system move faster and work harder than the typical participant. So, while paying more will probably (and should probably) be part of the answer, it can’t be done in isolation.

What Needs to Be Done?
We need more studies like the CASE studies, looking at more countries, at B2B, and at niche audiences. We need to research the extent to which branded online communities help. We need to research the impact this has on CX research (which often offers zero or minimal incentives).

We need buyers to insist on better data and to check that they are getting it. In a conversation with a long-term insider, I heard the view that only about 25% of agencies and end clients manually check their data to reject suspect participants. This reduces the pressure on vendors to keep finding and removing bad data; vendors who pass on bad data can charge lower prices than those that do a better job of cleaning the data.

We need incentive systems that encourage honesty and engagement. We need to make our surveys better, which means more engaging, easier, clearer, and more convenient. This probably means tackling the methodological issues that bedevil mixed-mode approaches. Newer options such as voice, chatbots, etc. are great for some people, but not for everybody, nor for every type of project.

Solving the Participant Crisis – #NewMR Webinar 14 April 2022
To help address this problem, NewMR is hosting a webinar on 14 April – click here to register for it. The webinar comprises two presentations:

  • Shifra Cook, CEO and Founder of Ayda, will show how technology can help prioritize participants.
  • Ray Poynter, Founder of NewMR, will share 7 tips for creating better surveys.