Do you know how to assess innovations in market research and insights?

Posted by Ray Poynter, 1 November 2018


We are surrounded by new approaches to understanding customers and markets, for example: behavioural economics, automated facial coding, neuroscience, chatbots, passive tracking, Artificial Intelligence, and of course big data. However, evaluating these new options is becoming ever harder, because there are so many of them, and because their claims rest on technologies that are hard for non-experts to understand.

In this post, I want to share some of the techniques I use to assess innovations in market research and insight. In essence, I look at the following issues:

  1. Can it be provided by multiple suppliers? If an innovation can only be utilised via one supplier, it is much less likely to be successful, and I am much less likely to recommend it. Good innovations benefit from competition: prices come down, and diffusion into the market is accelerated when several solutions are available. When online surveys burst on the scene, we could use several different platforms to write the surveys, and choose between several different panel companies for the sample – this promoted adoption and cost reductions.
  2. Does it increase speed and/or reduce net price? In most cases, an innovation is only successful if it reduces the total spend on research, and typically the gap between problem identification and results needs to be shorter than with other options. The shift to CATI, the shift to online, and the shift to packaged solutions such as Zappi were all promoted by reductions in cost and time. ‘Better’ is only occasionally a driver of success.
  3. Does it tackle a known problem that clients will prioritise? When Maurice Millward and Gordon Brown invented brand tracking in the 1980s, advertisers knew they needed a method of evaluating their spend. When, in 2003, Fred Reichheld came up with NPS, adoption was rapid because brands knew they wanted a single-number system for evaluating customer satisfaction. These are the situations when ‘better’ works, i.e. when better addresses a known need that people will prioritise.
  4. Is it scalable, in terms of its intended role? If the technique is supposed to be a mainstream solution, then it needs to be capable of being rolled out across the developed economies (or at the very least across the USA). This means the technology needs to handle large numbers of projects, the support teams need to be capable of being scaled up, and there needs to be a method of finding sample. If a technique requires data scientists, or brain specialists, it is unlikely to make it as a mass product. If the product is intended to be an occasional product, for example video ethnography, it still needs to be scalable, but the scale is smaller. The questions become things like: can you find enough willing users, are there sufficient relevant researchers (e.g. people capable of designing and interpreting video ethnographic studies), and can the technology work in the markets of interest, with the sorts of participants you have in mind?
  5. Is it easy for prospective clients to know it works? The best proof for most clients is that other similar clients are using it successfully. The next option is that the service is very simple to assess, for example the role of a panel company. A third method is to use products that are already known and trusted (for example, the way that Zappi offers Link tests developed by Millward Brown).
  6. Is it a good cultural fit with the market? DIY has been enormously successful in the USA, partly because it is a good cultural fit – many people like to feel they are in charge and are getting a bargain. In some Asian markets, the idea of telling clients that they can do the work themselves is less attractive. In Europe, ideas around semiotics are very attractive, but they are a poorer fit in the USA.
  7. Is it easy to transition to the new solution? If a client has to stop using something in order to get the advantages of the new, the new will suffer. Loss avoidance often drives decision making: clients do not want to lose historic data, re-design processes, or have to educate internal clients. People like to talk about ‘disruption’, but buyers do not often voluntarily choose to be disrupted.
  8. Does it have a wide area of use? Some techniques are good, but they are trapped within a niche. A good example of this is VR (virtual reality). For tests that need VR, it is invaluable, but most market research does not need VR, indeed most market research would be slower and more expensive if it used VR. Companies like Steve Needel’s Advanced Simulations have been using VR for over 25 years, dealing with problems that need VR – but this does not mean VR will expand into areas where it is not essential.

Using these questions, we can look at various recent changes that have been successful, for example:

  • Online Communities – these delivered faster/cheaper research and enabled brands to have conversations with their customers, which was something they had been asking for. Initially the take-up of online communities was slow, as there were only a few providers and the adoption process could be disruptive.
  • Mobile Surveys – the main driver here was the fact that so many clients (and agencies) knew they needed to do it. Participants made it necessary because they chose to respond via mobiles.
  • NPS – Net Promoter Score – from 2003, in about 10 years this metric went from being published in the Harvard Business Review to being almost ubiquitous. Despite attempts at legal restrictions, the solution was available from many suppliers, the cost was low, and it made the lives of clients easier (because it meant they could focus on a single number). The many attacks on the true validity of the measure have not dented its adoption, because its adoption was not driven by its technical excellence – it was driven by ease of collection, ease of analysis, and its production of a single number (the calculation is sketched just after this list).
  • Smartphone video – this has taken off because it answers some key needs, it makes research buyers look smarter to research users, and it can be undertaken in a wide range of ways. I predict it will grow even more as the infrastructure to support it grows (e.g. transcription, storage, AI, etc.).
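
As an aside, part of NPS’s appeal is that the single number is trivial to compute. Here is a minimal sketch in Python of the standard calculation (respondents rate likelihood to recommend on a 0–10 scale; 9–10 are promoters, 0–6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors). The ratings in the example are hypothetical, purely for illustration.

```python
def net_promoter_score(scores):
    """Return NPS (-100 to +100) from a list of 0-10 ratings."""
    if not scores:
        raise ValueError("need at least one rating")
    promoters = sum(1 for s in scores if s >= 9)   # scores of 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scores of 0 to 6
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical ratings, purely for illustration
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]
print(f"NPS: {net_promoter_score(ratings):+.0f}")  # prints "NPS: +30"
```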

And here are some that have not been so successful:

  • Improved Tracking. For several years there have been options to add passive data, big data, and social media data to brand tracking studies, but only a few companies have adopted them. The two key blockers have been a) it wasn’t faster/cheaper/easier, and b) the disruption of losing historical benchmarks. This change will happen, but only when the replacements are easier and when clients reach the point that they can’t keep repeating past errors.
  • Behavioural Economics at Scale. There are some really good small agencies doing great BE work, but the use of BE has not been scaled up. There are no mass-produced solutions; each project is designed by an expert (and there are only a few of them) and interpreted by an expert. BE has influenced survey design and panel management, but large-scale BE remains rare, and will remain rare unless it can be standardised or utilised via AI.
  • Social Media Research. This has been around for more than 10 years now, and some good work is being done, but it has not taken off. Indeed, many organisations have scaled back their use of social media research for areas other than integration with social media advertising delivery and evaluation. The solutions provided by social media research to issues such as customer satisfaction, market measurement, brand tracking, and segmentation tend to be slower and more expensive than surveys – which means that surveys remain much more widely used for these purposes.

 

Do you want to learn more about how to evaluate new research approaches?

I will be running a workshop on evaluating innovations at IIeX Bangkok, 28-29 November. Indeed, I will be running the 40-minute workshop twice at the conference, with hands-on practice to help you develop your skills at evaluating the future. If you are interested in attending, there are still conference places available.

Do you want to ask about other new techniques?
If so, ask about them in the comments below.

3 thoughts on “Do you know how to assess innovations in market research and insights?”

  1. Ray – thought provoking as always. Begs a question – how many of these criteria does a technique have to meet to be rated as successful in your scheme? Like you say (thanks, by the way), our VR approach meets some of your criteria, not others, yet we would like to think we’ve been successful – heck, we’ve been going for 25 years now. But we don’t have wide areas of use and clients are not prioritising things like category management research these days. It’s a tool for a purpose, which it sounds like you would downgrade because its use is not sufficiently wide?

  2. Hi Steve, in this context I am thinking of success as something that is associated with about 1% or more of global market research, i.e. at least $0.5 billion. As your company have demonstrated, it is possible to offer a good service and make a good living using a technique that is not used for a wide range of different purposes. The point of the post is not to belittle techniques like eye-tracking, VR, AR, or traditional ethnography, but to highlight that there is a difference between ‘the next big thing’ and something which fills a niche.

  3. Well, that makes sense. We never expected VR to be the next big thing. Perhaps there are two definitions of success: one that is the next big thing (like online research, mobile research, etc.) and one that is meant to solve more specific research problems.
