The material below is an excerpt from a book I am writing with Navin Williams and Sue York on Mobile Market Research, but its implications are much wider and I would love to hear people’s thoughts and suggestions.
Most commercial fields have methods of gaining and assessing insight other than market research, for example testing products against standards or legal parameters, test launching, and crowd-funding. There are also a variety of approaches that, although used by market researchers, are not seen by the marketplace as exclusively (or even, in some cases, predominantly) the domain of market research, such as big data, usability testing, and A/B testing.
The mobile ecosystem (e.g. telcos, handset manufacturers, app providers, mobile services, mobile advertising and marketing, mobile shopping, etc.) employs a wide range of these non-market research techniques, and market researchers working in the field need to be aware of the strengths and weaknesses of these approaches. Market researchers need to understand how to use the non-market research techniques themselves and how to use market research to complement them.
The list below covers techniques frequently used in the mobile ecosystem which are either not typically offered by market researchers or which are offered by a range of other providers as well as market researchers. The key items are:
- Usage data, for example web logs from online services and telephone usage from the telcos.
- A/B testing.
- Agile development.
- Crowdsourcing, including open-source development and crowdfunding.
- Usability testing.
- Technology or parameter driven development.
The mobile and online worlds leave an extensive electronic wake behind users. Accessing a website tells the website owner a large amount about the user: the hardware, location, operating system, and the language the device is using (e.g. English, French, etc.), and the owner might estimate things like age and gender based on the sites the user visits and the answers they pick. Use a mobile phone and you tell the telco who you contacted, where you were geographically, how long the contact lasted, and what sort of contact it was (e.g. voice or SMS). Use email, such as Gmail or Yahoo, and you tell the service provider who you contacted, which of your devices you used, and the content of your email. Use a service like RunKeeper or eBay or Facebook and you share a large amount of information about yourself and, in most cases, about other people too.
In many fields, market research is used to estimate usage and behaviour, but in the mobile ecosystem there is often at least one company that can see this information without using market research, and see it in much better detail. For example, a telco does not need to conduct a survey with a sample of its subscribers to find out how often they make calls, how many texts they send, or how many of those texts go to international numbers. The telco has this information for every user, with no sampling error.
Usage data tends to be better, cheaper, and often quicker than market research for recording what people did. It is much less powerful at working out why patterns occur, and it is thought (by some) to be weak at predicting what will happen if circumstances change. However, it should be noted that advocates of big data, and in particular ‘predictive analytics’, believe it is possible to answer ‘what-if’ questions from usage/behaviour data alone.
Unique access to usage data
One limitation to the power of usage data is that in most cases only one organisation has access to a specific section of usage data. In a country with two telcos, each will only have access to the usage data for their subscribers, plus some cross-network traffic information. The owner of a website is the only company who can track the people who visit that site (* with a couple of exceptions). A bank has access to the online, mobile and other data from its customers, but not data about the users of other banks.
This unique access feature of usage data is one of the reasons why organisations buy data from other organisations and conduct market research to get a whole market picture.
* There are two exceptions to the unique access paradigm.
The first is that if users can be persuaded to download a tracking tool, such as the Alexa.com toolbar, then that service can build a large, but partial, picture of the users of other services. This is how Alexa.com is able to estimate the traffic for the leading websites globally.
The second exception is if the service provider buys or uses a tool or service from a third party then some information is shared with that provider.
A complex and comprehensive example of this type of access is Google who sign users up to their Google services (including Android), offer web analytics to websites, and serve ads to websites, which allows them to gain a large but partial picture of online and mobile behaviour.
Legal implications of usage data
Usage data, whether it is browsing, emailing, mobile, or financial, is controlled by law in most countries, although the laws vary from one jurisdiction to another. Because the scale and depth of usage data are a new phenomenon, and because the tools to analyse it and the markets for selling/using it are still developing, the laws tend to lag behind practice.
A good example of the challenge that legislators and data owners face in determining what is permitted and what is not is the trouble Google ran into in Spain and the Netherlands towards the end of 2013. The Dutch Government’s Data Protection Agency ruled in November 2013 that Google had broken Dutch law by combining data from its many services to create a holistic picture of users. Spain went one step further and fined Google 900,000 Euros (about $1.25 million) for the same offence. This is unlikely to be the end of the story: the laws might change, Google might change its practices (or the permissions it collects), or the findings might be appealed. However, the cases illustrate that data privacy and protection are likely to create a number of challenges for data users and legislators over the next few years.
The definition of A/B testing is still evolving, and it is likely to expand further over the next few years. At its heart, A/B testing is based on a very old principle: create a test where two offers differ in only one detail, present the two options to matched but separate groups of people, and whichever proves more popular is the winner.
What makes modern A/B testing different from traditional research is the tendency to evaluate the options in the real market, rather than with research participants. One high profile user of A/B testing is Google, who use it to optimise their online services. Google systematically, and in many cases automatically, select a variable, offer two options, and count the performance with real users. The winning option becomes part of the system.
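To make the mechanics concrete, here is a minimal sketch of how a winner might be declared from real-traffic counts, using a standard two-proportion z-test. This is an illustrative approach, not a description of Google’s actual machinery; the function name and the roughly 95% confidence threshold are assumptions.

```python
import math

def ab_winner(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Decide between two variants from real-traffic counts.

    conv_a / conv_b: conversions (e.g. clicks) for each variant.
    n_a / n_b: number of users exposed to each variant.
    Returns "A", "B", or "no clear winner" at roughly 95% confidence.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if abs(z) < z_crit:
        return "no clear winner"
    return "B" if z > 0 else "A"
```

For instance, 150 conversions from 1,000 users on variant B against 100 from 1,000 on variant A declares B the winner, whereas 105 against 100 returns “no clear winner” – a reminder that small differences need large volumes of traffic, which is exactly what real-market A/B testing provides.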
Google’s A/B testing is now available to users of some of its systems, such as Google Analytics. There are also a growing range of companies offering A/B testing systems. Any service that can be readily tweaked and offered is potentially suitable for A/B testing – in particular virtual or online services.
The concept of A/B testing has moved well beyond simply testing two options and assessing the winner, for example:
- Many online advertising tools allow the advertiser to submit several variations; the platform then adjusts which execution is shown most often, and to whom, so as to maximise a dependent variable such as click-through rate.
- Companies like Phillips have updated their direct mailing research/practice by developing multiple offers, e.g. 32 versions of a mailer, employing design principles that allow the differences to be assessed. The mailers are sent to a proportion of the full database to assess their performance in the marketplace. The results are used in two ways: 1) the winning mailer is used for the rest of the database; 2) the performance of the different elements is assessed to create predictive analytics for future mailings.
- Dynamic pricing models are becoming increasingly common in the virtual and online world. Prices in real markets, such as stock exchanges, have long been set dynamically, but now services such as eBay, Betfair, and Amazon apply differing types of automated price matching.
- Algorithmic bundling and offer development. With services that are offered virtually the components can be varied to iteratively seek combinations that work better than others.
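The first bullet above, where a platform shifts traffic towards the better-performing executions as evidence accumulates, is commonly implemented as a ‘multi-armed bandit’. The sketch below shows one simple bandit strategy, epsilon-greedy; the data layout and function name are illustrative assumptions, not a description of any particular ad platform.

```python
import random

def choose_execution(history, epsilon=0.1):
    """Pick which ad execution to show to the next user.

    history: dict mapping variant name -> (shows, clicks) so far.
    With probability epsilon, explore a random variant so estimates
    keep improving; otherwise exploit the best observed performer.
    """
    if random.random() < epsilon:
        return random.choice(list(history))
    def ctr(variant):
        shows, clicks = history[variant]
        # Variants never shown get priority, so every option is tried.
        return clicks / shows if shows else float("inf")
    return max(history, key=ctr)
```

Over many users this automatically concentrates traffic on the strongest execution while still collecting enough data on the others to detect if their performance changes.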
The great strength of A/B testing is in the area of small, iterative changes, allowing organisations to optimise their products, services, and campaigns. Market research’s key strength, in this area, is the ability to research bigger changes and help suggest possible changes.
Agile development refers to operating in ways where it is easy, quick, and cheap for the organisation to change direction and to modify products and services. One consequence of agile development is that organisations can try their product or service in the marketplace, rather than assessing it in advance.
Market research is of particular relevance when the costs of making a product are large, or where the consequences of launching an unsatisfactory product or service are severe. But if products and services can be created easily and the consequences of failure are low, then ‘try it and see’ can be a better option than classic forms of market research.
Whilst the most obvious place for agile development is in the area of virtual products and services, it is also used in more tangible markets. The move to print on demand books has reduced the barriers to entry in the book market and facilitated agile approaches. Don Tapscott in his book Wikinomics talks about the motorcycle market in China, which adopted an open-source approach to its design and manufacture of motorcycles, something which combined agile development and crowdsourcing (the next topic in this section).
Crowdsourcing is being used in a wide variety of ways by organisations, and several of these ways can be seen as alternatives to market research, or perhaps as routes that make market research less necessary. Key examples of crowdsourcing include:
- Open source. Systems like Linux and Apache are developed collaboratively and then made freely available. The priorities for development are determined by the interaction of individuals and the community, and the success of changes is determined by a combination of peer review and market adoption.
- Crowdfunding. One way of assessing whether an idea has a good chance of succeeding is to try and fund it through a crowdfunding platform, such as Kickstarter. The crowdfunding route can provide feedback, advocates, and money.
- Crowdsourced product development. A great example of crowdsourcing is the T-shirt company Threadless.com. People who want to be T-shirt designers upload their designs to the website. Threadless displays these designs to the people who buy T-shirts and asks which ones they want to buy. The most popular designs are then manufactured and sold via the website. In this sort of crowdsourced model there is little need for market research, as the audience get what the audience want, and the company does not pay for designs unless they prove successful.
Some market research companies offer usability testing, but there are a great many providers of this service who are not market researchers and who do not see themselves as market researchers. The field of usability testing brings together design professionals, HCI (human–computer interaction) specialists, and ergonomists, as well as market researchers.
Usability testing for a mobile phone, or a mobile app, can include:
- Scoring it against legal criteria to make sure it conforms to statutory requirements.
- Scoring it against design criteria, including criteria such as disability access guidelines.
- User lab testing, where potential users are given access to the product or service and are closely observed as they use it.
- User testing, where potential users are given the product or given access to the service and use it for a period of time, for example two weeks. The usage may be monitored, there is often a debrief at the end of the usage period (which can be qualitative, quantitative, or both), and usage data may have been collected and analysed.
Technology or parameter driven development
In some markets there are issues other than consumer choice that guide design and innovation. In areas like mobile commerce and mobile connectivity, there are legal and regulatory limits and requirements on what can be done, so the design process will often focus on how to maximise performance and minimise cost whilst complying with the rules. In these situations, guidance comes from professionals (e.g. engineers or lawyers) rather than from consumers, which reduces the role for market research.
This section of the chapter has looked at a wide range of approaches to gaining insight that are not strengths of market research. This list is likely to grow over time as technologies develop and as the importance of the mobile ecosystem continues to grow.
As well as new non-market research approaches being developed, it is possible, perhaps likely, that areas currently seen as largely or entirely the domain of market research will come to be shared with non-market research companies and organisations. The growth in DIY or self-serve options for surveys, online discussions, and even whole insight communities is an indication of this direction of travel.
So, that is where the text is at the moment. Plenty of polishing still to do. But here are my questions:
- Do you agree with the main points?
- Have I missed any major issues?
- Are there good examples of the points I’ve made that you could suggest highlighting/using?