Market Research in Japan, an Alternative Adoption Curve

Click here to read in Japanese – 日本語 Most market researchers are familiar with the Rogers Adoption Curve, which divides the adopters of a successful new technology into Innovators, Early Adopters, Early Majority, Late Majority, and Laggards. In a typical version of the curve the proportions tend to be: Innovators 2.5%, Early Adopters 13.5%, Early Majority 34%, with the slower two categories (Late Majority and Laggards) making up the remaining 50%. However, in Japan, in market research and perhaps beyond, I think the proportions in the Rogers Adoption Curve need revisiting. Data presented by Mr Hagihara (author of ‘Next Generation Market Research’) at a meeting of JMRX in Tokyo this week, showing the adoption of CATI in the 80s and 90s, suggest that Japan was slow to innovate in market research. More recently, the data presented by Mr Hagihara show that Japan was very slow to start adopting online surveys. However, by 2011 Japan had the highest percentage of online research in the world: 40% of research that year, by value, was conducted online, according to JMRA and ESOMAR. Talking with leading opinion formers in Tokyo this week, I formed the opinion that the Adoption Curve has a different shape in Japan. […]
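As an aside, the classic Rogers proportions are not arbitrary: they fall out of a normal distribution, with the category boundaries placed at −2, −1, 0, and +1 standard deviations from the mean. A minimal Python sketch shows the arithmetic (it assumes SciPy is available, and is purely illustrative – these are the textbook figures, not Mr Hagihara’s data):

```python
# A minimal sketch of where the classic Rogers proportions come from.
# Assumes SciPy is installed; these are standard normal-distribution
# figures, not data from the JMRX presentation.
from scipy.stats import norm

labels = ["Innovators", "Early Adopters", "Early Majority",
          "Late Majority", "Laggards"]
# Category boundaries at -2, -1, 0 and +1 standard deviations.
cuts = [0.0] + [norm.cdf(b) for b in (-2, -1, 0, 1)] + [1.0]

for label, lo, hi in zip(labels, cuts, cuts[1:]):
    print(f"{label}: {(hi - lo) * 100:.1f}%")
# Prints 2.3%, 13.6%, 34.1%, 34.1%, 15.9% -- conventionally rounded
# to 2.5 / 13.5 / 34 / 34 / 16.
```

Which is exactly why the Japanese data are so interesting: if the curve in Japan really has a different shape, these normal-distribution proportions stop being a safe planning assumption.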

How to run a successful MROC in Japan?

Click here to read in Japanese – 日本語 Yesterday in Tokyo I attended two events (one run by the JMA and one by JMRX – sponsored by GMO Research) and a client meeting, and one specific question arose at all three. The background to the question lies in Japan’s experience with MROCs (in particular, short-term, qualitative research communities). Although some companies have been very successful, several others have not, and some clients are beginning to worry about MROCs. So, the question I was asked three times was “How do you create a good MROC in Japan?” By the time I had spoken to three audiences I had refined my answer down to three clear points: Good recruitment. A short-term, qualitative MROC (e.g. one month, 60 people) needs to be based on the right people. These people need to be informed about what they will be expected to do, they need to understand how to access the MROC, and they need to be engaged with the topic (they might love the topic, hate it, be curious about it, have recently started using it, or perhaps have given it as a gift – but they need to be engaged). Good […]

NewMR and Asia

I am about halfway through my current tour of Asia: I have been in Singapore and Hong Kong, and I fly to Tokyo tonight. During my time here I have split my days between events and one-on-one meetings with clients and agencies. I am convinced that the next leaps forward for NewMR will come from Asia. There are several reasons why I think Asia will be the next factor in accelerating change: Hunger for change. One theme I am hearing from agencies and clients in China, Korea, Singapore, and beyond is a real desire to move forward, to get away from the static idea of surveys and focus groups and to embrace better alternatives. The interest in ethnography, semiotics, Big Data, neuroscience, social media, and communities is immense. Most of the sessions I have been booked for have been sold out. The complexity of the market. In Asia the languages are more complex (more complex for computers, that is), and the variations within markets are greater (for example, in China the gulf between what is possible in a Tier 1 city like Guangzhou and a Tier 3 city like Weihai, or a Tier 4 city like Chaozhou, is immense – but even […]
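To make “more complex for computers” concrete: Japanese and Chinese are written without spaces between words, so the whitespace tokenisation that English-language text-analytics tools take for granted yields nothing useful. A minimal sketch (the example sentence is my own, purely illustrative):

```python
# A tiny illustration of why Asian languages are harder for computers:
# Japanese (like Chinese) is written without spaces, so whitespace
# tokenisation -- the first step of most English-language text tools --
# returns nothing useful. Example sentence is my own.
english = "I bought a new phone"
japanese = "新しい電話を買いました"   # the same sentence in Japanese

print(english.split())    # ['I', 'bought', 'a', 'new', 'phone']
print(japanese.split())   # ['新しい電話を買いました'] -- one undivided lump
```

Real analysis of Japanese or Chinese text needs a word-segmentation step before anything else can happen, which is one reason tools developed for English travel badly in these markets.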

Laplace and the Big Data fallacy

Earlier this week I was in Singapore, attending the MRSS Asia Research Conference, which this year focused on the theme of Big Data. There was an interesting range of papers, including ones linking neuroscience, Behavioural Economics, and ethnography to Big Data. One reference repeated by several of the speakers, including me, was IBM’s four Vs, i.e. Volume, Velocity, Variety, and Veracity. Volume is a given: big data is big. Velocity relates to the speed at which people want to access the information. Variety reminds us that Big Data includes a mass of unstructured information, including photos, videos, and open-ended comments. Veracity relates to whether the information is correct or reliable. However, as I listened to the presentations, and heard at least three references to the French mathematician/philosopher René Descartes, my mind turned to another French mathematician, Pierre-Simon Laplace. In 1814, Laplace put forward the view that if someone were (theoretically) to know the precise position and momentum of every atom, it would be possible to calculate all their future positions – a philosophical position known as determinism. Laplace was shown to be wrong, first by the laws of thermodynamics, and second, and more thoroughly, by quantum mechanics. The […]
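A toy example helps show why Laplace-style foreknowledge is unattainable in practice: even in a fully deterministic system, an immeasurably small error in the starting state can swamp a forecast. A minimal sketch (my own illustration, not from any of the conference papers; the logistic map here simply stands in for any deterministic rule):

```python
# A toy illustration: a fully deterministic rule whose forecasts are
# destroyed by a tiny error in the measured starting state. The rule
# is the logistic map x -> 4x(1 - x).
def step(x: float) -> float:
    return 4 * x * (1 - x)

exact = 0.2
measured = 0.2 + 1e-10   # off by one part in ten billion

for _ in range(60):
    exact, measured = step(exact), step(measured)

print(f"exact: {exact:.4f}  measured: {measured:.4f}")
# After 60 steps the two trajectories bear no resemblance to each
# other, even though nothing random happened along the way.
```

However big the data set, the measurements feeding it are never exact, so the Laplacean dream of computing the future does not follow from simply collecting more.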

What will Telepresence bring to Market Research?

Helen Thomson has a great article in New Scientist (you’ll need to register to read it) about how we already have the technology to attend an event via a robot. Thomson starts her article by talking about a 7-year-old child who can’t attend school because of allergies and who attends via a robot, linking from his PC, via video, to the robot’s audio tools in the classroom – a phenomenon known as telepresence. Thomson talks about two leading brands of robots that are currently available on the market. The two brands are VGo and Anybots, which currently cost about $6,000 and $10,000, respectively. Pricey, but not as expensive as flying somebody from London to Sydney, to Hong Kong, to Tokyo, and back to London, which is what happens to me sometimes. However, Thomson reports that this technology is about to get a lot cheaper. Double Robotics have announced a product for 2013 which will use an iPad for its head and cost about $2,000. Thomson talks about a wide range of telepresence examples and issues, including: drones being used on the battlefield, surgeons operating on patients thousands of miles away, and even robots manipulated via signals detected through an fMRI scanner. My interest is […]

2012 – the year manual coding made a comeback!

In 2011, at events and conferences around the world, the industry seemed to be on the edge of a new world: one where automated coding, and in particular automated sentiment analysis, would allow researchers to tackle megabytes of open-ended text. A great example of that confidence was the ESOMAR 3D Conference in Miami. What a difference a year makes. Last week in Amsterdam the news was all about researchers manually coding vast amounts of open-ended comments, because the machines would not deliver what the researchers had hoped they would deliver. The prize, undoubtedly, went to Porsche and SKOPOS, who reported on a social media study in which they captured 36,000 comments, mostly from forums, and ended up coding the comments manually. I remain convinced that automated techniques will continue to develop and will soon open the door to large data sets. But for the time being, much of the material that market researchers handle will need to be, at least partly, coded by hand. My suspicion is that Twitter will prove to be less useful than blogs, open-ended comments in surveys, and conversations in MROCs. When I work with Twitter, my feeling is that the grammar is too unstructured, the […]
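To make concrete why the machines under-delivered: a great deal of automated sentiment analysis boils down to scoring words against positive and negative lexicons, which comes unstuck the moment a comment is sarcastic, idiomatic, or loosely written. A minimal sketch (purely illustrative; this is not how any particular vendor’s engine works):

```python
# A minimal, purely illustrative lexicon-based sentiment scorer.
# This is NOT any vendor's engine; it just shows the general approach
# and its obvious failure mode on sarcastic comments.
import string

POSITIVE = {"love", "great", "brilliant", "good"}
NEGATIVE = {"hate", "broken", "terrible", "bad"}

def naive_sentiment(comment: str) -> str:
    # Strip punctuation, lower-case, then count lexicon hits.
    cleaned = comment.lower().translate(
        str.maketrans("", "", string.punctuation))
    words = cleaned.split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("I love this car, it is brilliant"))      # positive (right)
print(naive_sentiment("Oh great, broken again. Just great."))   # positive (wrong!)
```

A human coder reads the second comment as a complaint instantly; a word-counting approach cannot, which hints at why studies like the Porsche/SKOPOS one fell back on coding by hand.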

Is the term MROC morphing?

When Brad Bortner, of Forrester, coined the term MROC (Market Research Online Community) in 2009, he defined it as a qualitative tool. This definition has been used widely to define qualitative communities, in contrast to online access panels (large, quant, and minimal community), and in contrast to community panels (large, qual and quant, with community). However, the distinction between access panels, community panels, and MROCs may soon be redundant. One of the reasons things are changing is that the platforms for conducting research via communities are changing. In the early days of using communities for research, there were essentially two types of platform: discussion-based systems and panel-based systems. Discussion-based systems, such as forums and bulletin boards, tended to have good discussion tools, but they had few survey options and few community management tools. This made them ideal for small communities, e.g. 30 to 300 members, where the overheads of looking after queries, incentives, sampling, etc. were very low. Companies that opted for this type of platform tended to stay with a qualitative type of research, making the term MROC synonymous with both the type of research and the type of software used. Panel-based systems, such as community panels, […]

What do we need? Faster Research! When do we need it? NOW!

Yes, it is a truism that we want better, cheaper, faster research, and that we have always wanted better, faster research. However, right now research is becoming irrelevant to many decision makers because it is not fast enough. This theme was covered in a recent article on Research-Live. A couple of weeks ago I was at the annual conference of the AMI (Australian Marketing Institute), and speaker after speaker highlighted the pace of change and the need to respond quickly. Mark Lollback from McDonald’s showed how his company went from tasting a product at a regular review session to launching it, with TV advertising, in fourteen days. Joanna McCarthy of Kimberly-Clark showed how they used scenario planning and ‘war gaming’ to respond to social media stories in real time (in sensitive areas like product malfunctions in toilet paper and nappies). I was at the AMI Conference as a keynote speaker and to launch my latest (free) book, The Quick and the Dead, which looks specifically at the need for speed and suggests a new framework for how organisations can adopt a Built for Speed approach. You can download a copy by clicking here.