AI and Synthetic Data Dominate ESOMAR Congress in Athens
Ray Poynter, 17 September 2024
Last week saw over 1,000 insight and market researchers from 78 countries gather in Athens for ESOMAR’s Annual Congress. With three stages and a host of side events, the range of topics discussed was immense, but one theme stood out above the rest: AI. One aspect of AI in particular was widely discussed, reviewed, and commented on: Synthetic Data.
Multiple Aspects of AI
Here are just a few of the presentations on the topic of AI:
- “A 21st Century Euphrosyne: How AI and insight communities unlocked the secret of joy” by Andrew Cannon and Paul Hudson – how AI helps in generating insights from communities and enhancing joy.
- “The Human AI Symbiosis: Defining the future of insights: where wisdom meets technology” by David V. L. Smith and Adam Riley – focusing on the relationship between human insights and AI.
- “Fear of Missing Out, Fear of Getting In: What are opportunities for marketers in the AI era?” by Minh Nguyen, Mahima Surana, and Munish Bhamoriya – exploring AI’s role in marketing.
- “Semantic Frontiers: Unearthing trends with AI and knowledge graphs” by Benjamin Bado, Benoit Hubert, and Laurent Vuillon – covering AI’s use in trend discovery.
- “AI Power and Human Expertise: A product innovation journey” by Nic Umana, Meghan Reinhardt, and Donovan Kennedy – looking at how AI complements human expertise in product innovation.
- “AI: Perseus or Zeus? How AI can be both the hero and antagonist in researchers’ quest for the truth” by Kirstin Hamlyn and Suhasini Sanyal – examining the dual role of AI in market research.
- “Amplifying Human Insights with AI” by Shari Aaron, Shayna Beckwith, Matt Blacknell, and Betsy Fitzgibbons – discussing AI’s impact on improving human insights in delivery services.
- “AI-Enhanced Market Research: Assembling a perfect team” by Laurence Minisini, Orkan Dolay, Denis Bonnay, and Melissa Short – focusing on AI’s role in enhancing market research.
Synthetic Data
There were four key presentations focused on synthetic data:
- “Mapping Billions of Synthetic Journeys: Creating a synthetic population for enhanced out-of-home audience performance”: Author: Jean-Noël Zeh. This paper showed how Ipsos had worked with Mobimétrie. The project combined census data for France with data from a panel of 18,000 people who tracked their journeys with GPS to create a synthetic population of 4 million people. This allowed the team to produce information for a range of out-of-home advertisers, such as Cityz Media and JCDecaux. On this occasion the synthetic data were generated via a form of data fusion.
- “Synthetic Data in Marketing Studies: Exploring the promise of generative AI and synthetic data”: Authors: Samuel Cohen, Thomas Duhard. The presentation showed how over 7,000 data sets from Pew Research had been used to validate the system’s ability to boost under-sampled groups – an approach often referred to as augmented synthetic data (a toy illustration of this idea appears after this list). The presentation highlighted a case study looking at swing voters in France in the European Elections.
- “Unleashing Innovation in Market Research: Using synthetic data to solve client problems”: Authors: Julia Brannigan and Kerry Jones. This paper was a collaboration between an agency and an end client. The researchers generated synthetic personas and then subjected them to qualitative questioning. By comparing the results to real projects, they formed the view that a) the personas seemed able to answer simple, survey-type questions and b) the personas were not good at deeper questions with emotional nuances.
- “Modern Day Prometheus: Digital respondents and their implications for market research”: Authors: Michael Patterson and Cole Patterson. The authors ran side-by-side comparisons, asking the same questionnaires of real and synthetic samples. In aggregate, the synthetic samples replicated the real data well. However, looking at individual cases showed some areas of concern, as did dealing with emotional and nuanced concepts.
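As a purely illustrative aside, the “augmented synthetic data” idea mentioned above can be sketched in a few lines of Python: fit a very simple model to the respondents you do have in an under-sampled group and draw additional synthetic respondents from it. None of this is taken from the presentations; the column names, the toy data, and the per-question independence assumption are mine.

```python
# Illustrative sketch only: a naive "augmented synthetic data" approach that
# boosts an under-sampled group by sampling from its observed answer
# distributions. Real systems use far richer generative models; the column
# names and the independence assumption here are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Toy survey data: group "B" is under-sampled (only 20 of 200 respondents).
survey = pd.DataFrame({
    "group":    ["A"] * 180 + ["B"] * 20,
    "q1_brand": rng.choice(["X", "Y", "Z"], size=200),
    "q2_nps":   rng.integers(0, 11, size=200),
})

def augment_group(df: pd.DataFrame, group_col: str, group: str, n_new: int,
                  rng: np.random.Generator) -> pd.DataFrame:
    """Draw n_new synthetic respondents for `group`, sampling each question
    independently from that group's observed answer distribution."""
    observed = df[df[group_col] == group]
    synthetic = {group_col: [group] * n_new}
    for col in df.columns:
        if col == group_col:
            continue
        values, counts = np.unique(observed[col], return_counts=True)
        probs = counts / counts.sum()
        synthetic[col] = rng.choice(values, size=n_new, p=probs)
    out = pd.DataFrame(synthetic)
    out["synthetic"] = True  # flag synthetic rows so they can be weighted or audited
    return out

# Boost group "B" from 20 real respondents to 120 rows in total.
augmented = pd.concat(
    [survey.assign(synthetic=False), augment_group(survey, "group", "B", 100, rng)],
    ignore_index=True,
)
print(augmented.groupby(["group", "synthetic"]).size())
```

The independence assumption is exactly where such a naive approach breaks down: it preserves each question’s marginal distribution but loses the correlations between answers, which echoes the presenters’ finding that synthetic respondents cope with simple survey-type questions better than with nuanced ones.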
Other Aspects of Synthetic Data at Congress
The ESOMAR AI Task Force had a face-to-face meeting at Congress, and a working group for Synthetic Data was established. The group will, in conjunction with the ESOMAR Professional Standards Committee, work to create a definition of Synthetic Data, to establish a set of case studies, and to help the committee create recommendations.
I chaired a panel conversation on the topic of Synthetic Data, which gave the audience a chance to debate the many issues they had seen presented. There was a wide diversity of opinion in the room about what works, what we should do next, and what the short-term and long-term implications of Synthetic Data will be for insights and market research.
My net takeout?
Overall, it is clear that insights and market research are moving quickly to explore and utilize many aspects of AI. Sometimes our profession is accused of moving too slowly; that is not true of AI. There may be a few companies that are moving slowly, but the majority seem to be making rapid progress.
Ray:
What was the audience’s reaction to telling their management that they are using synthetic data? How does their management feel about synthetic (i.e., fake) respondents?
Steve:
“Fake” would imply that the provider was trying to deceive the buyer. In all of the projects presented, the buyers were active parties in the decision.