Published by Ray Poynter, 3 October 2021
The first CT Scanner (Computed Tomography Scanner) was produced 50 years ago by British electrical engineer Godfrey Hounsfield. CT scanning rotates an X-ray source around the subject, and the results are picked up on an array of detectors. A computer then collates the multiple readings and creates a set of three-dimensional slices through the body, allowing doctors to see what is happening inside people.
From a collecting-better-data point of view, the interesting thing about the CT Scanner’s development was the change in direction that Hounsfield caused. From the 1960s through to the early 1970s, the focus was on creating ever more accurate X-rays with finer resolution. But X-rays reduce a three-dimensional subject to a two-dimensional picture, and whilst they are good at representing bones, they are not good for understanding soft tissue. Hounsfield’s scanner had a much lower resolution, but it produced a 360° view, enabling new things to be seen. Over time, the resolution and operation of CT scanning improved, and in 1979 Hounsfield, along with the South African physicist Allan Cormack, was awarded the Nobel Prize in Physiology or Medicine for their work on the practice and theory of CT scanning.
The lesson for market research?
The key lesson is to focus on usefulness, not just accuracy, and to aim for a complete picture rather than an enhanced partial one. Examples of this approach include:
- Longitudinal data. If we take a single snapshot of satisfaction, needs, or beliefs, we do not get a sense of how they are changing; collecting data across time gives a better picture. If we collect a different sample each time we gather new data, we are running cross-sectional studies, which means we have to assume that any differences are due to people changing – but we can’t be sure. Collecting longitudinal data, i.e. measurements from the same people over time, allows us to see the changes as they happen. This is particularly important in CX studies.
- Identifying the undercounted. A good first question to ask when looking at a sample is ‘Who is missing and who is being undercounted?’ – instead of just looking at how big the sample is. Consider the 2016 USA Presidential Election: the aggregate of the polls (i.e. based on hundreds of thousands of responses in total) predicted a win for Hillary Clinton, but in terms of the Electoral College, Donald Trump was the clear winner. In reviewing what went wrong, one of the key findings was that white Christians without a college education were under-sampled. This was partly a fault of sampling and weighting, but it was also a problem of non-response error: the people who distrusted ‘the system’, and who were more pro-Trump, were less likely to agree to do a survey. In the NewMR webinar on 14 October 2021, Allen Porter & Malgorzata Mleczko from Enghouse Interactive will show how to use multi-mode methods to collect data from people who might otherwise be missed or undercounted.
- Asking the right question. There are many ways to ask the wrong sort of question, for example asking questions that are affected by biases, e.g. “Do you cheat on your partner?”, or asking questions that people don’t know the answer to, e.g. “How important to you is it that the brand of detergent you use has good advertising?” There are also cases where the question is wrong because of the context. When New Coke was being tested in the 1980s, the researchers tested whether people preferred the new flavour to the old flavour (and they did). However, they did not test “Can we change the flavour of Coke?”, and it turned out the answer to that was no. More data would not have helped; they needed a better question.
- Triangulation. Triangulation is the process of taking a reading from two or more sources to get a better result. In addition to a survey, perhaps collect some video evidence, and combine that with observational data, or published third-party data, or benchmarks.
- Accounting for heterogeneity. If your data is too limited, it can (and will) fool you. One example is Simpson’s Paradox, where the trend in the aggregate runs in the opposite direction to the trend in the groups that make up the data. In 1973, data from the University of California, Berkeley showed that 44% of the men who applied were admitted, but only 35% of the women. Yet when the separate departments were examined, women were as likely or more likely to be accepted in most departments; the aggregate gap arose largely because women applied in greater numbers to the more competitive departments. If the data set had not included the breakdown by department, the wrong conclusion would have been drawn.
- Auditing your AI. Artificial intelligence and machine learning are the next big thing, but they introduce a new type of bad-data problem: biases and errors built into the training data. For example, back in 2014 Amazon started developing software that would process job applications and select the five most employable candidates from a list of, say, 100. However, despite several attempts to fix the program, it kept biasing the results in favour of men and away from women, because that is what had been happening in the real process: the software learned (and perfected) the biases implicit in the data. More data was not the answer; they needed better data.
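The Simpson’s Paradox point above is easy to reproduce in a few lines of code. The sketch below uses made-up admission counts (illustrative only, not the actual Berkeley figures) to show how one group can lead within every department yet trail in the aggregate:

```python
# Simpson's Paradox: within-group trends can reverse in the aggregate.
# Hypothetical admission counts: department -> group -> (admitted, applied).
applications = {
    "A": {"men": (480, 800), "women": (65, 100)},   # less selective dept
    "B": {"men": (20, 200), "women": (135, 900)},   # more selective dept
}

def rate(admitted, applied):
    return admitted / applied

# Per-department rates: women are admitted at a higher rate in BOTH departments.
for dept, groups in applications.items():
    for group, (adm, app) in groups.items():
        print(f"Dept {dept}, {group}: {rate(adm, app):.0%}")

# Aggregate rates: the direction reverses.
for group in ("men", "women"):
    adm = sum(applications[d][group][0] for d in applications)
    app = sum(applications[d][group][1] for d in applications)
    print(f"Overall, {group}: {rate(adm, app):.0%}")
```

With these numbers, women lead in both departments (65% vs 60%, and 15% vs 10%) but trail overall (20% vs 50%), because most women applied to the more selective department. The aggregate alone would point to the wrong conclusion.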
Want to find out more about collecting better data?
NewMR is hosting a webinar on this topic on 14 October, with two presentations and Q&A sessions – click here to register
- What does collecting better data mean, and how to achieve it?
Ray Poynter (NewMR & Potentiate) presents a 2021 State of the Art review of the issues surrounding the collection of better data. Ray will outline the key challenges, new initiatives, the impact of quality on decisions, and pointers to what is likely to happen in the near future.
- Thinking Outside the Mode – Improving Accuracy by Choosing a Multi-Mode Approach
Allen Porter & Malgorzata Mleczko, from Enghouse Interactive will show how to use multi-mode methods to collect better data. Allen and Malgorzata will show how using a mode-independent approach to designing your studies can significantly improve data across your organization.