Market Research: Belief in a Pseudo-Science

Probability is an interesting thing. It’s relatively easy to evaluate the probability of a particular event happening but, when it comes down to it, we humans are much more concerned with our own experience.

If we experience an unlikely event we tend to massively over-estimate the likelihood of that event happening again.

For example, last week I was hit on the head by a golf ball. There I was waiting for someone else in my group to play, minding my own business, when an errant drive from an adjacent hole landed a direct blow to my skull.

Fortunately, this was a good inch above either of the points where it could have killed me (the temple and the eye).

The probability of me being hit was tiny. I was playing golf with friends who I play with only once or twice a year. We had never played the course before. Every single shot we had hit up to that point had contributed to us being at that part of the course at that particular moment. And if I had selected a six iron instead of a seven iron as I teed off on the par 3 I would have been on the green not just short of it.

Objectively, millions of people play golf and hardly any serious injuries or deaths are reported, so the probability of my being hit again is as tiny as it always was.

But my mind is already working on ways it can avoid a repeat occurrence: should I always wear a hat, whose solid peak would afford some protection to my face? My Oakley sunglasses have lenses that are tested to be impact resistant and arms that cover the temple area – should I wear them like safety glasses every time I play (even when there’s no sun)?

Rationally I know that I should carry on as before, but emotionally I want to ‘learn’ from the experience.

Probability gets distorted in other ways in our perceptions too. The artificial learning we do from stories means that sensational news of child abductions or air disasters makes these risks feel far more significant than their probabilities suggest they should.

When it comes down to it, we aren’t as objective and scientific in evaluating factors that affect our own lives as we’re inclined to tell ourselves.

All of which brings me to market research.

As I explain in my forthcoming book Consumer.ology, market research is a pseudo-science. It isn’t a proven, demonstrable, repeatable science; it’s something that people feel makes sense and that they choose to believe in.

Once belief is established – usually because people like the idea of whatever is being suggested – the human mind’s lack of objectivity kicks in. We select the evidence that reinforces our beliefs and tacitly ignore the evidence that conflicts with them.

If you consider anything that other people believe in and you don’t, you will see this pattern; the tricky part is seeing it in something you have, up to this point, chosen to believe in yourself.

I will leave Consumer.ology to explain why the vast majority of market research is fundamentally flawed (and what you should do instead). But for now I’ll ask you to consider the probability that market research will be right, and why you shouldn’t let a misguided sense of probability determine your belief in it.

My assertion is that science can no more support the notion that asking consumers what they think is valid than it can support the idea that the position of the planets and your birthday determine what kind of day you’re going to have. The standard defence against this assertion is to say, “We’ve used market research on lots of occasions and it guided us to the right decision.”

What is the probability that a piece of market research will produce a result that is correct?

Broadly speaking there are three possible outcomes from conducting market research:

  1. The results suggest you should take a particular course of action that turns out to be profitable.
  2. The results are ambiguous.
  3. The results suggest you should take a particular course of action that turns out to be unprofitable.

Already, the probability of research being wrong is one third.

However, virtually no research is unbiased. The very moment that you ask a question you are pointing the spotlight in a particular direction. That direction is informed by what the business believes is important and often by what the business has decided it wants to do and is simply looking to justify.

Since the business should know what’s important, the likelihood is that it is looking for confirmation to do something it already intends to do. This will affect both the questions it asks and the way it interprets the results. Research becomes an exercise in confirmatory self-justification. Unless the company is clueless, this will be right more often than it’s wrong.

So now the probability of research being wrong is dropping further – perhaps as low as one in ten.

That makes it much easier to unconsciously dismiss these occasions as aberrations: some glitch in the works that can be overlooked because we ‘know’ research works from the other nine occasions.

But I would argue that we should look at the question of research validity from two different directions.

Firstly, since research does fail (and there are countless examples that have found their way into the public domain), we should seek to understand why.

Secondly, we should look at the science underlying the notion of asking people questions: does it make sense that people can explain what they think or what they will do?

It’s tempting to say “Of course”, because we all like to believe that we know what we think and that our actions are consciously determined.

But…

After examining all the evidence from psychology and neuroscience I have concluded that people can’t explain what they think or what they will do with any useful degree of accuracy.

Consumer.ology explains why this is the case and what you should do instead. But for now, whilst you wait for it to be published in the UK this September (November in the US), I would urge you to start being more objective about the market research you’ve experienced. The probability is that you’ll be shocked.



4 Comments

  1. Craig Kolb

    Cannot agree entirely. Firstly, MR is not always about forecasting whether or not something will be profitable, so the probability of being ‘right’ in the sense of a forecast isn’t going to help you assess all of the MR being conducted. Examples include segmentation and satisfaction research. Secondly, research is repeatable, and the errors can be measured when the output is observable behaviour. Nielsen Bases volumetric forecasting is repeated again and again, allowing the measurement of error. Lastly, I don’t agree that people’s claimed future behaviour is so flawed that it can’t be used. It is extremely useful when measured correctly, and where there are known systematic biases, calibrations can be used. Diary panels are a good example: these can be calibrated against actual sales using distribution theory, so that things like penetration and purchase frequency are more accurate.

    1. Philip Graves

      Hi Craig,
      Thanks for dropping by and taking the time to comment. You’re absolutely right that a lot of market research is not about forecasting – however, the vast majority that is about understanding existing behaviour is, equally, profoundly flawed because it presumes that people understand their own behaviour. All the evidence that we have from behavioural psychology and behavioural economics proves that this is not the case (because people have no direct access to the unconscious traits that drive such a large proportion of behaviour).

      I have lost count of the number of meaningless segmentation studies that I’ve encountered in my work (other than behavioural ones). The simple question to ask is, “How does our non-behavioural segmentation link to behaviour?” If no such link can be established, one has to ask whether the difference is one of self-perception rather than something that is commercially meaningful. For example, in work I did with a large energy company, they wanted to use their environmental attitudes segmentation to improve take-up of environmental products and services. But when I got them to cross-reference how those attitudes related to the existing take-up of those products and services there was absolutely no correlation – each group was just as likely to have used them. And, I would argue, if you can identify behavioural differences, use those as your segmentation!

      Satisfaction studies are largely meaningless. They reflect misattributed feelings that are a consequence of the context, prior expectations and individual outcome, rather than the overall quality of a service. There are much better ways to track and improve service quality. The best example that comes to mind is the hospital satisfaction survey that, it transpired, was actually driven by the amount of pain relief patients were given!

      As for Nielsen Bases… I’m pleased to say that most of the companies that I work with are now moving on from it. If Bases worked we would have a constant stream of successful new product development in every market. The reality is that most products are not still on the shelf 12 months after they launch. Bases can’t hope to be an accurate reflection of consumer behaviour, because it strips away context and forces attention onto a product (context and attention being arguably two of the biggest psychological drivers of consumer behaviour). The fact that known biases and systematic calibrations are used simply serves to underline the point that one shouldn’t trust what people say. Those calibrations are manipulations that try to address the flaws (the lack of psychological validity) in the data, based on how things have turned out in the past. Electoral opinion polls work in the same way. And what we see is that, when everything is equal (which of course it often isn’t), those fiddle factors work out OK. However, if you’ve got something unusual to take to market, or the political landscape has changed, the adjustments don’t work. The problem is not that models are used; it’s that people often don’t recognise the nature of those models, don’t question the margins of error, and take the results as facts rather than extrapolations based on a long series of assumptions.

      Ultimately, a lot of market research is used because it’s easy to do and feels reassuring. Fortunately, more and more companies are looking to other ways of making decisions because they want to be right more often.

      1. Craig Kolb

        Apologies for my delayed response, and thank you for your detailed reply.

        Since coverage factor calibration relates to recall and page location, I think the calibration makes sense and should be fairly stable?

        In relation to Nielsen Bases calibrations – and what you describe as forcing “attention onto a product” – if I understand correctly, you mean monadic exposure? I suspect that, even though it is not ideal, it has something to do with speed and cost, since only boards for the test product are needed. The trial rate calibration is then needed to ‘fix’ this lack of context. While not perfect, it seems to work most of the time.

        By the way, there are other pre-test market models that don’t require any norms databases and that take competitive context into account; the ‘Assessor style’ pre-test market models.
        In fact, pre-test market models are widely validated. Whether it is a Nielsen style model reliant on norms or a self-calibrating Assessor style model, they all do pretty well. If a model gets within 10% or even 20% of actual volume 90% of the time – with a new concept that has no sales history – I would say that these models are actually pretty amazing given what is demanded of them. Of course, as you mention, there will be failure anecdotes here and there, but I suspect post-launch changes have something to do with that: real-world conditions are no longer aligned with what was input into the model.

        If you are saying that you have an issue with calibrations per se, I would point out that calibrations are widely used across many scientific disciplines. Wherever you have systematic error, you can potentially apply calibrations. Of course, explicit calibration models that relate the calibration to other variables that explain the error are preferable to brute-force calibrations; and the time elapsed since the reference data was collected is important.
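        To make the distinction concrete, here is a minimal sketch – purely illustrative, not any vendor’s actual model, with every number and variable name invented for the example – of the two approaches: a brute-force calibration factor taken from past forecast-versus-actual ratios, and an explicit calibration that relates the error to a variable thought to explain it.

        ```python
        import numpy as np

        # Hypothetical history of past launches: raw survey-based forecasts vs. actual volumes.
        # All figures are invented for illustration only.
        raw_forecast = np.array([120.0, 95.0, 210.0, 60.0, 150.0])   # model output ('000 units)
        actual_volume = np.array([90.0, 80.0, 160.0, 50.0, 110.0])   # observed sales ('000 units)

        # Brute-force calibration: one scaling factor from the average historical ratio.
        brute_factor = np.mean(actual_volume / raw_forecast)

        # Explicit calibration: relate the error to a variable thought to explain it
        # (here, a hypothetical distribution-build score for each past launch).
        distribution_score = np.array([0.90, 0.80, 0.70, 0.85, 0.60])
        ratio = actual_volume / raw_forecast
        slope, intercept = np.polyfit(distribution_score, ratio, 1)

        def calibrate_brute(forecast):
            """Scale a new forecast by the average historical actual/forecast ratio."""
            return forecast * brute_factor

        def calibrate_explicit(forecast, dist_score):
            """Adjust a new forecast using the fitted error-vs-distribution relationship."""
            return forecast * (slope * dist_score + intercept)

        new_forecast = 130.0
        print(calibrate_brute(new_forecast))                      # brute-force adjustment
        print(calibrate_explicit(new_forecast, dist_score=0.75))  # explicit adjustment
        ```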

        Then we get to the issue of how the data is processed. Many of the issues inexperienced researchers have with forecasting have nothing to do with calibration or with consumers not being able to tell you what they will do. On numerous occasions I have come across people naively assuming they can just take the percentage saying “yes, I will buy” and use this as a prediction!

        There is nothing wrong with a consumer’s response if he says “yes, I will buy your product” and then doesn’t buy it. It does not mean that the consumer is incapable of telling us what they will do; it is in fact the researcher’s fault for ignoring the context of the situation – e.g. the perfect awareness and distribution implied within the survey – and not factoring this into the model.
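        As a rough illustration of that point – not a validated model, and with every weight and input invented for the example – a naive “percentage saying yes” figure can be adjusted for the fact that the survey implies perfect awareness and distribution, along these lines:

        ```python
        # Illustrative sketch only: stated purchase intent adjusted for real-world
        # awareness and distribution, rather than used directly as a prediction.
        # All weights and inputs are assumptions made up for this example,
        # not validated norms from any pre-test market model.

        intent_shares = {"definitely_buy": 0.20, "probably_buy": 0.35}   # survey results
        intent_weights = {"definitely_buy": 0.75, "probably_buy": 0.25}  # hypothetical down-weights

        expected_awareness = 0.40     # share of the target market expected to hear of the product
        expected_distribution = 0.60  # share of stores expected to stock it

        # Weighted intent: trial under the survey's implicit conditions
        # (everyone aware of the product, product available everywhere).
        weighted_intent = sum(intent_shares[k] * intent_weights[k] for k in intent_shares)

        # Forecast trial rate once real-world awareness and distribution are factored in.
        forecast_trial = weighted_intent * expected_awareness * expected_distribution

        print(f"Weighted intent: {weighted_intent:.1%}")
        print(f"Forecast trial rate: {forecast_trial:.1%}")
        ```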

        The question itself is also important. A single-product intention question should only be used in a limited set of situations. Otherwise, a question that reflects competitive context makes far more sense.

        I agree that models using survey input should link pre-cursors to behaviour wherever possible, since behaviour is measurable in other ways – such as observation, accounting records or scanner data.

        Lastly, I don’t agree that we should abandon the study of behaviour in surveys, simply because people can’t explain their own behaviour at times. Many techniques exist for understanding behavioural drivers without direct questioning.

        1. Philip Graves

          I mean forcing attention on to a product.  A huge challenge any FMCG product faces is getting noticed by shoppers who are unconsciously filtering out thousands of products in order to make the act of filling their trolley manageable.  I think you are being much too charitable when you suggest a trial rate calibration is an acceptable fix for this.  Part of the challenge when designing a new pack (and the marketing support to go with it) is getting it noticed.  To remove this element from the research process (which it doesn’t have to be) seems to me misguided (at best).

          I would also question how well it works ‘most of the time’.  Most new product launches are not successful.  The big win for companies like Nielsen is that most organisations are hopeless at evaluating their own processes.  There is no empirical validation of the processes that lead to failure, often because there is a culture of not acknowledging failure.

          Beyond that, with all due respect, you appear to be taking a belief-based stance on the value of asking people questions, rather than one that reconciles what we know about the role of the unconscious mind in human decision-making with research techniques that ignore its existence.  Questions can’t provide context.  They can’t create the feeling of being in a different time and place.  They create the feeling of answering questions because it’s all they can do.

          Survey techniques can be used to conduct experiments, where contexts are changed and differences in response observed (using separate samples), but the value of any such approach needs to be evaluated on a case by case basis (because of the context issue).

          Implicit association techniques have some merit (again, best used experimentally) but the context problem remains a challenge I’ve not seen addressed in many instances.
