The title ‘International Journal of Market Research’ (IJMR) sounds undeniably impressive. Generally speaking, journals are good things, bringing together peer-reviewed papers from people pushing the boundaries in a particular field.
But I wonder… do astrologers have a Journal of Astrology? Google suggests that they do, sort of – it looks as though it might just be one astrologer selling predictions.
There’s a National Journal of Homeopathy – I wonder, to paraphrase Tim Minchin, if they’ve had any papers on how water forgets about the wee and poo it’s had in it and just remembers the traces of medicinally advantageous ingredients?
My point is that it’s easy to get a false sense of validity from a name. In Consumer.ology I describe market research as a pseudo-science and, arguably, having an ‘International Journal’ is all part of the industry’s mystique.
I must declare a personal interest at this point: recently the IJMR reviewed my book; the reviewer hated it. But the review, a viewpoint article and one of the papers made for particularly interesting reading: let me explain…
[Curiously, the reviewer's reading skills didn't extend to my name: I was called Robert Graves and Peter Graves, but never Philip. I wonder if the book had him in such a rage that he struggled to focus on all the words.]
Pedantry aside, the IJMR’s choice of reviewer was somewhat self-serving. As far as I can tell, the reviewer earns his living conducting the very research whose fundamental validity Consumer.ology questions. How is such a person to address the evidence from psychology and neuroscience that shows people aren’t very good witnesses to their own behaviour, and that the process of asking them what they think influences them to say particular things?
The somewhat strange thrust of the review is that the market research industry knows all about the problems that lead me to believe asking questions isn’t worthwhile – “… they have long been widely recognised by many in market research.” Strange, then, that if you look at the Market Research Buyer’s Guide, virtually every one of the companies listed is offering the sorts of research that are beset by these problems.
Do they tell prospective clients all about these “widely recognised” problems when they come to them requesting a survey or focus group be commissioned? I suspect not. Partly I suspect this because no research company has ever mentioned them to me when I was commissioning research. And partly I suspect not because, were they to mention the problems, the research wouldn’t go ahead.
Entertainingly, the review was, I assume, written before the recent Scottish Parliamentary elections. My rejection of opinion polls was described as ‘…simply not justified by reality.’ He adds that, ‘When it became clear a few years ago that something was going awry in the accuracy of such polling, the industry effort aimed at addressing these issues was remarkable…’ Remarkable until they got the results wrong again in the Scottish elections.
Election polls conducted close to the date of an election should be quite accurate, and yet even they prove problematic to the research industry.
Elsewhere in the same edition two contributors attempt to reconcile market research with behavioural economics. In a ‘Viewpoint’ article Nick Southgate attempts to align asking questions with behavioural economics by pushing question-asking into the gaps that a behavioural approach can’t fill. But the fact that behavioural economics can’t identify (at least not directly) the content of people’s decisions doesn’t mean that market research can.
In a (presumably) peer-reviewed paper (market research peers, of course), Wendy Gordon draws the astonishing conclusion that a new branch of (dynamic) market research can help provide behavioural economics with “the practical skills and applications that they need to solve the problems that face them in an increasingly complex world context.”
Psychology and behavioural economics have provided the basis for identifying the folly of traditional market research (evidence that is quoted in the paper): expecting the people whose work up to this point has been so clearly undermined to be the custodians of a new approach is, it strikes me, somewhat risky.
This risk is evidenced when Gordon advocates “not throwing the baby out with the bath water” before going on to suggest that qualitative research has strengths in certain areas including identifying values and beliefs: behavioural economics has demonstrated that espoused values and beliefs can be (and frequently are) irrelevant, when examined alongside actual behaviour. Surely, that’s a case of pouring the bathwater back in again.
The problem for the IJMR is that people don’t approach anything with an open mind. We arrive burdened with associations and experiences that colour how we interpret what we find. It seems that the IJMR wants to take the challenges from outside its field and force its existing techniques onto that evidence. Put another way, how can someone who has spent their entire career asking questions reconcile themselves to the information that there is little evidence to support the validity of asking questions and much to undermine it as a reliable tool?
The belief that you can ask people questions and what you hear back will be an accurate insight is just that, a belief. It’s an apparently plausible concept that fits with people’s view of themselves as the conscious agents of their actions. It’s reinforced by those times research appears to confirm something we believed, or appears to be borne out by what happens next: of course, the same benchmarks are what perpetuate the use of things like astrology.
With so much that is central to market researchers’ beliefs questioned in Consumer.ology, I would advise them not to read it. Or at the very least, they should consult their horoscope to see if it’s a good day for reading a book.