Tag Archives: market research

In Response to ‘Is n=1 Ever Enough?’


Alone (Photo credit: JB London)

I found reading ‘Is n=1 Ever Enough?’ by Nicky Halverson highly thought-provoking.

n (lower case) refers to the sample size, that is, how many people you have asked your questions. Nicky’s article is therefore asking whether it is ever OK to ask just one person.

A sample of one will never produce statistically significant results, but research, even quantitative research, isn’t always about producing statistically significant results. Instead, good research is about producing appropriate information to make informed decisions.

Sometimes statistically significant or highly detailed information is necessary, and therefore appropriate. Examples might be high-risk decisions involving patients or significant sums of money. In those cases, n=1 is very unlikely to be sufficient.

But what about low-risk decisions? Well, I have to agree with Halverson that a sample of one wouldn’t be my first choice, and I would encourage my client to reconsider. I firmly believe that one main criterion of good research is that it is reliable. In the research context, reliable specifically means obtaining consistent results. By achieving reliable – i.e. consistent – results, you can be more confident that your results are going to be meaningful and useful. With a sample size of one you cannot determine whether your results are consistent, simply because there is nothing to compare your single result with.
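The article doesn’t give any formulas, but the intuition behind the sample-size point can be sketched numerically. Using the standard normal-approximation margin of error for a proportion (a textbook formula, not anything from the original post), the uncertainty around a simple yes/no survey result shrinks as n grows – and at n=1 it is so wide as to be meaningless:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion, using the normal
    approximation z * sqrt(p(1-p)/n). p=0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in [1, 10, 100, 1000]:
    print(f"n={n:>4}: ±{margin_of_error(n):.3f}")
# n=1 gives ±0.98 – a 'yes' from one respondent tells you the true
# proportion lies somewhere between roughly 0% and 100%.
```

This is only illustrative – real survey error also depends on sampling method and non-response – but it shows why a single answer, while better than nothing, carries almost no quantitative weight.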

But in some cases that might not matter. Going back to the purpose of research, it is about producing information to inform a decision. If the decision is low risk, your client might be comfortable making it on the basis of just one response. My job in this hypothetical situation would be to make my client aware of the risks of basing their decision on one case, but the final choice is theirs.

So, as with so much, it depends. It depends on how comfortable your client is making a decision with very limited information. But ultimately, n=1 is infinitely better than n=0.


Sugar Research Was Anything But Sweet

In Manchester recently I was stopped by a market researcher who asked whether I would buy a new product. The whole experience was very poor and left me feeling frustrated. I’m more inclined than most to answer questions, since I have an unhealthy interest in research, so if I walked away feeling frustrated, what would other people feel like?

The researchers had set themselves up on a side street, next to a busy main shopping street. Their choice of site wasn’t ideal, since the side street was very quiet. Far better to set up on the edge of the main street, where you have the chance to (pseudo-)randomise who you approach, rather than having to ask everyone who passes. But still…

Without explaining who they were (simply, “we’re researchers…”), they asked if I bought sugar, and what packaging the sugar was in when I bought it. I was shown a cue card with pictures of sugar packaged in a bag, a cardboard box, and a plastic box. I knew I’d recently bought my wife icing sugar for baking, and I knew it came in a cardboard box, which I indicated. The researcher, quite rudely, said that it couldn’t have been, because that product wasn’t launched yet. I was incredulous. I was giving up my time to answer questions without any compensation, and I was being patronised.

I think I should have walked away at this point, but I wanted to see what else this researcher had up her sleeve. So, after I conceded that I couldn’t possibly have bought sugar in a cardboard box, she recorded my answer as a bag. Whatever.

The next question was, to paraphrase, “If a plastic, resealable box was a penny extra for the same amount of sugar, would you buy it?” Before I get into my answer and her issues with it, this is not a great way to ask this question. It is asking about a hypothetical situation. The respondent’s answer could be anything, and there’s no real way to tell if that’s how they would actually behave. Far better to ask about a similar situation that has recently occurred, and what the respondent did in that case. You are then grounding your question in an actual occurrence, and you can be reasonably confident that the respondent actually behaved in that way. For example, I would ask whether the respondent buys other products that are available in plastic, resealable containers.

With all this going through my head, I answered that I wasn’t sure whether I would buy a plastic container or not. I don’t really care and I don’t really know, so I thought I was doing her a favour by being honest. She was not happy with me. She cajoled me into answering yes or no, as if it were the simplest question in the world and I was being stupid or obtuse for not answering her properly.

At that point I actually did walk off, because I’d had enough of being patronised, so I’ve no idea what she recorded my answer as. Probably as a non-response, or perhaps she simply discarded it. Either way, how can their results even remotely reflect people’s real opinions and buying habits?

If you are reading this and you just happen to work for a large sugar company and have just commissioned some research to see if your consumers would buy a plastic container, discard it. It’s not worth the paper it’s written on. Fire your researchers, and I’ll re-do it for you.

The Worst Survey Ever?

I recently popped into a branch of one of the main high street banks and couldn’t resist picking up the in-branch service questionnaire. These things are relatively common these days, with every organisation from high street shops to the police asking you to rate the quality of the service you received. This one, though, has more issues than most.

Paper survey

The first issue is the poor design and quality of the survey. It has been roughly torn along one edge, uses too many colours (including red and green together, which is a disaster for many red-green colour-blind readers), and has a large organisational chart taken straight from Microsoft Word in the middle of the page. Any literature you produce and provide to customers reflects your brand and image, so the poor presentation and finish of this survey is bound to reflect poorly.

The first sentence is one of the most obvious examples of a loaded question I’ve seen, one of the cardinal sins of survey and questionnaire design.

The next four questions are similarly problematic. It isn’t clear that they are questions at all, as the centre-aligned layout is ambiguous, and there is nowhere to mark an answer, not even a simple empty box. The provided answers are also entirely arbitrary: why can you answer ‘excellent/very good’ for queue experience but not for ‘making you feel valued’? Is the queue experience how long you had to queue, or is it the entertainment provided while you queue that respondents are asked to comment on? And, finally for this section, ‘making you feel valued’ is so vague it’s practically meaningless.

Continue reading

Website Usability – Why it Matters

Icon for WikiProject Usability (Photo credit: Wikipedia)

Most businesses and organisations today rely on their website as an essential marketing tool or, in the case of e-commerce sites, a sales portal. Whatever the size and scope of your website, have you considered how usable – that is, how easy to use – your site is for your customers or clients?

The usability of your site matters because research studies have shown, time and time again, that if your website visitors cannot find the information they need quickly and easily, they will leave your site, probably taking their business with them. It doesn’t matter how compelling or engaging your website is. To visitors this is a secondary concern. The primary goal for most websites should be to make it as easy to use as possible.

The good news is it’s easy to test the usability of your site, and that’s exactly what I’ve been doing for a client over the last couple of weeks. My client, a local museum, suspected they had a few key usability issues with their existing website, and they wanted to make sure that they improved these when they redesigned their site.

Continue reading

Timing and Allowing for Seasonal Variations

Sometimes getting the timing of your research right is just as important as getting the method right.

Typical examples include businesses or organisations with significant seasonal variations in their output or activities. For example, an organisation that wants to measure the effectiveness of a Christmas campaign would do well to carry out their research in the run up to Christmas.

There are, though, less obvious examples.

I recently completed a project for the Friends of Rhyddings Park, who were looking to determine the number of people who use the local park. Many of the facilities in the park are focused on children – including two play areas – but the bulk of the research was scheduled to begin after the summer holidays, when the children and young people would have returned to school and so would not be using the park during the day.

To get a true picture of park use, it was important to bring at least some of the research forward into the school holidays. That was exactly what I suggested, and my client, recognising the importance of getting the research right, agreed.

When you’re planning your research, bear in mind any seasonal variation in your activities, and try to time the research so it best answers your research question.