I recently popped into a branch of one of the main high street banks and couldn’t resist picking up the in-branch service questionnaire. These things are relatively common these days, with every organisation from high street shops to the police asking you to rate the quality of the service you received. This one, though, has more issues than most.
The first issue is the poor design and quality of the survey. It has been roughly torn along one edge, uses too many colours (including red and green together, which is a disaster for many red-green colour-blind readers), and has a large organisational chart taken straight from Microsoft Word in the middle of the page. Any literature you produce and provide to customers reflects your brand and image, so the poor presentation and finish of this survey reflect poorly on the bank.
The first sentence is one of the most obvious examples of a loaded question I’ve seen, one of the cardinal sins of survey and questionnaire design.
The next four questions are similarly problematic. It isn’t even clear that they are questions, as the centre alignment is ambiguous and there is nowhere to mark an answer, not even a simple empty box. The provided answers are also entirely arbitrary: why can you answer ‘excellent/very good’ for queue experience but not for ‘making you feel valued’? Is the queue experience actually how long you had to queue, or are respondents being asked to comment on the entertainment provided while they queue? And, finally for this section, ‘making you feel valued’ is so vague it’s practically meaningless.
I received a link to this survey recently, asking me to rate my experience with a popular brand of sofas and furnishings. While it got a lot of things right, it got a few of the fundamentals wrong.
In case you can’t read the screenshot, the two questions are:
- Thinking about your overall customer experience, how likely are you to recommend [the company] to friends, family and colleagues? Answers given on a scale of 0 to 10.
- Please tell us a little more about why you have given this score.
Using a scale of 0 to 10 is good, and the fact that it’s not part of a matrix is even better. The problem with its implementation here is that the scale reads backwards, i.e. from 10 on the left to 0 on the right. I assume this is an oversight or, if I were feeling cynical, a sly way to nudge the score upwards.
The scale should always read naturally, i.e. from 0 on the left to 10 on the right, as someone would read a series of numbers in ascending order.
A progress indicator was good to see, as was the general lack of matrices, so it wasn’t all bad. In fact, this is one of the better laid-out surveys I’ve seen recently.
I think this question is definitely too specific for anybody’s needs:
In case you can’t see the screenshot, the question asks, “When did you start using the internet?” Possible responses are:
- 1995 or before
I’m afraid I just don’t remember. I’ll have to check my diary for ‘started using internet today.’
This question is asking for a very, very specific answer: one that the respondent is going to struggle to remember, and one that isn’t really going to give much useful insight anyway. Does it really matter whether a user started using the internet in 1997 or in 1998?
To me, this question is like asking how far it is between London and Tokyo, to the nearest centimetre.
Use a level of specificity that is appropriate for your purposes and that your respondents can plausibly remember.
In case you’re wondering, this is a bona fide survey question sent to me by one of the many survey sites out there, but I won’t tell you which one.
While doing a bit of research on the minimum price of alcohol, I stumbled across this website poll:
If you can’t read the screenshot, today’s poll is: “Would you ever consider plastic surgery?” This is a poor question in itself, but let’s move on. Possible responses are:
- Maybe, if it wasn’t so expensive.
- Yes I would seriously consider it.
- No way, I’m happy with what I have.
- Yes but I can’t afford it.
What, exactly, is the difference between ‘maybe, if it wasn’t so expensive’ and ‘yes but I can’t afford it’? While there’s a slight semantic difference – ‘yes but I can’t afford it’ being the more decided of the two – I doubt the average respondent is going to agonise over it. What’s wrong with:
And if you really, really want to know why people answer the way they do, you need another question.