Author Archives: Phil

About Phil

Social and market research consultant and practitioner. Co-convenor of Sociologists Outside Academia Group. Prodigious drinker of coffee.

Blog Migrated

I’ve migrated all the existing content from this site to my new personal blog,


You can access all past and future blog posts from:


I’ll delete this WordPress site in the near future.


I hope you’ll continue to read my posts on my new site.



In Response to ‘Is n=1 Ever Enough?’



I found reading Is n=1 Ever Enough by Nicky Halverson highly thought provoking.

n (lower case) refers to the sample size, that is, how many people you have asked your questions. Nicky’s article is therefore asking whether it is ever OK to ask just one person.

There’s no chance that a sample of one is statistically significant, but research, even quantitative research, isn’t always about producing statistically significant results. Instead, good research is about producing appropriate information to make informed decisions.

Sometimes statistically significant or highly detailed information is necessary, and therefore appropriate. Examples might be high risk decisions involving patients or significant sums of money. It’s probable that n=1 isn’t going to be sufficient.

But what about low risk decisions? Well, I have to agree with Halverson that a sample of one wouldn’t be my first choice, and I would encourage my client to reconsider. I firmly believe that one main criterion of good research is that it is reliable. In the research context, reliable specifically means obtaining consistent results. By achieving reliable – i.e. consistent – results, you can be more confident that your results are going to be meaningful and useful. With a sample size of one you cannot determine whether your results are consistent, simply because you have nothing to compare your result with.

But in some cases that might not matter. Going back to the purpose of research, it is about producing information to inform a decision. If it involves low risk, your client might be comfortable making their decision on the basis of just one response. My job in this hypothetical situation would be to make my client aware of the risks of basing their decision on one case, but it is up to them if they choose to do it or not.

So, as with so much, it depends. It depends on how comfortable your client is making a decision with very limited information. But ultimately, n=1 is infinitely better than n=0.
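To put a rough number on “very limited information”, here is a minimal Python sketch of how the margin of error for a simple yes/no survey question shrinks as the sample size grows. This is my own illustration, not from the original post, and it assumes a 50% observed proportion (the worst case) and a 95% confidence level:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1, 10, 100, 1000):
    print(f"n={n:>4}: +/- {margin_of_error(n):.1%}")
# n=   1: +/- 98.0%
# n= 100: +/- 9.8%
```

With n=1 the margin of error is essentially the whole scale, which is the formal version of “you have nothing to compare your result with” – but it is still narrower than knowing nothing at all.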

Sugar Research Was Anything But Sweet

In Manchester recently I was stopped by a market researcher who asked whether I would buy a new product. The whole experience was very poor and left me feeling frustrated. I’m more inclined than most to answer questions, since I have an unhealthy interest in research, so if I walked away frustrated, how would other people feel?

The researchers had set themselves up on a side street, next to a busy main shopping street. I think their choice of site wasn’t ideal, since the side street was very quiet. Far better to set yourself up on the edge of the main street, as you then have the chance to (pseudo-)randomise who you approach, rather than having to ask everyone who passes. But still…

Without explaining who they were (simply, “we’re researchers…”), they asked if I bought sugar, and what packaging the sugar was in when I bought it. I was shown a cue card with pictures of sugar packaged in a bag, a cardboard box, and a plastic box. I had recently bought my wife icing sugar for baking, and I knew it came in a cardboard box, which I indicated. The researcher, quite rudely, said that it couldn’t have been, because that product wasn’t launched yet. I was incredulous. I was giving up my time to answer questions without any compensation, and I was being patronised.

I think I should have walked away at this point but I wanted to see what else this researcher had up her sleeve. So, after agreeing that I couldn’t possibly have bought sugar in a cardboard box, she recorded that it was a bag. Whatever.

The next question was, to paraphrase, “If a plastic, resealable box cost a penny extra for the same amount of sugar, would you buy it?” Before I get into my answer and her issues with it, this is not a great way to ask this question. It asks about a hypothetical situation. The respondent’s answer could be anything, and there’s no real way to tell if that’s how they would actually behave. Far better to ask about a similar situation that has recently occurred, and what the respondent did in that case. You are then grounding your question in an actual occurrence, and you can be reasonably confident that the respondent really behaved that way. For example, I would ask whether the respondent buys other products that are available in plastic, resealable containers.

So, with all this going through my head, I answered that I wasn’t sure whether I would buy a plastic container or not. I don’t really care and I don’t really know, so I thought I was doing her a favour by being honest. She was not happy with me. She cajoled me into answering yes or no, as if it was the simplest question in the world and I was being stupid or obtuse for not answering her properly.

At that point I actually did walk off, because I’d had enough of being patronised, so I’ve no idea what she recorded my answer as. Probably a non-response; or perhaps she just discarded it. Either way, how can their results even remotely reflect people’s real opinions and buying habits?

If you are reading this and you just happen to work for a large sugar company and have just commissioned some research to see if your consumers would buy a plastic container, discard it. It’s not worth the paper it’s written on. Fire your researchers, and I’ll re-do it for you.

Ethics in Community-Based Participatory Research

I recently attended a conference at Durham University’s School of Applied Social Science on ethics in community-based participatory research.

The day was really enlightening, and I learned a few things at the conference about ethics and about carrying out such research with integrity.

First, I learned that sociologists can count well enough to construct the first ten or so numbers in the Fibonacci sequence.

The second thing I learned was that there are new ethical considerations that need to be taken in to account, as well as some important new dimensions of existing, or traditional, ethical issues in research.

  1. Presenters commonly felt that it was important to have the group involved agree to the terms of the research collectively and with a high level of agreement. Traditionally each participant is asked individually to consent to the research, but presenters felt this was inadequate on its own and should be done in addition to asking the group as a whole to consent.
  2. Many researchers were unsure of who owns the material once collected, and who can use it for what purpose. The situation is complicated when a group helps to produce the work: can one member of that group then use the findings for their own project?
  3. There were ambiguities around acknowledgement of contributions. Is it appropriate to acknowledge the group, or individuals within the group, and how do you credit those who wish to remain anonymous?

Many of the presenters discussed the methods and practicalities involved in community-based participatory research. It was particularly interesting to hear some of the academic researchers and some of the researchers from the community group discuss their different perspectives and, sometimes, disagree over the research.

Finally, I discussed with some researchers how they closed their projects, as I have previously been involved in a collaborative project and felt unsatisfied with how it ended. It would appear there’s no easy solution, but planning ahead seemed to be the key to ensuring a project closes satisfactorily.

This is really just a summary of the event, but there’s more information – and draft guidelines – available from the conference webpage. I recommend you give it a read if you’re interested in this kind of research.

Enhancing Statistical Knowledge in Sociology

Normal distribution curve illustrating standard deviations (Photo credit: Wikipedia)

This week I attended two events to encourage the development and use of statistical and quantitative knowledge in A level and undergraduate level sociology.

The Royal Statistical Society invited me to the first event in London, The Future of Statistics in Our Schools and Colleges, and the second event was part of the Higher Education Academy’s Science, Technology, Engineering and Mathematics (STEM) programme. Both looked to some degree at A level and undergraduate teaching of statistics and quantitative methods.

I was pleased to share both my experiences and those of my SOA colleagues of using quantitative methods and statistics as a social researcher and sociologist outside of academia, and as someone who has trained others in the use of these methods. By sharing this knowledge we have hopefully provided an understanding of the skills students will need in the workplace, and the advantages that some ability and confidence with quantitative methods can provide.

Anecdotally, for example, my colleagues and I all depend, at least in part, on our knowledge of quantitative methods to do the jobs we do.

For the most part, the following skills in statistics and quantitative methods are advantageous:

  • Fractions, proportions and percentages.
  • Descriptive statistics, such as mean, median and mode, standard deviation, and confidence intervals.
  • Frequencies.
  • Understanding sampling and populations.
  • Statistical significance (p-values).
  • Communication skills – to share findings with others, usually those who do not have knowledge of these techniques.

Nail these and you’re massively more employable as a social researcher.
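As a minimal illustration of the descriptive measures on the list above, Python’s standard-library `statistics` module covers most of them directly. This sketch is my own addition, and the sample data is made up purely for the example:

```python
import statistics

# Made-up survey ratings on a 1-10 scale (illustrative only)
scores = [3, 5, 5, 6, 7, 8, 8, 8, 10]

mean = statistics.mean(scores)
median = statistics.median(scores)
mode = statistics.mode(scores)
stdev = statistics.stdev(scores)  # sample standard deviation

print(f"mean={mean:.2f}, median={median}, mode={mode}, stdev={stdev:.2f}")

# Approximate 95% confidence interval for the mean (normal approximation)
half_width = 1.96 * stdev / len(scores) ** 0.5
print(f"95% CI for the mean: {mean - half_width:.2f} to {mean + half_width:.2f}")
```

Being able to produce – and, just as importantly, explain – numbers like these covers the first five bullets in one go.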

BSA Annual Conference: Engaging Sociology

SOA at the BSA Annual Conference

This year my co-convenors of the Sociologists Outside Academia group and I submitted a proposal for a presentation at the British Sociological Association’s annual conference in April. I’m especially excited to be organising the presentation because this will be the first time SOA have presented at the BSA annual conference.

The SOA presentation will be a symposium – that is, short presentations from a number of speakers, followed by a question and answer session. We’ve invited SOA members to attend and present on the nature of their work outside academia and in industry, and I will chair the session.

If you would like to attend you can register for the annual conference. If you want to register for one day only we’re on Wednesday 3 April.

Our submitted abstract:

Sociology Outside Academia: Reflections and Experiences of Working in the Public, VCFS, and Private Sector

Not all sociologists work in university departments. Sociologists Outside Academia (SOA) does exactly what it says on the tin: we are a group of sociologists who work outside of the academy and SOA provides a ‘virtual institution’ to support its members and strengthen the idea that we are first and foremost sociologists regardless of our circumstances.

As a result of carrying out sociological work in a variety of organisations our members have a wealth of experience of the public, private, and VCFS sectors. It is this experience that we wish to share with our colleagues and fellow sociologists working in university departments.

This symposium will take the format of short presentations from four SOA members about their research interests and careers, followed by a question and answer session where there will be opportunity to ask about the presentations, the nature of working outside academia, and the SOA network.

This symposium will be an opportunity for sociologists to develop relationships and exchange knowledge regardless of their institution or background.

The Worst Survey Ever?

I recently popped into a branch of one of the main high street banks and couldn’t resist picking up the in-branch service questionnaire. These things are relatively common these days, with every organisation from high street shops to the police asking you to rate the quality of the service you received. This one, though, has more issues than most.


The first issue is the poor design and quality of the survey. It has been roughly torn along one edge, uses too many colours (including red and green together, which is a disaster for many red-green colour-blind readers), and has a large organisational chart taken straight from Microsoft Word in the middle of the page. Any literature you produce and provide to customers reflects your brand and image, so the poor presentation and finish of this survey is bound to reflect poorly on the bank.

The first sentence is one of the most obvious examples of a loaded question I’ve seen, one of the cardinal sins of survey and questionnaire design.

The next four questions are similarly problematic. I don’t think it’s clear that they are questions, as the alignment (centre-aligned) is ambiguous, and there are no places to mark an answer, like a simple empty box. The provided answers are also entirely arbitrary: why can you answer ‘excellent/very good’ for queue experience and not for ‘making you feel valued?’ Is the queue experience actually how long you had to queue, or is it the entertainment provided while you queue that respondents are asked to comment on? And, finally for this section, ‘making you feel valued’ is so vague it’s practically meaningless.


Website Usability – Why it Matters


Most businesses and organisations today rely on their website as an essential marketing tool or sales portal in the case of e-commerce websites. Whatever the size and scope of your website, have you considered how usable – that is, how easy it is to use – your site is for your customers or clients?

The usability of your site matters because research studies have shown, time and time again, that if your website visitors cannot find the information they need quickly and easily, they will leave your site, probably taking their business with them. It doesn’t matter how compelling or engaging your website is. To visitors this is a secondary concern. The primary goal for most websites should be to make it as easy to use as possible.

The good news is it’s easy to test the usability of your site, and that’s exactly what I’ve been doing for a client over the last couple of weeks. My client, a local museum, suspected they had a few key usability issues with their existing website, and they wanted to make sure that they improved these when they redesigned their site.


Timing and Allowing for Seasonal Variations

Sometimes getting the timing of your research right is just as important as getting the method right.

Typical examples include businesses or organisations with significant seasonal variations in their output or activities. For example, an organisation that wants to measure the effectiveness of a Christmas campaign would do well to carry out their research in the run up to Christmas.

There are, though, less obvious examples.

I recently completed a project for the Friends of Rhyddings Park, who were looking to determine the number of people who use the local park. Many of the facilities in the park focus on children – including two play areas – but the bulk of the research was scheduled to begin after the summer holidays, when the children and young people had returned to school and so would not be using the park during the day.

To get a true picture of park use during the school holidays it was important to bring at least some of the research forward, and that is exactly what I suggested and what my client did, recognising the importance of getting their research right.

When you’re planning your research bear in mind seasonal variation in your activities, and try to plan your research at the best time to answer your research question.


Ethical Requirements of Research

All research should be carried out to the highest ethical standards. It’s important to: protect the respondents; help ensure good quality research; and maintain the integrity of the research industry, which depends on goodwill to attract future respondents.

Complying with the ethical guidelines of the Market Research Society, Social Research Association or British Sociological Association is not arduous for a relatively straightforward research project.

Imagine my dismay, then, at reading that the Troubled Families Unit doesn’t seem to have thought about the ethical implications of its research.

Compounding the issue is that this wasn’t a piece of general research, but research where vulnerable members of society were the principal respondents.

Summarising Nick Bailey’s original blog post on the subject, the research seems to have made the following crucial errors:

  • Respondents were not free to decline to participate or to withdraw, a basic tenet of ethical research.
  • Bailey suggests that the identity of the respondents might not be protected.
  • The department’s definition of ‘social research’ and defining the research as a ‘dipstick/informal information gathering’ is dubious.

Neglecting ethical standards has arguably harmed the respondents involved, the social research industry and the government, and I’d certainly take a closer look at the method section and results.

Getting it Right

Getting the ethics right is so crucial for your research; you can’t afford to get it wrong or it will harm your brand. You even need to consider the ethical needs of a straightforward online survey.

The easiest way to make sure you meet your ethical obligations is to employ a market research or social research professional. For a minimal cost they can protect your respondents, the industry (which is important to ensure there are respondents in the future) and your brand.

Contact me if you would like me to look over the ethical requirements of your research.