Survey Design: software is not a substitute for skill
With the advent of MailChimp and other templated Survey Design applications, many of SPARC's clients ask why we need to spend time (and money) on Survey Design: "Anybody can do it with the software we have these days, right?" Not quite.
This misperception seems quite widespread, to the point where it has almost become a marketing research urban legend. As a full-service market research company, SPARC is a heavy user of survey research data, and it is concerning to see fundamental Survey Design skills eroding. PowerPoint does not make one a competent presenter, nor can Word transform one into a professional writer. Likewise, user-friendly Survey Design software is not a substitute for genuine skill and experience.
Survey Design is not easy. Ask the pollsters! In our observation and experience, a considerable amount of time and budget is wasted because of poor Survey Design. We often spend more time and more money than we have to in order to collect less valuable data. That’s a lose-lose-lose proposition, really.
Research objectives have the biggest impact on survey quality. Unfortunately, they can be blurry and, like many questionnaires, essentially the product of an ad hoc committee. One result is that nice-to-know questions may outnumber need-to-know questions.
Excessive questionnaire length has long been an issue in marketing research and, with mobile surveys on the rise, will become even more so. In some articles, a Questionnaire Mass Index (QMI) is proposed. It’s derived from the more commonly known Body Mass Index (BMI):
QMI = (Time wasted on nice-to-know questions)² / (Time required for need-to-know questions) × 100
So, if your average interview length is 20 minutes and respondents, on average, spend four minutes answering questions that actually have little business meaning, your QMI score would be 100: four wasted minutes squared, divided by the 16 need-to-know minutes, times 100. The lower the score, the better the Survey Design. Though a little tongue-in-cheek, the fact remains that a simple guideline such as this can help us discipline ourselves and improve the health of our questionnaires.
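For readers who like to see the arithmetic spelled out, the QMI calculation above can be sketched in a few lines of Python. The function name and signature are illustrative, not from any library:

```python
def qmi(total_minutes: float, nice_to_know_minutes: float) -> float:
    """Questionnaire Mass Index:
    (time wasted on nice-to-know questions)^2
    / (time required for need-to-know questions) * 100
    """
    need_to_know_minutes = total_minutes - nice_to_know_minutes
    return nice_to_know_minutes ** 2 / need_to_know_minutes * 100

# The article's example: a 20-minute interview, 4 minutes of it wasted.
print(qmi(20, 4))  # 100.0
```

Halve the wasted time to two minutes and the score drops to about 22, a reminder of how quickly trimming nice-to-know questions improves the index.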
There are countless ways to reduce the flab in our Survey Design. Even on a mobile device we still have much more leeway than in the “old days” when surveys were often conducted by telephone and administered by a human interviewer. A 10-to-15-minute mobile survey will generally be able to cover more ground than a 10-to-15-minute telephone survey.
I’ve noticed that very similar questions are sometimes asked multiple times in the same questionnaire. While this is occasionally deliberate – for instance, when very important questions are asked in slightly different ways – more often than not it’s an oversight. This wastes time and can confuse or irritate respondents. Moreover, members of online panels have been profiled to some degree and key demographics and some psychographics have already been collected. There may be no need to ask these questions again. Another way to save time is to split questionnaires into chunks so that only the most critical questions are asked of all respondents. Clever questionnaire design can reduce questionnaire length, lower costs and improve response quality.
Ask yourself whether ordinary consumers will interpret the questions in your survey the way you do. They are not brand managers or marketing researchers. Also ask yourself if you would be able to answer your own questions accurately! Highly detailed recall questions have always been discouraged by survey research professionals and the folks who established consumer diary panels decades ago were well aware that even diary data are not 100 percent accurate. Answers to questions about purchase, for example, should be interpreted directionally and should not be used as substitutes for actual sales figures when the latter are available.
Attitudes and opinions
Surveys are particularly useful for uncovering attitudes and opinions, which leave no trail at the cash register. Knowing what consumers buy is important, but knowing why they buy it is also important. Deriving the why from the what is much harder than is sometimes assumed, and this is where survey research often fails badly, usually because of poor Survey Design. Merely copying and pasting attitudinal statements from old questionnaires or from a lengthy, brainstormed list of statements is asking for trouble.
When developing your own scales, think first of the factors – the underlying dimensions – then the items that you will use to represent these factors. A good resource, if you want to explore this topic, is Marketing Scales, an online repository of more than 3,500 attitude and opinion scales.
You don’t need to wed yourself to five- or seven-point agree-disagree scales, which are prone to straightlining. Maximum difference scaling, simple sorting tasks and various other alternatives often work better. However, if the statements themselves do not make sense to respondents, or mean different things to them than they do to you, you’ll have a problem regardless of the type of question you’ve settled on!
When certain types of questions are asked again and again – awareness and usage questions, for example – there is no need to keep reinventing the wheel. In fact, this is bad practice that can run up costs and lower data quality. Consider building banks of standard questions and questionnaire templates for different kinds of surveys. This is one way questionnaire design software can come in handy and raise productivity. QUAID is an artificial intelligence tool that can help you improve the wording of your questions.
Sample design is another place where survey research can go awry. In my opinion – and I suspect Byron Sharp and his colleagues at the Ehrenberg-Bass Institute would agree – we often survey a slice of consumers that is far too narrow. Not infrequently, a client wishes to interview women aged 18-24, for instance, when the potential consumer base for their product is vastly more diverse. Often, these sorts of screening criteria are driven by gut feel or emerged from a few focus groups and have no true empirical foundation. Casting a net that is too narrow runs up research costs, increases field time and can give us a distorted picture of reality. This is another lose-lose-lose proposition.
Though advanced analytics can be conducted after the fact, they usually work best when designed into the research. “Begin at the end and work backwards” is sound advice and is especially pertinent when the data will be analyzed beyond the cross-tab level. For example, if you intend to run key driver analysis – the simplest example of which would be correlating product ratings with overall liking – make sure to ask all respondents the questions that will be used in the analysis.
Adding more value
Consumer surveys are not easy, but now that we have many other data sources (e.g., transaction records, social media), they can add more value than ever through synergy with those data. Access to a variety of data can help you both design your survey and interpret its results. Though often the butt of criticism (including mine), questionnaire design tools do have many benefits. A former employer of mine began developing such a tool in the late 1980s, so I know firsthand that they can cut down considerably on the clerical aspects of questionnaire design, giving us more time to think about things that really matter, and also reduce errors. However, in the hands of inexperienced or poorly trained researchers these tools may do more harm than good. As with miracle diets, there are risks and your questionnaires may actually gain weight.
This has been a very small article about a very big topic. For those who wish to learn more – and there is so much to learn – there are online sources, seminars and university courses. There are many books as well. Sharon Lohr has written an excellent and popular textbook on sampling, Sampling: Design and Analysis. Many excellent books have also been published on survey research and questionnaire design, and three I’ve found particularly helpful are: Internet, Phone, Mail and Mixed-Mode Surveys (Dillman et al.), The Science of Web Surveys (Tourangeau et al.) and Web Survey Methodology (Callegaro et al.). The Psychology of Survey Response (Tourangeau et al.) and Asking Questions (Bradburn et al.) have stood the test of time and I highly recommend them.