It’s become fashionable to say – “researchers who administer long surveys have their heads up their you-know-what” – hell, I’ve said it in the past. The Future of Insight blog predicts that long surveys are definitely NOT part of the future. Sandra Rathod and Andrea la Bruna concluded in 2004 (yes, that is 6 years ago) that respondents “check out” after 15-20 minutes. Our buddies from Survey Sampling said the same thing again a couple of months ago.
So, I thought: WHAT IS THE DEAL here? Two questions –
a) Are we still administering long surveys?
b) If so – why?
As it turns out, I actually have access to some real data. Across the group of services and companies I am a part of, we cumulatively conduct about 2 million surveys a month across thousands of clients. So I thought I’d mine the data for some insight into how long the surveys actually are.
In March we conducted 1.96 Million surveys. Median Question Length: 35.
Now, let’s think about it – in the age of 140-character tweets, we are asking 2 million users to fill out a 35-question survey?
Now – let’s think about some of the reasons. From our vantage point, there are 4 audiences for surveys:
a) Market Research
b) Customer Satisfaction
c) Public Opinion / Social Science Research
d) Employee Satisfaction
Of the 4 broad categories, Market Research has traditionally been the culprit. Customer Satisfaction, we know, has pretty much undergone the transformation from overly long surveys to “One Question” – aka the Net Promoter Score – so long-running customer-sat surveys are out of the question. Even if you are not using Net Promoter, most companies I know use a variant of it – the Secure Customer Index, or the ACSI index – all of which are very short: 3 questions max.
I also happen to know empirically that public opinion, social science research and employee-sat studies account for less than 20% of our customer base – so, in effect, they are non-participants here, at least in no significant way.
This leads us to Market Research. Traditional Market Research consists of 4 players in the value chain:
a) Client – Pays the bills. Has the problem. Has a checkbook. Think – BP, P&G etc.
b) MR Agency – Big agencies traditionally outsource parts of the work to smaller ones. Think GFK, Kantar, IPSOS and then over 14,000 smaller MR firms that service them.
c) Sample Provider — Provides the list of participants for the survey. SSI, e-Rewards, PeanutLabs etc.
d) Technology Providers / Data Collector – QuestionPro, SurveyMonkey, Zoomerang, Survey Analytics, Vovici, Confirmit etc.
So – let’s think about this purely from one axis – the length of the survey.
Client:
Market Research studies are generally budgeted on a temporal basis – clients can do X projects in a year and have budgeted for that. So it’s natural to cram as many questions as possible into each project – natural bloat. There is no significant cost differential between running a 20-question survey vs. a 40-question survey vs. an 80-question survey; the incremental costs are not 2x or 4x.
MR Agency / Sub Agency / Consultant:
Typically there is a base cost for conducting research – and then MR agencies, as with most service-oriented businesses, are fundamentally dependent on time. Everyone knows the agency business is ultimately an arbitrage – pay X and charge Y – for time. The more complicated and lengthy a project is, the better it is for business. Duh! So there is no compelling reason for any agency to try to shorten a study – unless they felt the study was too large to succeed. In effect, there is absolutely no financially compelling reason why ANY MR agency would advise a client to shorten a study or break it into multiple parts; it would have the direct effect of simply reducing their billings.
Sample Provider:
OK – this entity in the value chain is probably hurt the most. However, since all panel providers price based on the length of the survey, they are (in theory) covered. The two axes that determine price for any sample provider are a) incidence – what kind of respondents you are looking for – and b) completion rates – directly a function of survey length. I have talked at great length with many sample providers about how much they HATE long surveys and non-interactive surveys. I understand – this is core to their business. They are about the only entity that actually cares about the respondent. Keep in mind, though, that this relationship is not a typical vendor-client relationship. Lost revenue on incorrectly priced projects, as well as the bad taste long surveys leave with respondents vis-a-vis the panel provider, are compelling enough reasons for most panel providers to hate long studies. Since they have factored in time, this should be reflected in the price they quote. But then competition kicks in, and we all do what we need to do to get a deal done!
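To make those two axes concrete, here is a toy sketch of how per-complete sample pricing might work. All the numbers – the base price, the 20% incidence, the 1%-dropout-per-question attrition curve – are my own illustrative assumptions, not any provider’s actual rate card:

```python
# Toy model of per-complete sample pricing. Assumption: a base price
# covers a short, 100%-incidence survey, and the provider grosses it
# up for screened-out respondents and for starters lost to length.

def completion_rate(n_questions: int) -> float:
    """Assume roughly 1% of starters drop off per question (floor at 5%)."""
    return max(0.99 ** n_questions, 0.05)

def price_per_complete(base_price: float, incidence: float, n_questions: int) -> float:
    """Gross up the base price to cover wasted invites and abandons."""
    return base_price / (incidence * completion_rate(n_questions))

# Our median 35-question survey at 20% incidence vs. a 10-question one:
long_price = price_per_complete(2.00, 0.20, 35)
short_price = price_per_complete(2.00, 0.20, 10)
print(f"35 questions: ${long_price:.2f}  10 questions: ${short_price:.2f}")
```

Under these made-up assumptions, the 35-question complete costs roughly 30% more than the 10-question one – which is exactly the gap a provider eats whenever a study runs longer than it was priced for.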
Technology Providers / Data Collector:
I can obviously attest to this part personally. Frankly, we don’t really care. The cost of storing more data (more questions) is insignificant. When we started our business, it used to matter, but once you achieve scale, it really does not. So, technically, we are ambivalent – from a pure economic standpoint, it does not cost us much more to run a 30-question survey vs. a 10-question survey. Since it does not cost us (or, for that matter, any other technology service provider) more, we are not in a position to charge more. So, in effect, we are enabling this phenomenon. I’ll admit to that. Some tech providers – and some of our own services – charge on a per-completed-response basis; again, this has absolutely no effect on the number of questions or the length of the survey.
So, when we net it out, there is actually no one who, from a strict economic standpoint, cares about the length of the survey. Cost does not even scale linearly with length – in fact, if you model it out, the per-question cost falls as the survey gets longer, because the fixed project costs dominate the marginal cost of each extra question. It’s cheaper to run longer surveys (on a per-question basis).
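A minimal sketch of that model, with made-up numbers – the fixed and marginal costs here are hypothetical, purely to show the shape of the curve:

```python
# Assume a fixed per-project cost (design, programming, fielding setup)
# plus a small marginal cost per question. The per-question cost then
# falls as the survey gets longer.

FIXED_COST = 5000.0    # hypothetical fixed cost per project ($)
MARGINAL_COST = 25.0   # hypothetical cost per additional question ($)

def cost_per_question(n_questions: int) -> float:
    """Total project cost divided by the number of questions."""
    total = FIXED_COST + MARGINAL_COST * n_questions
    return total / n_questions

for n in (10, 20, 40, 80):
    print(f"{n:2d} questions -> ${cost_per_question(n):.2f} per question")
```

With these numbers, the per-question cost drops from $525.00 at 10 questions to $87.50 at 80 – doubling the questionnaire never comes close to doubling the bill, which is the bloat incentive in a nutshell.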
Customer-sat studies used to be overly long and terrifying – until Bain, McKinsey and BCG told all the CEOs of the world that the only thing they should care about is the single question “How likely are you to recommend us?” Will Kantar or IPSOS post a “You shall not ask more than 20 questions at a time” rule? I doubt it. Long surveys are here to stay – and we all like it – that is, unless someone asks us to take one!