Customer satisfaction surveys. Hmm, what to do with them? Every organization has them. And very few organizations design them in such a way that they really work. We all want to steer towards a more customer-centric organization, and customer satisfaction surveys are often seen as an important tool for doing so. Rightly so…? In this column, I shall attempt to clarify the sense and nonsense of customer satisfaction surveys.

There are various forms of customer satisfaction surveys. For now, I’ll be discussing the quantitative customer satisfaction survey. The type of survey that virtually any research agency can conduct on your behalf. I’ve yet to encounter an organization which hasn’t had such a survey conducted at least once in its lifetime. Some organizations do so a number of times a year, others once every few years. But the main question is: why?

If this has indeed been given thought, the survey generally serves multiple objectives: meeting a certain performance indicator as agreed with the stakeholders; discovering what really matters to the customer; improving the organization on the basis of the customer’s opinion; and of course the less ambitious but still commonly occurring ‘well, customer satisfaction just needs measuring from time to time’.

But if multiple objectives are to be served, why bundle them all together in one and the same customer satisfaction survey? By combining them all in that one survey, the results will often only prove valuable for a few of those objectives. That’s hardly surprising. After all, the survey will often be too high-level to meet the needs of the various departments when it comes to improvement, for example.
In other words: design your survey in keeping with your objective, even if it all boils down to customer satisfaction in each case.

1. Reporting an agreed performance indicator

If this is the objective, then a ‘lean and mean’ survey among all customers will suffice, without further distinction into departments, customer groups, etc. Such detailed classification serves no purpose. After all, the performance indicators often apply to the organization as a whole. Think in terms of a ministry which holds an entire implementing body to account for customer satisfaction (curious as it may be, that same implementing body is allowed to conduct its own customer satisfaction survey in order to produce this score… but that’s beside the point). This is often an annual event, but even if it requires reporting a number of times per year, it can be based on a manageable cross-section of the customers and, in particular, a manageable number of questions in the questionnaire. 200 customers for example, with 10 questions per customer.
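To give a feel for how lean such a KPI measurement can be, here is a minimal sketch in Python. It assumes a hypothetical setup (200 customers, 10 questions each, scores on a 1–10 scale, simulated here for illustration; in practice the scores come from your survey tool) and computes the overall average with a rough 95% confidence interval:

```python
import math
import random

# Hypothetical example: 200 customers each answer 10 questions on a 1-10 scale.
# The scores are simulated here purely for illustration.
random.seed(42)
responses = [[random.randint(6, 10) for _ in range(10)] for _ in range(200)]

# One overall score per customer: the mean of that customer's 10 answers.
customer_scores = [sum(answers) / len(answers) for answers in responses]

n = len(customer_scores)
mean_score = sum(customer_scores) / n
variance = sum((s - mean_score) ** 2 for s in customer_scores) / (n - 1)
std_error = math.sqrt(variance / n)

# Rough 95% confidence interval (normal approximation, z ≈ 1.96).
margin = 1.96 * std_error
print(f"KPI: average satisfaction = {mean_score:.2f} ± {margin:.2f} (n = {n})")
```

The point of the sketch: for a single organization-wide indicator, a couple of hundred respondents already gives a margin of error narrow enough for annual reporting; there is no need for elaborate breakdowns.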

2. Knowing what really matters to customers

These are the ‘knobs’ to which I often refer. Discovering the drivers of customer satisfaction. Once again, this does not need to be worked out in detail among many customer groups for various departments. While 9 out of 10 organizations are still struggling to become more customer-centric (i.e. to get the basics sorted), determining what really matters to customers is no big deal. This type of survey can also simply be conducted once annually. Unless of course you have an extremely speedy improvement cycle with great impact on your services, in which case it should be done more often (but once you’ve achieved that, I dare say you’ll have extremely satisfied customers…). The impact scores will change after all, based on the improvements you make. A somewhat larger random sample is required, but the questionnaire need not be extremely long.
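One common way to estimate those impact scores (one of several, and not prescribed by this column) is to regress overall satisfaction on the scores for individual service aspects and read the coefficients as rough indications of which ‘knob’ matters most. A minimal sketch, with purely hypothetical aspect names and simulated data:

```python
import numpy as np

# Hypothetical data: per customer, scores (1-10) for three service aspects
# plus an overall satisfaction score. Aspect names are purely illustrative.
rng = np.random.default_rng(0)
n = 400
speed = rng.integers(1, 11, n)
friendliness = rng.integers(1, 11, n)
clarity = rng.integers(1, 11, n)

# Simulated "truth" for the example: clarity matters most, friendliness least.
overall = 0.2 * speed + 0.1 * friendliness + 0.5 * clarity + rng.normal(0, 1, n)

# Ordinary least squares: the coefficients act as rough impact scores.
X = np.column_stack([np.ones(n), speed, friendliness, clarity])
coefs, *_ = np.linalg.lstsq(X, overall, rcond=None)

for name, coef in zip(["speed", "friendliness", "clarity"], coefs[1:]):
    print(f"impact of {name}: {coef:.2f}")
```

The aspect with the largest coefficient is the knob to turn first; and as noted above, once you act on it, the impact scores will shift, which is why the exercise is worth repeating after a meaningful round of improvements.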

3. Truly learning and improving

In my opinion, this is the most important objective of the survey, but also the point most commonly lost due to the high-level approach. Truly learning and improving really can differ per customer group, per department and per product/service. After all, you want very specific feedback from your customer, with which you can take action within your department, for your specific product and your specific customer group. I can’t help wondering whether (large-scale) quantitative surveys would ever work in this case.

Much more effective, in my opinion, is to identify what input is required per department / product / customer group. Let’s imagine that a department is still in the start-up phase and has yet to organize its own processes effectively. You don’t need customers to tell you that; you already know. When you’re a little further down the line, however, their opinions certainly do become relevant. You’d then be better advised to ‘simply’ enter into a dialog with those customers. Organize a focus group session with customers relevant to that department / that product / that specific customer group, and ask their opinions! And if in the end you choose to quantify the information, you can ask them to express their opinion as a score during such a session. Each department or region can then determine its own cycle of improvement and therefore also the frequency with which customers are consulted.

A great deal of money is spent on such surveys. Wouldn’t it be nice if they actually had the intended result?

