Prefer the Dutch version of this blog? Check out the blog on Marketingfacts.
Almost on a day-to-day basis, I’m still confronted with organizations that are really struggling to design their customer surveys. That’s quite logical: most of the people responsible for tackling this within the organization have no research background. Combine that with the popularity of the simple ‘I like’-type survey made familiar by Facebook and the like, and the confusion is easily explained. This blog attempts to unravel some of that confusion, so that everyone can truly steer on their satisfaction surveys and see the results of doing so.
Misconceptions
One of the greatest misconceptions is that qualitative information provided by customers will result in correct steering information. By qualitative information, I’m referring to the brief online questions that only ask: good or bad (often with a thumbs up or down), followed by the question: why? Or the Net Promoter Score (NPS) survey with the open question: why this score? Aside from the time-consuming nature of analyzing these open texts (and no, in my experience text mining is not yet effective enough to fill this gap), you also run a serious risk of steering in the wrong direction.
A good example can be found in the telecom world. If you scan the open answers there, or look at the irritation expressed on social media, the selection menu quickly comes to the fore. But if you objectively measure what customers truly find important, good listening and a first time fix when answering a question matter far more.
If you therefore look only at the open answers and steer on the issues that are mentioned most often, you run the risk that your efforts will have no effect at all.
Quantitative research
Do you want to know exactly how to invest those 100,000 dollars for the best impact on your NPS or customer satisfaction (CSAT)? Make sure your questionnaire consists of statements that you can have analyzed. These types of analyses show directly that, for example, the employee and the first time fix are three times as important as improving waiting times. If you then build the questionnaire around each step in the customer experience journey, your driver analyses become extremely reliable indeed! After all, analyzing the answers in this way immediately reveals whether you are asking the right questions.
The measure for this is known as explained variance. If you score less than 50 percent, it’s time to head back to the drawing board, because apparently you’re not asking the customer the right questions. If you follow the customer experience journey very strictly, putting yourself in your customer’s shoes, you will not require quantitative (preliminary) research to compile a good questionnaire. I’ve generated many reliable driver analyses this way, including for customer processes.
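To make this concrete, below is a minimal sketch of such a driver analysis in Python. It assumes a survey export with one row per respondent, statement scores per journey step and an overall CSAT score; the file name and all column names (listening, first_time_fix, waiting_time, csat) are hypothetical placeholders for your own questionnaire items.

```python
# Minimal driver-analysis sketch (assumptions: a CSV export with one row per
# respondent, statement scores per journey step, and an overall CSAT score;
# the file name and all column names are hypothetical placeholders).
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")    # hypothetical survey export

drivers = ["listening", "first_time_fix", "waiting_time"]  # journey-step statements
X = sm.add_constant(df[drivers])
y = df["csat"]                              # overall satisfaction (or the NPS score)

model = sm.OLS(y, X, missing="drop").fit()

# Standardized coefficients make the drivers comparable: the ratio between them
# shows how much more one knob matters than another.
std_coefs = model.params[drivers] * df[drivers].std() / y.std()
print(std_coefs.sort_values(ascending=False))

# Explained variance: below roughly 50 percent, the questionnaire is probably
# missing what the customer actually finds important.
print(f"Explained variance (R²): {model.rsquared:.0%}")
if model.rsquared < 0.5:
    print("Back to the drawing board: you're probably not asking the right questions.")
```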
Stop negotiating!
I also hear more and more often that various people, departments and stakeholders are allowed to submit their own questions. Stop that! You have no idea whether these questions are relevant from the customer’s perspective. Chances are you end up with a long hodgepodge of a questionnaire full of compromises, whereas an objective, customer-centric approach removes all discussion. You will have defined the knobs the customer actually finds important, and that can effectively be adjusted for improvement, showing everyone that the right questions have been asked from the customer’s point of view.
From channels to customer processes
This method of creating surveys can be applied not only to your channels but also to your customer processes. That way you know precisely: if I want to increase satisfaction with a customer process (requesting benefit payments, submitting a claim, et cetera), these are the knobs to adjust. Many organizations have been measuring their channel satisfaction for some time, but customer processes receive far less attention. Yet this is precisely where the greatest gains are to be had.
So how to combine qualitative research?
If you really are a fan of brief, qualitative surveys (or if your environment demands them), you can combine them with these driver analyses. Suppose your analysis shows that first time fix is an important driver in need of improvement; you can then ask customers specifically whether their question was resolved in one go, and if not, what you could have done to resolve it immediately. In this way you can combine both types of survey very effectively. Personally, I nearly always run the continuous measurement on the entire questionnaire, as the response rate (20 percent) is high enough to let me trace a trend back to the underlying items.
NPS or CSAT
Aside from my preference for CSAT, it makes no difference whether you steer on NPS or CSAT in this type of customer survey. In the driver analysis used to identify the knobs, the NPS question can just as easily be chosen as the outcome instead of the satisfaction question. Almost every research agency will tell you that their so-called priority matrix, based on correlations, already gives you your drivers. But trust me, that is something totally different from an actual cause-and-effect analysis!
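To illustrate the difference, here is a small sketch that contrasts the two approaches, reusing the hypothetical columns from the earlier example and a hypothetical nps_score column as the outcome. It shows why pairwise correlations (the priority matrix) and a joint driver model can tell different stories; note that even the joint model does not by itself prove cause and effect.

```python
# Sketch: correlation-based "priority matrix" versus a joint driver model,
# reusing the hypothetical columns from the sketch above and a hypothetical
# "nps_score" column as the outcome. Neither output proves causality on its own.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")
drivers = ["listening", "first_time_fix", "waiting_time"]

# Priority-matrix style: pairwise correlations with the outcome. Each correlation
# ignores the other drivers, so overlapping items can all look equally important.
print(df[drivers].corrwith(df["nps_score"]))

# Driver analysis: one model with all drivers at once, so each coefficient reflects
# a driver's contribution while holding the other drivers constant.
model = sm.OLS(df["nps_score"], sm.add_constant(df[drivers]), missing="drop").fit()
print(model.params[drivers].sort_values(ascending=False))
```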
To summarize
The 5 tips for steering info from your customer survey
- Base your questionnaire on the customer experience journey.
- Use statements and driver analyses to determine the actual drivers.
- Use these driver analyses (the explained variance) to check whether you’re asking the right questions.
- Stop negotiating!
- Measure your customer processes too, and not only your channels.