Wondering what is the best metric to use in what situation?
In this blog, I will share the pros and cons of the three most commonly used metrics in customer experience:
Net Promoter Score (NPS), Customer Effort Score (CES), and Customer Satisfaction (C-SAT).
So you can then decide for yourself when to measure what.
NPS, CES, C-SAT… which will it be?
Let’s first go back to square one: why are we measuring these metrics? Because organizations want loyal customers.
For commercial organizations, loyal customers mean more revenue: more sales per customer, more enduring customer relationships and word-of-mouth advertising that brings in new customers.
For public sector organizations, loyalty means trust: a relationship between the general public and the government in which a dialogue is possible.
This prevents the continual escalation to third parties that happens when the relationship is marked only by mistrust.
The million-dollar question remains: what’s the best metric if I am to increase this loyalty and trust?
One of the most persistent convictions since the arrival of the NPS is that customer satisfaction does not matter, that satisfied customers still move on.
However, it is the degree of satisfaction that counts.
Customers who score you a 7 are satisfied, but can still easily switch to another organization.
Customers scoring an 8 or higher, however, show a strong relationship and loyal behavior.
This has been shown in many studies over the years (see also my PhD) on the relationship between satisfaction and loyalty, and that relationship still holds.
Measuring satisfaction is a perfect instrument for improving customer experience, but to create loyal customers or increase trust, you need your customers to score you an 8 or higher.
So customer satisfaction is the perfect metric to use for each detailed journey of your customer, to find the real drivers of satisfaction and thus improve the entire experience.
Application C-SAT: to find the latent drivers for each journey of your customers and then measure these journeys continuously
Net Promoter Score
The Net Promoter Score is a percentage calculated on the basis of the score for the question: “How likely are you to recommend company X to friends or family?” on a scale of 0 to 10.
The percentage of customers scoring 0 through 6 (the detractors) is subtracted from the percentage of customers scoring 9 or 10 (the promoters).
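As a minimal sketch of that calculation (the function name and the sample answers below are mine, not taken from any real survey):

```python
def nps(scores):
    """Net Promoter Score from 0-10 survey answers.

    Promoters score 9 or 10, detractors 0 through 6;
    passives (7 and 8) are left out of the numerator.
    """
    if not scores:
        raise ValueError("need at least one score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 10 respondents: 4 promoters, 3 passives, 3 detractors
answers = [10, 9, 9, 9, 8, 8, 7, 6, 5, 3]
print(nps(answers))  # → 10.0
```

Note that the three passives in the sample affect the result only through the denominator, which is exactly the point made below.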
Because the largest group (scores of 7 and 8) is not included, the NPS is seen to fluctuate greatly in practice.
The NPS may be -10 in the first quarter and +20 in the second quarter, without the reasons becoming clear.
This makes it very difficult to steer customer experience.
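A hypothetical simulation (the population and sample sizes are invented) illustrates the fluctuation: two surveys drawn from the same unchanged customer base can still yield different scores, because the 7s and 8s are ignored and small sampling shifts in the remaining groups move the score a lot.

```python
import random

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

random.seed(1)

# Hypothetical stable customer base of 100 answers; most are 7s and 8s,
# which the NPS formula ignores entirely.
population = [9] * 15 + [8] * 30 + [7] * 35 + [6] * 20

# Two quarterly surveys drawn from the SAME population can still
# produce noticeably different scores.
q1 = random.sample(population, 40)
q2 = random.sample(population, 40)
print(nps(q1), nps(q2))  # scores differ even though nothing changed
```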
Another disadvantage of the NPS is that more and more organizations are using the NPS question on very small, transactional touchpoints.
For example, in a UX question on page X on the website, they ask: “Would you recommend company X based on your experience with our website?”.
For me as a customer, recommending an organization contains many more variables than purely that website.
So my NPS score will have little to do with that specific web page. NPS is more about my entire evaluation of an organization.
So measuring NPS makes sense as a complete evaluation of the organization. In that situation it doesn't fluctuate, and you don't have to measure it that often.
My last observation is the growing creativity in defining the NPS:
varying from measuring NPS on a 7-point scale to completely redefining the NPS question itself (for example: “Would you recommend meeting with one of our advisors?”), creating very detailed versions of NPS.
Which is all fine, as long as you realize that when benchmarking, NPS version A may not be comparable to NPS version B.
And that such narrow definitions are not a good indicator of my loyalty toward the entire organization.
Just as an extra heads-up: even though the title of his book may seem to hint otherwise (“The Ultimate Question”), Reichheld himself is the first to agree that there is never only one question to measure CX.
You need the right mix of metrics to really be in the driving seat of customer experience. NPS, for example, measures my intent to recommend.
But intent has often been shown not to be the best predictor of actual behavior.
So make sure you combine it with actual loyalty behavior, such as buying more and staying longer.
Oh, and while I’m busy giving you heads-ups: the European NPS (where 8-10 count as promoters instead of 9-10) makes no sense.
We have done many measurements in which European customers also give 9s and 10s.
So when you are not scoring high enough, don’t adjust the metric. Make sure you find and improve the right drivers!
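To make that warning concrete, here is a hypothetical comparison (the sample answers are invented): the exact same responses score very differently under the standard definition and the "European" variant, which is why the two should never be benchmarked against each other.

```python
def nps_variant(scores, promoter_min=9):
    """NPS with a configurable promoter threshold.

    promoter_min=9 gives the standard definition; promoter_min=8
    gives the 'European' variant discussed above.
    Detractors are 0 through 6 in both cases.
    """
    promoters = sum(1 for s in scores if s >= promoter_min)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

answers = [10, 9, 8, 8, 8, 7, 7, 6, 5, 4]
print(nps_variant(answers))                  # standard: 2 promoters, 3 detractors → -10.0
print(nps_variant(answers, promoter_min=8))  # European: 5 promoters, 3 detractors → 20.0
```

The score jumps from -10 to +20 without a single customer changing their answer, which is the redefinition problem in a nutshell.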
Application NPS: as a periodic indication for loyalty, to see whether the continuous journey improvements using C-SAT are moving the entire organization evaluation in the right direction.
Customer Effort Score
A new indicator came into being in 2013, the Customer Effort Score: “How much effort did you personally have to make to …?” (PS: this book is a must-read for anyone in customer contact).
This formulation appeared to be difficult in practice. That’s why the authors created a 2.0 version asking: “Organization X made it easy for me to …”.
The ellipsis refers to a specific transaction: buying a product, getting a question answered, and so on.
The authors indicate that customers are not at all interested in receiving a bunch of flowers and having their expectations exceeded, but that the crux of good customer experience lies in the sense of convenience.
They conducted their extensive research into customer experience in contact-center environments, and you need to be aware of that context.
While the sense of convenience is absolutely important, the question is whether this also holds outside the context of specific touchpoints.
Convenience does matter, but the degree depends entirely on the industry, organization, process, channel and target group.
In a study at Harvard, they found that the CES was a better indicator of future sales intent than NPS and C-SAT.
We tried to replicate these results for a Dutch insurance company, but found that NPS and C-SAT were much better indicators than CES.
That makes sense to me, since CES focuses purely on the ease of the transaction. As such, it is an important metric to score well on.
But for me to be loyal to an organization, a lot more is involved than simply the ease of a transaction.
Their statement “Stop Delighting Your Customers” needs some nuance. They mean you don’t have to give flowers or go to extremes to delight customers, which is absolutely true.
But you do need to delight your customer to get the 8+. And nowadays, creating convenient journeys and calling me back when you promise to is exactly what delights me!
So are the expectations of customers increased? I don’t think so.
Did the context for organizations become extremely more complex to get the basics right? Absolutely.
But still, in all research, we see that the basics (do what you promise, listen to your customer, first time right, etc.) are what delights your customers. Even in 2019.
Application CES: either as one of the drivers of C-SAT in the journeys or in a very specific contact application, for example on UX of the website, ease of using a form, ease of applying online, etc.
TIP: you should definitely read their chapter on Experience Engineering. Purely by changing how you communicate, while the outcome stays the same, you can see dramatically higher scores.
The right metric for online
With the growth of digitization, the importance of UX is growing as well. So what metric should you use when you want to improve your website?
It depends on your goal.
When the goal is to maximize conversion, why use any of these 3 metrics?
It makes more sense to use real click data (Google Analytics and comparable tools) to find the hiccups that hurt conversion.
The NPS question in online user experience is a strange application.
When you want to understand whether the content on the website is valuable, just ask that.
When you want to know whether the content answered the question of the visitor, just ask that.
So check your goals and measure accordingly.
What science is saying
The pros and cons of the various metrics determine when you should apply which one.
But what does science say about the relationship between these three metrics?
It says that loyalty is always preceded by customer satisfaction (see my PhD). Commitment and trust also play a role in the relationship between satisfaction and loyalty.
They are both boosted when satisfaction is increased, and in turn, they boost loyalty.
Satisfaction, therefore, has both a direct and indirect positive impact on loyalty.
Word-of-mouth advertising (which essentially is the NPS question) is only one of the two components of the scientific definition of true loyalty.
True loyalty also includes the intention to make a repeat purchase from the organization.
NPS, therefore, is not equal to loyalty.
You cannot create loyal customers without this being preceded by a sense of satisfaction.
Because the Customer Effort Score precedes loyalty, with satisfaction nestled between the two, the Customer Effort Score is also a driver of satisfaction.
The convenience of the service determines my satisfaction with the service. The easier it is, the higher my score and therefore the steadier my loyalty.
And so the three metrics are strongly interrelated.
Due to the central role played by customer satisfaction, I advise using C-SAT to steer customer experience.
Let’s end with some common sense
In the end, all that matters is a measurably improved customer experience.
Meaning that you have a system which works as follows: you measure a score, you analyze the causes, you improve the causes and you measure a higher score.
If the mechanism works – whether it be C-SAT, NPS or CES – please stick to the method you are using, as we know all three to be interrelated.
There are two situations though when you should check your measurement methodology:
1. You discover that your choice of interventions does not boost your score (any longer)
2. Your touchpoints are giving high scores, but your churn is not decreasing
Both situations are a sign that you are not measuring what really matters to your customers to create a better experience for them.
Finding the right drivers is the key
More important than using the right metric is finding the right drivers to improve it.
In my blog on that topic, you can read how to find the real, latent drivers of your customers, and why the trend of AI and text analysis may not be the best way to find them.