Satisfaction or ‘Statisfaction’?

One of my esteemed colleagues recently sent me a draft document containing a typo – ‘satisfaction’ had been spelt with an extra ‘t’, creating a new word: ‘statisfaction’.

That got me thinking!

I have been involved in numerous movements and initiatives to drive customer-focused business improvement for over 25 years – from Total Quality & CSI through to Customer Attuned capability assessments and Deep-Relationship-NPS.

One thing I have learned from working with hundreds of companies across the world is this:

IT’S NOT ABOUT THE SCORE – IT’S ABOUT THE CUSTOMERS

Businesses like things quantified (there’s a perception that companies are run by accountants nowadays), and on the whole I go along with the “what gets measured gets managed” mantra (see below), so I fully endorse customer experience and internal capability measurement.

I also like statistics! I’m intrigued by the fact that (as happened recently with a client) the average score on the Net Promoter question can go up while the NPS itself goes down! I love exploring how ‘the same’ internal capability score generated by a Customer Attuned assessment can be made up of completely different profiles of strength, weakness, consistency and impact across the organisation.
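For anyone who enjoys the arithmetic behind that paradox, here is a minimal sketch in Python, using made-up illustrative scores rather than any client’s data. NPS is the percentage of promoters (scores of 9–10) minus the percentage of detractors (0–6), so responses can shift upwards within their bands and lift the average without creating a single new promoter, while a lost promoter still drags the NPS down.

  def nps(scores):
      """Net Promoter Score: % of promoters (9-10) minus % of detractors (0-6)."""
      promoters = sum(1 for s in scores if s >= 9)
      detractors = sum(1 for s in scores if s <= 6)
      return 100.0 * (promoters - detractors) / len(scores)

  def average(scores):
      return sum(scores) / len(scores)

  # Two hypothetical survey waves (illustrative numbers only).
  wave_1 = [10, 9, 8, 7, 2]   # average 7.2, NPS +20
  wave_2 = [8, 8, 8, 8, 6]    # average 7.6, NPS -20

  for name, wave in (("Wave 1", wave_1), ("Wave 2", wave_2)):
      print(f"{name}: average = {average(wave):.1f}, NPS = {nps(wave):+.0f}")

  # The average rises from 7.2 to 7.6, yet the NPS falls from +20 to -20:
  # the promoters have slipped into the passive band, and the detractor,
  # though happier, is still a detractor.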

The first trouble with ‘the numbers’ (scores, averages, top-box, etc.) is that they DE-HUMANISE their source – our customers and how we account-manage them.

Yes, verbatims are often included in the appendices of research reports and summarised into frequency graphs of positive & negative sentiment (quantification again!), but I really wonder how many executives actually read every customer comment.

My point here is that customers are PEOPLE, and have a STORY to tell, but organisationally we’re only interested in a number.

My second problem with ‘the numbers’ is that hitting the score target can easily become the objective in itself rather than improving organisational capabilities. I have seen this lead to many counter-cultural, and indeed downright destructive, behaviours:

  • Deselection of unhappy or difficult customers from surveys
  • Writing new strategies instead of implementing the one you’ve got
  • NPS begging – “please give me a 9 or 10 or I’ll get fired”
  • Only ever addressing ‘quick wins’ – never the underlying systemic issues
  • Blaming sample sizes and methodologies as an excuse for inactivity
  • Blatant attempts to fix the scores (e.g. fabricated questionnaire completions, or ‘evidence’ of capability that is in fact just a PowerPoint slide)
  • Corporate tolerance of low-scorers – many companies seem content with the fact that large proportions of their customers hate them!
  • Putting metrics into performance scorecards but with such a low weighting (vs. sales) that nobody really cares
  • Improving “the process” instead of “the journey”
  • No follow-up at a personal level because of research anonymity; or inconsistent follow-up if anonymity is waived – often only with low scorers, who are treated as complainants. What about thanking those who compliment you, and asking advocates for referrals?

I could go on, but I hope the point is made – beware of “what gets measured gets managed” becoming:

“WHAT GETS MEASURED GETS MANIPULATED”

So instead of targeting statistical scores, find ways of improving your systemic capabilities to cost-effectively manage your account relationships (at least the most valuable ones!).

This can be achieved with the combined ‘inside-out’ approach of Customer Attuned (which assesses internal capabilities), and Deep-Insight’s ‘outside-in’ methodology, which takes a census approach to really listen to what customers are saying to you.

Please ask us how they work and how they lead to “the scores” improving too!

Have a view on this?

Peter Lavers