
You can’t mandate the NPS you want

Brian Utesch · Published in Bootcamp · 5 min read · Apr 20, 2021

Avoid the use of NPS where the respondent has little to no choice

Photo of hands in handcuffs, by niu niu on Unsplash

The Net Promoter Score (NPS) is an emotional topic for many UX practitioners. A less emotional view of NPS is that it is not all bad and it is not all good. Like many things, it depends on how it is used and how it is applied. My journey with NPS has taken me from both extremes (pro and con) to a current relationship status of It’s complicated. One complicating factor is the conditions under which it is applied and the role choice plays in the relevance of the measure.

What is NPS?

As a brief background, Fred Reichheld introduced NPS in his HBR article The One Number You Need to Grow. He positioned NPS as the single loyalty measure that best predicted company growth. It used a single question on an 11-point (0–10) scale that measured willingness to recommend a company to someone. Those with the highest scores (9s and 10s) were considered ‘Promoters,’ those with the lowest scores (0s to 6s) were considered ‘Detractors,’ and the difference between the percentages of the two groups was the resulting Net Promoter Score.
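To make the arithmetic concrete, here is a minimal sketch (my own illustration, not Reichheld’s code) of how an NPS could be computed from a list of 0–10 responses:

```python
def nps(scores):
    """Net Promoter Score from 0-10 'likelihood to recommend' responses.

    Promoters score 9-10, Detractors score 0-6; Passives (7-8) count toward
    the total but toward neither group. The result ranges from -100 to +100.
    """
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Made-up responses: 4 promoters, 3 passives, 3 detractors
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # 10.0
```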

The factor of choice

Reichheld found a strong relationship between NPS and company growth: companies with the highest NPS had the highest growth. But he also found that where consumer choice was limited (such as in a monopoly), NPS was not a good predictor of growth.

Asking users of the system whether they would recommend the system to a friend or colleague seemed a little abstract, as they had no choice in the matter. -Fred Reichheld

There was common-sense logic to the point that where choice is limited, the relevance of NPS is limited. After all, why would someone recommend something they are being forced to use, and that their colleagues or friends are also being forced to use?

There is no such thing as freedom of choice unless there is freedom to refuse. -David Hume

This was borne out in anecdotal data I collected from NPS surveys. Typically, the NPS question was followed immediately by a “why?” question to understand what led to the NPS response.

It was somewhat interesting to know whether NPS was good or bad (relative to established norms), but the score alone was not particularly useful for providing insights for improvement. Analyzing the verbatim comments for sentiment and themes was the only way to make the NPS actionable.
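As a rough illustration of that kind of verbatim analysis (the theme labels, keywords, and second comment below are hypothetical; a real study would use proper qualitative coding or NLP tooling), a simple keyword-to-theme tally might look like this:

```python
from collections import Counter

# Hypothetical mapping of keywords to theme labels
THEMES = {
    "slow": "performance",
    "crash": "reliability",
    "confusing": "usability",
    "mandated": "forced use",
    "mandatory": "forced use",
}

def theme_counts(comments):
    """Tally how often each theme's keywords appear across verbatim comments."""
    counts = Counter()
    for comment in comments:
        text = comment.lower()
        for keyword, theme in THEMES.items():
            if keyword in text:
                counts[theme] += 1
    return counts

comments = [
    "This is a rather silly question. It is a mandated tool",
    "The tool is slow and confusing to navigate",
]
print(theme_counts(comments))
# Counter({'forced use': 1, 'performance': 1, 'usability': 1})
```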

The problem

In my role, I was tasked with evaluating tools provided to employees and NPS was one of the required questions. Reading the responses, I saw verbatim comments such as:

“This is a rather silly question. It is a mandated tool”

“This question doesn’t make sense — this is a MANDATORY tool to use!”

As a researcher, my goal was to ask only questions that were relevant and actionable. Clearly, that goal was in jeopardy.

The verbatim responses above indicated that there might be a problem generalizing NPS from a high-choice external environment to a no-choice internal environment. What caused this situation?

Utilization as KPI

Executives were being measured by two primary KPIs: Utilization and NPS.

Utilization was calculated as the percentage of the population using the tool. The goal was 100% utilization while also achieving an excellent NPS. However, the tool was delivering neither the target utilization nor the target NPS. Utilization was treated as the primary KPI because it was used to justify the investment in the tool. To drive up utilization, an executive mandate was put in place to use the tool.

The mandate had the intended effect on utilization (driving it up) but the opposite effect on NPS (driving it down). There was a common-sense intuitiveness to this outcome: dissatisfied users were not permitted to abandon the tool (which would have removed them from the NPS sample) but were instead retained in the sample, driving the score down.
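The numbers below are purely illustrative (not the actual survey data), but they sketch the mechanism: keeping would-be abandoners in the sample drags the score down.

```python
def nps(scores):
    """NPS from 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

voluntary_users = [10, 9, 9, 8, 7]   # would keep using the tool regardless
mandated_users = [3, 2, 4, 1, 0, 5]  # would abandon the tool if allowed

print(nps(voluntary_users + mandated_users))  # ~ -27: mandate retains detractors
print(nps(voluntary_users))                   # 60: same tool, detractors free to leave
```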

Freedom to Use

I wanted to confirm what appeared to be common sense with data. A new question was created that I called “Freedom to Use” (FTU), using the same arbitrary 11-point scale as NPS. The question asked, “Indicate how much freedom of choice you have in using [tool],” with the endpoints “None. I am required to use it (0)” and “Completely free to use it (10).”

Perception of choice (Freedom to Use)

I found the data distribution interesting, as it confirmed that respondents were aware of the mandate. The sharp spike at the extreme end of the scale (no free choice) was eye-opening, particularly since extreme responses are often elusive; there is usually a reluctance to give perfectly bad or perfectly good responses (“Nothing is all bad or all good”).

For the analysis of the effect of choice, NPS was calculated for each of the FTU response options.

NPS by perception of choice
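For anyone wanting to reproduce this kind of cut on their own survey export, a minimal pandas sketch (the column names and values here are hypothetical) that computes an NPS for each FTU response option could look like this:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, with the 0-10
# "recommend" (NPS) response and the 0-10 Freedom to Use (FTU) response.
df = pd.DataFrame({
    "ftu": [0, 0, 0, 2, 5, 5, 8, 10, 10],
    "recommend": [1, 3, 6, 5, 7, 8, 9, 10, 9],
})

def nps(scores):
    """NPS from 0-10 responses: % promoters (9-10) minus % detractors (0-6)."""
    promoters = (scores >= 9).sum()
    detractors = (scores <= 6).sum()
    return 100 * (promoters - detractors) / len(scores)

# One NPS per Freedom to Use response option
print(df.groupby("ftu")["recommend"].apply(nps))
```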

There was a clear relationship between perceived choice and NPS, with less choice resulting in lower NPS responses. If those with the lowest choice ratings had been allowed to stop using the tool, the NPS would have gone up. However, because the tool was not designed well, NPS would not have increased enough to meet the goal of an excellent NPS.

The good news is that this situation has since been corrected. Foundational research indicated that the tool was trying to do too many things and none of them well. The internal audience was engaged in global research, and a tool with a greatly reduced scope was designed, resulting in a tool with obvious value and high organic use.

Following are my key takeaways from looking at the effect of forced use on NPS:

· There are no shortcuts to good design. You will pay now with design investment or pay later with poor metrics.

· Mandated use won’t solve the problem of bad design; it just transfers the problem from one place to another.

· NPS rests on a premise (free choice to recommend) that breaks down when choice is not allowed.

· If no choice exists, consider abandoning NPS and replacing it with other measures to collect attitudinal data.

Are you seeing the application of NPS within an internal context where there is little to no choice?


Human Factors by training, UX Research by application. Researcher and manager of research teams.