Customer Effort Score as a Predictor of Customer Behavior

For a successful product manager, measurable goals are essential. They typically fall into three categories: a) business metrics such as customer acquisition cost, LTV, revenue, conversions, retention, and churn; b) product metrics such as usage and adoption; and c) customer metrics such as NPS and CSAT.

Understanding the customer experience is a critical aspect of product development, and traditional metrics such as customer satisfaction (CSAT) or NPS have long been the go-to ways to measure it. However, as argued in a 2010 HBR article ("Stop Trying to Delight Your Customers"), reducing the effort a customer must expend to execute a job, and improving the speed and accuracy of that execution, is a better way to understand the problem space and to predict customer behavior and loyalty. I posit that product managers who use the Customer Effort Score (CES) instead of NPS or CSAT will achieve better business outcomes.

Consider a concrete example: a computer science teacher giving lessons online over Teams or Zoom. One of the biggest challenges in this setting is assessing student comprehension. Today, this job step is hard and slow: the teacher must look at each student's video one by one, which is time-consuming, and inaccurate as well, since by the time the teacher reaches the last student, any signs of comprehension may have already faded from that student's face. The result is a high customer effort score for the task of "assessing class understanding of a concept," with low speed and low accuracy.

To measure the impact of a proposed feature on teacher effort, we introduce a customer (teacher) effort score (CES) survey, administered before and after the feature's implementation. The survey asks the teacher to rate, on a scale of 1-10, the effort required to assess student comprehension, with a higher score indicating more effort. We would also measure how long it takes the teacher to assess comprehension, both per student and for the entire class.
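As a minimal sketch of how those survey responses and timing measurements might be summarized, here is a small Python example. The function name and the numbers are hypothetical, purely for illustration:

```python
from statistics import mean

def summarize_ces(effort_ratings, assessment_times):
    """Summarize a CES survey (1-10, higher = more effort) and timing data.

    effort_ratings: per-survey effort ratings from the teacher.
    assessment_times: per-student assessment durations, in seconds.
    """
    return {
        "avg_effort": mean(effort_ratings),
        "avg_seconds_per_student": mean(assessment_times),
        "total_seconds": sum(assessment_times),
    }

# Hypothetical data collected before the feature ships
before = summarize_ces([8, 9, 7, 8], [20, 25, 18, 30, 22])
print(before)
```

Running the same summary after launch gives a like-for-like baseline for the comparison described below.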

To address this struggle, a product manager could propose a new feature: an AI agent that assesses student comprehension in real time and gives the teacher instant feedback. The teacher would view each student's comprehension score in real time, displayed in a grid. The scores would be color-coded: green for high comprehension, yellow for moderate, and red for low. The AI agent would use machine learning to analyze indicators of comprehension such as facial expressions, body language, and engagement with the content. To make the feature even more effective, the agent would be trained with the help of the students themselves: the system would periodically ask students to confirm or refute the automated assessment, improving accuracy over time. The agent would also identify areas where most students are struggling and flag them for the teacher's attention.
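The color-coding and flagging logic above can be sketched in a few lines of Python. The thresholds (0.4 and 0.7 on a 0-1 comprehension score) and the function names are my own illustrative assumptions, not part of any real product:

```python
def comprehension_color(score):
    """Map a model's comprehension score in [0, 1] to a display color.

    Thresholds are illustrative; a real product would tune them."""
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "yellow"
    return "red"

def flag_struggling_topics(topic_scores, threshold=0.4, min_fraction=0.5):
    """Flag topics where at least min_fraction of students score below threshold."""
    flagged = []
    for topic, scores in topic_scores.items():
        struggling = sum(1 for s in scores if s < threshold)
        if struggling / len(scores) >= min_fraction:
            flagged.append(topic)
    return flagged

# Hypothetical per-topic scores for a class of three students
flags = flag_struggling_topics({
    "recursion": [0.2, 0.3, 0.8],
    "loops": [0.9, 0.8, 0.7],
})
print(flags)  # ["recursion"]
```

The point of the sketch is the shape of the feature, not the model: the hard part is producing trustworthy comprehension scores, while surfacing them to the teacher is straightforward.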

By comparing CES scores before and after the feature's implementation, we can measure the reduction in teacher effort the feature delivers. With an automated agent handling this task, the teacher needs only to glance at the screen to decide whether to move on with the lesson; both speed and accuracy improve.
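The before/after comparison itself is simple arithmetic. With hypothetical survey responses, it might look like this:

```python
from statistics import mean

before_scores = [8, 9, 7, 8]  # hypothetical pre-launch CES responses
after_scores = [3, 2, 4, 3]   # hypothetical post-launch CES responses

reduction = mean(before_scores) - mean(after_scores)
print(f"Average effort dropped by {reduction:.1f} points")
```

A drop of this size on a 1-10 effort scale would be strong evidence that the feature addressed the job step it targeted.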

In my view, product management is not an art but a discipline that requires rigorous frameworks and tools to enable a systematic approach to decision-making. Tools like Customer Effort Score (CES) can be employed under the Jobs-to-be-Done umbrella to address the need for a more precise feature prioritization process. In my next article, I will delve deeper into the topic of using CES for feature prioritization.

Jared Ranere

Value creation through growth for private equity and corporate executives at thrv

Nice! Brings back great memories!

Lucian Gheorghe

Nissan North America Inc. - CCS UX Innovation Senior Manager

In automotive, CES is used fairly often, with different scales. For example, one might want to understand how good an ADAS system is at reducing driver burden, and so on.

Radu Orghidan

Global SVP, Data & AI Strategy, PhD, MBA

Cool idea: automatic Customer Effort Score (CES) assessment using AI-based cognitive behavioral analysis. I really like the concept!

Russell Bennett

Principal at Ipgnosis LLC

I can see the benefits in a remote scenario, where all you can see is a thumbnail grid of each student/interlocutor. But would this also work in person? Maybe a teacher (worth their salt) could scan a classroom and see who was engaged, but with a larger audience (e.g. keynote presentation) it's impossible.
