Every support team tracks at least one of these. Most track both. Fewer than half use them correctly: that is, understand what each metric actually measures, what behaviors it incentivizes, and when each one is the right lens for a given decision.
This article gives you clean definitions, explains the perverse incentives that emerge when you optimize for the wrong metric, makes the case for a composite approach, provides industry benchmarks, and shows you how Velaro automates the collection of both.
Clean Definitions
CSAT - Customer Satisfaction Score
- Measures satisfaction with a specific interaction
- Collected immediately after a conversation ends
- Typically: "How satisfied were you with this interaction?" 1–5 or 1–10
- Score = % who rated 4–5 (or 9–10)
- Transactional - reflects a single moment in time
- Best for: agent-level coaching, queue performance
NPS - Net Promoter Score
- Measures likelihood to recommend the company
- Collected periodically (quarterly, post-milestone)
- Question: "How likely are you to recommend us?" 0–10
- Score = % Promoters (9–10) minus % Detractors (0–6)
- Relational - reflects the overall relationship
- Best for: retention risk, product feedback, executive reporting

The key distinction: CSAT measures the transaction; NPS measures the relationship. A customer can give you a 5/5 CSAT on a chat conversation and still be a detractor on your NPS survey, because CSAT captures the quality of that one interaction, not their cumulative experience with your company.
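The two scoring formulas above are simple arithmetic. A minimal sketch in Python (function names and the sample ratings are illustrative, not a Velaro API):

```python
def csat_score(ratings, scale_max=5):
    """CSAT = % of responses in the top two boxes
    (4-5 on a 1-5 scale, 9-10 on a 1-10 scale)."""
    top_box = scale_max - 1  # 4 on a 1-5 scale, 9 on a 1-10 scale
    satisfied = sum(1 for r in ratings if r >= top_box)
    return 100 * satisfied / len(ratings)

def nps_score(ratings):
    """NPS = % Promoters (9-10) minus % Detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(csat_score([5, 4, 3, 5, 2]))      # 3 of 5 rated 4-5 -> 60.0
print(nps_score([10, 9, 8, 6, 3, 10]))  # 3 promoters, 2 detractors, 6 total
```

Note that NPS can be negative (more detractors than promoters), while CSAT is always 0–100; the two numbers are not directly comparable.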
The Perverse Incentive Problem
Every metric creates incentives. When those incentives aren't aligned with the actual goal (customer retention, revenue, satisfaction), you get perverse outcomes.
Optimizing for CSAT alone creates these problems:
Agents learn to close conversations quickly on a positive note, even when the underlying problem isn't solved. A customer who was told "our team will look into this and email you" rates the interaction 5/5 because the agent was friendly and responsive. The underlying issue wasn't resolved. CSAT looks great; the customer churns three weeks later when the problem recurs.
Another pattern: agents avoid transferring or escalating difficult conversations because a transferred chat often yields lower CSAT. The agent who hands off gets a poor score even when the escalation was correct. This discourages good behavior.
Optimizing for NPS alone creates these problems:
NPS is collected quarterly and aggregated at the company level. Individual agents have almost no direct connection to their NPS impact. The signal is too diluted and too delayed to drive behavior change. Support teams that only report NPS miss the week-to-week, agent-by-agent data that's actually actionable for support operations. NPS tells you there's a problem. CSAT tells you where and with whom.

CSAT without NPS optimizes for pleasant interactions. NPS without CSAT optimizes for feelings about the brand while ignoring daily execution. You need both for different decisions.
The Composite Metric Approach
The right framework uses CSAT and NPS for different purposes at different levels of the organization:
Agent level
CSAT. This is the metric that agents can actually influence conversation-by-conversation. Weekly CSAT trends by agent are your coaching data. An agent whose CSAT drops 0.4 points over a two-week period is either dealing with a harder-than-usual queue, going through something personally, or has developed a behavioral pattern that needs correction. CSAT gives you the signal early enough to intervene.
Team level
CSAT + First Contact Resolution (FCR). Team CSAT tells you whether your routing and staffing are producing good experiences at scale. FCR tells you whether issues are actually getting resolved, not just closed politely.
Executive level
NPS + Customer Effort Score (CES). NPS captures the relationship health. CES (how easy was it to get help?) is a powerful predictor of churn that's often missed by teams focused only on CSAT. Low effort = high loyalty.

Industry Benchmarks: What "Good" Looks Like
If your scores are below the benchmark for your industry, there's almost always a specific driver: response time, first-contact resolution rate, agent knowledge gaps, or channel coverage. The benchmark comparison tells you that you have a problem; the CSAT driver analysis tells you what the problem is.
When to Send CSAT vs NPS
Getting value from CSAT and NPS comes down to timing. Each one captures a different moment in the customer experience, so sending them at the right time matters more than how they’re configured.
CSAT: Right After the Interaction
CSAT works best when it reflects a specific interaction while it’s still fresh.
- Send it immediately after a chat or support conversation ends
- For async channels like email, a short delay can help if the customer isn’t actively engaged
- Keep the question tied to that single interaction
- Add an optional open-text follow-up to capture context behind the score
This is your closest view into day-to-day execution. It tells you how each conversation is handled and where coaching or process changes are needed.
NPS: At Defined Relationship Moments
NPS should not be tied to a single interaction. It’s a broader signal, so timing it around key milestones gives you cleaner data.
- Send it on a consistent cadence (quarterly is standard)
- Trigger it after meaningful points in the customer journey: after onboarding, after a defined period of usage (e.g., 60–90 days), or after a major support or product milestone
This gives you a read on how customers feel about your company overall, not just support performance.
Use CSAT Signals to Inform NPS Follow-Up
Low CSAT scores are often an early warning.
- Customers with repeated low CSAT responses are strong candidates for follow-up
- Reaching out quickly after a poor interaction can recover accounts that would otherwise churn
- Pairing CSAT trends with NPS outreach helps connect operational issues to relationship risk
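The flagging rule above can be sketched as a simple filter. Everything here (thresholds, data shape, customer IDs) is an illustrative assumption, not a description of Velaro's data model:

```python
from collections import defaultdict

def flag_for_followup(responses, low=3, repeats=2):
    """Return customers with `repeats` or more CSAT ratings at or below
    `low` -- candidates for proactive outreach before they churn.
    `responses` is a list of (customer_id, rating) pairs; thresholds
    are illustrative and should be tuned to your scale and volume."""
    low_counts = defaultdict(int)
    for customer_id, rating in responses:
        if rating <= low:
            low_counts[customer_id] += 1
    return sorted(cid for cid, n in low_counts.items() if n >= repeats)

responses = [("acme", 2), ("acme", 3), ("globex", 5), ("initech", 1)]
print(flag_for_followup(responses))  # ['acme']
```

The output is the outreach list: accounts whose repeated low transactional scores make them the strongest candidates for relationship-level follow-up.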
When timed correctly, each metric reflects its true purpose: CSAT for immediate experience, NPS for long-term sentiment.
The CSAT/NPS Disconnect: When to Investigate
The most revealing signal is when CSAT and NPS diverge. Some patterns to watch for:
High CSAT, Low NPS
Customers like your agents but don't like your product or company. Support is performing well; the product or sales experience is failing. This is a signal to escalate to product or customer success, not support operations.
Low CSAT, High NPS
Customers love your brand but recently had a bad support experience. This is usually a temporary signal: perhaps a team member is struggling, or a new product issue is generating contact volume. Intervene quickly before relationship goodwill is depleted.
Both declining simultaneously
Systemic problem. Could be a product issue driving support volume, a staffing gap, or a process breakdown. This is the pattern that requires immediate cross-functional attention.
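The three patterns above amount to a small routing rule. As a sketch (the "high"/"low" inputs mean relative to your own benchmark or prior period; the owner labels are illustrative):

```python
def diagnose(csat_high, nps_high):
    """Map a CSAT/NPS divergence pattern to a suggested owner,
    following the three patterns described above."""
    if csat_high and not nps_high:
        return "product/relationship issue: escalate to product or customer success"
    if not csat_high and nps_high:
        return "support execution issue: intervene before goodwill is depleted"
    if not csat_high and not nps_high:
        return "systemic problem: needs immediate cross-functional attention"
    return "healthy: keep monitoring"

print(diagnose(csat_high=True, nps_high=False))
# product/relationship issue: escalate to product or customer success
```

The point of the rule is ownership: the same pair of metrics routes to support operations, product, or a cross-functional group depending on which one is diverging.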
Most teams collect both metrics but don’t use them in a way that drives decisions. CSAT tells you what’s happening now. NPS tells you what it means for the relationship. Treating them as separate signals, and acting on each at the right time, is what turns feedback into a real advantage.




