Find the why behind NPS drops.
A score change alone doesn't tell you what changed. Use indelliaGPT™ and the Theme Agent to answer "what's different in the feedback this quarter vs last?" — with citations.
The short answer.
When an NPS, CSAT, or star-rating score drops, the number alone doesn't surface the cause. Driver analysis answers "what's different in the feedback between the period the score was high and the period it dropped?" The workflow uses theme comparison across time windows, aspect-level sentiment delta, and a conversational interface that returns cited reviews — so the answer is traceable, not speculative. This page uses NPS in its editorial sense; Indellia's in-product tool is called Customer Recommendation Score.
The job.
An exec opens the Monday brief. "Model 7 NPS dropped from 47 to 32 last quarter. What happened?" The Consumer Insights analyst has the score — they track it quarterly — but the answer to the "why" lives in 5,000 open-ended survey responses, 800 Amazon reviews, and 340 support tickets from the same period. The standard answer is "we'll have a briefing by end of week." What the exec wants is "here are the three themes that drove it, here's the language, here's the timeline."
The job is to close the gap between the score and the cause. Not every score drop is a product problem — sometimes it's a channel issue, a batch issue, or a seasonal artifact. A driver-analysis workflow names the cause with evidence, fast enough that the analyst can answer in the meeting rather than in the follow-up email.
Why it's hard today.
- Survey verbatims and reviews live apart. The score comes from one system; the supporting unstructured feedback from another (or several).
- Comparison windows are manual. "What changed between Q3 and Q4" means tagging both quarters by the same taxonomy and diffing, which is a days-long task.
- Weak themes get lost. The theme that drove a 15-point drop might only appear in 80 of the 5,000 open-ends — a 1.6% mention rate that doesn't surface in a top-10 list without delta analysis.
- Exec-facing answers need citations. "Battery dropped" is less useful than "23 reviews mentioned the charger running hot, here are three" — but producing the citations is additional work.
- Exec asks are conversational and one-off. By the time the briefing is prepped, the exec has moved on to the next question.
How Indellia does this job.
indelliaGPT™ for natural-language driver questions.
Ask "what's different in Model 7 feedback in Q4 vs Q3?" indelliaGPT™ returns a cited answer — theme deltas, aspect-level sentiment changes, volume shifts, and the specific reviews behind each claim. The answer is traceable; every sentence is backed by a retrievable citation from the underlying corpus. See deterministic AI for the grounding method.
Theme Agent period comparison.
The Theme Agent tags every piece of feedback with a consistent taxonomy. Period-over-period diffs show which themes grew, which shrank, and which newly emerged. The comparison runs in seconds, not days.
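A minimal sketch of what the diff computes, assuming tagged feedback reduces to one theme label per item. The function is illustrative, not Indellia's implementation:

```python
from collections import Counter

def theme_diff(prev_tags, curr_tags):
    """Diff per-theme mention counts across two periods.

    prev_tags / curr_tags: one theme label per tagged feedback item
    (an assumed input shape, not Indellia's internal representation).
    """
    prev, curr = Counter(prev_tags), Counter(curr_tags)
    grew, shrank, emerged = [], [], []
    for theme in sorted(set(prev) | set(curr)):
        before, after = prev[theme], curr[theme]  # Counter returns 0 if absent
        if before == 0:
            emerged.append((theme, after))
        elif after > before:
            grew.append((theme, before, after))
        elif after < before:
            shrank.append((theme, before, after))
    return grew, shrank, emerged
```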
Weak-theme amplification.
Themes with low absolute volume but high period-over-period velocity are flagged separately. A theme that went from 6 mentions to 80 is a 13× rise — often the real driver of a score drop, missed by top-volume-only analysis.
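The same counts support a velocity flag. The thresholds below are made up for illustration; Indellia's actual flagging logic is not documented here:

```python
def flag_high_velocity(prev_counts, curr_counts,
                       min_ratio=5.0, min_mentions=20):
    """Flag themes whose mentions rose sharply period-over-period,
    even when absolute volume stays low. Thresholds are illustrative."""
    flagged = []
    for theme, after in curr_counts.items():
        before = prev_counts.get(theme, 0)
        # A theme emerging from zero is flagged once it clears the floor.
        ratio = after / before if before else float("inf")
        if after >= min_mentions and ratio >= min_ratio:
            flagged.append((theme, before, after, ratio))
    return sorted(flagged, key=lambda row: row[3], reverse=True)

# The example from the text: 6 -> 80 mentions is a ~13x rise.
print(flag_high_velocity({"charger heat": 6}, {"charger heat": 80}))
# [('charger heat', 6, 80, 13.33...)]
```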
Customer Recommendation Score (Indellia's in-product tool).
Indellia's on-site and in-product score calculator is called Customer Recommendation Score to avoid trademark conflict. It uses the standard 0–10 promoter/detractor framework. Editorial pages like this one reference NPS with proper attribution; product interfaces use the Customer Recommendation Score name. See the glossary entry for the full context.
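The 0–10 promoter/detractor framework itself is public and simple: promoters rate 9–10, detractors 0–6, and the score is the percentage of promoters minus the percentage of detractors.

```python
def recommendation_score(ratings):
    """Promoters (9-10) minus detractors (0-6), as a percentage of
    all responses; passives (7-8) count in the denominator only."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

print(recommendation_score([10, 10, 9, 8, 3]))  # 40
```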
A day doing this job with Indellia.
Sunday night prep for Monday's exec meeting. The Head of Consumer Insights opens indelliaGPT™ and asks: "What are the top three drivers of the Model 7 score drop in Q4 versus Q3?" The answer returns in under a minute: driver 1 is a charger-heat theme that rose from 4 mentions to 74 (18×), concentrated on units produced in September; driver 2 is a setup-instructions theme that stayed flat in volume but saw sentiment drop −18 points after a Q4 printing change; driver 3 is a Bluetooth-stability theme on units running firmware 2.3. Each has cited reviews.
She pulls three verbatims for each driver and writes a half-page brief: "The score drop is 80% explained by three drivers. One is a contained batch issue. One is an operational issue we can fix this quarter. One is already resolved in firmware 2.4 and will reverse as the fleet updates." She has the brief in 40 minutes. Prior quarter, the same analysis took two days.
What you'll need to set up.
Connect survey and review sources.
Typeform, SurveyMonkey, Qualtrics for surveys; Amazon, Walmart, Bazaarvoice for reviews; Zendesk, Intercom for support tickets. One taxonomy applied to all.
Pin the period boundaries.
Calendar quarters, fiscal quarters, or launch-relative windows. indelliaGPT™ uses the pinned boundaries for its comparison answers.
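A sketch of how pinned windows might be defined, as simple date pairs. The names and the 90-day span are assumptions, not Indellia's configuration surface:

```python
from datetime import date, timedelta

launch = date(2024, 9, 10)
windows = {
    "Q3":          (date(2024, 7, 1), date(2024, 9, 30)),  # calendar quarter
    "pre-launch":  (launch - timedelta(days=90), launch),  # launch-relative
    "post-launch": (launch, launch + timedelta(days=90)),
}

def in_window(d, name):
    start, end = windows[name]
    return start <= d <= end
```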
Map scores to SKU and period.
If you track NPS per SKU per quarter, connect the scorecard (Google Sheet, Snowflake, or survey export). Indellia ties the score context to the theme analysis.
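A minimal sketch of the scorecard mapping, assuming the export reduces to (SKU, period, score) rows; the structure is hypothetical:

```python
# Hypothetical scorecard export reduced to (sku, period, score) rows.
scorecard = [
    ("Model 7", "Q3", 47),
    ("Model 7", "Q4", 32),
]

score = {(sku, period): s for sku, period, s in scorecard}

# The score delta frames the theme comparison: here, a 15-point drop.
delta = score[("Model 7", "Q4")] - score[("Model 7", "Q3")]
print(delta)  # -15
```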
Share indelliaGPT™ access with exec staff.
Exec-facing questions get answered by the person in the meeting, not in a follow-up email. The MCP Server exposes the same interface to Claude, ChatGPT, and Cursor for analysts who prefer those tools.
Frequently asked questions.
Can Indellia calculate NPS?
Indellia's in-product score is called Customer Recommendation Score and uses the standard 0–10 promoter/detractor framework. Editorial pages like this one reference NPS in its industry-standard sense, with trademark attribution; the product itself uses the Customer Recommendation Score name to respect the NPS trademark held by Bain & Company, Satmetrix Systems, and Fred Reichheld.
How far back does comparison go?
As far back as your ingested corpus goes. For new Indellia accounts, backfill to 24 months is available on the Mid-Market plan. For ongoing comparison, rolling 90-day, 180-day, and 365-day windows are standard.
What counts as a "driver" of a score drop?
A theme that either (a) grew materially in volume period-over-period, (b) saw aspect-level sentiment deteriorate materially, or (c) emerged as new in the later period. Indellia ranks candidate drivers by correlation with the score movement and returns the top contributors with their supporting reviews.
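As a rough sketch, the three criteria reduce to a classification over per-theme counts and sentiment. The thresholds and data shapes are illustrative, and the correlation-based ranking step is omitted:

```python
def candidate_drivers(prev, curr, vol_ratio=2.0, sent_drop=10.0):
    """Classify themes against the three driver criteria.

    prev / curr map theme -> (mention_count, mean_sentiment);
    thresholds are illustrative, not Indellia's documented defaults.
    """
    drivers = []
    for theme, (count, sentiment) in curr.items():
        prev_count, prev_sentiment = prev.get(theme, (0, None))
        if prev_count == 0 and count > 0:
            drivers.append((theme, "newly emerged"))            # criterion (c)
        elif prev_count and count >= vol_ratio * prev_count:
            drivers.append((theme, "volume growth"))            # criterion (a)
        elif prev_sentiment is not None and prev_sentiment - sentiment >= sent_drop:
            drivers.append((theme, "sentiment deterioration"))  # criterion (b)
    return drivers
```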
How does this differ from a driver analysis in a survey tool?
Survey tools run driver analysis on the survey-internal variables (rating per question, demographic segment). Indellia's driver analysis runs on the combined corpus — surveys plus reviews plus tickets — and uses theme deltas rather than only rating deltas. The goal is language, not variance.
Have a specific question?
Indellia's AI agents answer with citations from real customer feedback across Amazon, Walmart, Best Buy, and 20+ retail channels.
Answer "why did the score drop" in the meeting, not after it.
Connect surveys, reviews, and tickets. Ask indelliaGPT™ for driver answers with cited reviews.
Net Promoter, NPS, and Net Promoter Score are registered trademarks of Bain & Company, Inc., Satmetrix Systems, Inc., and Fred Reichheld.