
Sample Customer Feedback Analysis Report.

A full sample output based on a plausible scenario — 1,420 pieces of feedback across reviews, support tickets, and survey responses for a fictional robot vacuum.

What this sample shows

A fictional HomeSweep Robot Vacuum 500 with 1,420 feedback rows from three sources — reviews (62%), support tickets (28%), and surveys (10%). The analysis surfaces eight themes ranked by volume and sentiment, a source-distribution donut, a 90-day volume trend for the top theme, and a cross-channel insight showing that "app pairing" is the strongest candidate for action because it appears across all three sources at high volume.

Report · Cross-source · Feedback analysis

HomeSweep Robot Vacuum 500

1,420 rows · 3 sources · 90-day window

Sample — illustrative data

Source distribution

1,420 rows analyzed
Reviews · 62% · 880 rows
Support tickets · 28% · 398 rows
Surveys · 10% · 142 rows

Volume trend — top theme "app pairing"

[Line chart: app-pairing rows per day (0–60), from day −90 to today]

App-pairing complaints rose from 6 rows/day to 57 rows/day over 90 days — a 9.5× rise. The Anomaly Agent would have flagged this around day 60.
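
As an illustration of how a rising-volume check like this can work, the sketch below flags the first day on which a 7-day average exceeds twice the trailing 30-day baseline. The window sizes, the 2x threshold, and the synthetic series are assumptions made for this example; they are not the Anomaly Agent's actual logic.

```python
from statistics import mean

def first_anomaly_day(daily_counts, recent=7, baseline=30, ratio=2.0):
    """Return the first index where the recent-window mean is at least
    `ratio` times the trailing-baseline mean, or None if it never is."""
    for day in range(baseline + recent, len(daily_counts)):
        base = mean(daily_counts[day - baseline - recent : day - recent])
        now = mean(daily_counts[day - recent : day])
        if base > 0 and now >= ratio * base:
            return day
    return None

# Illustrative series: roughly 6 rows/day for 50 days, then climbing to 57.
series = [6] * 50 + [6 + round(51 * i / 39) for i in range(40)]
print(first_anomaly_day(series))  # -> 59, i.e. roughly the day-60 mark above
```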

Theme breakdown

Theme · Total rows · Reviews · Tickets · Surveys · Net sentiment
App pairing (emerging) · 318 · 186 · 117 · 15 · −62
Suction power · 246 · 212 · 18 · 16 · +72
Mapping accuracy · 201 · 164 · 24 · 13 · +41
Battery life · 172 · 95 · 51 · 26 · −18
Customer support · 132 · 42 · 78 · 12 · −44
Noise level · 128 · 112 · 4 · 12 · +12
Brush maintenance · 114 · 38 · 68 · 8 · −29
Price value · 109 · 31 · 38 · 40 · +24
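
The net-sentiment column is not defined in the report itself. A common convention, assumed in the sketch below, is the share of positive rows minus the share of negative rows on a −100 to +100 scale; the positive/negative/neutral split used here is hypothetical and chosen only so the result matches the −62 shown for app pairing.

```python
def net_sentiment(positive: int, negative: int, neutral: int = 0) -> int:
    """Net sentiment on a -100..+100 scale: % positive minus % negative."""
    total = positive + negative + neutral
    return round(100 * (positive - negative) / total) if total else 0

# Hypothetical split for the 318 app-pairing rows:
print(net_sentiment(positive=45, negative=242, neutral=31))  # -> -62
```
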
Cross-channel insight

"App pairing" shows up across all three sources.

The app-pairing theme represents 318 of 1,420 total rows (22%) and appears in reviews (58% of the theme), tickets (37%), and surveys (5%). That cross-source signature is the strongest indicator that this is a real product issue rather than a single-channel artifact. Themes that appear in only one source are typically platform-specific (e.g., "customer support" shows up in tickets far more than in reviews because upset customers contact support rather than write a review). Themes that appear everywhere are candidates for engineering action.
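
A minimal sketch of that cross-source check, assuming each feedback row carries a theme label and a source field (both field names are illustrative): count rows per theme, record which sources each theme appears in, and keep the themes that appear in every source and clear an overall volume floor.

```python
from collections import Counter, defaultdict

def cross_channel_themes(rows, sources=frozenset({"review", "ticket", "survey"}),
                         min_rows=50):
    """Themes that appear in every source and clear a minimum total volume."""
    counts = Counter()
    seen = defaultdict(set)
    for row in rows:
        counts[row["theme"]] += 1
        seen[row["theme"]].add(row["source"])
    return [theme for theme, n in counts.most_common()
            if n >= min_rows and seen[theme] == sources]

# Tiny demo: a theme spread across all three sources passes, a ticket-only theme does not.
demo = (
    [{"theme": "app pairing", "source": s} for s in ("review", "ticket", "survey")] * 20
    + [{"theme": "customer support", "source": "ticket"}] * 60
)
print(cross_channel_themes(demo))  # -> ['app pairing']
```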

Methodology

How Indellia built this sample.

The sample is a fabricated scenario designed to illustrate realistic cross-source analysis. The product name (HomeSweep Robot Vacuum 500) is fictional. The 1,420 rows, source mix, and theme distribution are plausible for a mid-volume consumer product six months after launch — enough tickets to matter, enough reviews to establish baseline sentiment, enough survey signal to confirm themes.

The analysis engine — theme clustering via the Theme Agent, aspect-based sentiment scoring, and anomaly detection — is identical to the live Indellia platform. When you submit a real CSV through the Customer Feedback Analysis Tool, the output format and analysis depth match what you see here.
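
A rough sketch of that pipeline shape is below. The CSV column names, the −1/0/+1 per-row sentiment scale, and the two stand-in callables are assumptions for illustration; the actual Theme Agent and sentiment scoring are not shown here.

```python
import csv
from collections import Counter, defaultdict

def analyze(path, assign_theme, score_sentiment):
    """Aggregate a feedback CSV into per-theme volume, source mix, and net sentiment.

    `assign_theme(text) -> str` and `score_sentiment(text) -> -1 | 0 | 1` stand in
    for the theme-clustering and sentiment steps, which a real system would do
    with dedicated models rather than simple callables."""
    volume = Counter()
    by_source = defaultdict(Counter)
    scores = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):        # assumed columns: text, source, date
            theme = assign_theme(row["text"])
            volume[theme] += 1
            by_source[theme][row["source"]] += 1
            scores[theme].append(score_sentiment(row["text"]))
    return {
        theme: {
            "rows": n,
            "sources": dict(by_source[theme]),
            "net_sentiment": round(100 * sum(scores[theme]) / n),
        }
        for theme, n in volume.most_common()
    }
```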

The cross-channel insight framework — looking for themes that appear across multiple sources, which are a stronger signal than single-source themes — is described in the how-to-analyze-customer-feedback guide.

Ask Indellia

Have a specific question?

Indellia's AI agents answer with citations from real customer feedback across Amazon, Walmart, Best Buy, and 20+ retail channels.

Run this on your real data

A CSV is a snapshot. Your feedback is continuous.

Connect Zendesk, Amazon, Typeform, and 20+ other sources. See themes update as feedback arrives, with Slack alerts when something starts rising.