Track customer sentiment across retail channels.
Amazon sentiment and Walmart sentiment tell different stories about the same product. See both — plus Best Buy, Costco, Lowe's, and Bazaarvoice — in one view, at SKU level.
The short answer
Tracking sentiment across retail channels means normalizing review sentiment from every retailer where a brand sells — Amazon, Walmart, Best Buy, Costco, Lowe's, Target, Bazaarvoice — to a single taxonomy, at SKU level. The goal is to see where sentiment diverges between channels (often a listing or merchandising signal) versus where it moves together (usually a product signal). Channel-by-channel star averages miss this distinction; aspect-based sentiment with a shared taxonomy surfaces it.
The job.
Consumer Insights and Marketing teams at brands selling through retail share a version of this conversation monthly: "Amazon reviews say X, Walmart reviews say Y, and we can't tell if that's because they're different customers, different listing quality, or because the actual product is being experienced differently." The tools they use — Bazaarvoice dashboards, Amazon Seller Central, separate Walmart reports — show each channel in isolation.
The job is to bring every retail review channel into one view, normalized to the same themes and the same SKUs, so you can answer "is this a product issue or a channel issue" in seconds rather than days. That answer determines whether the fix goes to R&D, to listing ops, or to retailer-specific merchandising.
Why it's hard today.
- Each retailer has a different dashboard. Amazon Brand Analytics, Walmart Item Performance, Bazaarvoice ConneX, Best Buy partner portal — four separate interfaces, four different taxonomies.
- Star ratings aren't comparable across retailers. A 4.2 on Walmart and a 4.2 on Amazon mean different things because reviewer bases and rating distributions differ by retailer.
- Review IDs are retailer-specific. The same review syndicated through Bazaarvoice to five retailers counts five times without deduplication.
- Themes are inconsistent. Without a shared taxonomy, "battery life" on Amazon and "battery performance" on Walmart don't aggregate.
- Internal SKU mapping is manual. ASINs, Walmart Item IDs, Best Buy SKUs, Costco item numbers — each tied to your internal Model# only via a spreadsheet a PM maintains on the side.
How Indellia does this job.
Native ingestion from 20+ retail channels.
Amazon, Walmart, Best Buy, Costco, Lowe's, Target, Home Depot, and Bazaarvoice-powered retailer pages are all native connectors. Reviews flow in continuously with their original identifiers preserved. Bazaarvoice-syndicated reviews are deduplicated on ingestion using Bazaarvoice's review ID so a single review doesn't count five times.
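The deduplication step can be pictured as a first-occurrence filter keyed on the syndication ID. This is an illustrative sketch, not Indellia's implementation; the field names (`bv_review_id`, `retailer`) are assumptions.

```python
def dedupe_syndicated(reviews):
    """Keep one copy of each Bazaarvoice-syndicated review.

    Syndicated reviews share a `bv_review_id` across retailers;
    reviews without one (e.g. Amazon) are independent and always kept.
    """
    seen = set()
    unique = []
    for review in reviews:
        bv_id = review.get("bv_review_id")
        if bv_id is None:           # not syndicated: keep as-is
            unique.append(review)
        elif bv_id not in seen:     # first occurrence wins
            seen.add(bv_id)
            unique.append(review)
    return unique

reviews = [
    {"retailer": "walmart", "bv_review_id": "bv-1", "text": "Great speaker"},
    {"retailer": "target",  "bv_review_id": "bv-1", "text": "Great speaker"},
    {"retailer": "amazon",  "bv_review_id": None,   "text": "Solid build"},
]
# One syndicated review plus one independent review survive.
```

The same review syndicated to five retailer pages collapses to one row, so per-retailer volume comparisons stay honest.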
One shared taxonomy, one shared SKU.
The Theme Agent applies one taxonomy to every channel. "Battery life" on Amazon and "battery performance" on Walmart resolve to the same theme. The SKU Agent ties every review — regardless of which retailer it came from — to the same Model# record, so one product accumulates feedback across channels.
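Conceptually, the taxonomy works like an alias table: retailer-specific phrasings resolve to one canonical theme, and anything unmapped is surfaced as emerging rather than silently dropped. A minimal sketch, with an assumed alias table (the real Theme Agent clusters rather than using a hand-written map):

```python
# Hypothetical alias table: each retailer-specific phrasing resolves
# to one canonical theme so counts aggregate across channels.
THEME_ALIASES = {
    "battery life": "battery_life",
    "battery performance": "battery_life",
    "sound": "sound_quality",
    "sound quality": "sound_quality",
}

def canonical_theme(raw_theme):
    """Map a raw theme label to the shared taxonomy; unmapped labels
    are tagged as emerging instead of being discarded."""
    key = raw_theme.lower().strip()
    return THEME_ALIASES.get(key, "emerging:" + key)
```

With this resolution in place, "battery performance" on Walmart and "battery life" on Amazon increment the same counter.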
Channel-by-channel sentiment comparison.
A single Model# view shows aspect-based sentiment per channel side by side. If "setup" is 22% negative on Amazon but 9% negative on Walmart, that's a listing-copy or retailer-specific signal, not a product signal. The view makes the distinction visible without a separate investigation.
Anomaly Agent per channel.
The Anomaly Agent watches each theme-channel-SKU combination and alerts on deviation. A specific theme starting to rise on a single channel often indicates a channel-specific root cause (a new listing photo, a merchandising change, a competitor launch on that shelf). The alert tells you which channel to investigate.
A day doing this job with Indellia.
Monday brief. A Brand Manager opens the Model 12 view in Indellia. Aspect-based sentiment for "sound quality" is +88 on Amazon, +74 on Best Buy, +31 on Walmart. The same product, three retailers, very different sentiment on the same feature. She clicks through the Walmart cohort. The supporting reviews cluster around one phrase — "muddy on streaming" — that doesn't appear in Amazon or Best Buy reviews. She pulls the Walmart product listing. The listing uses a different product photo than Amazon (different angle, different lighting that makes the speakers look larger than they are) and a description that doesn't mention the Clear Voice DSP mode. She flags the listing to the Walmart merchandising team with the specific reviews attached, asks for a photo swap and description update, and moves to the next SKU on her list. It's 9:42 AM.
What you'll need to set up.
Connect every channel you sell on.
Amazon, Walmart, Best Buy, Costco, Lowe's, Target, Home Depot, Bazaarvoice. Each connection takes about five minutes. Don't leave any out — missing channels create blind spots.

Map retailer IDs to Model#.
One CSV, updated as SKUs launch or retire. The SKU Agent handles normalization from there.
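The mapping file can be as simple as one row per retailer-specific identifier. An illustrative sketch with assumed column names (`model_number`, `retailer`, `retailer_id`) and made-up IDs:

```python
import csv
from io import StringIO

# Example mapping CSV: one row per retailer identifier for each Model#.
SKU_MAP_CSV = """model_number,retailer,retailer_id
M12,amazon,B0EXAMPLE1
M12,walmart,123456789
M12,bestbuy,6501234
"""

def load_sku_map(csv_text):
    """Build {(retailer, retailer_id): model_number} so every incoming
    review can be joined to one internal product record."""
    return {
        (row["retailer"], row["retailer_id"]): row["model_number"]
        for row in csv.DictReader(StringIO(csv_text))
    }
```

Any review tagged with a retailer and its native ID then resolves to the same Model#, regardless of which channel it came from.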
Pin the theme taxonomy.
The Theme Agent auto-clusters, but for cross-channel comparison you want a stable taxonomy. Pin the top 10–20 themes once and let new themes surface as emerging.
Set up channel-divergence alerts.
Anomaly Agent alerts when one channel diverges materially from the others on the same theme-SKU pair. These are the signals worth investigating.
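A divergence check can be sketched as: for each channel, compare its negative-sentiment share on a theme-SKU pair against the average of the other channels, and flag it when the gap exceeds a margin. This is an illustrative rule, not the Anomaly Agent's actual model; the 10-point margin is an assumption.

```python
def divergent_channels(neg_share, margin=0.10):
    """Flag channels whose negative-sentiment share differs from the
    average of the other channels by more than `margin` (share points)."""
    flagged = []
    for channel, share in neg_share.items():
        others = [v for c, v in neg_share.items() if c != channel]
        if others and abs(share - sum(others) / len(others)) > margin:
            flagged.append(channel)
    return flagged

# The "setup" example from above: 22% negative on Amazon vs ~9-10%
# elsewhere flags Amazon as the channel to investigate.
alerts = divergent_channels({"amazon": 0.22, "walmart": 0.09, "bestbuy": 0.10})
```

The flagged channel is where the listing or merchandising investigation starts; no flag across channels points back at the product.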
Frequently asked questions
How is sentiment normalized across different retailers?
Indellia applies aspect-based sentiment with one shared taxonomy to every channel, so "battery life" on Amazon and "battery performance" on Walmart resolve to the same theme. Star ratings are reported per retailer but aren't used as the primary signal — aspect-level sentiment with confidence bands is the measure that's comparable across retailers.
How do you handle Bazaarvoice-syndicated reviews?
Bazaarvoice syndicates the same review to multiple retailers (Walmart, Target, Home Depot, Lowe's, and others). Counting each occurrence inflates volume and distorts per-retailer comparison. Indellia deduplicates on ingestion using Bazaarvoice's review ID, so one review counts once. Amazon is not in the Bazaarvoice network, so its reviews are independent.
What does channel divergence usually mean?
If a theme's sentiment is materially different on one channel versus others for the same SKU, the cause is usually listing-specific (different photo, different description, different price point) or merchandising-specific (a recent competitor launch on the same shelf). If sentiment moves together across every channel, the cause is usually the product itself — which routes differently.
Do I need a separate dashboard for each team?
No. The same cross-channel view serves Marketing, Insights, Product, and CX with different default filters. Everyone sees the same data, filtered to their concerns — which prevents the "four different data pictures in one meeting" problem consumer brand teams hit when each function runs its own tool.
Have a specific question?
Indellia's AI agents answer with citations from real customer feedback across Amazon, Walmart, Best Buy, and 20+ retail channels.
One view. Every channel. SKU-level sentiment.
Connect Amazon, Walmart, Best Buy, Costco, Lowe's, Target, and Bazaarvoice in about thirty minutes. See divergence where it matters.