
MCP for voice of customer: connecting feedback to Claude, ChatGPT, and Cursor.

Model Context Protocol (MCP) is the plumbing that lets AI tools query external data sources. For voice of customer teams, it means your PM can ask Claude "what are customers saying about Model 7 batteries?" and get an answer grounded in your actual feedback corpus. This guide covers what MCP is, why it matters for VoC, how the Indellia MCP Server works, and how to connect.

Reading time · 10 min · Format · Technical · Updated · April 2026

The short answer

Model Context Protocol (MCP) is an open specification for connecting AI assistants to external data sources and tools. An MCP server exposes a set of capabilities (data queries, tool calls) that an MCP-compatible client — Claude Desktop, ChatGPT, Cursor — can use during a conversation. For voice of customer teams, an MCP server means the brand's feedback corpus becomes queryable from within the AI tools the team already uses, without exporting data or switching contexts.

What MCP is

Model Context Protocol is an open specification originally introduced in late 2024 for connecting AI assistants to external data and tools. It standardizes how a model (the client) asks an external system (the server) to retrieve data, call functions, or provide context during a conversation.

Before MCP, each AI tool had its own plugin system, each with separate authentication, rate limits, and developer experience. MCP consolidates these into one protocol — a data source published once as an MCP server can be consumed by any MCP-compatible client.
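
Under the hood, MCP messages are JSON-RPC 2.0, exchanged over stdio (for local servers) or HTTP. When a client connects, it can discover what a server offers with a `tools/list` request, and the server replies with tool descriptors. A sketch of that exchange — the message shape follows the MCP specification, but the tool shown is illustrative, not a documented Indellia endpoint:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{ "jsonrpc": "2.0", "id": 1, "result": {
    "tools": [{
      "name": "search_feedback",
      "description": "Natural-language search across the feedback corpus",
      "inputSchema": {
        "type": "object",
        "properties": { "query": { "type": "string" } },
        "required": ["query"]
      }
    }]
} }
```

The client reads these descriptors and decides, mid-conversation, when a user's question warrants calling one of the tools.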

As of Q1 2026, MCP-compatible clients include Claude Desktop, Claude on the web (via settings), ChatGPT Enterprise connectors, Cursor, Zed, and several other AI-first developer tools. Compatibility is expanding monthly.

Why MCP matters for voice of customer teams

Three reasons.

It eliminates the context switch. Without MCP, a PM who wants to ask a question about feedback either logs into the VoC platform and clicks around, or exports a dataset and uploads it to an AI tool. Both break the flow of analytical work. With MCP, the question goes directly into Claude or ChatGPT, and the answer comes back from the feedback corpus — in the same conversation where the PM is doing other work.

It grounds the LLM. Free-form LLMs hallucinate. A PM asking Claude "what are reviews saying about Model 7 batteries?" without MCP will get a plausible-sounding made-up answer. With the Indellia MCP Server connected, Claude queries the actual feedback corpus and returns an answer grounded in real reviews with citations. The risk of hallucination drops meaningfully.

It democratizes access. MCP makes feedback queryable from tools a wider set of people already use. A QA engineer who never opens the VoC dashboard can query feedback from Cursor while writing a diagnostic workbook. A CMO can query from Claude Desktop while prepping a board deck.


The Indellia MCP Server

The Indellia MCP Server exposes the brand's feedback corpus as an MCP-compatible data source. Capabilities exposed:

  • Search feedback — natural-language search across the full corpus, returning relevant records with citations.
  • Filter by SKU — scope queries to specific SKUs or Model#s.
  • Filter by theme — scope queries to specific themes from the Theme Agent's taxonomy.
  • Filter by channel, date range, sentiment, rating — standard dimensions.
  • Anomaly retrieval — pull the current week's anomalies from the Anomaly Agent.
  • Record detail — retrieve the full text and metadata of a specific review, ticket, or return.
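
Because these capabilities are exposed as MCP tools, a client invokes them with a `tools/call` request. A hedged sketch of what a SKU-scoped search might look like on the wire — the tool and argument names here are assumptions for illustration; the real schema comes from the server's own `tools/list` response:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_feedback",
    "arguments": {
      "query": "battery life complaints",
      "sku": "Model 7",
      "date_range": "last_30_days"
    }
  }
}
```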

The local server process authenticates to Indellia's API with a token generated in the Indellia web app. Queries run with the same permissions as the user who generated the token.

Connecting Claude Desktop

Step 1 — Generate an Indellia MCP token in the Indellia web app under Settings → Integrations → MCP.
Step 2 — Open Claude Desktop's configuration file: ~/Library/Application Support/Claude/claude_desktop_config.json on macOS, or %APPDATA%\Claude\claude_desktop_config.json on Windows.
Step 3 — Add the Indellia MCP entry:

{
  "mcpServers": {
    "indellia": {
      "command": "npx",
      "args": ["-y", "@indellia/mcp-server"],
      "env": {
        "INDELLIA_TOKEN": "your-token-here"
      }
    }
  }
}

Restart Claude Desktop. In a new conversation, Indellia capabilities will be available. Ask a question and Claude will query the feedback corpus automatically when relevant.

Connecting ChatGPT

ChatGPT supports MCP via its Enterprise and Team connectors interface. In ChatGPT Enterprise settings, add a new MCP connector. Provide the Indellia MCP Server URL (available in the Indellia web app alongside your token) and the token itself.

ChatGPT will present the MCP capabilities in conversations where the connector is enabled. Scoping tends to work better when the user enables the connector at the start of a conversation rather than mid-stream.

Connecting Cursor

Cursor's MCP configuration lives in Settings → Features → MCP Servers. Click Add new MCP server and populate:

Name: indellia
Type: command
Command: npx -y @indellia/mcp-server
Env:
  INDELLIA_TOKEN=your-token-here

Reload Cursor. The indellia server shows up in the MCP panel. Cursor's composer can now query feedback mid-workflow — useful for QA engineers writing test plans or PMs drafting product requirements.
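
If you prefer file-based configuration, recent Cursor builds also read an mcp.json file (project-level at .cursor/mcp.json, or a global one in your home directory) using the same mcpServers shape as Claude Desktop. Verify against your Cursor version's documentation before relying on this:

```json
{
  "mcpServers": {
    "indellia": {
      "command": "npx",
      "args": ["-y", "@indellia/mcp-server"],
      "env": { "INDELLIA_TOKEN": "your-token-here" }
    }
  }
}
```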

Example queries

The quality of MCP-enabled feedback queries depends on how specific the question is. Examples that work well:

  • "What are reviewers saying about Model 7 battery life over the last 30 days?"
  • "Compare sentiment on the Lumix camera lineup between Amazon and Best Buy for Q1."
  • "Find the top three themes driving negative reviews on ASIN B0CH7K2LNP."
  • "Show me any reviews mentioning 'firmware' from the last two weeks across all SKUs."
  • "What's the trend in sentiment on the coffee-maker category year-over-year?"
  • "Pull the current week's anomalies. Focus on the three most severe."

Questions that work less well are the ones that ask the LLM to reason beyond what's in the feedback ("should we launch the Model 8?"). MCP grounds answers in the corpus; it doesn't turn the LLM into a strategy consultant.
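
For reference, grounded answers arrive as MCP tool results — a content array the client weaves into its reply. A purely illustrative sketch of the shape (the envelope follows the MCP spec; the answer text and citation format are invented for this example):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [{
      "type": "text",
      "text": "14 of 62 reviews in the last 30 days mention battery drain on the Model 7. [review R-10482, Amazon, 2026-03-11]"
    }],
    "isError": false
  }
}
```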

Security considerations

Four things to understand before deploying MCP in your organization.

Token scoping. The Indellia MCP token carries the permissions of the user who generated it. A read-only user generates a read-only token; an admin generates an admin token. For most use cases, generate read-only tokens.

Local vs remote execution. The Indellia MCP server runs locally on the user's machine (via npx), so feedback data flows through the local process before reaching Claude/ChatGPT/Cursor. No new network path opens from Indellia to the AI vendor; the connection to Indellia's API originates from the user's own machine.

Prompt injection. MCP servers expose data; data can contain instructions. A malicious review could in theory include text designed to manipulate the LLM. Indellia filters and sanitizes responses, but this is an active area of research in AI security. Be thoughtful about exposing writeable MCP endpoints to LLMs.

Audit. Every MCP query Indellia serves is logged and visible in the web app's audit log, with user, query, and response metadata. Useful for compliance and for investigating surprising behavior.

Try the Indellia MCP Server. Available to every SME and Mid-Market customer on the free trial. Connect Claude Desktop in under five minutes.

Frequently asked questions

What is MCP (Model Context Protocol)?

Model Context Protocol is an open specification for connecting AI assistants to external data sources and tools. An MCP server exposes capabilities (data queries, tool calls) that an MCP-compatible client — Claude Desktop, ChatGPT, Cursor, Zed, and others — can use during a conversation. It standardizes what used to be a fragmented plugin ecosystem.

Why does MCP matter for voice of customer teams?

Three reasons. It eliminates the context switch — questions go directly into the AI tool and answers come back from the feedback corpus. It grounds the LLM — answers cite real reviews rather than hallucinating. It democratizes access — people who wouldn't open a VoC dashboard can query feedback from tools they already use.

How does the Indellia MCP Server work?

The Indellia MCP Server exposes the brand's feedback corpus as an MCP-compatible data source. Capabilities include natural-language search, SKU filtering, theme filtering, channel/date/sentiment/rating dimensions, anomaly retrieval, and full-record detail. Authentication uses a token generated in the Indellia web app; queries run with the permissions of the user who generated the token.

What AI tools support MCP?

As of Q1 2026, MCP-compatible clients include Claude Desktop, Claude on the web (via settings), ChatGPT Enterprise connectors, Cursor, Zed, and several other AI-first developer tools. The list is expanding monthly. Any brand using one of these tools can connect the Indellia MCP Server.

Is MCP secure for production use?

With care. Token scoping matters — generate read-only tokens for most use cases. Local execution reduces network attack surface. Prompt injection is an active concern whenever LLMs receive data that contains instructions; Indellia sanitizes responses, but operators should be thoughtful about exposing writeable endpoints. Every MCP query is logged in the Indellia audit log.

Does MCP cost extra?

No. The Indellia MCP Server is included in both the $495/month SME and $1,995/month Mid-Market tiers at no additional charge. Usage is unmetered like the rest of the platform.

Ask Indellia

Have a specific question?

Indellia's AI agents answer with citations from real customer feedback across Amazon, Walmart, Best Buy, and 20+ retail channels.

Get started

Query feedback from Claude, ChatGPT, and Cursor.

The Indellia MCP Server is included in every plan. Five-minute setup, grounded answers, citations on every response.