Elasticsearch MCP Server
Search documents, aggregate results, and inspect indexes in Elasticsearch from Claude Code and Cursor over stdio MCP.
Updated: April 15, 2026
Install
```json
{
  "mcpServers": {
    "elasticsearch-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-server-elasticsearch"],
      "env": {
        "ELASTICSEARCH_URL": "http://localhost:9200",
        "ELASTICSEARCH_API_KEY": "optional_api_key"
      }
    }
  }
}
```
Capabilities
- Run full-text `match` and `query_string` searches against any index
- Apply filters, term queries, and range constraints on search results
- Execute aggregations for counts, histograms, and cardinality estimates
- List indexes, mappings, and alias definitions across the cluster
- Fetch a document by ID or by a custom query path
- Check cluster health status and shard allocation state
Limitations
- Complex aggregations and nested bool queries need Elasticsearch DSL knowledge; the model often misplaces brackets
- No index or mapping management; templates, ILM policies, and reindex operations are not exposed
- No snapshot or lifecycle control; cluster maintenance still happens through Kibana or the HTTP API
- Cluster health issues surface as read data but are not auto-remediated by the tool
Elasticsearch MCP server setup for Claude Code and Cursor
Quick answer: The Elasticsearch MCP server is a Node process that wraps the Elasticsearch HTTP API as MCP tools. Point it at a cluster URL, optionally pass an API key, and Claude Code or Cursor can run search queries, read indexes, and check cluster health. Setup runs about 3 minutes, tested on Elasticsearch 8.12 with mcp-server-elasticsearch@0.3.0 on April 15, 2026.
Searching across a production Elasticsearch cluster usually means flipping between Kibana Dev Tools and a scratch notes file. The MCP server replaces that loop. When Claude drafts a query, it runs against the cluster directly, returns the hits, and then refines the query based on what came back. For anyone who debugs data in Elasticsearch more than once a week, that alone is worth the setup time.
This guide covers install, config for both editors, query patterns that work, and the places where Elasticsearch DSL trips the model up.
What this server does
The server speaks MCP over stdio and wraps the official @elastic/elasticsearch Node client. It exposes roughly 10 tools:
- Search: `search`, `msearch`, `count`
- Documents: `get_document`, `mget`
- Schema: `list_indexes`, `get_mapping`, `get_settings`
- Cluster: `cluster_health`, `cluster_stats`
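The exact argument schema varies by release, but a typical `search` tool call from the model looks roughly like this. The `index` and `body` field names are an assumption about the tool's schema; the inner query is standard Elasticsearch DSL:

```json
{
  "index": "logs-*",
  "body": {
    "query": { "match": { "message": "timeout" } },
    "size": 10
  }
}
```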
Authentication is optional for local dev clusters and required for anything cloud-hosted. Pass an API key via ELASTICSEARCH_API_KEY or basic auth via ELASTICSEARCH_USERNAME and ELASTICSEARCH_PASSWORD. The server holds credentials in process memory and never writes them to disk.
Installing the Elasticsearch MCP server
The package is on npm as mcp-server-elasticsearch. The npx -y prefix fetches on first launch. Cold pull is around 9 MB, mostly driven by the Elasticsearch client dependency tree. Expect 4 seconds of startup on first run, 2 seconds on subsequent runs.
You need the cluster URL (including scheme and port) and, for production, an API key with the `monitor` cluster privilege and `read` access on the indexes you care about. For a local Docker cluster on http://localhost:9200, no auth is needed.
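Kibana's Security UI can generate such a key, or you can mint one with a `POST /_security/api_key` request. The key name and index pattern below are placeholders; `view_index_metadata` is included so mapping lookups work:

```json
{
  "name": "mcp-readonly",
  "role_descriptors": {
    "mcp_read": {
      "cluster": ["monitor"],
      "indices": [
        {
          "names": ["logs-*"],
          "privileges": ["read", "view_index_metadata"]
        }
      ]
    }
  }
}
```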
Configuring for Claude Code
Claude Code reads MCP servers from ~/.claude/mcp.json or a per-project .mcp.json. Add an elasticsearch entry pointing at your cluster:
```json
{
  "mcpServers": {
    "elasticsearch": {
      "command": "npx",
      "args": ["-y", "mcp-server-elasticsearch"],
      "env": {
        "ELASTICSEARCH_URL": "http://localhost:9200",
        "ELASTICSEARCH_API_KEY": "optional_api_key"
      }
    }
  }
}
```
Restart Claude Code and run `/mcp`. Call `cluster_health` once as a smoke test. A green or yellow status and a shard count confirm the wiring. A red status means the cluster itself has issues, which the MCP server cannot fix.
For Elastic Cloud clusters, paste the full endpoint URL from the deployment page. It looks like https://abcdef.es.us-east-1.aws.found.io:9243. API keys generated in Kibana paste in as a single base64 string.
Configuring for Cursor
Cursor reads from ~/.cursor/mcp.json. The JSON is identical to the Claude Code config. Toggle the server on from the Cursor settings MCP tab.
Cursor spawns the subprocess lazily on the first tool call. Expect 4 seconds of cold start and 150 to 400 ms per subsequent query depending on cluster proximity. For clusters in another region, factor in cross-region network latency.
Example prompts and workflows
A few prompts that work well:
- "List every index in the cluster and group them by alias."
- "Search the `logs-*` index for `level:error AND service:checkout` in the last hour and show the top 10 hits by timestamp."
- "Run a terms aggregation on the `host.name` field across `metricbeat-*` and give me the top 20 hosts."
- "Get the document with ID `abc123` from the `products` index and show the raw JSON."
- "Describe the mapping for the `events` index and tell me which fields are of type keyword."
The model will chain calls. A read flow usually runs list_indexes, then get_mapping to see fields, then search with a match or term query scoped to the right field type. Telling the model the index name up front cuts the round trip.
One caveat: the DSL has traps. Match queries run on analyzed text fields, term queries on keyword fields. If Claude writes a term query against an analyzed text field, it returns zero hits and then rewrites the query 3 times before landing on match. Priming with "field X is a keyword" saves that loop.
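In DSL terms, the working pattern looks like this. The field names are illustrative: `message` is assumed to be an analyzed `text` field, and `service.keyword` an exact-match `keyword` sub-field:

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "connection reset" } }
      ],
      "filter": [
        { "term": { "service.keyword": "checkout" } },
        { "range": { "@timestamp": { "gte": "2026-04-15T00:00:00Z" } } }
      ]
    }
  }
}
```

`match` analyzes its input against the analyzed field, `term` compares exact bytes against the keyword field, and the range filter uses an ISO 8601 timestamp.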
Troubleshooting
Tool call returns 401. The API key is wrong or missing. Regenerate under Kibana Security > API keys and paste the full base64 string into the env var, then restart the server.
Tool call returns 403. The API key lacks privilege on the index. Edit the key in Kibana and grant read on the target index pattern. Document-level security can also block reads even when the index permission is set.
Tool call returns 429 on a large search. The cluster is throttling. Reduce the size parameter, narrow the time range, or rewrite as an aggregation that returns summarized counts instead of raw hits.
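Rewriting as an aggregation usually means setting `size: 0` so no raw hits come back, only the summary. Field names here are placeholders:

```json
{
  "size": 0,
  "query": { "range": { "@timestamp": { "gte": "now-1h" } } },
  "aggs": {
    "errors_per_service": {
      "terms": { "field": "service.keyword", "size": 20 }
    }
  }
}
```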
Search succeeds but returns hits: []. The query is syntactically valid but matches nothing. Run get_mapping on the index and check field types. Analyzed text fields need match, not term. Date ranges need ISO 8601 timestamps.
Connection refused on local cluster. The cluster is not running or bound to a different interface. Confirm with curl http://localhost:9200 from a terminal. If that works, the MCP subprocess may not have the env var applied. Restart the editor from a terminal to pick up the shell env.
Alternatives
A few options if the Elasticsearch server does not fit:
- `opensearch-mcp` targets OpenSearch forks (AWS OpenSearch Service, self-hosted OpenSearch) with a near-identical tool surface.
- `loki-mcp` handles log search for teams on Grafana Loki instead of the Elastic stack.
- `clickhouse-mcp` is faster for time-series analytics and structured event data, though it misses full-text search features.
For one-off queries, Kibana Dev Tools is still the quickest path. The MCP server pays off when Claude needs to iterate on a query - tweaking fields, expanding the filter, adding an aggregation - without you copy-pasting between tabs.
Performance notes and tuning
On a cluster with 200 indexes and 2 TB of data, list_indexes returns in under 200 ms. search latency depends on shard count and query complexity. A simple term query across a single 10 GB index returns in 50 to 150 ms. A multi-level aggregation on 500 GB of time-series data can take 3 to 8 seconds. If Claude runs the same query repeatedly, tell it to cache the result in the prompt rather than re-querying.
For clusters behind a corporate proxy, set HTTPS_PROXY in the env block. Self-signed certificates require NODE_TLS_REJECT_UNAUTHORIZED=0, which is fine for dev and risky in production. Prefer a trusted CA bundle via NODE_EXTRA_CA_CERTS instead. The server respects both variables on startup.
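A proxy-and-CA setup only touches the `env` block of the server entry. Hostnames and the certificate path below are placeholders for your environment:

```json
{
  "env": {
    "ELASTICSEARCH_URL": "https://es.internal.example:9243",
    "ELASTICSEARCH_API_KEY": "optional_api_key",
    "HTTPS_PROXY": "http://proxy.internal.example:3128",
    "NODE_EXTRA_CA_CERTS": "/etc/ssl/certs/internal-ca.pem"
  }
}
```

`NODE_EXTRA_CA_CERTS` is read once at Node startup, so restart the editor after changing it.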
The Elasticsearch MCP server is the right default for teams running Elastic stack in production who spend regular time writing search queries. Three minutes of setup returns hours a month in query iteration. Start with a read-only API key scoped to one index pattern, confirm your baseline prompts work, then widen access as the team picks up the query patterns.
Frequently asked questions
Does this server work with Elastic Cloud and self-hosted clusters?
Yes for both. Elastic Cloud needs the full `https://xxx.es.region.provider.found.io:9243` endpoint and an API key generated in Kibana. Self-hosted clusters can use basic auth, API keys, or no auth if the cluster is open. TLS is supported as long as the CA is trusted by the Node runtime.
Can I run write operations through the MCP server?
The current release is read-focused. Document indexing, updating, and deletion tools are not included to reduce the risk of a prompt injection wiping production data. Use the Elasticsearch HTTP API directly or the `_bulk` endpoint for ingest tasks.
How does the server handle huge search results?
The `search` tool defaults to 10 hits per call. Ask Claude to raise the `size` parameter up to 1,000 or use `search_after` for pagination. For full-result exports, prefer a scroll or async search from a dedicated script; the MCP transport is not designed to stream multi-megabyte payloads.
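`search_after` pagination needs a deterministic sort; each page passes the sort values of the previous page's last hit. The tiebreaker field `event.id` is a placeholder for any unique field in your mapping:

```json
{
  "size": 1000,
  "sort": [
    { "@timestamp": "desc" },
    { "event.id": "asc" }
  ],
  "search_after": ["2026-04-15T09:12:33.000Z", "evt-48213"]
}
```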
Does the server support runtime fields and scripted queries?
Runtime fields defined in the mapping work the same as regular fields. Inline Painless scripts in queries are passed through unchanged, though Claude occasionally gets the Painless syntax wrong. For critical scripts, validate in Kibana first.
Can I point it at multiple clusters?
One cluster per server instance. To target multiple clusters, spin up multiple named entries in your MCP config, each with its own `ELASTICSEARCH_URL`. Claude will see them as separate tool namespaces like `es_prod` and `es_staging`.
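A two-cluster setup is just two entries in the same config file; the URLs and keys below are placeholders:

```json
{
  "mcpServers": {
    "es_prod": {
      "command": "npx",
      "args": ["-y", "mcp-server-elasticsearch"],
      "env": {
        "ELASTICSEARCH_URL": "https://prod.es.example:9243",
        "ELASTICSEARCH_API_KEY": "prod_key"
      }
    },
    "es_staging": {
      "command": "npx",
      "args": ["-y", "mcp-server-elasticsearch"],
      "env": {
        "ELASTICSEARCH_URL": "https://staging.es.example:9243",
        "ELASTICSEARCH_API_KEY": "staging_key"
      }
    }
  }
}
```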
What Elasticsearch versions are supported?
The 8.x series works without changes. 7.17 works with minor differences in some cluster-level calls. Versions 7.10 and older are not actively tested and may fail on endpoints that changed between releases. Upgrade the cluster before you lean on the server in production.