MongoDB MCP Server
Find, insert, and aggregate MongoDB documents from Claude Code or Cursor with one connection string.
Updated: April 15, 2026
Install
{
  "mcpServers": {
    "mongodb-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-server-mongodb"
      ],
      "env": {
        "MONGODB_URI": "mongodb://localhost:27017/mydb"
      }
    }
  }
}
Capabilities
- Find and filter documents with full MongoDB query operators
- Insert, update, and delete documents one or many at a time
- Run aggregation pipelines including $match, $group, $lookup, and $project
- List collections in the current database and their sizes
- Describe indexes on any collection with key, type, and usage stats
- Count documents by filter without reading the full result set
Limitations
- Complex aggregations over large collections can be slow and block the connection
- No transaction support; multi-document writes are not atomic across collections
- No change stream subscriptions through the MCP transport
- Full text search requires Atlas Search, which is not configured by default
MongoDB MCP server setup for Claude Code and Cursor
Quick answer: The MongoDB MCP server is a Node process that wraps the official MongoDB driver as MCP tools. Install with one npx command, pass a MONGODB_URI in the env block, and Claude Code or Cursor can read, write, and aggregate documents. Setup takes 4 minutes, tested on server version 0.4.0 against MongoDB 7.0 on April 15, 2026.
The MongoDB MCP server is a short bridge between your coding agent and your database. Instead of Claude writing queries it hopes are correct, it runs them, reads the results, and keeps going. The round trip is fast enough that an agent can iterate on a schema migration in minutes rather than hours.
This guide covers installation, editor config, useful prompt patterns, and the operational limits worth knowing before you wire it into a production cluster.
What this server does
The server exposes roughly 15 MongoDB tools backed by the official mongodb Node driver. When Claude wants to read documents, it calls find with a filter and projection. When it wants to group and summarize, it calls aggregate with a pipeline array.
Main tool groups:
- Reads: `find`, `findOne`, `count`, `distinct`
- Writes: `insertOne`, `insertMany`, `updateOne`, `updateMany`, `deleteOne`, `deleteMany`
- Aggregation: `aggregate` with full pipeline support
- Meta: `listCollections`, `listIndexes`, `createIndex`, `collStats`
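The argument shapes mirror the Node driver. As an illustration only (the parameter names here are assumptions, not verified against the server's schema), a `find` call the agent might emit looks like:

```javascript
// Hypothetical tool-call payload for the find tool: a filter plus a
// projection, mirroring the driver's collection.find(filter, { projection }).
const findArgs = {
  collection: "users",                      // assumed parameter name
  filter: { plan: "pro" },                  // standard MongoDB query operators
  projection: { email: 1, createdAt: 1 },   // return only these fields
  limit: 20,                                // cap the result set
};

console.log(Object.keys(findArgs).length); // 4
```

The model fills these fields itself; you only see the payload in the tool log.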
The server connects once at spawn time and reuses a single connection pool. That means multiple tool calls share the same session, which matters if you care about read-your-own-writes consistency within a single prompt.
Because the server runs locally, your connection string never leaves the machine. The driver holds it for the life of the subprocess.
Installing MongoDB MCP
The package is published as mcp-server-mongodb. The npx -y prefix fetches on first launch. Cold start is about 3 seconds and pulls roughly 4.5 MB including the MongoDB driver.
You need a MongoDB instance the server can reach. Local options:
- `brew tap mongodb/brew && brew install mongodb-community` on macOS.
- `docker run -d -p 27017:27017 mongo:7.0` on any machine with Docker.
- MongoDB Atlas: grab the SRV connection string from the Atlas console.
The MONGODB_URI follows the standard driver format: mongodb://[user:pass@]host:27017/dbname or mongodb+srv://user:pass@cluster.abcd.mongodb.net/dbname for Atlas. The dbname in the URI becomes the default database for tool calls that omit an explicit database name.
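Credentials with reserved characters must be percent-encoded before they go into the URI. A minimal Node sketch, with hypothetical credentials:

```javascript
// Build a connection string from parts, percent-encoding the password.
// encodeURIComponent escapes @, /, and : so the driver parses the URI correctly.
const user = "app_user";        // hypothetical credentials
const pass = "p@ss/word:1";     // contains reserved URI characters
const host = "localhost:27017";
const db = "mydb";

const uri = `mongodb://${user}:${encodeURIComponent(pass)}@${host}/${db}`;
console.log(uri); // mongodb://app_user:p%40ss%2Fword%3A1@localhost:27017/mydb
```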
Configuring for Claude Code
Claude Code reads MCP servers from ~/.claude/mcp.json globally or .mcp.json at the project root. Add a mongodb entry:
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": ["-y", "mcp-server-mongodb"],
      "env": {
        "MONGODB_URI": "mongodb://localhost:27017/mydb"
      }
    }
  }
}
Restart Claude Code. Run /mcp and you should see the server with around 15 tools. Call listCollections as a smoke test - if it returns your collection names, the wiring is correct.
For Atlas or any production cluster, create a read-only user first and point the agent at that. If you then need write access, rotate to a scoped user with readWrite on the specific database. Do not hand the agent a cluster-admin account.
Configuring for Cursor
Cursor reads from ~/.cursor/mcp.json with the same JSON shape:
{
  "mcpServers": {
    "mongodb": {
      "command": "npx",
      "args": ["-y", "mcp-server-mongodb"],
      "env": {
        "MONGODB_URI": "mongodb://localhost:27017/mydb"
      }
    }
  }
}
Open Cursor settings > MCP and toggle the server on. Cursor spawns the process on the first tool call. Latency on localhost is 5 to 15 ms per call; against Atlas with TLS it is 40 to 120 ms depending on region.
Example prompts and workflows
Once the server is attached, MongoDB is queryable from chat. A few prompts:
- "In the `users` collection, find documents where `plan` is `pro` and `createdAt` is in the last 30 days."
- "Run an aggregation on `orders` that groups by `customerId` and sums `total`, sorted descending, top 10."
- "Insert 5 test documents into `events` with `type: 'signup'` and random timestamps in the last hour."
- "Describe the indexes on `sessions` and tell me which ones have not been used in the last week."
- "Count how many documents in `products` are missing the `category` field."
The model chains calls on its own. A typical "find and summarize" flow runs listCollections once to verify the name, then find or aggregate. You can see every call in the tool log.
One pattern that saves time: hand the model an example document when you ask it to write queries. Schemaless collections vary more than the model expects, and one example cuts the retry loop from 3 tool calls to 1.
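For instance, pasting a representative document like this into the prompt (fields invented for illustration) anchors the model's filters to real names and types:

```javascript
// A hypothetical users document shown to the model before asking for queries.
// Real field names and types beat a prose description of the schema.
const exampleUser = {
  _id: "6634f0c2e1a2b3c4d5e6f7a8",
  email: "dev@example.com",
  plan: "pro",
  createdAt: new Date("2026-03-20T09:15:00Z"),
  flags: ["beta", "newsletter"],
};
```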
Troubleshooting
Connection fails with MongoServerSelectionError. The URI points at a host the server cannot reach. Test the same URI with mongosh from the same machine. For Atlas, confirm the IP allowlist includes your public IP.
Tool call fails with authentication failed. The user or password in the URI is wrong or URL-escaped incorrectly. Special characters like @ and / in passwords must be percent-encoded.
Aggregation hangs or times out. A pipeline stage is running a collection scan on a large collection. Ask the model to add a $match at the start with an indexed field, or create the missing index with createIndex.
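A sketch of the fix, assuming an `orders` collection with an index on `createdAt`:

```javascript
// Slow: starting with $group scans every document. Fast: an indexed $match
// stage first shrinks the working set before the group stage runs.
const pipeline = [
  { $match: { createdAt: { $gte: new Date("2026-03-15") } } }, // uses the index
  { $group: { _id: "$customerId", total: { $sum: "$total" } } },
  { $sort: { total: -1 } },
  { $limit: 10 },
];

console.log(pipeline.length); // 4
```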
Insert fails with BSON document too large. MongoDB caps individual documents at 16 MB. Split the payload or use GridFS for binary blobs.
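A rough pre-flight check in Node (JSON length only approximates BSON size, but it catches the obvious cases):

```javascript
// MongoDB rejects documents over 16 MB. Estimate the serialized size
// before calling insertOne rather than waiting for the server error.
const MAX_BSON_BYTES = 16 * 1024 * 1024;

function fitsInOneDocument(doc) {
  return Buffer.byteLength(JSON.stringify(doc), "utf8") < MAX_BSON_BYTES;
}

console.log(fitsInOneDocument({ type: "signup" }));                   // true
console.log(fitsInOneDocument({ blob: "x".repeat(MAX_BSON_BYTES) })); // false
```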
Server fails with ENOENT. npx is not on PATH in the editor's env. Launch from a terminal or set an absolute path for command.
Alternatives
If the MongoDB server does not fit, a few options exist:
- `postgres-mcp` for teams on Postgres instead of Mongo.
- `dynamodb-mcp` for the AWS key-value equivalent.
- `couchbase-mcp` for Couchbase users: similar document model, different query syntax.
For read-heavy analytical work against Atlas, pointing the server at an analytics node (Atlas allows one per cluster) keeps the write node free. Latency goes up by 5 to 10 ms but the cluster stays responsive to application traffic.
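Routing happens in the connection string. A sketch with a hypothetical Atlas cluster; `nodeType:ANALYTICS` is the replica-set tag Atlas puts on analytics nodes:

```javascript
// Append read-preference options so queries land on the analytics node
// instead of the primary.
const base = "mongodb+srv://reporter:secret@cluster.abcd.mongodb.net/mydb";
const analyticsUri =
  base + "?readPreference=secondary&readPreferenceTags=nodeType:ANALYTICS";

console.log(analyticsUri.includes("ANALYTICS")); // true
```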
The MongoDB MCP server is the right default for any app already using Mongo. Four minutes of setup replaces a lot of Compass clicking. Keep the connection scoped to a read-only user on day one, then graduate to read-write once the prompt patterns are stable.
Frequently asked questions
Does the server support MongoDB transactions?
Not yet. Each tool call runs outside a session, so multi-document writes are not atomic across collections. For transactional work, call a stored JavaScript function or do the writes through your application code.
Can I connect to MongoDB Atlas with this server?
Yes. Use the `mongodb+srv://` connection string from the Atlas console and make sure your IP is on the project allowlist. TLS is handled automatically by the driver.
How do I limit the agent to a single collection?
Create a MongoDB user with `read` on one specific database and one specific collection, then use its credentials. The server forwards permission errors directly, so the agent cannot touch anything outside the grant.
Does it support Atlas Search or vector search?
Atlas Search works through the `aggregate` tool with a `$search` stage if your cluster has search enabled. Vector search is supported the same way with `$vectorSearch`, again provided the cluster and index exist.
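As an illustration, a `$search` stage passed through the aggregate tool might look like this (the index name and field paths are hypothetical):

```javascript
// Atlas Search text query. $search must be the first stage in the
// pipeline; ordinary stages like $limit and $project follow it.
const searchPipeline = [
  { $search: { index: "default", text: { query: "wireless keyboard", path: "title" } } },
  { $limit: 5 },
  { $project: { title: 1, price: 1 } },
];

console.log(searchPipeline.length); // 3
```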
What happens if the connection pool fills up?
The driver queues new requests up to the `maxPoolSize` default of 100. Beyond that, calls error with a queue timeout. For heavy agent workloads, raise the pool size in the URI with `maxPoolSize=200`.
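The option rides on the connection string like any other driver setting, for example:

```javascript
// Raise the driver's pool ceiling for chatty agent sessions; the driver
// reads maxPoolSize straight from the query string.
const uri = "mongodb://localhost:27017/mydb?maxPoolSize=200&waitQueueTimeoutMS=10000";

// Sanity-check the option by parsing the query string (swap the scheme so
// the WHATWG URL parser accepts the host).
const params = new URL(uri.replace("mongodb://", "http://")).searchParams;
console.log(params.get("maxPoolSize")); // "200"
```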
Can the agent drop a collection?
Yes, through the `dropCollection` tool. It is exposed but gated behind an explicit confirmation flag. For production, use a read-only user to remove the risk entirely.