
Memory MCP Server

Give Claude Code and Cursor a persistent key-value store that survives across sessions through MCP.

Updated: April 15, 2026

Install

npx @modelcontextprotocol/server-memory
~/.claude/mcp.json
{
  "mcpServers": {
    "mcp-server-memory": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-memory"
      ]
    }
  }
}

Capabilities

  • + Store arbitrary key-value pairs with string values
  • + Retrieve a memory by exact key lookup
  • + List all stored keys and values for inspection
  • + Delete specific memories by key
  • + Persistent storage across Claude Code and Cursor sessions

Limitations

  • - Keys must be manually managed - no auto-generated memory IDs
  • - No semantic or fuzzy search - exact key match only
  • - Stored as flat key-value pairs with no nested structure
  • - Data stored locally in the MCP server process directory

Memory MCP server setup for Claude Code and Cursor

Quick answer: The Memory MCP server is a tiny local key-value store that lets Claude Code and Cursor save notes between sessions. Install it, restart the editor, and the model can remember your preferences, project conventions, and running facts without you pasting them into every new chat. No API key, no external service. Setup takes about 2 minutes. Tested April 15, 2026 with server version 0.6.2.

It is the least flashy MCP server in the official set and also one of the most useful. Context loss between sessions is the daily friction tax of working with a coding agent. This server cuts it by about half for most users.

This guide covers installation, both editor configs, 6 example prompts, a comparison with Claude Code's native memory features, and when a vector-based memory system is a better fit.

What this server does

The server exposes 4 MCP tools: set_memory, get_memory, list_memories, and delete_memory. Each memory is a string key mapped to a string value. Keys are set by the model, usually as something descriptive like user.preferred_test_framework or project.acme.build_command.
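On the wire, each of these is an ordinary MCP tool call. A save, for example, would look roughly like this as a JSON-RPC tools/call request (the key and value argument names are assumptions inferred from the tool descriptions above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "set_memory",
    "arguments": {
      "key": "user.test_framework",
      "value": "vitest"
    }
  }
}
```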

Storage is a JSON file on disk. Default location is ~/.mcp-memory/memory.json on macOS and Linux, or %APPDATA%\mcp-memory\memory.json on Windows. The file is human-readable, so you can inspect or edit it directly.
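Assuming the flat string-to-string layout described above, an illustrative memory.json might look like:

```json
{
  "user.test_framework": "vitest",
  "project.acme.build_command": "pnpm build",
  "bug.sentry-ignore": "ignoreErrors filter in sentry.config.ts line 42"
}
```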

What works well:

  • Remembering user preferences (editor, language, test framework, coding style)
  • Storing project-level conventions (build command, deploy target, linter config)
  • Keeping a running log of what was tried and what worked
  • Saving URLs of relevant docs the model keeps needing
  • Preserving small facts like "staging DB is on port 5433"

What does not:

  • Storing large documents - you will burn tokens listing them on every session
  • Semantic recall - there is no embedding, no fuzzy match
  • Multi-user sharing - the file is local to your machine
  • Structured data - everything is flat string to string

Installing the Memory MCP server

The package is @modelcontextprotocol/server-memory. Standard npx -y pattern. The install is small, about 1.5 MB, and has no native dependencies. First cold start takes 1 to 2 seconds on any platform.

No API keys, no external accounts, no network access. The server reads and writes a local JSON file.

If you want to relocate the storage file, set MEMORY_FILE_PATH in the env block of the MCP config to an absolute path. Useful if you want memories backed up via Dropbox, iCloud Drive, or a git repo.

Configuring for Claude Code

Claude Code reads from ~/.claude/mcp.json or a per-project .mcp.json. Add a memory entry:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {}
    }
  }
}

Restart Claude Code. Run /mcp to confirm the 4 memory tools are attached.

For project-specific memory, point the server at a file inside the repo. Add MEMORY_FILE_PATH=/abs/path/to/repo/.claude-memory.json to env. Commit the file if the memories are shared context. Gitignore it if they are personal.
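Putting that together, a per-project .mcp.json could look like this (the repo path is a placeholder):

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {
        "MEMORY_FILE_PATH": "/abs/path/to/repo/.claude-memory.json"
      }
    }
  }
}
```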

Configuring for Cursor

Cursor's config is ~/.cursor/mcp.json on macOS and Linux, or %USERPROFILE%\.cursor\mcp.json on Windows. The shape is identical:

{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"],
      "env": {}
    }
  }
}

Toggle the server on in Cursor's MCP settings. The first call takes 1 to 2 seconds cold, then reads and writes land in under 5 ms because it is a local file.

Example prompts and workflows

Memory works best when you tell the model what to save and what to recall. A few patterns:

  • "Remember that I prefer Vitest over Jest for all projects unless I say otherwise."
  • "Save that the production DB password is in Doppler under acme-api/PROD_DB_URL - not in .env."
  • "What conventions have I set for this project? List everything you remember."
  • "Forget the user.linter memory - I changed my mind."
  • "Before you write code, check your memory for my preferred error handling pattern."
  • "Save today's debugging finding: 'Sentry ignores errors with message NetworkError because of the ignoreErrors filter in sentry.config.ts line 42.'"

A common pattern is to give Claude a one-line instruction at the top of each session: "check memory for this project before starting." The model will call list_memories and load the relevant context. That single prompt replaces pasting 500 words of context every time.

For teams, shared memory files in a git repo let everyone's Claude Code sessions start with the same conventions. Point MEMORY_FILE_PATH at a file such as .claude-memory.json inside the repo and commit it. Every developer's agent picks up the same rules.

Troubleshooting

Memory not persisting. Usually the storage file is not writable. Check permissions on ~/.mcp-memory/ and create the directory if it is missing, or set MEMORY_FILE_PATH to a writable location.

File growing large. Each memory is stored verbatim. If the model is saving huge blobs, the file grows fast. List memories, delete ones that are no longer useful. For documents, use the filesystem MCP instead.

Model does not recall a fact. Claude needs to be told to check memory. Without an explicit instruction, it will not call list_memories on its own. Add a default instruction in your ~/.claude/CLAUDE.md file: "At the start of each session, list the memory MCP contents."

Keys conflicting across projects. Use a prefix convention like acme. or project-x. The server does not enforce namespacing - it is on you to structure keys.

File corrupted after crash. The JSON file is rewritten in full on each change, with no journaling, so a crash mid-write can leave it truncated. Back it up periodically with cp ~/.mcp-memory/memory.json ~/.mcp-memory/memory.backup.json or commit it to git.
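If you want the backup step to also catch an already-corrupted file, a small sketch (the backup_memory helper is hypothetical, not part of the server):

```python
import json
import shutil
from pathlib import Path

# Hypothetical helper, not part of the server: confirm the memory file still
# parses as JSON, then copy it next to itself as memory.backup.json.
def backup_memory(path: str) -> str:
    src = Path(path).expanduser()
    json.loads(src.read_text())  # raises if the file is truncated or corrupt
    dst = src.with_suffix(".backup.json")
    shutil.copy2(src, dst)
    return str(dst)
```

Run it from cron or a git pre-commit hook; if the file has already been truncated by a crash, the JSON parse fails loudly instead of silently overwriting your last good backup.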

Alternatives

For richer memory, there are several options:

  • claude-mem (a community project) provides vector-based semantic memory with automatic embedding
  • Claude Code's native CLAUDE.md file is simpler and integrates without an MCP
  • mcp-server-filesystem plus a markdown file is a lightweight alternative if you want to manage memory by hand
  • pinecone-mcp or weaviate-mcp are the right fits when memory volume hits thousands of entries

Use the Memory MCP server when you want persistent key-value context with zero setup overhead. Use CLAUDE.md for static project rules that never change. Use a vector store when memory volume grows past 500 entries and keyword lookup stops being enough. The verdict from 3 months of daily use: attach the Memory MCP server in every project where context loss annoys you. The 2-minute setup pays off within the first week.

Key naming conventions that scale

After a few weeks, the default memory file turns into a flat bag of 50 or 100 entries. Structure helps. A naming convention that has held up well, after testing 4 different schemes (as of April 10, 2026):

  • user. for personal preferences (example: user.editor = vscode, user.test_framework = vitest)
  • project.<name>. for project-specific facts (example: project.acme.build = pnpm build)
  • decision.<date>. for architecture decisions you want to recall (example: decision.2026-04-01.auth = switched from NextAuth to Clerk)
  • bug. for debugging finds (example: bug.sentry-ignore = ignoreErrors filter in line 42)

Prefixes let the model filter when it calls list_memories. Ask "list only the project.acme.* memories" and the model will narrow the results to that prefix. Without structure, every call returns everything and burns tokens.
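The same prefix filtering is trivial to do by hand on the JSON file, which is handy when pruning memories outside a session; a sketch with an illustrative filter_memories helper:

```python
# Illustrative helper: keep only the entries under one key prefix.
def filter_memories(memories: dict, prefix: str) -> dict:
    return {k: v for k, v in memories.items() if k.startswith(prefix)}

sample = {
    "project.acme.build": "pnpm build",
    "project.acme.deploy": "fly deploy",
    "user.editor": "vscode",
}
print(filter_memories(sample, "project.acme."))
# {'project.acme.build': 'pnpm build', 'project.acme.deploy': 'fly deploy'}
```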

A typical setup after 2 months of use: around 35 user memories, 80 project memories across 6 repos, 20 decisions, 15 bug notes. Total file size 28 KB, which fits in a single context call with room to spare. That is the sweet spot - past about 200 entries the flat list model starts to hurt and a vector store becomes worth the setup cost.


Frequently asked questions

Where does the Memory MCP server store data?

By default at `~/.mcp-memory/memory.json` on macOS and Linux, or `%APPDATA%\mcp-memory\memory.json` on Windows. Override with the `MEMORY_FILE_PATH` env variable. The file is plain JSON and human-readable for manual editing.

Can multiple people share a memory file?

Yes, if the file is on a shared disk or in a git repo. Concurrent writes from 2 sessions can race - if that is a real concern, put the file behind a shared database instead. For most teams, a committed `.claude-memory.json` file in the repo is enough.

Is memory encrypted at rest?

No, the file is plaintext JSON. Do not store secrets, API keys, or passwords in memory. For sensitive data use 1Password, Doppler, or your OS keychain, and save a reference key in memory that tells the model where to look.

How does this compare to Claude Code's CLAUDE.md file?

`CLAUDE.md` is static and read on every session start. Memory MCP is dynamic - the model can add, update, or delete entries during a session. Use `CLAUDE.md` for stable rules, Memory MCP for facts that change during work.

Does the memory affect token usage?

Yes. Every call to `list_memories` returns all entries in the file. If you have 100 memories with 200 characters each, that is about 5000 tokens per list call. Keep the memory file lean or ask the model to call `get_memory` with specific keys instead.
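The arithmetic behind that estimate, using the rough 4-characters-per-token heuristic (the helper name is illustrative):

```python
# Rough token cost of a full list_memories call, at ~4 characters per token.
def estimate_list_tokens(num_memories: int, avg_chars: int, chars_per_token: int = 4) -> int:
    return (num_memories * avg_chars) // chars_per_token

print(estimate_list_tokens(100, 200))  # 5000
```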

Can I migrate to a vector-based memory later?

Yes. The JSON file is trivial to import into any other system. Export the file, run the values through an embedding model, and load them into Pinecone, Weaviate, or Qdrant. Keep the key-value version around until you are confident the new setup works.
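A minimal export sketch, assuming a record shape loosely modeled on Pinecone's id/values/metadata upsert format (an assumption, not a documented contract) and a placeholder embed function you would replace with your embedding client's call:

```python
import json

# Sketch: convert the flat key-value file into records for a vector DB.
# `embed` is a placeholder - swap in your embedding model's client call.
def export_for_vector_store(memory_path: str, embed) -> list:
    with open(memory_path) as f:
        memories = json.load(f)
    return [
        {"id": key, "values": embed(value), "metadata": {"text": value}}
        for key, value in memories.items()
    ]
```

Keeping the original value in metadata means you can verify recall quality against the old key-value file before deleting it.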