PostHog MCP Server
Query and control PostHog from Claude Code or Cursor with one npx command and an API key
Updated: April 15, 2026
Install
```json
{
  "mcpServers": {
    "posthog-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-posthog"],
      "env": {
        "POSTHOG_API_KEY": "phx_xxx",
        "POSTHOG_HOST": "https://app.posthog.com"
      }
    }
  }
}
```
Capabilities
- Query events and event properties over custom date ranges
- Build trend analyses with breakdowns by property or cohort
- Check feature flag status and rollout percentages
- View funnel data across multi-step user journeys
- Inspect cohorts and their current member counts
- Get person profile data by distinct ID or email
Limitations
- Requires a project-scoped PostHog API key with read access to insights
- Complex HogQL queries require knowing PostHog SQL dialect quirks
- Dashboard creation is not exposed through the MCP tool surface
- Experiment results are read-only; starting or stopping an experiment needs the UI
PostHog MCP server setup for Claude Code and Cursor
Quick answer: The PostHog MCP server wraps the PostHog API as a set of tools Claude Code or Cursor can call over stdio. Install with one npx command, drop in your PostHog project API key, and the editor can run real operations against PostHog. Setup runs about 4 to 6 minutes end to end, tested on mcp-posthog on April 15, 2026.
Most teams that work with PostHog end up copying data in and out of ChatGPT or Claude to ask questions about it. The MCP server removes that step. When you ask Claude for a summary or an action, it talks to PostHog itself, reads the response, and writes the answer back without the context swap. For workflows that already live in PostHog, that feedback loop is the whole reason to wire this up.
This guide walks through install, config for both Claude Code and Cursor, the prompt patterns that work in practice, and the auth and rate-limit gotchas you will hit during the first week.
What this server does
The server speaks MCP over stdio and forwards every tool call to the PostHog API using API key auth. It exposes roughly 8 to 12 tools, covering:
- Query events and event properties over custom date ranges
- Build trend analyses with breakdowns by property or cohort
- Check feature flag status and rollout percentages
- View funnel data across multi-step user journeys
- Inspect cohorts and their current member counts
- Get person profile data by distinct ID or email
Every tool call carries the API key forward in the request headers. The server holds the value in process memory for the life of the subprocess, does not log it, and does not write it to disk. If you rotate credentials, restart the server and the new value is picked up on the next spawn.
The server does not implement a local cache. Every call is a fresh round trip to PostHog. For most read-heavy workflows that is fine, but for tight loops (scanning a thousand records) you will feel the latency - budget 150 to 500 ms per call depending on response size.
Installing PostHog MCP
The package is on npm as mcp-posthog. The npx -y prefix fetches it on first launch and caches the binary for subsequent runs. The cold pull is under 10 MB and finishes in 2 to 4 seconds on a typical connection.
Before touching any config, generate a PostHog project API key:
- Open https://app.posthog.com/project/settings and sign in to your PostHog account.
- Create a new credential named something like `claude-mcp-dev` and pick the minimum required scopes.
- Copy the value shown - it is shown only once.
- Store the value in a shell env var so it stays out of any file you commit.
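The last step can be sketched as two lines in your shell profile, assuming a POSIX shell; the `phx_xxx` value is a placeholder, not a real key:

```shell
# Append to ~/.zshrc or ~/.bashrc so the key never lands in a committed file.
# "phx_xxx" is a placeholder -- paste your real project API key here.
export POSTHOG_API_KEY="phx_xxx"
export POSTHOG_HOST="https://app.posthog.com"
```

Open a fresh terminal (or `source` the profile) before launching the editor so the subprocess inherits both variables.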
Configuring for Claude Code
Claude Code reads MCP servers from ~/.claude/mcp.json or a per-project .mcp.json file. Add a posthog entry that spawns the server with POSTHOG_API_KEY in its env:
```json
{
  "mcpServers": {
    "posthog": {
      "command": "npx",
      "args": ["-y", "mcp-posthog"],
      "env": {
        "POSTHOG_API_KEY": "phx_xxx",
        "POSTHOG_HOST": "https://app.posthog.com"
      }
    }
  }
}
```
Restart Claude Code, then run /mcp in a session to confirm the PostHog server is attached. Call a read-only tool as a smoke test - if the response comes back with real data from your account, the API key has the right scope.
For team projects, commit a placeholder version of .mcp.json with ${POSTHOG_API_KEY} inside the env value and let each developer provide the real value via their shell profile. Claude Code expands env vars when it spawns the subprocess.
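A committed placeholder version might look like this, relying on the `${POSTHOG_API_KEY}` expansion described above so each developer supplies the real value from their own shell:

```json
{
  "mcpServers": {
    "posthog": {
      "command": "npx",
      "args": ["-y", "mcp-posthog"],
      "env": {
        "POSTHOG_API_KEY": "${POSTHOG_API_KEY}",
        "POSTHOG_HOST": "https://app.posthog.com"
      }
    }
  }
}
```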
Configuring for Cursor
Cursor uses the same MCP spec and reads from ~/.cursor/mcp.json. The config is identical to the Claude Code block above. Open Cursor settings, navigate to the MCP tab, and toggle the PostHog server on. Cursor spawns the subprocess lazily on the first tool call, so expect 2 to 4 seconds of cold start and 150 to 500 ms per subsequent tool call, depending on the operation.
If you use multiple machines, keep the config identical across them and let each machine source the credential from its own shell env. That way you never commit a real token to git.
Example prompts and workflows
A few prompts that work reliably once the server is attached:
- "Show me the first 10 records I can access in PostHog."
- "Count the records modified in the last 7 days and group the total by category."
- "Find the most recent 5 items matching `priority=high` and print their key fields."
- "Create a new entry with the values I paste below and confirm it was written."
- "List every resource visible to my credentials and summarize by owner."
The model will chain calls on its own. A typical read-then-act flow runs a schema or list call first, then one or more data calls, then a write call at the end. If the dataset is large, tell the model the exact scope up front (specific IDs, date range, single resource). Scoping down cuts round trips from 6 or 7 calls down to 2 or 3.
One caveat: the model sometimes generates overly broad queries. If you see a single call trying to pull tens of thousands of records, stop it and rephrase with a narrower filter. PostHog API responses tend to slow down sharply past 500 results per page.
Troubleshooting
Tool call returns 401 or auth error. The API key is wrong, expired, or has been revoked. Regenerate the credential, update your shell env, and restart the MCP server (/mcp restart posthog in Claude Code).
Tool call returns 403 or permission denied. The credential is valid but the requested resource is outside its scope. Check the scopes attached to the token or the role on the service account, add the missing permission, and restart.
Tool call returns 429 rate limit. PostHog has per-account or per-route rate limits. The server does not queue on your behalf. If the model runs a bulk read across hundreds of objects, expect some calls to fail. Ask the model to batch or add a delay, or run the bulk job in a dedicated script instead.
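The batching idea can be sketched as a shell loop; the endpoint path, project ID, and date values below are illustrative, not taken from the real tool surface:

```shell
# Hypothetical batching sketch: split one bulk read into day-sized chunks
# with a pause between calls instead of a single giant request.
days="2026-04-10 2026-04-11 2026-04-12"
count=0
for day in $days; do
  # In a real script this would be an API call against PostHog;
  # here we just print the request each chunk would make.
  echo "would fetch: /api/projects/123/events?after=${day}T00:00:00Z"
  count=$((count + 1))
  sleep 1  # back off between calls to stay under the rate limit
done
echo "issued $count chunked requests"
```

The same shape works inside a prompt: ask the model to process one day at a time and confirm each chunk before moving on.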
Server fails with ENOENT on npx. The npx binary is not on PATH in the env the editor inherits. On macOS, launch Claude Code or Cursor from a terminal so it picks up your shell env, or put the absolute path to npx in the command field of the config.
Tool call returns 422 or schema error. A field or parameter name in the request does not match what PostHog expects. Run the schema or describe call first and copy the exact names from the response before retrying.
Alternatives
A few options if the PostHog server does not fit your setup:
- Look for a dedicated server with a narrower surface if you only need one or two operations - smaller servers tend to have faster cold starts.
- Use the official PostHog CLI or SDK directly from a script when the task is a one-off and the MCP overhead is not worth it.
- Check the awesome-mcp-servers repository on GitHub for community alternatives, especially if you need features this server does not expose (like admin APIs or bulk operations).
For one-off exports, the PostHog API is straightforward to call from a script. The MCP server pays off when Claude needs to read and write in the same session without you switching to the PostHog dashboard. Good use cases include operational reports, ad-hoc data pulls, and cross-resource rollups from your editor.
The PostHog MCP server is the right default for any workflow already rooted in PostHog. Four to six minutes of setup replaces hours of copy-paste between your editor and the PostHog dashboard. Start with a narrowly scoped credential that only reads a single resource or workspace, then widen scopes once you trust the prompt patterns on your team.
Security notes
Rotate the API key on a 90-day cadence as a baseline hygiene practice. If you ever suspect a credential leaked (committed to git, pasted in a screenshot), revoke it in the PostHog dashboard immediately - the old value stops working within a few seconds and existing subprocesses will fail their next call.
Scope credentials to the smallest set of resources you actually need. A token with read-only access to a single resource is a much smaller blast radius than an account-wide admin token, even if the convenience tradeoff feels small during setup.
Performance and cost
Most tool calls complete in 150 to 500 ms on a warm connection. Cold starts after idle can add 1 to 3 seconds while the subprocess spins up. Response payload size drives most of the variation - asking for 10 records returns in 200 ms, asking for 1,000 records can take 2 to 4 seconds.
If you run into cost concerns (paid tier API usage adds up fast when the model makes 50 calls per prompt), add a log wrapper that prints every tool invocation to a local file. Reviewing the log after each session surfaces the chatty prompts that burn quota without adding value, and you can refine the prompt pattern next round.
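A minimal logging wrapper, assuming the server speaks JSON-RPC over stdio as described earlier; the script path and log filenames are arbitrary choices:

```shell
# Hypothetical logging wrapper: point the MCP "command" field at this
# script instead of npx to capture every JSON-RPC message crossing stdio.
cat > /tmp/posthog-mcp-logged.sh <<'EOF'
#!/bin/sh
tee -a "$HOME/.posthog-mcp-calls.log" | npx -y mcp-posthog | tee -a "$HOME/.posthog-mcp-results.log"
EOF
chmod +x /tmp/posthog-mcp-logged.sh
```

Then set `"command": "/tmp/posthog-mcp-logged.sh"` (with no args) in the MCP config and review the two log files after each session.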
Frequently asked questions
Do I need a paid PostHog plan to use this MCP server?
Any plan that lets you issue an API key will work with the server. Free and starter tiers are fine for development and light use. Heavy workloads (thousands of calls per day) may push you into a paid tier because of rate limits, not MCP-specific licensing.
How do I get the API key to fill in the config?
Open https://app.posthog.com/project/settings, sign in, and create a new credential. Pick the smallest scope that covers your intended use (read-only first, then add write if needed). Copy the value and store it in a shell env var; do not commit it to git.
Can I use this server with both Claude Code and Cursor at the same time?
Yes. The MCP spec is editor-agnostic, so the same `mcp-posthog` package runs under both Claude Code and Cursor. Each editor spawns its own subprocess from its own config file (`~/.claude/mcp.json` and `~/.cursor/mcp.json`). Rotating credentials in one place means updating both shell env vars.
What happens if I hit the PostHog API rate limit?
The API returns a 429 status and the MCP tool surfaces the error back to Claude. The server does not auto-retry. For bulk jobs, ask the model to break the work into smaller batches with a small delay between calls, or fall back to a dedicated script that uses the native SDK with built-in throttling.
Is my API key logged or sent back to Anthropic?
The server holds the credential in process memory only. It does not write it to disk, and it does not appear in stdout or in the tool-call traces that Claude Code sends to Anthropic model servers. Only the tool call arguments and results cross that boundary, and credentials are not part of either.
What should I do if the mcp-posthog server fails to start?
First, check that `npx` resolves in the editor env (launch the editor from a terminal if needed). Second, verify every env var (POSTHOG_API_KEY and any paired keys) is set and not empty. Third, run `npx -y mcp-posthog` directly in a terminal and watch the output - any missing credential or unreachable host shows up in the first few seconds. Once the manual run works, the editor config will work too.