Kubernetes MCP Server
Inspect pods, fetch logs, and apply manifests against your Kubernetes cluster from Claude Code or Cursor.
Updated: April 15, 2026
Install
```json
{
  "mcpServers": {
    "kubernetes-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-server-kubernetes"
      ]
    }
  }
}
```

Capabilities
- List and describe pods, services, and deployments across any namespace
- Fetch pod logs with tail, since, and container selection
- Apply YAML manifests directly through the Kubernetes API
- Check rollout status for deployments and statefulsets
- Manage namespaces: list, create, and describe
- Describe nodes with capacity, allocatable resources, and conditions
Limitations
- Uses the current kubeconfig context; no context switching through MCP
- No exec into pods or `kubectl exec` equivalent, for safety
- No port-forwarding support, because of the stdio transport
- Destructive operations like delete and scale require explicit confirmation flags
Kubernetes MCP server setup for Claude Code and Cursor
Quick answer: The Kubernetes MCP server is a Node process that reads your local ~/.kube/config and exposes safe kubectl-equivalent actions as MCP tools. Install with one npx command, and Claude Code or Cursor can list pods, read logs, and apply manifests. Setup takes 3 minutes, tested on server version 0.6.1 against Kubernetes 1.29 on April 15, 2026.
The Kubernetes MCP server is a bridge between your coding agent and your cluster. It uses whatever context is currently selected in your kubeconfig, which means the agent can see what you can see and nothing more. No extra credentials, no separate access path.
This guide covers installation, config for both editors, working prompt patterns, and the operations the server deliberately blocks for safety.
What this server does
The server exposes about 20 Kubernetes operations as MCP tools backed by the official @kubernetes/client-node library. When Claude wants to list pods, it calls list_pods with an optional namespace. When it wants to read logs, it calls get_pod_logs with a pod name and tail count.
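To make the shape concrete, here is roughly what a `get_pod_logs` call looks like on the wire as an MCP `tools/call` request. The pod name is made up, and the exact argument names are a sketch of typical parameters rather than the server's verbatim schema:

```json
{
  "method": "tools/call",
  "params": {
    "name": "get_pod_logs",
    "arguments": {
      "namespace": "production",
      "name": "api-gateway-7d9f4b6c8-x2lqv",
      "tail": 50
    }
  }
}
```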
Main tool groups:
- Pods: `list_pods`, `describe_pod`, `get_pod_logs`, `delete_pod`
- Deployments: `list_deployments`, `describe_deployment`, `rollout_status`, `scale_deployment`
- Services: `list_services`, `describe_service`
- Manifests: `apply_yaml`, `apply_from_file`, `delete_by_manifest`
- Namespaces: `list_namespaces`, `create_namespace`, `describe_namespace`
- Nodes: `list_nodes`, `describe_node`
- Events: `list_events`
Blocked operations include exec, port-forward, cp, attach, and debug. Exec is blocked because the stdio transport cannot stream an interactive shell safely. The others either need a two-way stream or can exfiltrate files in ways the server cannot audit.
The server reads your kubeconfig at spawn time and caches the context name. If you switch contexts in another terminal with kubectl config use-context, the MCP server keeps using the original context until you restart it.
Installing Kubernetes MCP
The package is published as mcp-server-kubernetes. The npx -y prefix fetches on first launch. Cold start is about 3 seconds and pulls roughly 6 MB including the Kubernetes client library.
You need a working kubeconfig. To verify:
- Run `kubectl config current-context` - it should print the context name.
- Run `kubectl get nodes` - it should list the nodes in your cluster.
- If either fails, fix kubeconfig first. The MCP server will fail the same way.
The server reads from $KUBECONFIG if set, otherwise ~/.kube/config. If you use aws-iam-authenticator or gcloud plugins, make sure the binaries are on PATH when the MCP server starts, otherwise auth will fail.
Configuring for Claude Code
Claude Code reads MCP servers from ~/.claude/mcp.json globally or .mcp.json at the project root. Add a kubernetes entry - no env vars required if your kubeconfig is in the default location:
```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"],
      "env": {}
    }
  }
}
```
Restart Claude Code. Run /mcp and you should see the server with around 20 tools. Call list_namespaces as a smoke test - if it returns the namespaces you expect, the connection works.
For a non-default kubeconfig path, add "KUBECONFIG": "/path/to/kubeconfig" in the env block. For multi-cluster setups, run one MCP server per context with a different server name in the config.
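For example, a two-cluster setup might look like the following, with one server entry per context. The kubeconfig paths here are placeholders; substitute your own:

```json
{
  "mcpServers": {
    "kubernetes-staging": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"],
      "env": { "KUBECONFIG": "/home/me/.kube/staging.yaml" }
    },
    "kubernetes-prod": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"],
      "env": { "KUBECONFIG": "/home/me/.kube/prod.yaml" }
    }
  }
}
```

Because each server caches its context at spawn time, this split keeps prod and staging cleanly separated even if you switch contexts in a terminal.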
Configuring for Cursor
Cursor reads from ~/.cursor/mcp.json with the same JSON:
```json
{
  "mcpServers": {
    "kubernetes": {
      "command": "npx",
      "args": ["-y", "mcp-server-kubernetes"],
      "env": {}
    }
  }
}
```
Open Cursor settings > MCP and toggle the server on. The first tool call spawns the process and reads kubeconfig, which takes 2 to 3 seconds. After that, API calls run 100 to 400 ms depending on cluster latency.
Example prompts and workflows
Once the server is attached, your cluster is queryable from chat. A few prompts:
- "List every pod in the
productionnamespace that is not in Running state and show the last 50 log lines for each." - "Describe the
api-gatewaydeployment and tell me what changed in the most recent rollout." - "Apply the YAML in
k8s/staging/redis.yamland watch the rollout until it is healthy." - "Find pods that have been restarted more than 3 times in the last hour and summarize the error patterns in their logs."
- "List nodes with less than 20% CPU headroom and show which workloads are pinned to them."
The model chains calls on its own. A "debug a failing pod" flow typically runs list_pods to find the right name, describe_pod to see events, and get_pod_logs for the stack trace. Three calls is usually enough.
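The three-call chain above can be sketched as plain Python. The stub functions below stand in for the MCP tools and return fake data; this illustrates the control flow the agent follows, not the server's API:

```python
# Stubs standing in for MCP tool calls; all data here is fabricated.
def list_pods(namespace):
    return [{"name": "api-gateway-7d9f4b6c8-x2lqv", "status": "CrashLoopBackOff"}]

def describe_pod(namespace, name):
    return {"events": ["Back-off restarting failed container"]}

def get_pod_logs(namespace, name, tail):
    return "panic: connection refused"

def debug_failing_pods(namespace):
    """The typical agent flow: find non-Running pods, read events, read logs."""
    failing = [p for p in list_pods(namespace) if p["status"] != "Running"]
    reports = []
    for pod in failing:
        events = describe_pod(namespace, pod["name"])["events"]
        logs = get_pod_logs(namespace, pod["name"], tail=50)
        reports.append({"pod": pod["name"], "events": events, "logs": logs})
    return reports
```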
One pattern that saves time: pass the namespace explicitly. If you say "in the payments namespace," the agent skips cluster-wide scans that return hundreds of objects and pages of output.
Troubleshooting
Tool call fails with Unauthorized. Your kubeconfig token is expired. Refresh with aws eks update-kubeconfig, gcloud container clusters get-credentials, or whatever your provider requires, then restart the MCP server.
Tool call fails with Forbidden. Your user or service account lacks RBAC for that resource. Check with kubectl auth can-i list pods -n production. If it says no, the MCP server will get the same error.
Rollout status hangs. The deployment is stuck in a crash loop or image pull error. Tell the agent to describe the pod and read events; it usually diagnoses within one more tool call.
Cannot find exec binary for auth plugin. The server inherits PATH from the editor process, not your shell. Launch the editor from a terminal after sourcing your shell profile, or set absolute paths in kubeconfig's exec block.
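A kubeconfig `users` entry with an absolute plugin path might look like this. The binary path and cluster name are illustrative; adjust for your install and provider:

```yaml
users:
- name: eks-user
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # Absolute path so the MCP server finds the plugin even with a minimal PATH
      command: /usr/local/bin/aws
      args: ["eks", "get-token", "--cluster-name", "my-cluster"]
```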
Server fails with ENOENT. npx is not on PATH. Same fix as other servers - launch the editor from a shell session, or put the absolute path to npx in the "command" field.
Alternatives
If the Kubernetes server does not fit, a few options exist:
- `helm-mcp` if you primarily deploy through Helm charts rather than raw YAML.
- `argo-mcp` for teams using ArgoCD - it exposes sync, rollback, and app status.
- `k9s-mcp` wraps the k9s TUI as MCP tools for users who already live in k9s.
For read-only audit use cases, pointing the agent at a cluster-admin-read-only service account token (not your personal kubeconfig) avoids any risk of accidental writes. Create the token once, mount it into the agent's environment, and move on.
The Kubernetes MCP server is the right default for anyone already running kubectl daily. Three minutes of setup replaces a lot of terminal context switching. Start with a non-production context, let the agent learn your conventions, then point it at staging and eventually prod with appropriate RBAC.
Frequently asked questions
Can the agent switch between kubeconfig contexts?
No. The server reads the current context at spawn time and stays there. For multi-cluster workflows, register each cluster as a separate MCP server in your config - `kubernetes-prod`, `kubernetes-staging`, and so on.
Why is exec into pods blocked?
Exec requires a bidirectional stream that the MCP stdio transport cannot carry safely. Letting an agent run arbitrary commands inside production pods is also a blast-radius concern. For debugging, use `kubectl exec` directly from your terminal.
Does the server work with OpenShift?
Mostly yes. The underlying Kubernetes API is the same. OpenShift-specific resources like Routes and BuildConfigs are not exposed as dedicated tools, but you can apply them through `apply_yaml` and read them through `describe_custom_resource`.
How do I scope the agent to one namespace?
Create a service account with RBAC limited to that namespace, export its kubeconfig with `kubectl config set-credentials`, and point the MCP server at that kubeconfig via `KUBECONFIG`. The agent will fail cleanly on anything outside the namespace.
Can the server stream logs in real time?
Not over stdio MCP. Each `get_pod_logs` call returns a snapshot. For live tailing, run `kubectl logs -f` in a side terminal. The agent can still fetch recent logs on demand via repeated tool calls.
What about deleting resources by label selector?
Yes, the `delete_by_manifest` and `delete_pod` tools accept label selectors. Be careful - a typo in the selector can delete more than you wanted. The server requires an explicit `confirm: true` flag for any bulk delete.
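The confirmation requirement can be pictured with a small guard function. This is an illustrative sketch of the behavior described above, not the server's actual implementation:

```python
def guard_bulk_delete(arguments: dict) -> None:
    """Refuse selector-based (bulk) deletes unless confirm=true was passed."""
    is_bulk = "labelSelector" in arguments
    if is_bulk and not arguments.get("confirm"):
        raise ValueError("bulk delete requires confirm: true; refusing to proceed")

# A delete scoped by label selector without the flag is rejected;
# the same call with confirm=True goes through.
guard_bulk_delete({"labelSelector": "app=redis", "confirm": True})
```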