
OpenAPI MCP Server

Turn any OpenAPI spec into MCP tools so Claude Code or Cursor can call the API with typed parameters.

Updated: April 15, 2026

Install

npx mcp-openapi-server
~/.claude/mcp.json
{
  "mcpServers": {
    "openapi-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-openapi-server",
        "--spec",
        "https://api.example.com/openapi.json"
      ]
    }
  }
}

Capabilities

  • Make API calls defined in any OpenAPI v3 spec, with one tool per endpoint
  • List available endpoints and group them by tag or path
  • Describe request and response schemas, including nested objects
  • Generate example requests from the spec's examples or schema defaults
  • Validate parameters against the schema before sending the request

Limitations

  • Spec URL must be accessible at startup; private specs need a local file path instead
  • No automatic OAuth2 or token-refresh flow; auth headers are static, and authorization-code flows are not supported
  • File upload endpoints with multipart/form-data are limited to small payloads

OpenAPI MCP server setup for Claude Code and Cursor

Quick answer: The OpenAPI MCP server turns an OpenAPI v3 spec into MCP tools for Claude Code and Cursor. Drop in the env vars, run one npx command, and the editor can call the API directly. Setup takes about 2 minutes, tested with mcp-openapi-server@1.2.0 on April 15, 2026.

OpenAPI (formerly Swagger) is the machine-readable contract for HTTP APIs. A single spec file describes every endpoint, parameter, and response shape. Without an MCP connection, working with OpenAPI means flipping between the editor and the web UI - copying IDs, pasting results, losing context. The MCP server removes that loop. Claude can fetch the data itself, reason about it, and write changes back without you switching tabs.

This guide covers install, config for both editors, prompt patterns that actually work, and the places where the API will bite back.

What this server does

The server speaks MCP over stdio and builds its tool surface directly from the spec. The tools fall into these sets:

  • Introspection: list_endpoints, describe_endpoint, get_schema
  • Invocation: one tool per endpoint, named after the operationId in the spec
  • Auth: static headers injected from env vars

Authentication uses static headers injected from env vars at startup, so no credential ever sits in the config file itself. The server holds credentials in process memory for the life of the subprocess and writes nothing to disk. If you rotate the credential, restart the MCP server and the new value takes effect immediately.

The server does not implement a local cache. Every tool call is a fresh round trip. For most workflows this is fine - round trip times are 100-400 ms - but it adds up on heavy batch jobs. For those, prefer the native SDK in a script.
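The round-trip cost compounds quickly on sequential calls. A rough estimate using the latency range above (numbers are illustrative, taken at the midpoint):

```python
# Rough cost of sequential tool calls with no cache, using the 100-400 ms
# round-trip range quoted above (250 ms midpoint).
calls = 500
rtt_ms = 250
total_s = calls * rtt_ms / 1000
print(f"{calls} sequential calls ~= {total_s:.0f} s of network wait")  # 125 s
```

Anything north of a few hundred calls is better served by a script hitting the API directly.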

Installing the server

The package ships on npm as mcp-openapi-server. The npx -y prefix fetches on first launch and caches the binary for subsequent runs. Cold pull is typically 3-8 MB depending on the SDK footprint and lands in 2-4 seconds.

Before touching editor config, get your credentials ready:

  1. Have the target API's OpenAPI spec URL or local file path ready
  2. Generate any required API key or bearer token in the target service
  3. Pass the token through an env var referenced by the spec's security scheme (e.g. API_KEY or BEARER_TOKEN)
  4. Pass the spec location via --spec in the args array

Keep the credential values out of any file you commit. The rest of this guide assumes they live in your shell profile or a .envrc managed by direnv.
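A fail-fast check helps catch a missing credential before the editor spawns the subprocess. This is a sketch; BEARER_TOKEN is a hypothetical name, so substitute whatever env var your spec's security scheme references:

```python
# Verify the credential env var is set before launching the editor.
# BEARER_TOKEN is a placeholder name - match your spec's securityScheme.
import os
import sys

def check_env(name: str = "BEARER_TOKEN") -> bool:
    if not os.environ.get(name):
        print(f"missing {name}; export it in your shell profile", file=sys.stderr)
        return False
    return True
```

Run it in your shell profile or a pre-launch script so a missing token fails loudly instead of surfacing later as a 401.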

Configuring for Claude Code

Claude Code reads MCP servers from ~/.claude/mcp.json or a per-project .mcp.json. Add an openapi entry:

{
  "mcpServers": {
    "openapi": {
      "command": "npx",
      "args": ["-y", "mcp-openapi-server", "--spec", "https://api.example.com/openapi.json"],
      "env": {}
    }
  }
}

Restart Claude Code, then run /mcp in a session to confirm OpenAPI is attached. Call a read-only tool as a smoke test before any write operations. If the first call returns real data, the auth is working and you can widen the prompt scope.

For team projects, commit a placeholder version of .mcp.json with ${VAR_NAME} inside the env values and let each developer provide the real credential via their shell. Claude Code expands env vars when it spawns the subprocess.
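Following that pattern, a committed .mcp.json might look like this - BEARER_TOKEN is a hypothetical variable name, so match it to your spec's security scheme:

```json
{
  "mcpServers": {
    "openapi": {
      "command": "npx",
      "args": ["-y", "mcp-openapi-server", "--spec", "https://api.example.com/openapi.json"],
      "env": {
        "BEARER_TOKEN": "${BEARER_TOKEN}"
      }
    }
  }
}
```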

Configuring for Cursor

Cursor uses the same MCP spec and reads from ~/.cursor/mcp.json. The config is identical to Claude Code:

{
  "mcpServers": {
    "openapi": {
      "command": "npx",
      "args": ["-y", "mcp-openapi-server", "--spec", "https://api.example.com/openapi.json"],
      "env": {}
    }
  }
}

Open Cursor settings, navigate to the MCP tab, and toggle the server on. Cursor spawns the subprocess lazily on the first tool call. Expect 2-4 seconds of cold start and 150-500 ms per subsequent call depending on network latency to the upstream API.

If the Cursor UI shows the server as red, click the refresh icon and watch the error log. Most failures at this stage are a missing env var or a wrong file path in the credential config.

Example prompts and workflows

A few prompts that work reliably once the server is attached:

  • "List every endpoint grouped by tag."
  • "Call the GET /users/{id} endpoint with id 42 and show the full JSON response."
  • "Show me the request schema for POST /orders and highlight required fields."
  • "Make a POST /webhooks request with url https://example.com/hook and event order.created."
  • "Which endpoints in this spec accept query parameter filter?"

The model will chain calls. A typical discovery flow runs list_endpoints first, then describe_endpoint on the ones that match the task, then makes the actual call. For specs with 500+ endpoints, tell Claude the path prefix or tag to filter on; otherwise list calls flood the context with endpoints the task never touches.

One pattern that saves calls: narrow the scope up front. Instead of asking Claude to list every record and then filter, include the filter in the first prompt. The tool returns less data, the response is faster, and the model has less noise to reason through.

Troubleshooting

Server refuses to start with Cannot fetch spec. The spec URL requires auth or lives behind a VPN. Download it locally and pass the file path instead: --spec ./openapi.json.

Tool calls return 401 or 403. The auth env var is missing or wrong. Check the spec's securitySchemes section and confirm the env var name matches, then restart the server.

One endpoint fails with schema validation error. The request payload does not match the spec. Run describe_endpoint first and compare the payload field by field - type coercion (string vs int) is a common culprit.
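The string-vs-int mismatch is mechanical enough to fix in a preprocessing step. A sketch, assuming a simplified {field: type} map distilled by hand from describe_endpoint output:

```python
# Coerce obvious string/int mismatches before sending, given a simplified
# schema of {field_name: python_type}. Hypothetical helper, not a server API.
def coerce(payload: dict, schema: dict) -> dict:
    out = {}
    for key, expected in schema.items():
        value = payload.get(key)
        if isinstance(value, str) and expected is int:
            out[key] = int(value)  # "42" -> 42, the classic validation failure
        else:
            out[key] = value
    return out

print(coerce({"quantity": "3", "sku": "A-1"}, {"quantity": int, "sku": str}))
# -> {'quantity': 3, 'sku': 'A-1'}
```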

Multipart file upload returns 400. File uploads over 1 MB often fail through the MCP transport. Use the direct HTTP client or a dedicated upload tool for binary-heavy workloads.

Endpoints with recursive $ref crash parsing. Some versions of the parser choke on recursive schemas. Update to the latest release or preprocess the spec with swagger-cli bundle to inline refs.
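If you want to know up front whether a spec will trip this bug, a scan for directly self-referencing schemas is cheap. A sketch - it only catches direct recursion, not mutual cycles:

```python
# Walk component schemas and flag any that $ref themselves directly.
# Hypothetical helper; the real parser may handle more cases.
def find_recursive_schemas(spec: dict) -> list[str]:
    hits = []
    schemas = spec.get("components", {}).get("schemas", {})
    for name, schema in schemas.items():
        target = f"#/components/schemas/{name}"
        stack = [schema]
        while stack:
            node = stack.pop()
            if isinstance(node, dict):
                if node.get("$ref") == target:
                    hits.append(name)
                    break
                stack.extend(node.values())
            elif isinstance(node, list):
                stack.extend(node)
    return hits

spec = {"components": {"schemas": {
    "Node": {"type": "object",
             "properties": {"child": {"$ref": "#/components/schemas/Node"}}}}}}
print(find_recursive_schemas(spec))  # -> ['Node']
```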

Server fails with ENOENT. npx is not on PATH in the env the editor inherits. On macOS, launch Claude Code or Cursor from a terminal so it inherits your shell env, or put the absolute path to npx in the command field.

Subprocess keeps restarting. The MCP transport is strict about newlines on stdio. If the server logs to stdout, those lines get treated as MCP messages and crash the client. Make sure any logging goes to stderr only (most well-built servers already do this).
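If you wrap the server in your own launcher script, the same rule applies to any logging you add. A minimal sketch that routes diagnostics to stderr (logger name is mine, not part of the server):

```python
# stdout is reserved for MCP protocol frames; send all diagnostics to stderr.
import logging
import sys

handler = logging.StreamHandler(sys.stderr)  # never sys.stdout
log = logging.getLogger("openapi-mcp-wrapper")  # hypothetical logger name
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("wrapper started")  # lands on stderr; the MCP client never sees it
```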

Alternatives

A few options if the OpenAPI server does not fit your setup:

  • graphql-mcp handles GraphQL endpoints with a single tool per query or mutation.
  • curl-mcp is the escape hatch for APIs without any spec; it exposes raw HTTP calls with a template system.
  • postman-collection-mcp works with Postman collections instead of OpenAPI, which some internal APIs prefer.

The MCP server pays off whenever you need Claude to script against a third-party API that already publishes an OpenAPI spec - payment gateways, internal services, government data APIs, and more.

Performance notes and hardening

Steady-state call latency lands in the 150-500 ms range for most tools. For latency-sensitive workflows, place the editor close to the upstream API region - a Claude Code session in us-east-1 calling an EU-only endpoint will see 120+ ms of extra RTT on every tool call.

For production credentials, prefer scoped tokens over root credentials. Most services expose fine-grained permission models; use them. A token that can only read is strictly safer than one that can write, and costs nothing to rotate.

Log review is easier if you redirect MCP subprocess stderr to a file. Most editors do this by default, but not all surface the log path. On macOS, check ~/Library/Logs/Claude/ or the Cursor equivalent.

The OpenAPI MCP server is the right default for any workflow that already touches OpenAPI regularly. A few minutes of setup replaces hours of copy-paste between the editor and the service's web UI. Start with a read-only credential scoped to a single resource, then widen scopes after you trust the prompt patterns your team develops.


Frequently asked questions

Does it support OpenAPI 2.0 (Swagger) specs?

Only v3. Convert a v2 spec to v3 with `swagger-cli` or the online Swagger Editor first. The v3 schema is richer and the server depends on its stricter typing.

How does auth work for APIs with OAuth2?

Only client-credentials grant with a pre-fetched access token is supported. The server does not run the full OAuth2 dance. For authorization-code flows, fetch the token separately (for example with a short Node script) and pass it via env.
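The pre-fetch can be a few lines of Python. This sketch only builds the form body - the token URL is a placeholder, and you still POST the body with urllib.request and export the returned access_token yourself:

```python
# Build a client-credentials token request body. TOKEN_URL is a placeholder;
# POST this body there, read access_token from the JSON response, then
# export it as the env var your spec's security scheme names.
import urllib.parse

TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder

def token_request_body(client_id: str, client_secret: str) -> bytes:
    return urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode()
```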

Can I point it at multiple APIs at once?

Run multiple MCP entries, one per spec. Each entry needs a unique name (`github`, `stripe`, `internal`). Claude will see them as separate tool namespaces and route calls correctly.

Does the server cache responses?

No. Every tool call hits the upstream API. For idempotent reads that you repeat often, put a cache in front of the API (a small Cloudflare Worker, for example) or cache the response in your prompt directly.

How big a spec can it load?

Specs up to 5 MB parse in under a second. Larger specs (10 MB+) take 5-10 seconds to load and crowd the MCP tool list with entries you will never call. For huge specs, trim unused paths with a spec filter tool first.
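A spec filter does not need to be fancy. A sketch that keeps only paths under a prefix before you hand the file to --spec (helper name and shape are mine, not part of the server):

```python
# Trim an OpenAPI spec to the path prefix you actually use, shrinking both
# parse time and the MCP tool list. Shallow copy; only "paths" is filtered,
# so shared components stay intact.
def filter_paths(spec: dict, prefix: str) -> dict:
    slim = dict(spec)
    slim["paths"] = {path: ops for path, ops in spec.get("paths", {}).items()
                     if path.startswith(prefix)}
    return slim

spec = {"openapi": "3.0.0",
        "paths": {"/users/{id}": {}, "/orders": {}, "/admin/audit": {}}}
print(list(filter_paths(spec, "/users")["paths"]))  # -> ['/users/{id}']
```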

Does it respect custom x-headers in the spec?

Yes. Any `x-` extension headers declared in the spec are forwarded as-is. Required headers must be present either in the env or in the spec defaults, otherwise the call fails before it reaches the API.