AWS S3 MCP Server
Query and control Amazon S3 from Claude Code or Cursor with one npx command and an IAM credential
Updated: April 15, 2026
Install
{
  "mcpServers": {
    "s3-mcp": {
      "command": "npx",
      "args": ["-y", "mcp-server-s3"],
      "env": {
        "AWS_ACCESS_KEY_ID": "AKIA...",
        "AWS_SECRET_ACCESS_KEY": "your_secret",
        "AWS_REGION": "us-east-1",
        "S3_BUCKET": "your-bucket-name"
      }
    }
  }
}
Capabilities
- List buckets and objects with prefix filters and paging
- Read file content directly as text or base64 for small binary files
- Upload files from local paths to a target bucket and key
- Copy objects between keys and delete objects by key
- Generate presigned URLs for temporary public access
- Check object metadata including size, content type, and last modified
Limitations
- Credentials live in env so rotate often and avoid committing the config
- The bucket must be specified at config time; multi-bucket setups need multiple servers
- Multipart upload is not exposed, so uploads above about 5 GB will fail
- Versioned objects need an explicit version ID on read and delete calls
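The 5 GB ceiling is worth guarding against up front: S3 rejects single PUT uploads above that size, and without multipart support the tool call simply fails. A minimal preflight check (the function name is illustrative, not part of the server):

```python
import os

# S3 rejects single PUT uploads above 5 GB; multipart would be required
# beyond that, and this server does not expose it.
SINGLE_PUT_LIMIT = 5 * 1024**3  # 5 GiB

def can_upload(path: str) -> bool:
    """Return True if the file fits in a single S3 PUT."""
    return os.path.getsize(path) <= SINGLE_PUT_LIMIT
```

Run this over a local path before asking the model to upload it, and fall back to a multipart-capable SDK script for anything larger.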
Amazon S3 MCP server setup for Claude Code and Cursor
Quick answer: The Amazon S3 MCP server wraps the Amazon S3 API as a set of tools Claude Code or Cursor can call over stdio. Install with one npx command, drop in your AWS credentials and bucket name, and the editor can run real operations against Amazon S3. Setup runs about 4 to 6 minutes end to end, tested on mcp-server-s3 on April 15, 2026.
Most teams that work with Amazon S3 end up copying data in and out of ChatGPT or Claude to ask questions about it. The MCP server removes that step. When you ask Claude for a summary or an action, it talks to Amazon S3 itself, reads the response, and writes the answer back without the context swap. For workflows that already live in Amazon S3, that feedback loop is the whole reason to wire this up.
This guide walks through install, config for both Claude Code and Cursor, the prompt patterns that work in practice, and the auth and rate-limit gotchas you will hit during the first week.
What this server does
The server speaks MCP over stdio and forwards every tool call to the Amazon S3 API, authenticating with your IAM credentials. It exposes roughly 8 to 12 tools, grouped into a few rough buckets:
- List buckets and objects with prefix filters and paging
- Read file content directly as text or base64 for small binary files
- Upload files from local paths to a target bucket and key
- Copy objects between keys and delete objects by key
- Generate presigned URLs for temporary public access
- Check object metadata including size, content type, and last modified
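The presigned-URL tool deserves a note on mechanics. The server almost certainly delegates to the AWS SDK, but a presigned GET is just AWS Signature Version 4 computed over the query string. The stdlib sketch below shows what such a URL is made of; bucket, key, and credentials are placeholder values, and real use should go through the SDK:

```python
import hashlib
import hmac
import urllib.parse
from datetime import datetime, timezone

def presign_get(bucket, key, access_key, secret_key, region, expires=3600):
    """Build a SigV4 query-presigned GET URL for an S3 object (sketch)."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.now(timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters, sorted by name, values URL-encoded.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    query = "&".join(
        f"{k}={urllib.parse.quote(v, safe='')}" for k, v in sorted(params.items())
    )

    canonical_request = "\n".join(
        ["GET", f"/{key}", query, f"host:{host}", "", "host", "UNSIGNED-PAYLOAD"]
    )
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()]
    )

    # Derive the signing key: an HMAC chain over date, region, and service.
    def _hmac(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    k = _hmac(("AWS4" + secret_key).encode(), datestamp)
    for part in (region, "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

The practical takeaway: anyone holding the URL can fetch the object until `X-Amz-Expires` runs out, with no further auth, so keep expiry windows short.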
Every tool call is signed with the IAM credentials (AWS Signature Version 4) before it is forwarded. The server holds the values in process memory for the life of the subprocess, does not log them, and does not write them to disk. If you rotate credentials, restart the server and the new values are picked up on the next spawn.
The server does not implement a local cache. Every call is a fresh round trip to Amazon S3. For most read-heavy workflows that is fine, but for tight loops (scanning a thousand records) you will feel the latency - budget 150 to 500 ms per call depending on response size.
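That per-call cost makes scan budgets easy to estimate, using the 150 to 500 ms range above:

```python
# Rough latency budget for an uncached scan: every object read is a
# fresh round trip, so total time scales linearly with call count.
def scan_seconds(num_calls: int, ms_per_call: float) -> float:
    return num_calls * ms_per_call / 1000

best = scan_seconds(1000, 150)   # 150 seconds, about 2.5 minutes
worst = scan_seconds(1000, 500)  # 500 seconds, over 8 minutes
```

A thousand-record scan is minutes of wall-clock time either way, which is why narrowing scope in the prompt matters so much.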
Installing Amazon S3 MCP
The package is on npm as mcp-server-s3. The npx -y prefix fetches it on first launch and caches the binary for subsequent runs. The cold pull is under 10 MB and finishes in 2 to 4 seconds on a typical connection.
Before touching any config, generate AWS credentials and pick a bucket name:
- Open https://console.aws.amazon.com/iam/home#/users and sign in to your AWS account.
- Create a new IAM user named something like claude-mcp-dev and attach the minimum required S3 permissions.
- Copy the access key ID and secret access key - the secret is shown only once.
- Store the values in shell env vars so they stay out of any file you commit.
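Before wiring up either editor, it is worth confirming those variables are actually exported in the environment the editor will inherit. A small preflight check, using the variable names from the config block (the helper itself is illustrative):

```python
import os

# Fail fast before the editor ever spawns the server: every variable the
# config references must be exported and non-empty.
REQUIRED = ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_REGION", "S3_BUCKET")

def missing_vars(env=None):
    """Return the names of required credential variables that are unset."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

Calling `missing_vars()` from a shell one-liner or CI step surfaces a blank variable in seconds, instead of as a cryptic auth failure on the first tool call.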
Configuring for Claude Code
Claude Code reads MCP servers from ~/.claude/mcp.json or a per-project .mcp.json file. Add an s3 entry that spawns the server with AWS_ACCESS_KEY_ID in its env:
{
  "mcpServers": {
    "s3": {
      "command": "npx",
      "args": ["-y", "mcp-server-s3"],
      "env": {
        "AWS_ACCESS_KEY_ID": "AKIA...",
        "AWS_SECRET_ACCESS_KEY": "your_secret",
        "AWS_REGION": "us-east-1",
        "S3_BUCKET": "your-bucket-name"
      }
    }
  }
}
Restart Claude Code, then run /mcp in a session to confirm the S3 server is attached. Call a read-only tool as a smoke test - if the response comes back with real data from your account, the IAM credential has the right scope.
For team projects, commit a placeholder version of .mcp.json with ${AWS_ACCESS_KEY_ID} inside the env value and let each developer provide the real value via their shell profile. Claude Code expands env vars when it spawns the subprocess.
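That ${VAR} expansion behaves roughly like shell substitution. The sketch below mimics it with the standard library so you can sanity-check a placeholder config before committing it (the fragment and variable value are illustrative, and Claude Code performs the real expansion itself at spawn time):

```python
import json
import string

# A committed placeholder fragment: the secret never appears in the file.
placeholder = '{"env": {"AWS_ACCESS_KEY_ID": "${AWS_ACCESS_KEY_ID}"}}'

def expand(text: str, env) -> dict:
    """Substitute ${VAR} placeholders from env, then parse the JSON."""
    return json.loads(string.Template(text).substitute(env))

# Each developer's shell provides the real value at spawn time.
config = expand(placeholder, {"AWS_ACCESS_KEY_ID": "AKIA-from-shell"})
```

If `substitute` raises `KeyError`, a developer's shell is missing an export, which is exactly the failure you want to catch before the editor does.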
Configuring for Cursor
Cursor uses the same MCP spec and reads from ~/.cursor/mcp.json. The config is identical to the Claude Code block above. Open Cursor settings, navigate to the MCP tab, and toggle the S3 server on. Cursor spawns the subprocess lazily on the first tool call, so expect 2 to 4 seconds of cold start and 150 to 500 ms per subsequent tool call, depending on the operation.
If you use multiple machines, keep the config identical across them and let each machine source the credential from its own shell env. That way you never commit a real token to git.
Example prompts and workflows
A few prompts that work reliably once the server is attached:
- "Show me the first 10 objects in the logs prefix."
- "Count the records modified in the last 7 days and group the total by category."
- "Find the most recent 5 items matching priority=high and print their key fields."
- "Create a new object at path logs/2026-04-15.json and confirm it was written."
- "List every resource visible to my credentials and summarize by owner."
The model will chain calls on its own. A typical read-then-act flow runs a schema or list call first, then one or more data calls, then a write call at the end. If the dataset is large, tell the model the exact scope up front (specific IDs, date range, single resource). Scoping down cuts round trips from 6 or 7 calls down to 2 or 3.
One caveat: the model sometimes generates overly broad queries. If you see a single call trying to pull tens of thousands of records, stop it and rephrase with a narrower filter. S3 list calls return at most 1,000 keys per page, so a broad scan turns into many sequential round trips.
Troubleshooting
Tool call returns 401 or auth error. The IAM credential is wrong, expired, or has been revoked. Regenerate the credential, update your shell env, and restart the MCP server (/mcp restart s3 in Claude Code).
Tool call returns 403 or permission denied. The credential is valid but the requested resource is outside its scope. Check the IAM policy attached to the user or role, add the missing permission, and restart.
Tool call returns 429 rate limit. Amazon S3 throttles heavy request bursts per prefix. The server does not queue on your behalf. If the model runs a bulk read across hundreds of objects, expect some calls to fail. Ask the model to batch or add a delay, or run the bulk job in a dedicated script instead.
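For the dedicated-script route, the standard pattern is exponential backoff around the throttled call. A minimal sketch, where `RateLimited` stands in for whatever throttling error your SDK raises:

```python
import time

class RateLimited(Exception):
    """Stand-in for the 429 / Slow Down error your SDK raises."""

def with_backoff(call, max_tries=5, base_delay=0.5):
    """Retry a throttled call, doubling the delay after each failure."""
    for attempt in range(max_tries):
        try:
            return call()
        except RateLimited:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2**attempt)
```

Wrapping each object read in `with_backoff` lets a bulk job ride out transient throttling instead of failing partway through.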
Server fails with ENOENT on npx. The npx binary is not on PATH in the env the editor inherits. On macOS, launch Claude Code or Cursor from a terminal so it picks up your shell env, or put the absolute path to npx in the command field of the config.
Tool call returns 422 or schema error. A field or parameter name in the request does not match what Amazon S3 expects. Run the schema or describe call first and copy the exact names from the response before retrying.
Alternatives
A few options if the Amazon S3 server does not fit your setup:
- Look for a dedicated server with a narrower surface if you only need one or two operations - smaller servers tend to have faster cold starts.
- Use the official Amazon S3 CLI or SDK directly from a script when the task is a one-off and the MCP overhead is not worth it.
- Check the awesome-mcp-servers repository on GitHub for community alternatives, especially if you need features this server does not expose (like admin APIs or bulk operations).
For one-off exports, the Amazon S3 API is straightforward to call from a script. The MCP server pays off when Claude needs to read and write in the same session without you switching to the S3 console. Good use cases include operational reports, ad-hoc data pulls, and cross-resource rollups from your editor.
The Amazon S3 MCP server is the right default for any workflow already rooted in Amazon S3. Four to six minutes of setup replaces hours of copy-paste between your editor and the S3 console. Start with a narrowly scoped credential that only reads a single bucket, then widen scopes once you trust the prompt patterns on your team.
Security notes
Rotate the IAM credential on a 90-day cadence as a baseline hygiene practice. If you ever suspect a credential leaked (committed to git, pasted in a screenshot), deactivate it in the IAM console immediately - the old value stops working within a few seconds and existing subprocesses will fail their next call.
Scope credentials to the smallest set of resources you actually need. A token with read-only access to a single resource is a much smaller blast radius than an account-wide admin token, even if the convenience tradeoff feels small during setup.
Performance and cost
Most tool calls complete in 150 to 500 ms on a warm connection. Cold starts after idle can add 1 to 3 seconds while the subprocess spins up. Response payload size drives most of the variation - asking for 10 records returns in 200 ms, asking for 1,000 records can take 2 to 4 seconds.
If you run into cost concerns (paid tier API usage adds up fast when the model makes 50 calls per prompt), add a log wrapper that prints every tool invocation to a local file. Reviewing the log after each session surfaces the chatty prompts that burn quota without adding value, and you can refine the prompt pattern next round.
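MCP traffic is JSON-RPC, and per the spec a tool invocation is a request with method "tools/call", so a log review can be automated. A minimal extractor over captured request lines (the helper name is illustrative):

```python
import json

def tool_calls(log_lines):
    """Extract (tool, arguments) pairs from JSON-RPC request lines.

    MCP tool invocations are JSON-RPC requests with method "tools/call";
    lines that do not parse or do not match are skipped.
    """
    calls = []
    for line in log_lines:
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue
        if msg.get("method") == "tools/call":
            params = msg.get("params", {})
            calls.append((params.get("name"), params.get("arguments", {})))
    return calls
```

Counting the pairs per session shows exactly which prompts fan out into dozens of calls, which is the data you need to tighten them.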
Frequently asked questions
Do I need a paid AWS tier to use this MCP server?
Any AWS account that can issue an IAM credential will work with the server. The free tier is fine for development and light use. Heavy workloads (thousands of calls per day) may run into request costs and rate limits, but there is no MCP-specific licensing.
How do I get the IAM credential to fill in the config?
Open https://console.aws.amazon.com/iam/home#/users, sign in, and create an access key for a dedicated IAM user. Attach the smallest policy that covers your intended use (read-only first, then add write if needed). Copy the key ID and secret and store them in shell env vars; do not commit them to git.
Can I use this server with both Claude Code and Cursor at the same time?
Yes. The MCP spec is editor-agnostic, so the same `mcp-server-s3` package runs under both Claude Code and Cursor. Each editor spawns its own subprocess from its own config file (`~/.claude/mcp.json` and `~/.cursor/mcp.json`). Rotating credentials in one place means updating both shell env vars.
What happens if I hit the Amazon S3 API rate limit?
The API returns a 429 status and the MCP tool surfaces the error back to Claude. The server does not auto-retry. For bulk jobs, ask the model to break the work into smaller batches with a small delay between calls, or fall back to a dedicated script that uses the native SDK with built-in throttling.
Are my IAM credentials logged or sent back to Anthropic?
The server holds the credential in process memory only. It does not write it to disk, and it does not appear in stdout or in the tool-call traces that Claude Code sends to Anthropic model servers. Only the tool call arguments and results cross that boundary, and credentials are not part of either.
What should I do if the s3-mcp server fails to start?
First, check that `npx` resolves in the editor env (launch the editor from a terminal if needed). Second, verify every env var (AWS_ACCESS_KEY_ID and any paired keys) is set and not empty. Third, run `npx -y mcp-server-s3` directly in a terminal and watch the output - any missing credential or unreachable host shows up in the first few seconds. Once the manual run works, the editor config will work too.