Model Context Protocol (MCP) Complete Guide 2026
Model Context Protocol (MCP) is Anthropic's open standard for connecting AI assistants to external tools and data sources. If you've heard of it but aren't quite sure what it does or why it matters, this guide covers everything.
What Is MCP?
MCP solves a coordination problem. Before MCP, every AI application had to build its own integrations: custom code to connect Claude to your database, custom code to connect it to GitHub, custom code to connect it to Slack. Each integration was one-off and non-reusable.
MCP defines a standard protocol — like HTTP for web servers, but for AI tool use. An MCP server exposes capabilities (tools, resources, prompts) through a standard interface. Any MCP-compatible client (Claude Desktop, Claude Code, Cursor, etc.) can use any MCP server without custom integration code.
The ecosystem effect: build one MCP server for your database, and it works in every MCP-compatible AI tool your team uses.
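On the wire, client and server speak JSON-RPC 2.0. A `tools/list` exchange looks roughly like this (the `search_database` tool and its schema are illustrative, and real sessions begin with an initialization handshake):

```json
// client → server
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

// server → client
{"jsonrpc": "2.0", "id": 1, "result": {"tools": [{
  "name": "search_database",
  "description": "Run a read-only SQL query",
  "inputSchema": {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"]
  }
}]}}
```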
Architecture: Three Primitive Types
MCP servers expose three types of capabilities:
1. Tools
Functions the AI can call. Examples: search_database, send_email, run_query, create_issue. Tools are action-oriented — the AI calls them to do things.
2. Resources
Data the AI can read. Examples: a file, a database row, a git commit, an API response. Resources are data-oriented — the AI reads them for context.
3. Prompts
Reusable prompt templates the AI can invoke. Examples: code_review_prompt, bug_report_template. Prompts are workflow-oriented — they encode standard patterns.
Transport Layer
MCP supports two transports:
- stdio: Server runs as a subprocess, communication over stdin/stdout. Used for local tools.
- HTTP (Streamable HTTP / SSE): Server runs as an HTTP endpoint. Used for remote/hosted tools. The 2025 revision of the spec replaced the original HTTP+SSE transport with Streamable HTTP, though you'll still see both labels in the wild.
Most developer-facing MCP servers use stdio. Enterprise deployments increasingly use the HTTP transport.
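With the stdio transport, each JSON-RPC message is sent as a single line of JSON over the pipe (newline-delimited, unlike LSP's Content-Length framing). A toy round-trip, using an in-memory buffer in place of a real subprocess pipe:

```python
import io
import json

# Toy model of newline-delimited JSON-RPC framing over stdio.
# Real clients and servers use the SDK's transport classes instead.
def write_message(stream, msg: dict) -> None:
    stream.write(json.dumps(msg) + "\n")

def read_message(stream) -> dict:
    return json.loads(stream.readline())

pipe = io.StringIO()
write_message(pipe, {"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
pipe.seek(0)
assert read_message(pipe)["method"] == "tools/list"
```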
Top 10 MCP Servers in 2026
| Server | What It Does | Transport |
| --- | --- | --- |
| @modelcontextprotocol/server-filesystem | Read/write local files | stdio |
| @modelcontextprotocol/server-github | GitHub API (issues, PRs, code) | stdio |
| @modelcontextprotocol/server-postgres | Query PostgreSQL databases | stdio |
| @modelcontextprotocol/server-brave-search | Web search via Brave | stdio |
| @modelcontextprotocol/server-puppeteer | Browser automation | stdio |
| mcp-server-sqlite | Read/write SQLite files | stdio |
| @upstash/mcp-server | Redis/vector store via Upstash | stdio |
| mcp-server-linear | Linear issue tracking | stdio |
| mcp-server-slack | Slack messages and channels | stdio |
| @modelcontextprotocol/server-fetch | HTTP fetch for any URL | stdio |
Installing MCP Servers in Claude Desktop
Step 1: Find your config file
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Step 2: Edit the config
{
"mcpServers": {
"filesystem": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-filesystem",
"/Users/yourname/projects"
]
},
"github": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-github"],
"env": {
"GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_your_token_here"
}
},
"postgres": {
"command": "npx",
"args": [
"-y",
"@modelcontextprotocol/server-postgres",
"postgresql://localhost/mydb"
]
}
}
}
Step 3: Restart Claude Desktop
MCP servers initialize at startup. After editing the config, fully quit and relaunch.
Installing MCP in Claude Code
# Add a server
claude mcp add filesystem -- npx -y @modelcontextprotocol/server-filesystem /path/to/dir
# Add with environment variables
claude mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=ghp_xxx \
  -- npx -y @modelcontextprotocol/server-github
# List installed servers
claude mcp list
# Remove a server
claude mcp remove filesystem
Claude Code stores user-scoped MCP config in ~/.claude.json; project-scoped servers live in a .mcp.json at the project root.
Building Your Own MCP Server
Here's a complete MCP server in TypeScript that exposes a company knowledge base:
// knowledge-base-mcp/src/index.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
const server = new Server(
{ name: "knowledge-base", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
tools: [
{
name: "search_kb",
description: "Search the company knowledge base for documentation and FAQs",
inputSchema: {
type: "object",
properties: {
query: { type: "string", description: "Search query" },
category: {
type: "string",
enum: ["engineering", "product", "hr", "legal"],
description: "Optional category filter"
}
},
required: ["query"]
}
},
{
name: "get_article",
description: "Get the full text of a specific knowledge base article",
inputSchema: {
type: "object",
properties: {
article_id: { type: "string" }
},
required: ["article_id"]
}
}
]
}));
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name } = request.params;
  const args = (request.params.arguments ?? {}) as Record<string, unknown>;
if (name === "search_kb") {
const results = await searchKnowledgeBase(
args.query as string,
args.category as string | undefined
);
return {
content: [{
type: "text",
text: JSON.stringify(results, null, 2)
}]
};
}
if (name === "get_article") {
const article = await getArticle(args.article_id as string);
return {
content: [{
type: "text",
text: article ? article.content : "Article not found"
}]
};
}
throw new Error(`Unknown tool: ${name}`);
});
// Mock implementations — replace with your actual KB logic
async function searchKnowledgeBase(query: string, category?: string) {
// In production: vector search, full-text search, etc.
return [
{ id: "kb-001", title: "Engineering Onboarding Guide", snippet: "..." },
{ id: "kb-042", title: "Deployment Process", snippet: "..." },
];
}
async function getArticle(id: string) {
// In production: fetch from your KB database
return { id, content: "Full article content here..." };
}
// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
// package.json
{
"name": "knowledge-base-mcp",
"type": "module",
"scripts": { "start": "node dist/index.js" },
"dependencies": { "@modelcontextprotocol/sdk": "^1.0.0" },
"devDependencies": { "typescript": "^5.0.0" }
}
npm install && npm run build
Add to Claude Desktop config:
{
"mcpServers": {
"knowledge-base": {
"command": "node",
"args": ["/path/to/knowledge-base-mcp/dist/index.js"]
}
}
}
Building an MCP Server in Python
# server.py
from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp import types
import asyncio
app = Server("my-tools")
@app.list_tools()
async def list_tools() -> list[types.Tool]:
return [
types.Tool(
name="get_weather",
description="Get current weather for a city",
inputSchema={
"type": "object",
"properties": {
"city": {"type": "string", "description": "City name"}
},
"required": ["city"]
}
)
]
@app.call_tool()
async def call_tool(name: str, arguments: dict) -> list[types.TextContent]:
if name == "get_weather":
city = arguments["city"]
# In production: call a weather API
weather = f"Weather in {city}: 72°F, partly cloudy"
return [types.TextContent(type="text", text=weather)]
raise ValueError(f"Unknown tool: {name}")
async def main():
async with stdio_server() as streams:
await app.run(streams[0], streams[1], app.create_initialization_options())
if __name__ == "__main__":
asyncio.run(main())
# Install
pip install mcp
# Run
python server.py
MCP vs Direct Tool Use
You might wonder: why use MCP when Claude already supports tool use in the API?
| Aspect | Direct Tool Use (API) | MCP |
| --- | --- | --- |
| Where defined | Your application code | Separate server process |
| Reusability | Application-specific | Reusable across apps |
| Who invokes | Your code | Claude / MCP client |
| Ecosystem | None | Growing ecosystem |
| Latency | Lower (in-process) | Slightly higher (IPC) |
- For custom applications: direct tool use in the API is simpler.
- For developer tooling / Claude Desktop: MCP is the right pattern.
- For shared internal tools: MCP enables one server → many AI clients.
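For comparison with the MCP tool definitions above, here is a sketch of what the same tool looks like defined directly for the Anthropic Messages API. The `search_kb` tool is illustrative; note that the API uses snake_case `input_schema`, where MCP uses camelCase `inputSchema`:

```python
# Direct tool definition in the Messages API shape (a sketch; field names
# follow the public API docs, the tool itself is hypothetical).
search_tool = {
    "name": "search_kb",
    "description": "Search the company knowledge base",
    "input_schema": {  # snake_case here; MCP servers declare "inputSchema"
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
        },
        "required": ["query"],
    },
}

# With direct tool use, your application owns the loop: pass tools=[search_tool]
# to client.messages.create(...) and execute whatever tool_use blocks come back.
```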
Security Considerations
MCP servers run with the permissions of the process that launches them. Keep these rules in mind:
- Principle of least privilege: Give MCP servers only the permissions they need. The filesystem server should only access directories you specify.
- Don't expose secrets in config: Use environment variables for API keys, not hardcoded values.
- Input validation: Validate all inputs in your MCP server. The AI can hallucinate parameter values.
- Review what you're installing: Public MCP servers can be malicious. Only install servers you trust.
- Audit tool calls: Log every tool call in production to detect anomalous behavior.
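The input-validation rule can be sketched as a guard that runs before your tool handler executes anything. This is a minimal hand-rolled check against the declared schema, not a substitute for a real JSON Schema validator:

```python
# Minimal argument check for an MCP tool handler (illustrative sketch;
# covers only required keys, string types, and enum membership).
def validate_args(schema: dict, args: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    props = schema.get("properties", {})
    for key in schema.get("required", []):
        if key not in args:
            errors.append(f"missing required argument: {key}")
    for key, value in args.items():
        spec = props.get(key)
        if spec is None:
            errors.append(f"unexpected argument: {key}")
            continue
        if spec.get("type") == "string" and not isinstance(value, str):
            errors.append(f"{key} must be a string")
        if "enum" in spec and value not in spec["enum"]:
            errors.append(f"{key} must be one of {spec['enum']}")
    return errors

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "category": {"type": "string", "enum": ["engineering", "product"]},
    },
    "required": ["query"],
}

assert validate_args(schema, {"query": "deploy"}) == []
assert validate_args(schema, {"category": "sales"}) != []  # missing query, bad enum
```

Rejecting bad arguments up front matters here because the caller is a model, not your own code: hallucinated parameter values should produce a clean error, never a query against your database.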
The Ecosystem Is Growing Fast
As of early 2026, there are over 2,000 public MCP servers available, covering databases, SaaS tools, developer tools, and APIs. The MCP server registry is the best place to find them.
MCP has also been adopted by Cursor, Cline, and other AI coding tools — making it the de facto standard for AI tool integration. If you're building tools for AI assistants, building them as MCP servers is the right long-term bet.