Build an MCP Server: Model Context Protocol Tutorial for Developers
A practical tutorial on building MCP servers. Covers the Model Context Protocol architecture, building servers in Python and TypeScript, defining resources and tools, and connecting to Claude Desktop.
The Model Context Protocol is the closest thing AI has to a USB-C port. Before MCP, every AI app that needed to talk to external tools had its own bespoke integration -- different formats, different auth, different everything. MCP standardizes all of that into one protocol.
Anthropic created it. OpenAI and Google have adopted it. If you're building anything that connects AI models to external data or tools, you should understand MCP. And the best way to understand it is to build a server.
What MCP Actually Is
MCP defines a standard way for AI models (the "client") to interact with external systems (the "server"). The architecture looks like this:
Host Application (e.g., Claude Desktop)
└── MCP Client (built into the host)
└── MCP Server (your code)
└── Your data, APIs, tools
An MCP server exposes three types of capabilities:
- Resources -- data the model can read (files, database records, API responses)
- Tools -- functions the model can call (create a ticket, send an email, run a query)
- Prompts -- reusable prompt templates with parameters
Let's be honest about why this matters: without MCP, if you wanted Claude to access your company's Jira board, you'd write a custom integration. If you also wanted it to access your Postgres database, that's another custom integration. MCP means you write one server per data source, and any MCP-compatible client can use it.
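Concretely, every MCP exchange is a JSON-RPC 2.0 message over a transport (stdio for local servers, HTTP for remote ones). Here's a sketch of what a client sends to invoke a tool; the method name comes from the MCP spec, while the tool name and arguments are illustrative:

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it. "tools/call" is
# the spec's method name; "query_database" and its arguments are
# hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT COUNT(*) FROM users"},
    },
}
print(json.dumps(request, indent=2))
```

Your server never parses this by hand; the SDK decodes the message and routes it to the handler you registered for that tool.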
Building an MCP Server in Python
We'll build a server that gives AI models access to a local SQLite database -- a common real-world use case.
pip install mcp
Here's the complete server:
# server.py
import sqlite3
import json

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Resource, Tool, TextContent

# Create the MCP server
app = Server("my-database-server")

# Database connection
DB_PATH = "app.db"

def get_db():
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    return conn
# --- Resources: data the model can read ---
@app.list_resources()
async def list_resources():
    """Tell the client what resources are available."""
    db = get_db()
    tables = db.execute(
        "SELECT name FROM sqlite_master WHERE type='table'"
    ).fetchall()
    db.close()
    return [
        Resource(
            uri=f"db://tables/{table['name']}",
            name=f"Table: {table['name']}",
            description=f"Schema and sample data from the {table['name']} table",
            mimeType="application/json",
        )
        for table in tables
    ]
@app.read_resource()
async def read_resource(uri: str):
    """Return the actual data for a resource."""
    table_name = str(uri).split("/")[-1]
    db = get_db()
    # The URI is client-supplied, so validate the table name against
    # sqlite_master before interpolating it into SQL
    known = {
        row["name"]
        for row in db.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
    }
    if table_name not in known:
        db.close()
        raise ValueError(f"Unknown table: {table_name}")
    # Get schema
    columns = db.execute(f"PRAGMA table_info({table_name})").fetchall()
    schema = [{"name": col["name"], "type": col["type"]} for col in columns]
    # Get sample rows
    rows = db.execute(f"SELECT * FROM {table_name} LIMIT 10").fetchall()
    sample = [dict(row) for row in rows]
    db.close()
    return json.dumps({"schema": schema, "sample_rows": sample}, indent=2)
# --- Tools: functions the model can call ---
@app.list_tools()
async def list_tools():
    """Tell the client what tools are available."""
    return [
        Tool(
            name="query_database",
            description="Run a read-only SQL query against the database. Only SELECT statements are allowed.",
            inputSchema={
                "type": "object",
                "properties": {
                    "sql": {
                        "type": "string",
                        "description": "SQL SELECT query to execute",
                    }
                },
                "required": ["sql"],
            },
        ),
        Tool(
            name="list_tables",
            description="List all tables in the database with their row counts.",
            inputSchema={
                "type": "object",
                "properties": {},
            },
        ),
    ]
@app.call_tool()
async def call_tool(name: str, arguments: dict):
    """Execute a tool and return the result."""
    if name == "list_tables":
        db = get_db()
        tables = db.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"
        ).fetchall()
        result = []
        for table in tables:
            count = db.execute(
                f"SELECT COUNT(*) as cnt FROM {table['name']}"
            ).fetchone()
            result.append({"table": table["name"], "rows": count["cnt"]})
        db.close()
        return [TextContent(type="text", text=json.dumps(result, indent=2))]
    elif name == "query_database":
        sql = arguments.get("sql", "")
        # Safety check: a simple prefix test that only allows SELECT. It
        # stops the obvious cases; for real deployments, also open the
        # database read-only and use a least-privilege database user.
        if not sql.strip().upper().startswith("SELECT"):
            return [TextContent(
                type="text",
                text="Error: Only SELECT queries are allowed.",
            )]
        try:
            db = get_db()
            rows = db.execute(sql).fetchall()
            result = [dict(row) for row in rows]
            db.close()
            return [TextContent(
                type="text",
                text=json.dumps(result, indent=2),
            )]
        except sqlite3.Error as e:
            return [TextContent(type="text", text=f"SQL Error: {e}")]
    return [TextContent(type="text", text=f"Unknown tool: {name}")]
# --- Run the server ---
async def main():
    async with stdio_server() as (read_stream, write_stream):
        # The low-level Server.run() also needs initialization options
        await app.run(
            read_stream, write_stream, app.create_initialization_options()
        )

if __name__ == "__main__":
    import asyncio
    asyncio.run(main())
That's a working MCP server. It exposes your database tables as resources and provides a safe read-only query tool.
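Before connecting a client, you need an `app.db` for the server to read. Here's a quick seeding sketch (the `users` table and its rows are made-up sample data), plus an optional hardening step: opening the database through SQLite's read-only URI mode, so even a statement that slips past the SELECT prefix check cannot write:

```python
import sqlite3

# Seed a small example database (table name and rows are illustrative).
seed = sqlite3.connect("app.db")
seed.executescript("""
    CREATE TABLE IF NOT EXISTS users (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        created_at TEXT DEFAULT (datetime('now'))
    );
    INSERT INTO users (name) VALUES ('ada'), ('grace');
""")
seed.commit()
seed.close()

# Hardening: a read-only connection refuses every write, no matter
# what SQL reaches it.
ro = sqlite3.connect("file:app.db?mode=ro", uri=True)
count = ro.execute("SELECT COUNT(*) FROM users").fetchone()[0]
blocked = False
try:
    ro.execute("DELETE FROM users")
except sqlite3.OperationalError:
    blocked = True  # "attempt to write a readonly database"
ro.close()
print(count, blocked)
```

If you adopt the read-only connection, `get_db()` in the server above can use the same URI form instead of a plain path.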
Building an MCP Server in TypeScript
If TypeScript is more your speed:
npm install @modelcontextprotocol/sdk
// server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-tools-server", version: "1.0.0" },
  { capabilities: { tools: {} } }
);
// Define available tools
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_github_issues",
      description:
        "Fetch open issues from a GitHub repository. Use owner/repo format.",
      inputSchema: {
        type: "object" as const,
        properties: {
          repo: {
            type: "string",
            description: "Repository in owner/repo format, e.g. 'facebook/react'",
          },
          limit: {
            type: "number",
            description: "Max issues to return (default 10)",
          },
        },
        required: ["repo"],
      },
    },
  ],
}));
// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_github_issues") {
    const { repo, limit = 10 } = request.params.arguments as {
      repo: string;
      limit?: number;
    };
    const response = await fetch(
      `https://api.github.com/repos/${repo}/issues?state=open&per_page=${limit}`,
      { headers: { "User-Agent": "mcp-server" } }
    );
    const issues = await response.json();
    const summary = issues.map((issue: any) => ({
      number: issue.number,
      title: issue.title,
      labels: issue.labels.map((l: any) => l.name),
      created: issue.created_at,
    }));
    return {
      content: [{ type: "text", text: JSON.stringify(summary, null, 2) }],
    };
  }
  return { content: [{ type: "text", text: "Unknown tool" }] };
});
// Start
const transport = new StdioServerTransport();
await server.connect(transport);
Connecting to Claude Desktop
The whole point of MCP is that clients discover and use your server automatically. To connect to Claude Desktop, edit its config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
{
  "mcpServers": {
    "my-database": {
      "command": "python",
      "args": ["C:/path/to/server.py"]
    },
    "my-github-tools": {
      "command": "node",
      "args": ["C:/path/to/server.js"]
    }
  }
}
Restart Claude Desktop. You'll see a hammer icon in the chat input showing your tools are connected. Now when you ask Claude "How many users signed up last week?" it can call your query_database tool to find out.
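For a question like that, the model writes the SQL itself and passes it to query_database. Assuming a users table with a created_at timestamp column (a guess; your schema will differ), the call might carry something like this, checked here against an in-memory stand-in for app.db:

```python
import sqlite3

# Hypothetical query the model might send through query_database,
# assuming a `users` table with a `created_at` timestamp column.
sql = """
SELECT COUNT(*) AS signups
FROM users
WHERE created_at >= datetime('now', '-7 days')
"""

# Quick local check with an in-memory database and two sample rows.
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, created_at TEXT)")
db.execute("INSERT INTO users (created_at) VALUES (datetime('now', '-2 days'))")
db.execute("INSERT INTO users (created_at) VALUES (datetime('now', '-30 days'))")
row = db.execute(sql).fetchone()
print(row["signups"])  # counts only the signup within the last 7 days
```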
Adding Prompts
Prompts are reusable templates that help the model use your server effectively:
from mcp.types import (
    GetPromptResult,
    Prompt,
    PromptArgument,
    PromptMessage,
    TextContent,
)

@app.list_prompts()
async def list_prompts():
    return [
        Prompt(
            name="analyze_table",
            description="Analyze a database table and provide insights",
            arguments=[
                PromptArgument(
                    name="table_name",
                    description="Name of the table to analyze",
                    required=True,
                ),
            ],
        ),
    ]

@app.get_prompt()
async def get_prompt(name: str, arguments: dict):
    if name == "analyze_table":
        table = arguments["table_name"]
        return GetPromptResult(
            messages=[
                PromptMessage(
                    role="user",
                    content=TextContent(
                        type="text",
                        text=f"Analyze the '{table}' table. First list the tables to understand the schema, then run queries to find: total row count, key distributions, any anomalies or interesting patterns. Summarize your findings.",
                    ),
                )
            ]
        )
    raise ValueError(f"Unknown prompt: {name}")
Testing Without Claude Desktop
You don't need Claude Desktop to test. Use the MCP Inspector:
npx @modelcontextprotocol/inspector python server.py
This opens a web UI where you can browse resources, call tools, and debug your server interactively.
MCP Server vs Regular API
A question that comes up: when should you build an MCP server instead of a regular REST API?
Build an MCP server when:
- You want AI models to discover and use your tools automatically
- You're integrating with MCP clients (Claude Desktop, IDEs with MCP support)
- You want the model to read your data contextually (resources)
- You want a standard protocol instead of a bespoke integration

Build a regular API when:
- Your consumers are other applications, not AI models
- You need complex authentication flows
- You need real-time streaming (WebSockets)
- MCP's request-response pattern doesn't fit your use case
Common Gotchas
Stdio vs HTTP transport. Local servers use stdio (stdin/stdout). Remote servers use HTTP (the original spec used Server-Sent Events; newer revisions use Streamable HTTP). Don't mix them up -- Claude Desktop expects stdio for local servers.

Tool descriptions are prompts. The model reads your tool descriptions to decide when to call them. "Query the database" is too vague. "Run a read-only SQL SELECT query to find specific records, aggregate data, or check statistics" tells the model exactly when this tool is useful.

Security matters. If your MCP server can access a database, it can access everything in that database. Limit permissions. Use read-only database users. Validate inputs. Don't give an MCP server more access than you'd give an intern.

Error messages should be helpful. When a tool fails, the error message goes back to the model. "Error" tells it nothing. "SQL Error: no such table 'usrs' -- did you mean 'users'?" lets the model self-correct.

MCP is still early, but the trajectory is clear -- it's becoming the standard way AI models interact with external systems. Building an MCP server today puts you ahead of the curve, and the skills transfer directly to whatever comes next in AI tooling.
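One parting snippet: the "did you mean" error message described in the gotchas takes only a few lines with Python's standard-library difflib (the table names here are sample data):

```python
import difflib

def helpful_table_error(bad_name: str, known_tables: list[str]) -> str:
    # Suggest the closest real table name so the model can retry correctly.
    matches = difflib.get_close_matches(bad_name, known_tables, n=1)
    hint = f" -- did you mean '{matches[0]}'?" if matches else ""
    return f"SQL Error: no such table '{bad_name}'{hint}"

print(helpful_table_error("usrs", ["users", "orders", "sessions"]))
# SQL Error: no such table 'usrs' -- did you mean 'users'?
```

In the query_database handler, you'd call this from the except branch whenever SQLite reports a missing table.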
For more practical AI development tutorials, check out CodeUp.