MCP Server Patterns#

Model Context Protocol (MCP) is Anthropic’s open standard for connecting AI agents to external tools and data. Instead of every agent framework inventing its own tool integration format, MCP provides a single protocol that any agent can speak.

An agent that supports MCP can discover tools at runtime, understand their inputs and outputs, and invoke them – without hardcoded integration code for each tool.

Server Structure: Three Primitives#

An MCP server exposes three types of capabilities:

  • Tools: Functions the agent can call. These are the most common primitive. A tool has a name, description, input schema, and a handler that executes the operation and returns a result.
  • Resources: Data the agent can read. Think of these as read-only endpoints – file contents, database records, configuration values. Resources have URIs.
  • Prompts: Reusable prompt templates. These let the server suggest how to use its tools effectively. Less commonly used but useful for complex workflows.

Most MCP servers are tool-focused. Resources and prompts are optional.
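
Registration follows the same shape for each primitive. As a quick illustration, here is a minimal sketch of a read-only resource, assuming an McpServer instance named server from the official TypeScript SDK; the config://app URI and its contents are invented for the example. Tools are covered in detail in the next section.

// Register a read-only resource the agent can fetch by URI.
server.resource(
  "config",
  "config://app",                     // fixed URI exposed to the agent
  async (uri) => ({
    contents: [{ uri: uri.href, text: "theme=dark\nlog_level=info" }]
  })
);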

Tool Definition Pattern#

Every tool needs four things: a name, a human-readable description (this is what the agent reads to decide whether to use the tool), a JSON Schema for inputs, and a handler function.

import { z } from "zod";
import fs from "node:fs/promises";

// `server` is an McpServer instance from the MCP TypeScript SDK
// (see the stdio example below for how it is created and connected).
server.tool(
  "read_file",
  "Read the contents of a file at the given path. Returns the file content as text.",
  {
    path: z.string().describe("Absolute path to the file"),
    encoding: z.enum(["utf-8", "base64"]).default("utf-8")
      .describe("How to encode the file content")
  },
  async ({ path, encoding }) => {
    const content = await fs.readFile(path, encoding);
    return {
      content: [{ type: "text", text: content }]
    };
  }
);

The description matters more than you think. Agents choose tools based on descriptions, not names. A vague description like “reads files” leads to misuse. Be specific about what the tool does, what it returns, and any constraints.

Transport Types#

MCP supports three transport mechanisms. The choice depends on where your server runs.

stdio: Local Subprocess#

The agent spawns your MCP server as a child process and communicates over stdin/stdout using JSON-RPC 2.0 messages. This is the simplest transport and the most common for local tools.

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/home/user/projects"]
    }
  }
}

The agent starts the process, sends JSON-RPC messages to stdin, and reads responses from stdout. No networking, no ports, no authentication. The server lives and dies with the agent session.

Use stdio when: the tool runs locally, the agent can spawn processes, and you want zero configuration.
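
On the server side, wiring tools to stdio takes only a few lines. A minimal sketch, assuming the official TypeScript SDK (@modelcontextprotocol/sdk) and its documented import paths:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "filesystem-tools", version: "1.0.0" });

// ... register tools here, e.g. the read_file tool shown earlier ...

// Communicate over stdin/stdout; the agent owns the process lifecycle.
const transport = new StdioServerTransport();
await server.connect(transport);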

HTTP+SSE: Remote, Stateful#

The server runs as an HTTP service. The client sends requests via HTTP POST and connects to a Server-Sent Events endpoint for server-initiated messages. This transport supports multiple concurrent clients and persistent state.

Client --POST /message--> Server
Client <--SSE /events--- Server

Use HTTP+SSE when: the server runs remotely, multiple agents share the same server, or the server needs to push updates to the client.

Streamable HTTP: Newer, Simpler#

A more recent transport that uses a single HTTP endpoint for both requests and responses. The server can optionally upgrade a response to an SSE stream for long-running operations, which makes it simpler than HTTP+SSE: there is no separate SSE connection to manage.

Use Streamable HTTP when: you want remote access with a simpler protocol.

Tool Execution Flow#

Every tool invocation follows the same sequence:

  1. Discovery: Agent connects to the MCP server and calls tools/list. The server returns all available tools with their schemas.
  2. Selection: The agent reads the tool descriptions and decides which tool to call based on the user’s request.
  3. Invocation: The agent sends a tools/call request with the tool name and parameters (validated against the JSON Schema).
  4. Execution: The server handler runs, performs the operation, and returns a result.
  5. Consumption: The agent reads the result and incorporates it into its response.

Error Handling#

Tool handlers should return structured errors, not throw exceptions that crash the server. The MCP SDK provides an isError flag for this:

async ({ path }) => {
  try {
    // fs/promises has no exists(); probe with access(), which throws if the file is missing.
    await fs.access(path);
  } catch {
    return {
      isError: true,
      content: [{ type: "text", text: `File not found: ${path}` }]
    };
  }
  // ... normal execution
}

For long-running tools, implement timeouts in the handler itself. The agent may also impose its own timeout, but your server should not hang indefinitely.
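
One way to enforce that is to race the real work against a deadline and return a structured error when the deadline wins. A minimal sketch in plain TypeScript; slowExternalCall is a hypothetical stand-in for the tool's actual work, and the 30-second limit is arbitrary:

declare function slowExternalCall(): Promise<string>; // hypothetical: stands in for the tool's real work

// Resolve with the work's result, or reject once `ms` milliseconds have passed.
async function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}

// Example handler body using the wrapper:
const handler = async () => {
  try {
    const data = await withTimeout(slowExternalCall(), 30_000);
    return { content: [{ type: "text", text: data }] };
  } catch (err) {
    return { isError: true, content: [{ type: "text", text: String(err) }] };
  }
};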

Security Considerations#

MCP servers execute real operations – file reads, API calls, database queries. Treat every input as untrusted.

  • Validate inputs beyond what JSON Schema catches. Check path traversal (../../../etc/passwd), command injection, and resource limits.
  • Sandbox execution where possible. If your tool runs shell commands, use restricted shells or containers.
  • Rate limit tool calls, especially for tools that hit external APIs or modify state.
  • Scope access narrowly. A file-reading tool should accept a base directory and reject paths outside it (see the sketch after this list).
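
A sketch of that last point in plain Node/TypeScript: resolve the requested path against the allowed base directory and reject anything that escapes it. BASE_DIR is an assumed configuration value.

import path from "node:path";

const BASE_DIR = "/home/user/projects";   // the only directory this server may touch

function resolveInsideBase(requested: string): string {
  // Lexically resolve the path (does not follow symlinks), then verify it stays under BASE_DIR.
  const resolved = path.resolve(BASE_DIR, requested);
  if (resolved !== BASE_DIR && !resolved.startsWith(BASE_DIR + path.sep)) {
    throw new Error(`Path escapes allowed directory: ${requested}`);
  }
  return resolved;
}

// resolveInsideBase("notes.md")          -> /home/user/projects/notes.md
// resolveInsideBase("../../etc/passwd")  -> throws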

MCP Server vs REST API#

Build an MCP server when your primary consumer is an AI agent and you want automatic tool discovery. Agents that speak MCP can find and use your tools without any custom integration code.

Build a REST API when your consumers include web frontends, mobile apps, or other services. REST is universal.

If you need both, build the REST API first and wrap it in a thin MCP server. The MCP layer becomes a translation shim that maps tool calls to API requests.
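
A sketch of what that shim can look like: each MCP tool handler simply forwards to the existing REST endpoint. The get_order tool, the https://api.example.com host, and the /orders/:id route are invented for illustration; the pattern is the point.

server.tool(
  "get_order",
  "Fetch a single order by ID from the orders API. Returns the order as JSON text.",
  { orderId: z.string().describe("Order identifier") },
  async ({ orderId }) => {
    // Thin translation layer: tool call in, REST request out.
    const res = await fetch(`https://api.example.com/orders/${encodeURIComponent(orderId)}`);
    if (!res.ok) {
      return {
        isError: true,
        content: [{ type: "text", text: `API returned ${res.status}` }]
      };
    }
    return { content: [{ type: "text", text: await res.text() }] };
  }
);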