What Is an MCP Server? A Deep Dive into the Model Context Protocol
Model Context Protocol (MCP) is an open-source standard for connecting AI applications to external systems. Introduced by Anthropic in November 2024, MCP has quickly become a foundational layer for how modern AI assistants—including Claude, ChatGPT, and others—connect to real-world data sources, tools, and workflows.
Think of MCP like USB-C for AI applications: just as USB-C provides a single, standardized way to connect devices regardless of manufacturer, MCP provides a universal protocol so that any AI application can integrate with any compatible data source or tool, without each vendor building custom, one-off integrations.
This article explores what an MCP server is, how it fits into the broader architecture, what capabilities it exposes, and why it matters for the future of AI applications.
Large language models (LLMs) are powerful at reasoning and generating text, but they are inherently isolated. By default they:
- Cannot read your files, databases, or APIs
- Cannot perform actions (send emails, create calendar events, query systems)
- Cannot combine multiple external systems in one workflow

Before MCP, every AI application had to build its own integrations: custom connectors for Slack, custom adapters for databases, custom plugins for calendars. That meant duplicated effort, inconsistent behavior, and a fragmented ecosystem. MCP addresses this by defining a single, open protocol that both AI applications (clients) and data/tool providers (servers) can implement.
Architecture: Client, Host, and Server

MCP is built on a client–host–server model.
| Role | Description |
|---|---|
| Host | The AI application (e.g. Claude Desktop, Cursor, ChatGPT). It creates and manages MCP clients, aggregates context from multiple servers, handles user authorization, and enforces security. |
| Client | Created by the host. Each client has a 1:1 connection to one MCP server. It handles protocol negotiation, message encoding, and transport. |
| Server | A program that exposes specific capabilities (data and/or actions) through the MCP protocol. Servers can run locally (e.g. as a subprocess) or remotely (e.g. over HTTP). |
Important properties of this design:
- Isolation: Each server talks only to its client; servers do not talk to each other. The host is the only component that sees and coordinates across all connections.
- Security: The host can enforce policies (which servers are allowed, which tools need user approval, which resources are visible).
- Composability: One host can connect to many servers at once, so a single AI session can use files, databases, calendar, and email in one flow.
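To make this topology concrete, here is a minimal, hypothetical sketch of the host-side object model. The class names and methods are illustrative only (they are not from any MCP SDK); the point is the 1:1 client–server binding and the fact that only the host aggregates across servers.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    tools: list[str] = field(default_factory=list)

@dataclass
class Client:
    server: Server  # 1:1 binding: each client talks to exactly one server

@dataclass
class Host:
    clients: list[Client] = field(default_factory=list)

    def connect(self, server: Server) -> None:
        # The host creates a dedicated client for every server it uses.
        self.clients.append(Client(server))

    def all_tools(self) -> list[str]:
        # Only the host sees capabilities across all servers;
        # servers never see each other.
        return [t for c in self.clients for t in c.server.tools]

host = Host()
host.connect(Server("files", tools=["readFile"]))
host.connect(Server("calendar", tools=["createEvent"]))
print(host.all_tools())  # ['readFile', 'createEvent']
```

The design choice this sketch captures: isolation falls out of the structure, because a `Server` object holds no reference to any other server or to the host.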
What Is an MCP Server?

An MCP server is a program that exposes capabilities to AI applications through the standardized MCP protocol. It does not “serve” a human user directly; it serves an MCP client (and thus the host AI application) by providing:
- Resources – read-only data
- Tools – callable functions (actions)
- Prompts – reusable, parameterized instruction templates

Common examples of MCP servers:
- File system server – exposes local or remote files and folders
- Database server – exposes schemas, queries, or read-only views
- GitHub server – repos, issues, PRs, code search
- Slack server – channels, messages, send message
- Calendar server – events, availability, create event
- Weather/API servers – external APIs wrapped as resources and tools

Servers are often small, focused programs: one server for one domain (e.g. “files”, “calendar”). The AI’s power comes from the host composing many such servers in one conversation.
The Three Building Blocks

Resources are passive, read-only sources of context. They give the model access to information without the model “doing” anything, just reading.
- What they are: Data identified by a URI (e.g. file:///path/to/doc.md, calendar://events/2024). Each resource has a MIME type and optional metadata (title, description).
- Discovery: Servers can expose direct resources (fixed URIs) and resource templates (parameterized URIs, e.g. weather://forecast/{city}/{date}). The client/host can list and read them.
- Who controls usage: The application (host) decides which resources to fetch and pass to the model. The model does not pull resources by itself; the host retrieves them (e.g. via resources/read) and adds them to context.
- Typical use: Documents, database query results, API responses, calendar snapshots, configuration: anything the AI needs to “know” before answering or acting.

Protocol operations include:
- resources/list – list direct resources
- resources/templates/list – list resource templates
- resources/read – get content of a resource by URI
- resources/subscribe – optional notifications when a resource changes

Tools are callable functions. They are the way the model can act on the world: search, create, update, delete, call APIs, run code.
- What they are: Each tool has a name, description, and an input schema (e.g. JSON Schema). The model sees the list of tools and their schemas, then requests calls (e.g. searchFlights(origin, destination, date)) when appropriate.
- Who controls usage: The model decides when to call a tool, based on the user’s request and the context. The host typically adds safety layers: approval dialogs, activity logs, or pre-approved tool sets.
- Typical use: Search flights, send email, create calendar events, run database writes, trigger CI/CD, call external APIs.

Protocol operations:
- tools/list – discover available tools and their schemas
- tools/call – execute a tool with given arguments; return result (or error) to the model

Tools are the main mechanism for “AI can do things,” not just “AI can read things.”
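Concretely, a tools/call exchange might look like the following. The method name and JSON-RPC envelope follow the protocol; the searchFlights tool is the hypothetical example from above, and the arguments and result text are invented for illustration.

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "searchFlights",
    "arguments": { "origin": "NYC", "destination": "BCN", "date": "2024-06-15" }
  }
}
```

The server executes the tool and returns a result carrying the same id:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "Found 12 flights from NYC to BCN on 2024-06-15" }
    ]
  }
}
```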
Prompts are reusable, parameterized instruction templates provided by the server. They encode best practices for a domain (e.g. “plan a vacation”, “summarize my meetings”).
- What they are: A prompt has a name, description, and a list of arguments (required/optional, with types). When invoked, the host/app can fill in the arguments and send the resulting prompt (and optionally selected resources) to the model.
- Who controls usage: The user (or the application on behalf of the user). Prompts are explicitly invoked (e.g. via a slash command or a button), not auto-invoked by the model.
- Typical use: “Plan a vacation” (destination, dates, budget), “Draft an email” (recipient, topic), “Summarize meetings” (date range, calendar). They guide the model to use the right tools and resources in a consistent way.

Protocol operations:
- prompts/list – discover available prompts
- prompts/get – get full prompt definition and arguments
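For example, fetching a prompt definition with prompts/get might look like this. The operation name is from the protocol; the plan_vacation prompt, its argument values, and the returned text are invented for illustration.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "prompts/get",
  "params": {
    "name": "plan_vacation",
    "arguments": { "destination": "Barcelona", "days": "5" }
  }
}
```

The server returns the filled-in prompt as a list of messages the host can hand to the model:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "description": "Plan a vacation step by step",
    "messages": [
      {
        "role": "user",
        "content": {
          "type": "text",
          "text": "Plan a 5-day trip to Barcelona using my calendar and past trips."
        }
      }
    ]
  }
}
```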
How Communication Works: JSON-RPC and Transports

MCP’s application-layer protocol is JSON-RPC 2.0: messages are UTF-8 encoded JSON objects. There are requests (with an id, expecting a response), responses (with the same id, carrying a result or an error), and notifications (no id, no response).
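The three message kinds can be sketched directly. The method names below (resources/read, notifications/resources/updated) are from the protocol; the file path and payload contents are illustrative.

```python
import json

# Request: has an id and expects a response with the same id.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/read",
    "params": {"uri": "file:///path/to/doc.md"},
}

# Response: carries the same id, plus either "result" or "error" (never both).
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "contents": [
            {"uri": "file:///path/to/doc.md",
             "mimeType": "text/markdown",
             "text": "# Hello"}
        ]
    },
}

# Notification: no id, and no response is expected.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "file:///path/to/doc.md"},
}

# On the wire, each message is one UTF-8 encoded JSON object.
wire = json.dumps(request).encode("utf-8")
print(json.loads(wire)["method"])  # resources/read
```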
How these messages are carried depends on the transport:
Stdio transport

- The client (run by the host) starts the server as a subprocess.
- Communication runs over the process’s stdin (client → server) and stdout (server → client). Messages are newline-delimited JSON.
- stderr is typically used for server logs.
- Use case: Local servers (filesystem, SQLite, scripts). Simple to run and debug; one process per server instance.

Streamable HTTP transport

- The server is a long-running process (or service) that listens for HTTP.
- Client → server: HTTP POST with a JSON-RPC body.
- Server → client: Server-Sent Events (SSE) for streaming (e.g. long-running tool calls or streaming content).
- Use case: Remote servers, multiple concurrent clients, web-based hosts. Requires origin validation and proper CORS/security.

Custom transports (e.g. WebSockets, gRPC) can be defined so long as both client and server agree on the same message format (JSON-RPC) and semantics.
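The newline-delimited framing used by the stdio transport can be sketched in a few lines. This is a minimal illustration of the convention described above (one JSON-RPC message per line), not an SDK implementation; the method names are real protocol methods.

```python
import json

def encode_message(msg: dict) -> bytes:
    # One message per line; json.dumps never emits raw newlines,
    # so the newline safely delimits messages.
    return (json.dumps(msg) + "\n").encode("utf-8")

def decode_stream(data: bytes) -> list[dict]:
    # Split the byte stream on newlines and parse each line as one message.
    return [json.loads(line) for line in data.decode("utf-8").splitlines() if line]

# A client would write these bytes to the server subprocess's stdin
# and read responses back from its stdout.
out = encode_message({"jsonrpc": "2.0", "id": 1, "method": "tools/list"})
out += encode_message({"jsonrpc": "2.0", "method": "notifications/initialized"})

messages = decode_stream(out)
print([m.get("method") for m in messages])  # ['tools/list', 'notifications/initialized']
```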
Example: Multi-Server Travel Planning

A single user request can tie together several MCP servers:
1. User invokes a prompt like “Plan a vacation” with arguments: destination Barcelona, dates, budget, travelers.
2. User or app attaches resources: e.g. travel://past-trips/Spain-2023, calendar://my-calendar/June-2024.
3. Host sends the prompt (with arguments) and selected resource contents to the model.
4. Model reasons and calls tools from different servers:
   - checkWeather() (weather server)
   - searchFlights() (travel server)
   - createCalendarEvent() (calendar server)
   - sendEmail() (email server)
   - bookHotel() (travel server)
5. Host may ask for user approval for certain tool calls, then execute them and return results to the model.
6. Model continues until the task is done (e.g. trip planned and booked).

So: one user intent, one conversation, many MCP servers, each exposing resources and tools, coordinated by the host.
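The host’s side of steps 4–5 can be sketched as a dispatch loop. Everything here is hypothetical scaffolding: the tool names come from the scenario above, but the routing table, the approval rule, and the result strings are invented for illustration.

```python
# Host-side routing: tool name -> the one server that provides it.
TOOL_SERVERS = {
    "checkWeather": "weather",
    "searchFlights": "travel",
    "bookHotel": "travel",
    "createCalendarEvent": "calendar",
    "sendEmail": "email",
}

# Side-effecting tools the host gates behind explicit user approval.
NEEDS_APPROVAL = {"bookHotel", "sendEmail"}

def run_turn(tool_calls, user_approves):
    """Dispatch the model's proposed calls, enforcing the approval policy."""
    results = []
    for name, args in tool_calls:
        if name in NEEDS_APPROVAL and not user_approves(name):
            results.append((name, "denied by user"))
            continue
        server = TOOL_SERVERS[name]  # each call goes to exactly one server
        results.append((name, f"executed on {server} server"))
    return results

# The model proposes calls; the host gates and dispatches them.
calls = [("checkWeather", {"city": "Barcelona"}),
         ("bookHotel", {"city": "Barcelona"})]
print(run_turn(calls, user_approves=lambda name: True))
```

The key point the sketch preserves: the model only proposes calls, while the host owns the routing and the approval policy.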
Why It Matters

- For users: AI applications become more capable and personalized: they can see your calendar, your docs, your DBs, and take actions (send email, create events) with proper consent and visibility.
- For AI applications: One integration surface (MCP) instead of N custom integrations. Access to a growing ecosystem of community and official MCP servers.
- For developers: Write one MCP server (resources + tools + prompts) and it can work with any MCP-capable host. Less duplication, clearer contracts, standard tooling (SDKs, transports).

MCP has seen rapid adoption: widespread SDK usage, official support in major AI products, and thousands of community-built servers. It is on a path to become a default way for AI applications to “plug in” to the rest of the world.
Summary

- MCP = open protocol for connecting AI applications to external systems.
- MCP server = a program that speaks MCP and exposes resources (read-only data), tools (callable actions), and prompts (reusable templates).
- Architecture: Host (AI app) runs one or more clients; each client connects to one server. The host aggregates context and enforces security; servers stay isolated and composable.
- Communication: JSON-RPC 2.0 over transports such as stdio (local subprocess) or Streamable HTTP (remote, multi-client).
- Impact: One standard way for AI to read data and perform actions across many systems, improving capability for users, simplicity for developers, and scalability of the AI ecosystem.

Understanding “what is an MCP server” is therefore understanding the unit of integration in this new layer: a focused program that exposes a slice of the world (data + actions + prompts) to any MCP-compatible AI application.