Connecting Your AI Agent to Everything
An AI agent that can only read and write files is limited. The real power comes when the agent can interact with databases, browsers, APIs, messaging platforms, and external services. But every integration used to require custom code: a different library, a different authentication flow, a different data format.
MCP, the Model Context Protocol, changes this. It is a single, open standard that lets any AI agent connect to any external service through a consistent interface.
Think of MCP as USB-C for AI. One protocol, any device.
What MCP Is
MCP is an open specification originally created by Anthropic in November 2024 and now governed by the Agentic AI Foundation under the Linux Foundation. It defines how AI agents discover, connect to, and interact with external tools and data sources.
The architecture has three components:
- Host – the application that runs the AI model (Claude Code, an IDE plugin, a custom agent framework).
- Client – the MCP client inside the host that manages connections to MCP servers.
- Server – the MCP server that exposes tools, resources, and prompts from an external service (a database, GitHub, Slack, a browser, etc.).
The host creates one client per server. Each client maintains a stateful session with its server. The servers can run locally (as a subprocess) or remotely (over HTTP).
This separation means the AI model never directly touches external services. It requests actions through the MCP client, which routes them to the appropriate server. This creates a clean boundary for security, logging, and permission control.
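On the wire, client and server exchange JSON-RPC 2.0 messages. A minimal sketch of the request a client frames when the model decides to call a tool (the method name follows the MCP specification; the tool name and arguments are hypothetical):

```python
import json

# A tools/call request as the MCP client would frame it (JSON-RPC 2.0).
# "github_create_pr" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "github_create_pr",
        "arguments": {"title": "Fix login bug", "base": "main"},
    },
}

# Serialize for transport, then decode as the server would.
wire_message = json.dumps(request)
decoded = json.loads(wire_message)
```

Because every request passes through the client in this one format, the host has a single place to log, filter, or deny actions before they reach any external service.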
The Three Primitives
MCP defines three core primitives that cover everything an AI agent needs from an external service:
1. Resources – Data the AI Reads
Resources are read-only data sources. They let the agent pull in context without executing any action. Examples include:
- A database query result.
- The contents of a file or directory.
- A configuration document.
- The current state of a deployment.
Resources are declared by the server and discovered by the client. The agent can request a resource by URI, and the server returns structured data.
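A sketch of what that exchange looks like, assuming the request/response shapes from the MCP specification (the URI scheme and the schema text are hypothetical examples):

```python
# A resources/read request: the agent asks for a resource by URI.
# The "postgres://..." URI is a hypothetical example.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "postgres://localhost/mydb/schema"},
}

# The server's reply carries the resource contents as structured data.
response = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "contents": [
            {
                "uri": "postgres://localhost/mydb/schema",
                "mimeType": "text/plain",
                "text": "users(id, email, created_at)",
            }
        ]
    },
}
```

Note that nothing executes here: a resource read is a pure fetch, which is what makes resources safe to expose broadly while tools stay behind permission prompts.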
2. Tools – Functions the AI Calls
Tools are executable actions. They let the agent do things: create a pull request, send a message, run a database migration, click a button in a browser. Examples include:
- `github_create_pr` – creates a pull request on GitHub.
- `slack_send_message` – sends a message to a Slack channel.
- `browser_navigate` – opens a URL in a headless browser.
- `db_execute_query` – runs a SQL query against a database.
Each tool has a defined schema (name, description, input parameters, output format). The agent reads the schema to understand what the tool does and how to call it.
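A sketch of such a schema, with a toy validator standing in for the client-side check (the tool itself is hypothetical; real MCP clients validate against the full JSON Schema, not just required keys):

```python
# A tool declaration as a server might expose it: name, description,
# and a JSON Schema describing the inputs. The tool is hypothetical.
tool_schema = {
    "name": "db_execute_query",
    "description": "Run a read-only SQL query against the configured database.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "sql": {"type": "string", "description": "The SQL statement to run."},
            "limit": {"type": "integer", "description": "Max rows to return."},
        },
        "required": ["sql"],
    },
}

def validate_call(schema: dict, arguments: dict) -> bool:
    """Minimal check: are all required parameters present?"""
    return all(key in arguments for key in schema["inputSchema"]["required"])

ok = validate_call(tool_schema, {"sql": "SELECT 1"})
bad = validate_call(tool_schema, {"limit": 10})  # missing required "sql"
```

The description fields matter as much as the types: they are what the model actually reads when deciding whether and how to call the tool.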
3. Prompts – Behaviour Templates
Prompts are pre-defined templates that shape how the agent behaves in specific contexts. They are like reusable system prompts that a server can provide. Examples include:
- A code review prompt that instructs the agent to check for security vulnerabilities.
- A database analysis prompt that structures how the agent explores schema.
- A debugging prompt that walks the agent through a systematic diagnostic flow.
Prompts are the least commonly used primitive today, but they become powerful in enterprise settings where consistent agent behaviour across teams is important.
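A toy sketch of the idea: a server declares a named prompt with arguments, and rendering it produces the message the agent starts from. The template text and argument names here are hypothetical, not from any real server:

```python
# A prompt declaration as a server might advertise it (hypothetical).
prompt = {
    "name": "code_review",
    "description": "Review a diff with a security focus.",
    "arguments": [{"name": "diff", "required": True}],
}

# The reusable template the server fills in when the prompt is requested.
TEMPLATE = (
    "Review the following diff. Check for injection risks, leaked secrets, "
    "and unsafe deserialization before commenting on style:\n{diff}"
)

def render(template: str, **kwargs: str) -> str:
    """Fill the template's placeholders with the supplied arguments."""
    return template.format(**kwargs)

message = render(TEMPLATE, diff='+ query = f"SELECT * FROM users WHERE id={uid}"')
```

Because the template lives on the server, every team member who connects gets the same review behaviour without copying system prompts around.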
Setting Up MCP
MCP servers are configured in your agent's settings file. For Claude Code, this is ~/.claude.json (user scope) or the project-level .mcp.json.
Each MCP server entry defines:
- The server name (how you reference it).
- The transport type – stdio (local subprocess) or streamable HTTP (remote server).
- The command to launch it (for stdio) or the URL to connect to (for HTTP).
- Any environment variables or arguments the server needs.
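Put together, a configuration might look like the following sketch. The package name, token placeholder, and URL are illustrative, not real credentials or endpoints:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_..." }
    },
    "internal-docs": {
      "type": "http",
      "url": "https://mcp.example.com/docs"
    }
  }
}
```

The first entry launches a local stdio server as a subprocess; the second connects to a remote streamable HTTP server.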
Transport types:
- stdio – the MCP server runs as a local process. The host communicates via stdin/stdout. This is the simplest setup and works well for local tools like filesystem access, Git, or local databases.
- Streamable HTTP – the MCP server runs remotely. The host communicates over HTTP with server-sent events for streaming. This is used for cloud services, shared team servers, or remote APIs.
Most developers start with stdio servers for local tooling and graduate to HTTP servers when they need shared or remote services.
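The stdio transport is simple enough to demonstrate end to end: newline-delimited JSON-RPC over a subprocess's stdin and stdout. This toy sketch uses a stand-in "server" that answers one request with an empty tool list; a real MCP server would also perform an initialize handshake first:

```python
import json
import subprocess
import sys

# A stand-in "server": reads one JSON-RPC request line from stdin and
# replies with an empty tools/list result. Not a real MCP server.
FAKE_SERVER = """
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"], "result": {"tools": []}}
print(json.dumps(resp))
"""

# The host launches the server as a subprocess, exactly as with stdio MCP.
proc = subprocess.Popen(
    [sys.executable, "-c", FAKE_SERVER],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
stdout, _ = proc.communicate(json.dumps(request) + "\n")
response = json.loads(stdout)
```

Swapping stdio for streamable HTTP changes only the pipe: the same JSON-RPC messages travel over HTTP requests and server-sent events instead of stdin/stdout.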
Common MCP Servers
The MCP ecosystem has grown rapidly. Here are the most commonly used servers:
- Database servers – PostgreSQL, MySQL, SQLite, MongoDB. Let the agent query, inspect schema, and (with permission) write data.
- Filesystem server – provides controlled access to local files and directories with configurable permissions.
- GitHub server – full GitHub API access: issues, pull requests, reviews, actions, releases.
- Slack server – read and send messages, manage channels, search conversation history.
- Playwright server – full browser automation: navigate, click, fill forms, take screenshots, extract content.
There are also servers for Jira, Linear, Notion, Google Drive, AWS, Kubernetes, and dozens of other services. The list grows weekly as the ecosystem matures.
The 2026 MCP Roadmap
MCP is still early. The 2026 roadmap focuses on four areas:
1. Transport Scalability
Current stdio transport works well for single-agent setups but does not scale to multi-agent systems. The roadmap includes improved connection pooling, reconnection handling, and support for high-throughput agent swarms.
2. Agent-to-Agent Communication
Today, MCP connects agents to services. The next step is connecting agents to other agents. This enables patterns like supervisor/dispatch and parallel fan-out natively within the MCP protocol, without custom orchestration code.
3. Governance Maturation
As MCP moves from Anthropic stewardship to the Linux Foundation, governance processes are formalizing: RFC processes, specification versioning, compliance testing, and certification programmes for MCP server implementations.
4. Enterprise Readiness
Enterprise adoption requires features like fine-grained access control, audit logging, rate limiting, secret management, and compliance with SOC 2 and similar frameworks. These are actively being developed.
Token Efficiency Consideration
There is a practical cost to MCP that is rarely discussed: token usage. Every MCP server injects its tool schemas into the conversation context on every turn. If you have five MCP servers with 10 tools each, that is 50 tool schemas loaded into context before the agent even reads your prompt.
This matters because:
- Tokens cost money. Injecting hundreds of tool schemas every turn adds up quickly.
- Context windows are finite. Tool schemas compete for space with your actual code, specs, and conversation history.
- More tools create decision fatigue for the model. With 50 tools available, the model spends more time deciding which tool to use.
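A back-of-the-envelope estimate makes the overhead concrete. This sketch assumes the common rule of thumb of roughly four characters per token and a hypothetical schema size; real schemas and tokenizers will vary:

```python
import json

def estimate_tokens(schemas: list) -> int:
    """Rough token count for schemas injected into context each turn,
    assuming ~4 characters per token (a common rule of thumb)."""
    return sum(len(json.dumps(s)) for s in schemas) // 4

# Hypothetical tool schema: ~500 characters of description plus structure.
schema = {
    "name": "example_tool",
    "description": "x" * 500,
    "inputSchema": {"type": "object", "properties": {}},
}

# Five servers with ten tools each = 50 schemas, loaded on every turn.
per_turn = estimate_tokens([schema] * 50)
```

Even at this modest schema size, the overhead runs to thousands of tokens per turn before the agent has read a single line of your prompt.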
Alternatives to consider:
- CLI tools as wrappers – instead of a Playwright MCP server, use a CLI tool that the agent calls via bash. The tool schema is simpler and does not inject into every turn.
- Agent-browser CLI – some teams use a dedicated CLI for browser automation instead of the Playwright MCP server, reducing token overhead while keeping full browser capability.
- Selective server loading – only enable MCP servers that are relevant to the current task. Do not load the Slack server when you are writing backend code.
MCP is powerful, but every MCP server you add has a token cost. Be intentional about which servers you enable for each workflow.
Conclusion
MCP is the infrastructure layer that turns AI agents from isolated code generators into connected systems that can interact with the real world. The protocol is simple (three primitives), the setup is straightforward (a JSON config file and a server), and the ecosystem is growing fast.
Start with one or two MCP servers for tools you already use (GitHub, your database). Add more only when you have a clear need. And always keep token efficiency in mind: the best MCP setup is the one that gives your agent exactly the tools it needs, nothing more.