
the complete guide to MCP servers for AI coding agents

spaget team 7 min read

MCP — Model Context Protocol — is one of those technologies that looks like infrastructure until you actually use it, and then you wonder how you coded without it. If your AI coding agent feels limited by what it can access and do, MCP is probably the answer.

This guide covers what MCP is, how it extends your AI agent’s capabilities, which servers are most useful for coding workflows, and how to configure them in Claude Code, Cursor, and other tools.

What is MCP?

MCP is an open protocol that lets AI assistants connect to external tools and data sources in a standardized way. Think of it as an API layer between your AI agent and everything else in your development environment.

Without MCP, your AI agent knows what’s in your current file, what you’ve shared in the conversation, and what’s in its training data. With MCP, your agent can:

  • Query your database directly
  • Search and read from Notion, Linear, or Confluence
  • Look up GitHub issues and PRs
  • Run and interpret terminal commands
  • Access your design files in Figma
  • Search the web for current documentation
  • And anything else a server implements

The “standardized” part is what makes this powerful. Tool authors build MCP servers once, and any MCP-compatible AI client can use them. Claude Code, Cursor, Cline, Windsurf, and other tools all support MCP. One server, many clients.

How MCP works

MCP has two sides:

Servers expose resources, tools, and prompts. A GitHub MCP server, for example, exposes tools like list_issues, create_pr, get_file_contents. Each tool has a schema that tells the AI what parameters it accepts and what it returns.

Clients (your AI tool) discover available servers, list their capabilities, and call tools as needed. When your AI agent decides it needs to check a GitHub issue, it calls the get_issue tool on the GitHub MCP server and gets back structured data it can reason about.

The AI decides when to use which tool — you don’t have to tell it “now check the issue.” You just describe your task, and the agent uses the available tools to gather the context it needs.
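Under the hood, this exchange is JSON-RPC. A `tools/call` round trip looks roughly like the following sketch (the tool name and arguments are illustrative, not from a specific server):

```json
// Request from the client (the AI tool)
{"jsonrpc": "2.0", "id": 2, "method": "tools/call",
 "params": {"name": "get_issue",
            "arguments": {"owner": "acme", "repo": "app", "issue_number": 247}}}

// Response from the server: structured content the model can reason about
{"jsonrpc": "2.0", "id": 2,
 "result": {"content": [{"type": "text",
                         "text": "{\"title\": \"Login fails on Safari\", \"state\": \"open\"}"}]}}
```

The client first calls `tools/list` to discover what tools exist and their schemas; the schemas are what let the model fill in valid arguments on its own.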

Setting up MCP in Claude Code

Claude Code has native MCP support. Servers are configured in your Claude settings file:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/projects"]
    }
  }
}
```

Start Claude Code with your configured servers and it automatically discovers their capabilities. Ask it to look at a GitHub issue and it will — without you needing to copy-paste the issue content.
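Recent versions of Claude Code can also register servers from the command line instead of editing the settings file by hand. A hedged sketch — check `claude mcp add --help` for your version's exact flags:

```shell
# Register the GitHub server, passing the token as an environment variable
claude mcp add github -e GITHUB_PERSONAL_ACCESS_TOKEN=your-token-here \
  -- npx -y @modelcontextprotocol/server-github

# List the servers currently configured
claude mcp list
```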

Setting up MCP in Cursor

Cursor added MCP support via its settings:

  1. Open Cursor settings → MCP section
  2. Add a new server with name, command, and arguments
  3. Restart Cursor
  4. The configured servers appear in the AI’s context

Cursor’s implementation exposes MCP tools in the chat interface. When you ask a question that benefits from external context, Cursor calls the relevant MCP tools automatically.
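In recent versions, Cursor can also read the same configuration shape from a JSON file — a project-level `.cursor/mcp.json` or a global `~/.cursor/mcp.json` (check Cursor's docs for your version). A minimal sketch:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "your-token-here" }
    }
  }
}
```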

The most useful MCP servers for coding workflows

GitHub MCP server

The reference GitHub server from the Model Context Protocol project. Exposes tools for:

  • Reading and searching issues and PRs
  • Getting file contents from repos
  • Creating issues and comments
  • Searching code across repos

Install: npx -y @modelcontextprotocol/server-github

Requires: GitHub Personal Access Token

Best for: “Fix the bug described in issue #247” — the agent reads the issue, understands the context, and implements the fix. No copy-pasting the issue description.

Filesystem MCP server

Gives your AI agent read/write access to your filesystem beyond the current working directory.

Install: npx -y @modelcontextprotocol/server-filesystem [allowed-paths]

Best for: Multi-repo workflows, accessing reference projects, reading config files outside the current project.

PostgreSQL / SQLite MCP server

Lets your AI agent query your database directly.

Install: npx -y @modelcontextprotocol/server-postgres

Requires: Database connection string

Best for: “Write a query that finds users who signed up in the last 30 days and haven’t completed onboarding.” The agent queries your actual schema, not an imagined one.

Security note: Use read-only database credentials for this. You don’t want your AI agent executing writes against production.
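One way to enforce that in Postgres is a dedicated read-only role whose connection string you give to the MCP server. A sketch, with illustrative role and database names:

```sql
-- Create a role that can connect and read, but not write
CREATE ROLE mcp_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app TO mcp_readonly;
GRANT USAGE ON SCHEMA public TO mcp_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO mcp_readonly;

-- Cover tables created after the grant as well
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO mcp_readonly;
```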

Linear MCP server

Access your Linear workspace — issues, projects, teams, cycles.

Install: npx -y @linear/mcp-server

Requires: Linear API key

Best for: Implementing features directly from tickets. “Build the feature described in LIN-1247” — the agent reads the Linear issue and implements accordingly.

Notion MCP server

Access Notion pages and databases.

Best for: Teams that document architecture and conventions in Notion. The agent can read your actual runbooks, not rely on what’s in the CLAUDE.md.

Web search MCP server

Real-time web search within the AI conversation.

Best for: Checking current library versions, reading docs for APIs that changed after the training cutoff, verifying that a dependency still exists.

Memory MCP server

Persistent memory that the agent can read and write across sessions.

Best for: Long-running projects where you want the AI to remember decisions, context, and preferences across multiple sessions.

Configuring MCP in your CLAUDE.md

Once you have MCP servers configured, tell your agent what they’re for in your CLAUDE.md or AGENTS.md:

```markdown
# Available tools (MCP)

## GitHub
You have access to GitHub via MCP. Use it to:
- Read issue details when implementing features
- Check PR status when asked about code reviews
- Look up related PRs when fixing bugs

## Database (read-only)
You have read access to the staging database. Use it to:
- Query the actual schema when writing migrations
- Verify data patterns before writing queries
- Check foreign key relationships

## Linear
You have access to Linear. When given a ticket number (LIN-XXXX):
1. Read the full ticket including comments
2. Check linked issues for context
3. Note the acceptance criteria
```

This guidance helps the agent understand when to reach for MCP tools, not just that they exist.

Building custom MCP servers

The MCP SDK makes building custom servers straightforward. If you have an internal tool, API, or data source that would make your AI agent more effective, you can expose it as an MCP server.

A minimal server in TypeScript:

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListToolsRequestSchema,
  CallToolRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-internal-tool", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server offers, with a JSON Schema for the inputs
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_deployment_status",
    description: "Get the current deployment status for a service",
    inputSchema: {
      type: "object",
      properties: {
        service: { type: "string", description: "Service name" }
      },
      required: ["service"]
    }
  }]
}));

// Handle tool calls from the client
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_deployment_status") {
    // fetchDeploymentStatus is your own function that talks to your internal API
    const status = await fetchDeploymentStatus(request.params.arguments.service);
    return { content: [{ type: "text", text: JSON.stringify(status) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

const transport = new StdioServerTransport();
await server.connect(transport);
```
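Once built, the custom server is registered in your client's config like any other — for Claude Code, an entry in `mcpServers` (the path below is illustrative):

```json
{
  "mcpServers": {
    "my-internal-tool": {
      "command": "node",
      "args": ["/path/to/my-internal-tool/build/index.js"]
    }
  }
}
```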

Common candidates for custom MCP servers:

  • Internal deployment and infrastructure tools
  • Custom metrics and dashboards
  • Internal documentation systems
  • Proprietary data sources

MCP and spaget

If you’re using spaget to manage your agent configurations, your MCP configuration can be documented in the visual builder and referenced in your AGENTS.md exports. When you describe which MCP tools are available and how to use them in spaget, that guidance gets included in every export format — so your Claude Code CLAUDE.md and your cursor rules both have accurate information about what tools the agent can reach for.

No account required — open the builder to see how MCP documentation fits into the configuration.

The bottom line

MCP is the upgrade that takes your AI coding agent from “smart text generation” to “genuinely integrated development partner.” When your agent can read actual GitHub issues, query your real database, and access your actual documentation, it stops making assumptions and starts working with ground truth.

Set up the GitHub and filesystem servers as a starting point. Add database access if you work with data regularly. Build custom servers for your internal tools. The investment pays off quickly.

Further reading