MCP Server Overview

Contextia ships two interfaces on top of the same core logic: a CLI for humans and an MCP server for AI agents. The MCP server gives coding assistants direct, tool-level access to every capability Contextia offers — finding specs, assembling context, reading session logs, running checks — without the agent needing to shell out or parse terminal output.

The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. It defines a structured JSON-RPC interface that lets an LLM discover available tools, call them with typed parameters, and receive structured results.
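Concretely, a tool call travels as a single JSON-RPC 2.0 request. A minimal sketch of the wire shape, using the `mcp_search` call shown later on this page (the exact payload an MCP client produces may differ in detail):

```python
import json

# A JSON-RPC 2.0 request as an MCP client would send it to the server.
# "tools/call" is the standard MCP method for invoking a tool; the tool
# name and arguments here mirror the mcp_search example on this page.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "mcp_search",
        "arguments": {"query": "auth", "type": "spec"},
    },
}
print(json.dumps(request))
```

The response comes back as a matching JSON-RPC result object with the same `id`, carrying structured content rather than terminal output.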

MCP supports three primitive types:

  • Tools — functions the LLM invokes on demand (e.g., search for a spec, assemble context for a task).
  • Resources — data the client pre-loads into the conversation (e.g., project identity, artifact index).
  • Prompts — workflow templates the agent can instantiate to follow structured processes (e.g., starting a feature task).

Contextia uses all three. Tools are the primary interface (12 tools), resources provide always-available context (2 resources), and prompts encode the recommended workflows (3 prompts).

In principle, an AI agent can drive the CLI by shelling out and parsing stdout. But this approach has significant downsides:

  1. Fragile parsing. Terminal output is designed for humans: it includes color codes, progress bars, and formatting that changes between versions. Agents must regex their way through it.
  2. No type safety. CLI arguments are strings. The agent must guess parameter names and valid values from --help output.
  3. No discoverability. The agent cannot programmatically list available commands or inspect their schemas.
  4. Process overhead. Each CLI invocation spawns a new Python process, re-reads config, and rebuilds state.

MCP solves all of these. The server advertises typed tool schemas, returns structured data, and runs as a persistent process for the duration of a session.
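What "typed tool schemas" means in practice: each tool advertised via `tools/list` carries a JSON Schema describing its parameters. A sketch of one such entry, with the tool name taken from this page and the particular schema fields illustrative:

```python
# One entry as it might appear in a tools/list response. The inputSchema
# field is standard MCP (JSON Schema); this tool's exact parameters are
# illustrative, modeled on the mcp_search example on this page.
tool = {
    "name": "mcp_search",
    "description": "Search project artifacts by free-text query.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "type": {"type": "string", "enum": ["spec", "decision", "norm"]},
        },
        "required": ["query"],
    },
}
```

An agent can validate a call against this schema before sending it, instead of guessing parameter names and valid values from `--help` output.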

Both the CLI and the MCP server are thin wrappers around the core/ package. Neither contains business logic. The difference is in how they format results:

CLI (click) → core.find_specs(root, query=...) → format_human()
MCP (FastMCP) → core.find_specs(root, query=...) → format_context()

The CLI formats for humans: colored output, tables, progress indicators on stderr. The MCP server formats for agents: compact plain text optimized for information density, no ANSI escape codes, no decorative elements.
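The shared-core pattern can be sketched as follows. The function names come from the diagram above; the spec data and its exact rendering are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Spec:
    spec_id: str
    title: str

def find_specs(query: str) -> list[Spec]:
    # Core logic stub: both interfaces call this same function.
    specs = [
        Spec("SPEC-001", "Auth token refresh"),
        Spec("SPEC-002", "Auth session storage"),
    ]
    return [s for s in specs if query in s.title.lower()]

def format_human(specs: list[Spec]) -> str:
    # CLI path: column-aligned, decorated output for a terminal.
    return "\n".join(f"  {s.spec_id}  {s.title}" for s in specs)

def format_context(specs: list[Spec]) -> str:
    # MCP path: compact plain text, no decoration.
    return "\n".join(f"{s.spec_id}: {s.title}" for s in specs)

results = find_specs("auth")
```

Both formatters receive identical data from the core; only the rendering differs.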

This means every capability available in the CLI is also available via MCP, with identical behavior. If contextia find auth returns three specs on the command line, mcp_search(query="auth", type="spec") returns the same three specs through the MCP server.

Contextia’s MCP server uses stdio transport exclusively. The client launches the server as a child process and communicates over stdin/stdout using JSON-RPC. There is no HTTP server, no WebSocket, no authentication layer.
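Over stdio, MCP frames each JSON-RPC message as one line of JSON. A minimal sketch of that framing, assuming newline-delimited messages as in the standard stdio transport:

```python
import json

def frame(message: dict) -> str:
    # MCP's stdio transport sends one JSON-RPC message per line:
    # serialize compactly, terminate with a newline, write to the pipe.
    return json.dumps(message, separators=(",", ":")) + "\n"
```

The client writes framed requests to the server's stdin and reads framed responses from its stdout; no other channel exists.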

┌─────────────────┐   stdin/stdout   ┌──────────────────┐
│   AI Client     │ ◄──────────────► │  contextia-mcp   │
│ (Claude Code,   │     JSON-RPC     │ (Python process) │
│  Cursor, etc.)  │                  │                  │
└─────────────────┘                  └──────────────────┘

This design is intentional:

  • No network exposure. The server runs locally as a subprocess. No ports to open, no TLS to configure, no auth tokens to manage.
  • Inherits the working directory. The client passes its cwd to the subprocess, which Contextia uses to find the project root.
  • Simple lifecycle. The client starts the server when it needs it and kills it when done. No daemon management.

The MCP server does not require per-project configuration in the client. When the server starts, it walks up from the inherited working directory looking for .contextia/config.yaml, exactly like git finds .git/:

from pathlib import Path

def find_project_root(start: Path | None = None) -> Path | None:
    current = (start or Path.cwd()).resolve()
    for directory in [current, *current.parents]:
        if (directory / ".contextia" / "config.yaml").exists():
            return directory
    return None

The project root is resolved once at the first tool call and cached for the duration of the session. If the working directory is inside a Contextia project, everything works automatically.

The MCP server is stateless per call. Each tool invocation reads the current state of the .contextia/ directory from disk. There is no in-memory cache of specs, no watch-mode for file changes, no stale data.

This means:

  • If you edit a spec file between two mcp_show calls, the second call sees the updated content.
  • If another developer pushes changes and you pull, the next tool call reflects those changes.
  • The server never holds locks on files and cannot corrupt state.

The only exception is the project root path itself, which is cached after first resolution. If you move the .contextia/ directory while the server is running (an unusual operation), restart the server.

Primitive   Count   Purpose
Tools       12      On-demand operations: search, read, assemble, check, log
Resources   2       Pre-loaded context: project identity, artifact index
Prompts     3       Workflow templates: feature task, bug fix, end session

Tools are documented in detail in MCP Tools. Resources are covered in MCP Resources. Prompts are described in MCP Prompts.

A typical agent session with Contextia follows this pattern:

  1. Session start. The client pre-loads the contextia://identity resource, giving the agent the project’s name, purpose, and key norms.
  2. Orientation. The agent calls mcp_list(type="spec") or reads contextia://index to understand what artifacts exist.
  3. Context assembly. For a specific task, the agent calls mcp_context(task_id="TASK-042") to get the full context bundle: the task itself, linked specs, relevant decisions, applicable norms.
  4. Work. The agent writes code, consulting individual specs with mcp_show as needed.
  5. Logging. The agent writes session logs with mcp_write_log to record decisions and progress.
  6. Session end. The agent calls mcp_close_log to finalize the session record.
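The steps above can be sketched as a sequence of tool calls. The tool names come from this page; the `client` object and its `call` helper are hypothetical stand-ins for whatever MCP client the agent runs inside:

```python
# Hypothetical helper: client.call(name, **args) sends a tools/call
# request and returns the structured result. Only the tool names below
# come from the documented workflow; the client API is illustrative.
def run_task(client, task_id: str) -> None:
    client.call("mcp_list", type="spec")                  # orientation
    bundle = client.call("mcp_context", task_id=task_id)  # context assembly
    ...  # work: write code, consulting specs via mcp_show as needed
    client.call("mcp_write_log", entry=f"worked on {task_id}")  # logging
    client.call("mcp_close_log")                          # session end
```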

This workflow is encoded in the start_feature_task and end_session prompts, which the agent can follow step by step.