MCP Server Overview
Contextia ships two interfaces on top of the same core logic: a CLI for humans and an MCP server for AI agents. The MCP server gives coding assistants direct, tool-level access to every capability Contextia offers — finding specs, assembling context, reading session logs, running checks — without the agent needing to shell out or parse terminal output.
What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. It defines a structured JSON-RPC interface that lets an LLM discover available tools, call them with typed parameters, and receive structured results.
MCP supports three primitive types:
- Tools — functions the LLM invokes on demand (e.g., search for a spec, assemble context for a task).
- Resources — data the client pre-loads into the conversation (e.g., project identity, artifact index).
- Prompts — workflow templates the agent can instantiate to follow structured processes (e.g., starting a feature task).
Contextia uses all three. Tools are the primary interface (12 tools), resources provide always-available context (2 resources), and prompts encode the recommended workflows (3 prompts).
Why MCP instead of CLI wrapping?
AI agents can use a CLI by shelling out and parsing stdout. But this approach has significant downsides:
- Fragile parsing. Terminal output is designed for humans: it includes color codes, progress bars, and formatting that changes between versions. Agents must regex their way through it.
- No type safety. CLI arguments are strings. The agent must guess parameter names and valid values from --help output.
- No discoverability. The agent cannot programmatically list available commands or inspect their schemas.
- Process overhead. Each CLI invocation spawns a new Python process, re-reads config, and rebuilds state.
MCP solves all of these. The server advertises typed tool schemas, returns structured data, and runs as a persistent process for the duration of a session.
The dual interface model
Both the CLI and the MCP server are thin wrappers around the core/ package. Neither contains business logic. The difference is in how they format results:
```
CLI (click)   → core.find_specs(root, query=...) → format_human()
MCP (FastMCP) → core.find_specs(root, query=...) → format_context()
```

The CLI formats for humans: colored output, tables, progress indicators on stderr. The MCP server formats for agents: compact plain text optimized for information density, no ANSI escape codes, no decorative elements.
This means every capability available in the CLI is also available via MCP, with identical behavior. If contextia find auth returns three specs on the command line, mcp_search(query="auth", type="spec") returns the same three specs through the MCP server.
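The pattern can be sketched in a few lines. Everything here is hypothetical (the function names, fields, and sample specs are invented for illustration); the point is the shape: one core function returning structured data, two presentation layers that never diverge in behavior:

```python
# Hypothetical sketch of the dual-interface pattern: shared core logic,
# two formatters. Spec data and field names are invented for this example.

def find_specs(query: str) -> list[dict]:
    """Core logic: used by both the CLI and the MCP server."""
    specs = [
        {"id": "SPEC-001", "title": "Auth flow"},
        {"id": "SPEC-007", "title": "Auth tokens"},
    ]
    return [s for s in specs if query.lower() in s["title"].lower()]

def format_human(specs: list[dict]) -> str:
    """CLI path: ANSI-decorated output for a terminal."""
    return "\n".join(f"  \x1b[1m{s['id']}\x1b[0m  {s['title']}" for s in specs)

def format_context(specs: list[dict]) -> str:
    """MCP path: compact plain text, no escape codes."""
    return "\n".join(f"{s['id']}: {s['title']}" for s in specs)
```

Because both formatters consume the same return value, the two interfaces cannot drift apart: a behavior change in the core shows up identically in both.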
Transport: stdio only
Contextia’s MCP server uses stdio transport exclusively. The client launches the server as a child process and communicates over stdin/stdout using JSON-RPC. There is no HTTP server, no WebSocket, no authentication layer.
```
┌─────────────────┐  stdin/stdout   ┌──────────────────┐
│   AI Client     │ ◄─────────────► │  contextia-mcp   │
│ (Claude Code,   │    JSON-RPC     │ (Python process) │
│  Cursor, etc.)  │                 │                  │
└─────────────────┘                 └──────────────────┘
```

This design is intentional:
- No network exposure. The server runs locally as a subprocess. No ports to open, no TLS to configure, no auth tokens to manage.
- Inherits the working directory. The client passes its cwd to the subprocess, which Contextia uses to find the project root.
- Simple lifecycle. The client starts the server when it needs it and kills it when done. No daemon management.
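In practice, registering a stdio server means telling the client which command to launch. The exact configuration file and schema vary by client, but tools like Claude Desktop and Cursor use an `mcpServers` map along these lines (the `contextia-mcp` command name is taken from the diagram above; check your client's documentation for the file location):

```json
{
  "mcpServers": {
    "contextia": {
      "command": "contextia-mcp"
    }
  }
}
```

No URL, port, or token appears anywhere: the client owns the subprocess and the pipe.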
Project root auto-discovery
The MCP server does not require per-project configuration in the client. When the server starts, it walks up from the inherited working directory looking for .contextia/config.yaml, exactly like git finds .git/:
```python
def find_project_root(start: Path | None = None) -> Path | None:
    current = (start or Path.cwd()).resolve()
    for directory in [current, *current.parents]:
        if (directory / ".contextia" / "config.yaml").exists():
            return directory
    return None
```

The project root is resolved once at the first tool call and cached for the duration of the session. If the working directory is inside a Contextia project, everything works automatically.
Stateless per-call execution
The MCP server is stateless per call. Each tool invocation reads the current state of the .contextia/ directory from disk. There is no in-memory cache of specs, no watch-mode for file changes, no stale data.
This means:
- If you edit a spec file between two mcp_show calls, the second call sees the updated content.
- If another developer pushes changes and you pull, the next tool call reflects those changes.
- The server never holds locks on files and cannot corrupt state.
The only exception is the project root path itself, which is cached after first resolution. If you move the .contextia/ directory while the server is running (an unusual operation), restart the server.
What the server exposes
| Primitive | Count | Purpose |
|---|---|---|
| Tools | 12 | On-demand operations: search, read, assemble, check, log |
| Resources | 2 | Pre-loaded context: project identity, artifact index |
| Prompts | 3 | Workflow templates: feature task, bug fix, end session |
Tools are documented in detail in MCP Tools. Resources are covered in MCP Resources. Prompts are described in MCP Prompts.
How an agent typically interacts
A typical agent session with Contextia follows this pattern:
- Session start. The client pre-loads the contextia://identity resource, giving the agent the project’s name, purpose, and key norms.
- Orientation. The agent calls mcp_list(type="spec") or reads contextia://index to understand what artifacts exist.
- Context assembly. For a specific task, the agent calls mcp_context(task_id="TASK-042") to get the full context bundle: the task itself, linked specs, relevant decisions, applicable norms.
- Work. The agent writes code, consulting individual specs with mcp_show as needed.
- Logging. The agent writes session logs with mcp_write_log to record decisions and progress.
- Session end. The agent calls mcp_close_log to finalize the session record.
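The sequence above can be simulated with stubs that just record the call order. The tool and resource names match the text; the helper functions and the SPEC-001 argument are invented for the illustration, since a real session goes through an MCP client rather than direct function calls:

```python
# Simulation of the typical session flow: stubs record what an agent
# would invoke, in order. Helper names here are illustrative only.
calls: list[str] = []

def read_resource(uri: str) -> None:
    calls.append(f"resource:{uri}")

def call_tool(name: str, **params) -> None:
    calls.append(f"tool:{name}")

read_resource("contextia://identity")                # session start
call_tool("mcp_list", type="spec")                   # orientation
call_tool("mcp_context", task_id="TASK-042")         # context assembly
call_tool("mcp_show", id="SPEC-001")                 # work
call_tool("mcp_write_log", note="chose approach A")  # logging
call_tool("mcp_close_log")                           # session end
```

Note the asymmetry: the identity resource is pushed by the client before any work happens, while every later step is a tool call the agent chooses to make.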
This workflow is encoded in the start_feature_task and end_session prompts, which the agent can follow step by step.