4. Tools
Tools allow the AI to perform actions with side effects, like executing code or querying a database.
- JSON Schema: Servers define exactly what arguments a tool requires using JSON Schema, ensuring the LLM formats its requests correctly.
- User Control: Because tools can have side effects (like deleting files), clients typically require explicit human approval before executing them.
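The two bullets above can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not the MCP SDK: the `delete_file` tool, its field names, and the `validate_arguments` helper are all assumptions chosen to mirror a JSON Schema tool definition.

```python
# Hypothetical tool advertisement: a destructive "delete_file" tool whose
# arguments are described with JSON Schema (names here are illustrative).
delete_file_tool = {
    "name": "delete_file",
    "description": "Delete a file at the given path (destructive).",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Absolute file path"},
        },
        "required": ["path"],
    },
}

def validate_arguments(tool: dict, args: dict) -> list[str]:
    """Return the required argument names missing from a proposed call."""
    schema = tool["inputSchema"]
    return [key for key in schema.get("required", []) if key not in args]

# A client would show this tool (and the concrete arguments) to the user
# for approval before ever executing the call.
missing = validate_arguments(delete_file_tool, {})            # ["path"]
ok = validate_arguments(delete_file_tool, {"path": "/tmp/x"})  # []
```

Because the schema names the required fields, a malformed LLM request can be rejected before the side effect ever runs.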
5. Resources
Resources expose read-only data (like files, API responses, or database schemas) to the LLM as context.
- URI-Based: Every resource is identified by a unique URI (e.g., `postgres://schema/users`).
- Subscriptions: Clients can subscribe to a resource, and the server will notify the client whenever the underlying data changes, keeping the LLM's context fresh.
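A URI-keyed registry with change notifications can be sketched like this. It is a toy, in-process stand-in for a real server, assuming hypothetical reader functions and callbacks; in actual MCP the "notify" step would be a protocol message to the client rather than a direct callback.

```python
from typing import Callable

class ResourceRegistry:
    """Toy server-side resource table: URIs map to reader functions,
    and subscribers are called back when a resource changes."""

    def __init__(self) -> None:
        self._readers: dict[str, Callable[[], str]] = {}
        self._subscribers: dict[str, list[Callable[[str], None]]] = {}

    def register(self, uri: str, reader: Callable[[], str]) -> None:
        self._readers[uri] = reader

    def read(self, uri: str) -> str:
        return self._readers[uri]()

    def subscribe(self, uri: str, on_change: Callable[[str], None]) -> None:
        self._subscribers.setdefault(uri, []).append(on_change)

    def notify_changed(self, uri: str) -> None:
        # In a real server this would send an "updated" notification over
        # the wire; here we just invoke local callbacks.
        for callback in self._subscribers.get(uri, []):
            callback(uri)

registry = ResourceRegistry()
registry.register("file:///tmp/notes.txt", lambda: "meeting notes")
seen: list[str] = []
registry.subscribe("file:///tmp/notes.txt", seen.append)
registry.notify_changed("file:///tmp/notes.txt")
```

On notification, the client re-reads the URI and refreshes the LLM's context with the new content.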
6. Prompts
Servers can define standardized, parameter-driven prompts to guide the LLM through specific tasks.
- Server-Side Logic: Instead of users copying and pasting complex system prompts, the server provides them dynamically based on the current context (e.g., a "Code Review" prompt that automatically includes the current git diff).
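A parameter-driven prompt like the "Code Review" example might look like the sketch below. The function name, parameters, and message shape are assumptions for illustration; the point is that the server interpolates live context (the diff) so the user never pastes it by hand.

```python
def code_review_prompt(diff: str, focus: str = "correctness") -> list[dict]:
    """Hypothetical server-side prompt builder: returns the messages the
    client will hand to the LLM, with the current diff injected."""
    return [
        {
            "role": "user",
            "content": (
                f"Review the following change with a focus on {focus}. "
                "Flag bugs, style issues, and missing tests.\n\n"
                f"```diff\n{diff}\n```"
            ),
        }
    ]

# The server would call this with the output of `git diff` at request time.
messages = code_review_prompt("+ x = compute_total(items)", focus="security")
```

Because the prompt lives on the server, updating review guidelines is a server deploy, not a copy-paste chore for every user.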
7. Sampling
Sampling is an advanced feature that reverses the usual flow: the server asks the client's LLM to perform a sub-task on its behalf.
- Agentic Workflows: Allows an MCP server to use the host's LLM to summarize data, format responses, or make routing decisions internally before returning the final result to the user.
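The delegation pattern can be sketched as a server-side helper that takes a sampling callback supplied by the client. Everything here is illustrative: `summarize_rows` and the `sample` parameter are hypothetical stand-ins for the real request/response exchange, in which the host typically keeps the user in the loop.

```python
from typing import Callable

def summarize_rows(rows: list[str], sample: Callable[[str], str]) -> str:
    """Server-side helper that delegates summarization to the client's LLM.
    `sample` stands in for the client's sampling handler; the host may
    require user approval before fulfilling the request."""
    prompt = "Summarize these records in one sentence:\n" + "\n".join(rows)
    return sample(prompt)

# In tests (or a host without an LLM), the callback can be a simple stub.
summary = summarize_rows(
    ["2024-01-02, $40", "2024-01-03, $55"],
    lambda prompt: "Two purchases totaling $95 over two days.",
)
```

The server never needs its own model or API key: intelligence stays on the client side, and the server just shapes the request.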
The Bottom Line: The three pillars of MCP are Tools (actions), Resources (context), and Prompts (instructions), with Sampling extending them for agentic workflows. Mastering these lets you build highly capable, specialized AI integrations.