| Field | Value |
| --- | --- |
| Name | pptx-extractor-mcp |
| Version | 0.1.0 |
| Summary | MCP server for extracting PPTX files to Marp format |
| Upload time | 2025-07-20 14:30:35 |
| Requires Python | >=3.10 |
| Keywords | mcp, extractor, marp, pptx |
# MCP Python SDK
<div align="center">
<strong>Python implementation of the Model Context Protocol (MCP)</strong>
[![PyPI][pypi-badge]][pypi-url]
[![MIT licensed][mit-badge]][mit-url]
[![Python Version][python-badge]][python-url]
[![Documentation][docs-badge]][docs-url]
[![Specification][spec-badge]][spec-url]
[![GitHub Discussions][discussions-badge]][discussions-url]
</div>
<!-- omit in toc -->
## Table of Contents
- [MCP Python SDK](#mcp-python-sdk)
- [Overview](#overview)
- [Installation](#installation)
  - [Adding MCP to your Python project](#adding-mcp-to-your-python-project)
- [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)
- [Quickstart](#quickstart)
- [What is MCP?](#what-is-mcp)
- [Core Concepts](#core-concepts)
- [Server](#server)
- [Resources](#resources)
- [Tools](#tools)
- [Structured Output](#structured-output)
- [Prompts](#prompts)
- [Images](#images)
- [Context](#context)
- [Completions](#completions)
- [Elicitation](#elicitation)
- [Sampling](#sampling)
- [Logging and Notifications](#logging-and-notifications)
- [Authentication](#authentication)
- [Running Your Server](#running-your-server)
- [Development Mode](#development-mode)
- [Claude Desktop Integration](#claude-desktop-integration)
  - [Direct Execution](#direct-execution)
  - [Streamable HTTP Transport](#streamable-http-transport)
- [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)
- [Advanced Usage](#advanced-usage)
- [Low-Level Server](#low-level-server)
- [Writing MCP Clients](#writing-mcp-clients)
- [Parsing Tool Results](#parsing-tool-results)
- [MCP Primitives](#mcp-primitives)
- [Server Capabilities](#server-capabilities)
- [Documentation](#documentation)
- [Contributing](#contributing)
- [License](#license)
[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg
[pypi-url]: https://pypi.org/project/mcp/
[mit-badge]: https://img.shields.io/pypi/l/mcp.svg
[mit-url]: https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE
[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg
[python-url]: https://www.python.org/downloads/
[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg
[docs-url]: https://modelcontextprotocol.io
[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg
[spec-url]: https://spec.modelcontextprotocol.io
[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk
[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions
## Overview
The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio, SSE, and Streamable HTTP
- Handle all MCP protocol messages and lifecycle events
## Installation
### Adding MCP to your Python project
We recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.
If you haven't created a uv-managed project yet, create one:
```bash
uv init mcp-server-demo
cd mcp-server-demo
```
Then add MCP to your project dependencies:
```bash
uv add "mcp[cli]"
```
Alternatively, for projects using pip for dependencies:
```bash
pip install "mcp[cli]"
```
### Running the standalone MCP development tools
To run the mcp command with uv:
```bash
uv run mcp
```
## Quickstart
Let's create a simple MCP server that exposes a calculator tool and some data:
<!-- snippet-source examples/snippets/servers/fastmcp_quickstart.py -->
```python
"""
FastMCP quickstart example.
cd to the `examples/snippets/clients` directory and run:
uv run server fastmcp_quickstart stdio
"""
from mcp.server.fastmcp import FastMCP
# Create an MCP server
mcp = FastMCP("Demo")
# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
"""Get a personalized greeting"""
return f"Hello, {name}!"
# Add a prompt
@mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
"""Generate a greeting prompt"""
styles = {
"friendly": "Please write a warm, friendly greeting",
"formal": "Please write a formal, professional greeting",
"casual": "Please write a casual, relaxed greeting",
}
return f"{styles.get(style, styles['friendly'])} for someone named {name}."
```
_Full example: [examples/snippets/servers/fastmcp_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/fastmcp_quickstart.py)_
<!-- /snippet-source -->
You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
```bash
uv run mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
```bash
uv run mcp dev server.py
```
## What is MCP?
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)
- And more!
## Core Concepts
### Server
The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:
<!-- snippet-source examples/snippets/servers/lifespan_example.py -->
```python
"""Example showing lifespan support for startup/shutdown with strong typing."""
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass
from mcp.server.fastmcp import Context, FastMCP
# Mock database class for example
class Database:
"""Mock database class for example."""
@classmethod
async def connect(cls) -> "Database":
"""Connect to database."""
return cls()
async def disconnect(self) -> None:
"""Disconnect from database."""
pass
def query(self) -> str:
"""Execute a query."""
return "Query result"
@dataclass
class AppContext:
"""Application context with typed dependencies."""
db: Database
@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
"""Manage application lifecycle with type-safe context."""
# Initialize on startup
db = await Database.connect()
try:
yield AppContext(db=db)
finally:
# Cleanup on shutdown
await db.disconnect()
# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)
# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
"""Tool that uses initialized resources."""
db = ctx.request_context.lifespan_context.db
return db.query()
```
_Full example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_
<!-- /snippet-source -->
### Resources
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
<!-- snippet-source examples/snippets/servers/basic_resource.py -->
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP(name="Resource Example")
@mcp.resource("file://documents/{name}")
def read_document(name: str) -> str:
"""Read a document by name."""
# This would normally read from disk
return f"Content of {name}"
@mcp.resource("config://settings")
def get_settings() -> str:
"""Get application settings."""
return """{
"theme": "dark",
"language": "en",
"debug": false
}"""
```
_Full example: [examples/snippets/servers/basic_resource.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_resource.py)_
<!-- /snippet-source -->
### Tools
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
<!-- snippet-source examples/snippets/servers/basic_tool.py -->
```python
from mcp.server.fastmcp import FastMCP
mcp = FastMCP(name="Tool Example")
@mcp.tool()
def sum(a: int, b: int) -> int:
"""Add two numbers together."""
return a + b
@mcp.tool()
def get_weather(city: str, unit: str = "celsius") -> str:
"""Get weather for a city."""
# This would normally call a weather API
    return f"Weather in {city}: 22°{unit[0].upper()}"
```
_Full example: [examples/snippets/servers/basic_tool.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_tool.py)_
<!-- /snippet-source -->
#### Structured Output
Tools return structured results by default if their return type
annotation is compatible; otherwise they return unstructured results.
Structured output supports these return types:
- Pydantic models (BaseModel subclasses)
- TypedDicts
- Dataclasses and other classes with type hints
- `dict[str, T]` (where T is any JSON-serializable type)
- Primitive types (str, int, float, bool, bytes, None) - wrapped in `{"result": value}`
- Generic types (list, tuple, Union, Optional, etc.) - wrapped in `{"result": value}`
Classes without type hints cannot be serialized for structured output. Only
classes with properly annotated attributes will be converted to Pydantic models
for schema generation and validation.
Structured results are automatically validated against the output schema
generated from the annotation. This ensures the tool returns well-typed,
validated data that clients can easily process.
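The generated output schema is ordinary Pydantic JSON Schema. As a rough illustration of what gets generated, independent of the SDK itself:

```python
from pydantic import BaseModel, Field


class WeatherData(BaseModel):
    """Example output model with annotated fields."""

    temperature: float = Field(description="Temperature in Celsius")
    condition: str


# The kind of schema structured results are validated against
schema = WeatherData.model_json_schema()
print(schema["required"])  # ['temperature', 'condition']
```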
**Note:** For backward compatibility, unstructured results are also
returned alongside structured ones, and remain quirks-compatible with
previous versions of FastMCP in the current version of the SDK.
**Note:** In cases where a tool function's return type annotation
causes the tool to be classified as structured _and this is undesirable_,
the classification can be suppressed by passing `structured_output=False`
to the `@tool` decorator.
<!-- snippet-source examples/snippets/servers/structured_output.py -->
```python
"""Example showing structured output with tools."""
from typing import TypedDict
from pydantic import BaseModel, Field
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("Structured Output Example")
# Using Pydantic models for rich structured data
class WeatherData(BaseModel):
"""Weather information structure."""
temperature: float = Field(description="Temperature in Celsius")
humidity: float = Field(description="Humidity percentage")
condition: str
wind_speed: float
@mcp.tool()
def get_weather(city: str) -> WeatherData:
"""Get weather for a city - returns structured data."""
# Simulated weather data
return WeatherData(
temperature=72.5,
humidity=45.0,
condition="sunny",
wind_speed=5.2,
)
# Using TypedDict for simpler structures
class LocationInfo(TypedDict):
latitude: float
longitude: float
name: str
@mcp.tool()
def get_location(address: str) -> LocationInfo:
"""Get location coordinates"""
return LocationInfo(latitude=51.5074, longitude=-0.1278, name="London, UK")
# Using dict[str, Any] for flexible schemas
@mcp.tool()
def get_statistics(data_type: str) -> dict[str, float]:
"""Get various statistics"""
return {"mean": 42.5, "median": 40.0, "std_dev": 5.2}
# Ordinary classes with type hints work for structured output
class UserProfile:
name: str
age: int
email: str | None = None
def __init__(self, name: str, age: int, email: str | None = None):
self.name = name
self.age = age
self.email = email
@mcp.tool()
def get_user(user_id: str) -> UserProfile:
"""Get user profile - returns structured data"""
return UserProfile(name="Alice", age=30, email="alice@example.com")
# Classes WITHOUT type hints cannot be used for structured output
class UntypedConfig:
def __init__(self, setting1, setting2):
self.setting1 = setting1
self.setting2 = setting2
@mcp.tool()
def get_config() -> UntypedConfig:
"""This returns unstructured output - no schema generated"""
return UntypedConfig("value1", "value2")
# Lists and other types are wrapped automatically
@mcp.tool()
def list_cities() -> list[str]:
"""Get a list of cities"""
return ["London", "Paris", "Tokyo"]
# Returns: {"result": ["London", "Paris", "Tokyo"]}
@mcp.tool()
def get_temperature(city: str) -> float:
"""Get temperature as a simple float"""
return 22.5
# Returns: {"result": 22.5}
```
_Full example: [examples/snippets/servers/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/structured_output.py)_
<!-- /snippet-source -->
### Prompts
Prompts are reusable templates that help LLMs interact with your server effectively:
<!-- snippet-source examples/snippets/servers/basic_prompt.py -->
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base
mcp = FastMCP(name="Prompt Example")
@mcp.prompt(title="Code Review")
def review_code(code: str) -> str:
return f"Please review this code:\n\n{code}"
@mcp.prompt(title="Debug Assistant")
def debug_error(error: str) -> list[base.Message]:
return [
base.UserMessage("I'm seeing this error:"),
base.UserMessage(error),
base.AssistantMessage("I'll help debug that. What have you tried so far?"),
]
```
_Full example: [examples/snippets/servers/basic_prompt.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_prompt.py)_
<!-- /snippet-source -->
### Images
FastMCP provides an `Image` class that automatically handles image data:
<!-- snippet-source examples/snippets/servers/images.py -->
```python
"""Example showing image handling with FastMCP."""
from PIL import Image as PILImage
from mcp.server.fastmcp import FastMCP, Image
mcp = FastMCP("Image Example")
@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
"""Create a thumbnail from an image"""
img = PILImage.open(image_path)
img.thumbnail((100, 100))
return Image(data=img.tobytes(), format="png")
```
_Full example: [examples/snippets/servers/images.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/images.py)_
<!-- /snippet-source -->
### Context
The Context object gives your tools and resources access to MCP capabilities:
<!-- snippet-source examples/snippets/servers/tool_progress.py -->
```python
from mcp.server.fastmcp import Context, FastMCP
mcp = FastMCP(name="Progress Example")
@mcp.tool()
async def long_running_task(task_name: str, ctx: Context, steps: int = 5) -> str:
"""Execute a task with progress updates."""
await ctx.info(f"Starting: {task_name}")
for i in range(steps):
progress = (i + 1) / steps
await ctx.report_progress(
progress=progress,
total=1.0,
message=f"Step {i + 1}/{steps}",
)
await ctx.debug(f"Completed step {i + 1}")
return f"Task '{task_name}' completed"
```
_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_
<!-- /snippet-source -->
### Completions
MCP supports providing completion suggestions for prompt arguments and resource template parameters. With the context parameter, servers can provide completions based on previously resolved values:
Client usage:
<!-- snippet-source examples/snippets/clients/completion_client.py -->
```python
"""
cd to the `examples/snippets` directory and run:
uv run completion-client
"""
import asyncio
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.types import PromptReference, ResourceTemplateReference
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="uv", # Using uv to run the server
args=["run", "server", "completion", "stdio"], # Server with completion support
env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)
async def run():
"""Run the completion client example."""
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
# Initialize the connection
await session.initialize()
# List available resource templates
templates = await session.list_resource_templates()
print("Available resource templates:")
for template in templates.resourceTemplates:
print(f" - {template.uriTemplate}")
# List available prompts
prompts = await session.list_prompts()
print("\nAvailable prompts:")
for prompt in prompts.prompts:
print(f" - {prompt.name}")
# Complete resource template arguments
if templates.resourceTemplates:
template = templates.resourceTemplates[0]
print(f"\nCompleting arguments for resource template: {template.uriTemplate}")
# Complete without context
result = await session.complete(
ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate),
argument={"name": "owner", "value": "model"},
)
print(f"Completions for 'owner' starting with 'model': {result.completion.values}")
# Complete with context - repo suggestions based on owner
result = await session.complete(
ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate),
argument={"name": "repo", "value": ""},
context_arguments={"owner": "modelcontextprotocol"},
)
print(f"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}")
# Complete prompt arguments
if prompts.prompts:
prompt_name = prompts.prompts[0].name
print(f"\nCompleting arguments for prompt: {prompt_name}")
result = await session.complete(
ref=PromptReference(type="ref/prompt", name=prompt_name),
argument={"name": "style", "value": ""},
)
print(f"Completions for 'style' argument: {result.completion.values}")
def main():
"""Entry point for the completion client."""
asyncio.run(run())
if __name__ == "__main__":
main()
```
_Full example: [examples/snippets/clients/completion_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/completion_client.py)_
<!-- /snippet-source -->
### Elicitation
Elicitation lets a server request additional information from users. This example shows an elicitation during a tool call:
<!-- snippet-source examples/snippets/servers/elicitation.py -->
```python
from pydantic import BaseModel, Field
from mcp.server.fastmcp import Context, FastMCP
mcp = FastMCP(name="Elicitation Example")
class BookingPreferences(BaseModel):
"""Schema for collecting user preferences."""
checkAlternative: bool = Field(description="Would you like to check another date?")
alternativeDate: str = Field(
default="2024-12-26",
description="Alternative date (YYYY-MM-DD)",
)
@mcp.tool()
async def book_table(
date: str,
time: str,
party_size: int,
ctx: Context,
) -> str:
"""Book a table with date availability check."""
# Check if date is available
if date == "2024-12-25":
# Date unavailable - ask user for alternative
result = await ctx.elicit(
message=(f"No tables available for {party_size} on {date}. Would you like to try another date?"),
schema=BookingPreferences,
)
if result.action == "accept" and result.data:
if result.data.checkAlternative:
return f"[SUCCESS] Booked for {result.data.alternativeDate}"
return "[CANCELLED] No booking made"
return "[CANCELLED] Booking cancelled"
# Date available
return f"[SUCCESS] Booked for {date} at {time}"
```
_Full example: [examples/snippets/servers/elicitation.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/elicitation.py)_
<!-- /snippet-source -->
The `elicit()` method returns an `ElicitationResult` with:
- `action`: "accept", "decline", or "cancel"
- `data`: The validated response (only when accepted)
- `validation_error`: Any validation error message
### Sampling
Tools can interact with LLMs through sampling (generating text):
<!-- snippet-source examples/snippets/servers/sampling.py -->
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent
mcp = FastMCP(name="Sampling Example")
@mcp.tool()
async def generate_poem(topic: str, ctx: Context) -> str:
"""Generate a poem using LLM sampling."""
prompt = f"Write a short poem about {topic}"
result = await ctx.session.create_message(
messages=[
SamplingMessage(
role="user",
content=TextContent(type="text", text=prompt),
)
],
max_tokens=100,
)
if result.content.type == "text":
return result.content.text
return str(result.content)
```
_Full example: [examples/snippets/servers/sampling.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/sampling.py)_
<!-- /snippet-source -->
### Logging and Notifications
Tools can send logs and notifications through the context:
<!-- snippet-source examples/snippets/servers/notifications.py -->
```python
from mcp.server.fastmcp import Context, FastMCP
mcp = FastMCP(name="Notifications Example")
@mcp.tool()
async def process_data(data: str, ctx: Context) -> str:
"""Process data with logging."""
# Different log levels
await ctx.debug(f"Debug: Processing '{data}'")
await ctx.info("Info: Starting processing")
await ctx.warning("Warning: This is experimental")
await ctx.error("Error: (This is just a demo)")
# Notify about resource changes
await ctx.session.send_resource_list_changed()
return f"Processed: {data}"
```
_Full example: [examples/snippets/servers/notifications.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/notifications.py)_
<!-- /snippet-source -->
### Authentication
Authentication can be used by servers that want to expose tools accessing protected resources.
`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). This follows the [MCP authorization specification](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization) and implements RFC 9728 (Protected Resource Metadata) for AS discovery.
MCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol:
<!-- snippet-source examples/snippets/servers/oauth_server.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/servers/oauth_server.py
"""
from pydantic import AnyHttpUrl
from mcp.server.auth.provider import AccessToken, TokenVerifier
from mcp.server.auth.settings import AuthSettings
from mcp.server.fastmcp import FastMCP
class SimpleTokenVerifier(TokenVerifier):
"""Simple token verifier for demonstration."""
async def verify_token(self, token: str) -> AccessToken | None:
pass # This is where you would implement actual token validation
# Create FastMCP instance as a Resource Server
mcp = FastMCP(
"Weather Service",
# Token verifier for authentication
token_verifier=SimpleTokenVerifier(),
# Auth settings for RFC 9728 Protected Resource Metadata
auth=AuthSettings(
issuer_url=AnyHttpUrl("https://auth.example.com"), # Authorization Server URL
resource_server_url=AnyHttpUrl("http://localhost:3001"), # This server's URL
required_scopes=["user"],
),
)
@mcp.tool()
async def get_weather(city: str = "London") -> dict[str, str]:
"""Get weather data for a city"""
return {
"city": city,
"temperature": "22",
"condition": "Partly cloudy",
"humidity": "65%",
}
if __name__ == "__main__":
mcp.run(transport="streamable-http")
```
_Full example: [examples/snippets/servers/oauth_server.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/oauth_server.py)_
<!-- /snippet-source -->
For a complete example with separate Authorization Server and Resource Server implementations, see [`examples/servers/simple-auth/`](examples/servers/simple-auth/).
**Architecture:**
- **Authorization Server (AS)**: Handles OAuth flows, user authentication, and token issuance
- **Resource Server (RS)**: Your MCP server that validates tokens and serves protected resources
- **Client**: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server
See [TokenVerifier](src/mcp/server/auth/provider.py) for more details on implementing token validation.
## Running Your Server
### Development Mode
The fastest way to test and debug your server is with the MCP Inspector:
```bash
uv run mcp dev server.py
# Add dependencies
uv run mcp dev server.py --with pandas --with numpy
# Mount local code
uv run mcp dev server.py --with-editable .
```
### Claude Desktop Integration
Once your server is ready, install it in Claude Desktop:
```bash
uv run mcp install server.py
# Custom name
uv run mcp install server.py --name "My Analytics Server"
# Environment variables
uv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
uv run mcp install server.py -f .env
```
### Direct Execution
For advanced scenarios like custom deployments:
<!-- snippet-source examples/snippets/servers/direct_execution.py -->
```python
"""Example showing direct execution of an MCP server.
This is the simplest way to run an MCP server directly.
cd to the `examples/snippets` directory and run:
uv run direct-execution-server
or
python servers/direct_execution.py
"""
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("My App")
@mcp.tool()
def hello(name: str = "World") -> str:
"""Say hello to someone."""
return f"Hello, {name}!"
def main():
"""Entry point for the direct execution server."""
mcp.run()
if __name__ == "__main__":
main()
```
_Full example: [examples/snippets/servers/direct_execution.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_execution.py)_
<!-- /snippet-source -->
Run it with:
```bash
python servers/direct_execution.py
# or
uv run mcp run servers/direct_execution.py
```
Note that `uv run mcp run` and `uv run mcp dev` only support servers using FastMCP, not the low-level server variant.
### Streamable HTTP Transport
> **Note**: Streamable HTTP transport is superseding SSE transport for production deployments.
<!-- snippet-source examples/snippets/servers/streamable_config.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/servers/streamable_config.py
"""
from mcp.server.fastmcp import FastMCP
# Stateful server (maintains session state)
mcp = FastMCP("StatefulServer")
# Other configuration options:
# Stateless server (no session persistence)
# mcp = FastMCP("StatelessServer", stateless_http=True)
# Stateless server (no session persistence, no sse stream with supported client)
# mcp = FastMCP("StatelessServer", stateless_http=True, json_response=True)
# Add a simple tool to demonstrate the server
@mcp.tool()
def greet(name: str = "World") -> str:
"""Greet someone by name."""
return f"Hello, {name}!"
# Run server with streamable_http transport
if __name__ == "__main__":
mcp.run(transport="streamable-http")
```
_Full example: [examples/snippets/servers/streamable_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_config.py)_
<!-- /snippet-source -->
You can mount multiple FastMCP servers in a Starlette application:
<!-- snippet-source examples/snippets/servers/streamable_starlette_mount.py -->
```python
"""
Run from the repository root:
uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload
"""
import contextlib
from starlette.applications import Starlette
from starlette.routing import Mount
from mcp.server.fastmcp import FastMCP
# Create the Echo server
echo_mcp = FastMCP(name="EchoServer", stateless_http=True)
@echo_mcp.tool()
def echo(message: str) -> str:
"""A simple echo tool"""
return f"Echo: {message}"
# Create the Math server
math_mcp = FastMCP(name="MathServer", stateless_http=True)
@math_mcp.tool()
def add_two(n: int) -> int:
"""Tool to add two to the input"""
return n + 2
# Create a combined lifespan to manage both session managers
@contextlib.asynccontextmanager
async def lifespan(app: Starlette):
async with contextlib.AsyncExitStack() as stack:
await stack.enter_async_context(echo_mcp.session_manager.run())
await stack.enter_async_context(math_mcp.session_manager.run())
yield
# Create the Starlette app and mount the MCP servers
app = Starlette(
routes=[
Mount("/echo", echo_mcp.streamable_http_app()),
Mount("/math", math_mcp.streamable_http_app()),
],
lifespan=lifespan,
)
```
_Full example: [examples/snippets/servers/streamable_starlette_mount.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_starlette_mount.py)_
<!-- /snippet-source -->
For low-level server implementations with Streamable HTTP, see:
- Stateful server: [`examples/servers/simple-streamablehttp/`](examples/servers/simple-streamablehttp/)
- Stateless server: [`examples/servers/simple-streamablehttp-stateless/`](examples/servers/simple-streamablehttp-stateless/)
The streamable HTTP transport supports:
- Stateful and stateless operation modes
- Resumability with event stores
- JSON or SSE response formats
- Better scalability for multi-node deployments
### Mounting to an Existing ASGI Server
By default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. You can customize these paths using the methods described below.
For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).
#### SSE servers
> **Note**: SSE transport is being superseded by [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http).
You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
```python
from starlette.applications import Starlette
from starlette.routing import Mount, Host
from mcp.server.fastmcp import FastMCP
mcp = FastMCP("My App")
# Mount the SSE server to the existing ASGI server
app = Starlette(
routes=[
Mount('/', app=mcp.sse_app()),
]
)
# or dynamically mount as host
app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
```
When mounting multiple MCP servers under different paths, you can configure the mount path in several ways:
```python
from starlette.applications import Starlette
from starlette.routing import Mount
from mcp.server.fastmcp import FastMCP
# Create multiple MCP servers
github_mcp = FastMCP("GitHub API")
browser_mcp = FastMCP("Browser")
curl_mcp = FastMCP("Curl")
search_mcp = FastMCP("Search")
# Method 1: Configure mount paths via settings (recommended for persistent configuration)
github_mcp.settings.mount_path = "/github"
browser_mcp.settings.mount_path = "/browser"
# Method 2: Pass mount path directly to sse_app (preferred for ad-hoc mounting)
# This approach doesn't modify the server's settings permanently
# Create Starlette app with multiple mounted servers
app = Starlette(
routes=[
# Using settings-based configuration
Mount("/github", app=github_mcp.sse_app()),
Mount("/browser", app=browser_mcp.sse_app()),
# Using direct mount path parameter
Mount("/curl", app=curl_mcp.sse_app("/curl")),
Mount("/search", app=search_mcp.sse_app("/search")),
]
)
# Method 3: For direct execution, you can also pass the mount path to run()
if __name__ == "__main__":
search_mcp.run(transport="sse", mount_path="/search")
```
For more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).
## Advanced Usage
### Low-Level Server
For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:
<!-- snippet-source examples/snippets/servers/lowlevel/lifespan.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/servers/lowlevel/lifespan.py
"""
from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions
# Mock database class for example
class Database:
"""Mock database class for example."""
@classmethod
async def connect(cls) -> "Database":
"""Connect to database."""
print("Database connected")
return cls()
async def disconnect(self) -> None:
"""Disconnect from database."""
print("Database disconnected")
async def query(self, query_str: str) -> list[dict[str, str]]:
"""Execute a query."""
# Simulate database query
return [{"id": "1", "name": "Example", "query": query_str}]
@asynccontextmanager
async def server_lifespan(_server: Server) -> AsyncIterator[dict]:
"""Manage server startup and shutdown lifecycle."""
# Initialize resources on startup
db = await Database.connect()
try:
yield {"db": db}
finally:
# Clean up on shutdown
await db.disconnect()
# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)
@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
"""List available tools."""
return [
types.Tool(
name="query_db",
description="Query the database",
inputSchema={
"type": "object",
"properties": {"query": {"type": "string", "description": "SQL query to execute"}},
"required": ["query"],
},
)
]
@server.call_tool()
async def query_db(name: str, arguments: dict) -> list[types.TextContent]:
"""Handle database query tool call."""
if name != "query_db":
raise ValueError(f"Unknown tool: {name}")
# Access lifespan context
ctx = server.request_context
db = ctx.lifespan_context["db"]
# Execute query
results = await db.query(arguments["query"])
return [types.TextContent(type="text", text=f"Query results: {results}")]
async def run():
"""Run the server with lifespan management."""
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="example-server",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
import asyncio
asyncio.run(run())
```
_Full example: [examples/snippets/servers/lowlevel/lifespan.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/lifespan.py)_
<!-- /snippet-source -->
The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers
<!-- snippet-source examples/snippets/servers/lowlevel/basic.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/servers/lowlevel/basic.py
"""
import asyncio
import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions
# Create a server instance
server = Server("example-server")
@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
"""List available prompts."""
return [
types.Prompt(
name="example-prompt",
description="An example prompt template",
arguments=[types.PromptArgument(name="arg1", description="Example argument", required=True)],
)
]
@server.get_prompt()
async def handle_get_prompt(name: str, arguments: dict[str, str] | None) -> types.GetPromptResult:
"""Get a specific prompt by name."""
if name != "example-prompt":
raise ValueError(f"Unknown prompt: {name}")
arg1_value = (arguments or {}).get("arg1", "default")
return types.GetPromptResult(
description="Example prompt",
messages=[
types.PromptMessage(
role="user",
content=types.TextContent(type="text", text=f"Example prompt text with argument: {arg1_value}"),
)
],
)
async def run():
"""Run the basic low-level server."""
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="example",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
asyncio.run(run())
```
_Full example: [examples/snippets/servers/lowlevel/basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/basic.py)_
<!-- /snippet-source -->
Caution: The `uv run mcp run` and `uv run mcp dev` tools don't support the low-level server.
#### Structured Output Support
The low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data. Tools can define an `outputSchema` to validate their structured output:
<!-- snippet-source examples/snippets/servers/lowlevel/structured_output.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/servers/lowlevel/structured_output.py
"""
import asyncio
from typing import Any
import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions
server = Server("example-server")
@server.list_tools()
async def list_tools() -> list[types.Tool]:
"""List available tools with structured output schemas."""
return [
types.Tool(
name="get_weather",
description="Get current weather for a city",
inputSchema={
"type": "object",
"properties": {"city": {"type": "string", "description": "City name"}},
"required": ["city"],
},
outputSchema={
"type": "object",
"properties": {
"temperature": {"type": "number", "description": "Temperature in Celsius"},
"condition": {"type": "string", "description": "Weather condition"},
"humidity": {"type": "number", "description": "Humidity percentage"},
"city": {"type": "string", "description": "City name"},
},
"required": ["temperature", "condition", "humidity", "city"],
},
)
]
@server.call_tool()
async def call_tool(name: str, arguments: dict[str, Any]) -> dict[str, Any]:
"""Handle tool calls with structured output."""
if name == "get_weather":
city = arguments["city"]
# Simulated weather data - in production, call a weather API
weather_data = {
"temperature": 22.5,
"condition": "partly cloudy",
"humidity": 65,
"city": city, # Include the requested city
}
# low-level server will validate structured output against the tool's
# output schema, and additionally serialize it into a TextContent block
# for backwards compatibility with pre-2025-06-18 clients.
return weather_data
else:
raise ValueError(f"Unknown tool: {name}")
async def run():
"""Run the structured output server."""
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="structured-output-example",
server_version="0.1.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
asyncio.run(run())
```
_Full example: [examples/snippets/servers/lowlevel/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/structured_output.py)_
<!-- /snippet-source -->
Tools can return data in three ways:
1. **Content only**: Return a list of content blocks (default behavior before spec revision 2025-06-18)
2. **Structured data only**: Return a dictionary that will be serialized to JSON (introduced in spec revision 2025-06-18)
3. **Both**: Return a `(content, structured_data)` tuple; this is the preferred option for backwards compatibility
When an `outputSchema` is defined, the server automatically validates the structured output against the schema. This ensures type safety and helps catch errors early.
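To make the validation step concrete, here is a minimal, hand-rolled sketch of what checking a result against the weather tool's `outputSchema` involves. This is illustrative only; the SDK performs full JSON Schema validation for you, and this simplified checker covers just required fields and basic types:

```python
# Illustrative only: a minimal check of structured output against a schema.
# The SDK validates automatically; this sketch just makes the behavior concrete.
schema = {
    "type": "object",
    "properties": {
        "temperature": {"type": "number"},
        "condition": {"type": "string"},
        "humidity": {"type": "number"},
        "city": {"type": "string"},
    },
    "required": ["temperature", "condition", "humidity", "city"],
}

# Map JSON Schema type names to the Python types they accept
TYPE_CHECKS = {"number": (int, float), "string": str, "object": dict}


def validate(data: dict, schema: dict) -> list[str]:
    """Return a list of validation errors (an empty list means valid)."""
    errors = [
        f"missing required field: {key}"
        for key in schema.get("required", [])
        if key not in data
    ]
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], TYPE_CHECKS[spec["type"]]):
            errors.append(f"{key}: expected {spec['type']}")
    return errors


print(validate({"temperature": 22.5, "condition": "partly cloudy",
                "humidity": 65, "city": "London"}, schema))  # []
print(validate({"temperature": "hot"}, schema))  # missing fields + type error
```

If a tool's return value fails this kind of check, the server reports an error to the client instead of returning malformed data.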
### Writing MCP Clients
The SDK provides a high-level client interface for connecting to MCP servers using various [transports](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports):
<!-- snippet-source examples/snippets/clients/stdio_client.py -->
```python
"""
cd to the `examples/snippets/clients` directory and run:
uv run client
"""
import asyncio
import os
from pydantic import AnyUrl
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
from mcp.shared.context import RequestContext
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="uv", # Using uv to run the server
args=["run", "server", "fastmcp_quickstart", "stdio"], # We're already in snippets dir
env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)
# Optional: create a sampling callback
async def handle_sampling_message(
context: RequestContext, params: types.CreateMessageRequestParams
) -> types.CreateMessageResult:
print(f"Sampling request: {params.messages}")
return types.CreateMessageResult(
role="assistant",
content=types.TextContent(
type="text",
text="Hello, world! from model",
),
model="gpt-3.5-turbo",
stopReason="endTurn",
)
async def run():
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
# Initialize the connection
await session.initialize()
# List available prompts
prompts = await session.list_prompts()
print(f"Available prompts: {[p.name for p in prompts.prompts]}")
# Get a prompt (greet_user prompt from fastmcp_quickstart)
if prompts.prompts:
prompt = await session.get_prompt("greet_user", arguments={"name": "Alice", "style": "friendly"})
print(f"Prompt result: {prompt.messages[0].content}")
# List available resources
resources = await session.list_resources()
print(f"Available resources: {[r.uri for r in resources.resources]}")
# List available tools
tools = await session.list_tools()
print(f"Available tools: {[t.name for t in tools.tools]}")
# Read a resource (greeting resource from fastmcp_quickstart)
resource_content = await session.read_resource(AnyUrl("greeting://World"))
content_block = resource_content.contents[0]
if isinstance(content_block, types.TextContent):
print(f"Resource content: {content_block.text}")
# Call a tool (add tool from fastmcp_quickstart)
result = await session.call_tool("add", arguments={"a": 5, "b": 3})
result_unstructured = result.content[0]
if isinstance(result_unstructured, types.TextContent):
print(f"Tool result: {result_unstructured.text}")
result_structured = result.structuredContent
print(f"Structured tool result: {result_structured}")
def main():
"""Entry point for the client script."""
asyncio.run(run())
if __name__ == "__main__":
main()
```
_Full example: [examples/snippets/clients/stdio_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/stdio_client.py)_
<!-- /snippet-source -->
Clients can also connect using [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http):
<!-- snippet-source examples/snippets/clients/streamable_basic.py -->
```python
"""
Run from the repository root:
uv run examples/snippets/clients/streamable_basic.py
"""
import asyncio
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
async def main():
# Connect to a streamable HTTP server
async with streamablehttp_client("http://localhost:8000/mcp") as (
read_stream,
write_stream,
_,
):
# Create a session using the client streams
async with ClientSession(read_stream, write_stream) as session:
# Initialize the connection
await session.initialize()
# List available tools
tools = await session.list_tools()
print(f"Available tools: {[tool.name for tool in tools.tools]}")
if __name__ == "__main__":
asyncio.run(main())
```
_Full example: [examples/snippets/clients/streamable_basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/streamable_basic.py)_
<!-- /snippet-source -->
### Client Display Utilities
When building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts:
<!-- snippet-source examples/snippets/clients/display_utilities.py -->
```python
"""
cd to the `examples/snippets` directory and run:
uv run display-utilities-client
"""
import asyncio
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.shared.metadata_utils import get_display_name
# Create server parameters for stdio connection
server_params = StdioServerParameters(
command="uv", # Using uv to run the server
args=["run", "server", "fastmcp_quickstart", "stdio"],
env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)
async def display_tools(session: ClientSession):
"""Display available tools with human-readable names"""
tools_response = await session.list_tools()
for tool in tools_response.tools:
# get_display_name() returns the title if available, otherwise the name
display_name = get_display_name(tool)
print(f"Tool: {display_name}")
if tool.description:
print(f" {tool.description}")
async def display_resources(session: ClientSession):
"""Display available resources with human-readable names"""
resources_response = await session.list_resources()
for resource in resources_response.resources:
display_name = get_display_name(resource)
print(f"Resource: {display_name} ({resource.uri})")
templates_response = await session.list_resource_templates()
for template in templates_response.resourceTemplates:
display_name = get_display_name(template)
print(f"Resource Template: {display_name}")
async def run():
"""Run the display utilities example."""
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
# Initialize the connection
await session.initialize()
print("=== Available Tools ===")
await display_tools(session)
print("\n=== Available Resources ===")
await display_resources(session)
def main():
"""Entry point for the display utilities client."""
asyncio.run(run())
if __name__ == "__main__":
main()
```
_Full example: [examples/snippets/clients/display_utilities.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/display_utilities.py)_
<!-- /snippet-source -->
The `get_display_name()` function implements the proper precedence rules for displaying names:
- For tools: `title` > `annotations.title` > `name`
- For other objects: `title` > `name`
This ensures your client UI shows the most user-friendly names that servers provide.
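The precedence logic is roughly equivalent to the following sketch (illustrative only; use the SDK's `get_display_name` rather than reimplementing it, since it also handles protocol-specific edge cases):

```python
from types import SimpleNamespace


def display_name(obj) -> str:
    """Illustrative precedence: title > annotations.title > name."""
    if getattr(obj, "title", None):
        return obj.title
    annotations = getattr(obj, "annotations", None)
    if annotations is not None and getattr(annotations, "title", None):
        return annotations.title
    return obj.name


# A tool with no top-level title falls back to annotations.title
tool = SimpleNamespace(name="get_weather", title=None,
                       annotations=SimpleNamespace(title="Get Weather"))
print(display_name(tool))  # Get Weather

# An object with a title uses it directly; otherwise name is the fallback
resource = SimpleNamespace(name="config", title="App Configuration")
print(display_name(resource))  # App Configuration
```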
### OAuth Authentication for Clients
The SDK includes [authorization support](https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization) for connecting to protected MCP servers:
<!-- snippet-source examples/snippets/clients/oauth_client.py -->
```python
"""
Before running, make sure an MCP resource server (RS) is running and
update the server URL below if needed. To spin up an RS server locally, see
examples/servers/simple-auth/README.md
cd to the `examples/snippets` directory and run:
uv run oauth-client
"""
import asyncio
from urllib.parse import parse_qs, urlparse
from pydantic import AnyUrl
from mcp import ClientSession
from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.client.streamable_http import streamablehttp_client
from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken
class InMemoryTokenStorage(TokenStorage):
"""Demo In-memory token storage implementation."""
def __init__(self):
self.tokens: OAuthToken | None = None
self.client_info: OAuthClientInformationFull | None = None
async def get_tokens(self) -> OAuthToken | None:
"""Get stored tokens."""
return self.tokens
async def set_tokens(self, tokens: OAuthToken) -> None:
"""Store tokens."""
self.tokens = tokens
async def get_client_info(self) -> OAuthClientInformationFull | None:
"""Get stored client information."""
return self.client_info
async def set_client_info(self, client_info: OAuthClientInformationFull) -> None:
"""Store client information."""
self.client_info = client_info
async def handle_redirect(auth_url: str) -> None:
print(f"Visit: {auth_url}")
async def handle_callback() -> tuple[str, str | None]:
callback_url = input("Paste callback URL: ")
params = parse_qs(urlparse(callback_url).query)
return params["code"][0], params.get("state", [None])[0]
async def main():
"""Run the OAuth client example."""
oauth_auth = OAuthClientProvider(
server_url="http://localhost:8001",
client_metadata=OAuthClientMetadata(
client_name="Example MCP Client",
redirect_uris=[AnyUrl("http://localhost:3000/callback")],
grant_types=["authorization_code", "refresh_token"],
response_types=["code"],
scope="user",
),
storage=InMemoryTokenStorage(),
redirect_handler=handle_redirect,
callback_handler=handle_callback,
)
async with streamablehttp_client("http://localhost:8001/mcp", auth=oauth_auth) as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await session.list_tools()
print(f"Available tools: {[tool.name for tool in tools.tools]}")
resources = await session.list_resources()
print(f"Available resources: {[r.uri for r in resources.resources]}")
def run():
asyncio.run(main())
if __name__ == "__main__":
run()
```
_Full example: [examples/snippets/clients/oauth_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/oauth_client.py)_
<!-- /snippet-source -->
For a complete working example, see [`examples/clients/simple-auth-client/`](examples/clients/simple-auth-client/).
### Parsing Tool Results
When calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. Understanding how to parse this result is essential for properly handling tool outputs.
```python
"""examples/snippets/clients/parsing_tool_results.py"""
import asyncio
from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
async def parse_tool_results():
"""Demonstrates how to parse different types of content in CallToolResult."""
server_params = StdioServerParameters(
command="python", args=["path/to/mcp_server.py"]
)
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
# Example 1: Parsing text content
result = await session.call_tool("get_data", {"format": "text"})
for content in result.content:
if isinstance(content, types.TextContent):
print(f"Text: {content.text}")
# Example 2: Parsing structured content from JSON tools
result = await session.call_tool("get_user", {"id": "123"})
if hasattr(result, "structuredContent") and result.structuredContent:
# Access structured data directly
user_data = result.structuredContent
print(f"User: {user_data.get('name')}, Age: {user_data.get('age')}")
# Example 3: Parsing embedded resources
result = await session.call_tool("read_config", {})
for content in result.content:
if isinstance(content, types.EmbeddedResource):
resource = content.resource
if isinstance(resource, types.TextResourceContents):
print(f"Config from {resource.uri}: {resource.text}")
elif isinstance(resource, types.BlobResourceContents):
print(f"Binary data from {resource.uri}")
# Example 4: Parsing image content
result = await session.call_tool("generate_chart", {"data": [1, 2, 3]})
for content in result.content:
if isinstance(content, types.ImageContent):
print(f"Image ({content.mimeType}): {len(content.data)} bytes")
# Example 5: Handling errors
result = await session.call_tool("failing_tool", {})
if result.isError:
print("Tool execution failed!")
for content in result.content:
if isinstance(content, types.TextContent):
print(f"Error: {content.text}")
async def main():
await parse_tool_results()
if __name__ == "__main__":
asyncio.run(main())
```
### MCP Primitives
The MCP protocol defines three core primitives that servers can implement:
| Primitive | Control | Description | Example Use |
|-----------|-----------------------|-----------------------------------------------------|------------------------------|
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled| Contextual data managed by the client application | File contents, API responses |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |
### Server Capabilities
MCP servers declare capabilities during initialization:
| Capability | Feature Flag | Description |
|--------------|------------------------------|------------------------------------|
| `prompts` | `listChanged` | Prompt template management |
| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates |
| `tools` | `listChanged` | Tool discovery and execution |
| `logging` | - | Server logging configuration |
| `completions`| - | Argument completion suggestions |
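For example, a server supporting all of the capabilities above might declare them along these lines in its initialize result (an illustrative sketch of the shape; consult the MCP specification for the authoritative wire format):

```python
import json

# Illustrative capabilities object as a server might declare it during
# initialization. Field names follow the table above.
capabilities = {
    "prompts": {"listChanged": True},
    "resources": {"subscribe": True, "listChanged": True},
    "tools": {"listChanged": True},
    "logging": {},
    "completions": {},
}
print(json.dumps(capabilities, indent=2))
```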
## Documentation
- [Model Context Protocol documentation](https://modelcontextprotocol.io)
- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)
- [Officially supported servers](https://github.com/modelcontextprotocol/servers)
## Contributing
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the [contributing guide](CONTRIBUTING.md) to get started.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
Raw data
{
"_id": null,
"home_page": null,
"name": "pptx-extractor-mcp",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "MCP, extractor, marp, pptx",
"author": null,
"author_email": "Lamb <1552780414@qq.com>",
"download_url": "https://files.pythonhosted.org/packages/5a/33/78582017bcaf1c80ebb2ee654a53c98a907d7de84c0a505d8a89981eb83c/pptx_extractor_mcp-0.1.0.tar.gz",
"platform": null,
"description": "# MCP Python SDK\n\n<div align=\"center\">\n\n<strong>Python implementation of the Model Context Protocol (MCP)</strong>\n\n[![PyPI][pypi-badge]][pypi-url]\n[![MIT licensed][mit-badge]][mit-url]\n[![Python Version][python-badge]][python-url]\n[![Documentation][docs-badge]][docs-url]\n[![Specification][spec-badge]][spec-url]\n[![GitHub Discussions][discussions-badge]][discussions-url]\n\n</div>\n\n<!-- omit in toc -->\n## Table of Contents\n\n- [MCP Python SDK](#mcp-python-sdk)\n - [Overview](#overview)\n - [Installation](#installation)\n - [Adding MCP to your python project](#adding-mcp-to-your-python-project)\n - [Running the standalone MCP development tools](#running-the-standalone-mcp-development-tools)\n - [Quickstart](#quickstart)\n - [What is MCP?](#what-is-mcp)\n - [Core Concepts](#core-concepts)\n - [Server](#server)\n - [Resources](#resources)\n - [Tools](#tools)\n - [Structured Output](#structured-output)\n - [Prompts](#prompts)\n - [Images](#images)\n - [Context](#context)\n - [Completions](#completions)\n - [Elicitation](#elicitation)\n - [Sampling](#sampling)\n - [Logging and Notifications](#logging-and-notifications)\n - [Authentication](#authentication)\n - [Running Your Server](#running-your-server)\n - [Development Mode](#development-mode)\n - [Claude Desktop Integration](#claude-desktop-integration)\n - [Direct Execution](#direct-execution)\n - [Mounting to an Existing ASGI Server](#mounting-to-an-existing-asgi-server)\n - [Advanced Usage](#advanced-usage)\n - [Low-Level Server](#low-level-server)\n - [Writing MCP Clients](#writing-mcp-clients)\n - [Parsing Tool Results](#parsing-tool-results)\n - [MCP Primitives](#mcp-primitives)\n - [Server Capabilities](#server-capabilities)\n - [Documentation](#documentation)\n - [Contributing](#contributing)\n - [License](#license)\n\n[pypi-badge]: https://img.shields.io/pypi/v/mcp.svg\n[pypi-url]: https://pypi.org/project/mcp/\n[mit-badge]: https://img.shields.io/pypi/l/mcp.svg\n[mit-url]: 
https://github.com/modelcontextprotocol/python-sdk/blob/main/LICENSE\n[python-badge]: https://img.shields.io/pypi/pyversions/mcp.svg\n[python-url]: https://www.python.org/downloads/\n[docs-badge]: https://img.shields.io/badge/docs-modelcontextprotocol.io-blue.svg\n[docs-url]: https://modelcontextprotocol.io\n[spec-badge]: https://img.shields.io/badge/spec-spec.modelcontextprotocol.io-blue.svg\n[spec-url]: https://spec.modelcontextprotocol.io\n[discussions-badge]: https://img.shields.io/github/discussions/modelcontextprotocol/python-sdk\n[discussions-url]: https://github.com/modelcontextprotocol/python-sdk/discussions\n\n## Overview\n\nThe Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:\n\n- Build MCP clients that can connect to any MCP server\n- Create MCP servers that expose resources, prompts and tools\n- Use standard transports like stdio, SSE, and Streamable HTTP\n- Handle all MCP protocol messages and lifecycle events\n\n## Installation\n\n### Adding MCP to your python project\n\nWe recommend using [uv](https://docs.astral.sh/uv/) to manage your Python projects.\n\nIf you haven't created a uv-managed project yet, create one:\n\n ```bash\n uv init mcp-server-demo\n cd mcp-server-demo\n ```\n\n Then add MCP to your project dependencies:\n\n ```bash\n uv add \"mcp[cli]\"\n ```\n\nAlternatively, for projects using pip for dependencies:\n\n```bash\npip install \"mcp[cli]\"\n```\n\n### Running the standalone MCP development tools\n\nTo run the mcp command with uv:\n\n```bash\nuv run mcp\n```\n\n## Quickstart\n\nLet's create a simple MCP server that exposes a calculator tool and some data:\n\n<!-- snippet-source examples/snippets/servers/fastmcp_quickstart.py -->\n```python\n\"\"\"\nFastMCP quickstart example.\n\ncd to the `examples/snippets/clients` directory and run:\n 
uv run server fastmcp_quickstart stdio\n\"\"\"\n\nfrom mcp.server.fastmcp import FastMCP\n\n# Create an MCP server\nmcp = FastMCP(\"Demo\")\n\n\n# Add an addition tool\n@mcp.tool()\ndef add(a: int, b: int) -> int:\n \"\"\"Add two numbers\"\"\"\n return a + b\n\n\n# Add a dynamic greeting resource\n@mcp.resource(\"greeting://{name}\")\ndef get_greeting(name: str) -> str:\n \"\"\"Get a personalized greeting\"\"\"\n return f\"Hello, {name}!\"\n\n\n# Add a prompt\n@mcp.prompt()\ndef greet_user(name: str, style: str = \"friendly\") -> str:\n \"\"\"Generate a greeting prompt\"\"\"\n styles = {\n \"friendly\": \"Please write a warm, friendly greeting\",\n \"formal\": \"Please write a formal, professional greeting\",\n \"casual\": \"Please write a casual, relaxed greeting\",\n }\n\n return f\"{styles.get(style, styles['friendly'])} for someone named {name}.\"\n```\n\n_Full example: [examples/snippets/servers/fastmcp_quickstart.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/fastmcp_quickstart.py)_\n<!-- /snippet-source -->\n\nYou can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:\n\n```bash\nuv run mcp install server.py\n```\n\nAlternatively, you can test it with the MCP Inspector:\n\n```bash\nuv run mcp dev server.py\n```\n\n## What is MCP?\n\nThe [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. 
MCP servers can:\n\n- Expose data through **Resources** (think of these sort of like GET endpoints; they are used to load information into the LLM's context)\n- Provide functionality through **Tools** (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)\n- Define interaction patterns through **Prompts** (reusable templates for LLM interactions)\n- And more!\n\n## Core Concepts\n\n### Server\n\nThe FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:\n\n<!-- snippet-source examples/snippets/servers/lifespan_example.py -->\n```python\n\"\"\"Example showing lifespan support for startup/shutdown with strong typing.\"\"\"\n\nfrom collections.abc import AsyncIterator\nfrom contextlib import asynccontextmanager\nfrom dataclasses import dataclass\n\nfrom mcp.server.fastmcp import Context, FastMCP\n\n\n# Mock database class for example\nclass Database:\n \"\"\"Mock database class for example.\"\"\"\n\n @classmethod\n async def connect(cls) -> \"Database\":\n \"\"\"Connect to database.\"\"\"\n return cls()\n\n async def disconnect(self) -> None:\n \"\"\"Disconnect from database.\"\"\"\n pass\n\n def query(self) -> str:\n \"\"\"Execute a query.\"\"\"\n return \"Query result\"\n\n\n@dataclass\nclass AppContext:\n \"\"\"Application context with typed dependencies.\"\"\"\n\n db: Database\n\n\n@asynccontextmanager\nasync def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:\n \"\"\"Manage application lifecycle with type-safe context.\"\"\"\n # Initialize on startup\n db = await Database.connect()\n try:\n yield AppContext(db=db)\n finally:\n # Cleanup on shutdown\n await db.disconnect()\n\n\n# Pass lifespan to server\nmcp = FastMCP(\"My App\", lifespan=app_lifespan)\n\n\n# Access type-safe lifespan context in tools\n@mcp.tool()\ndef query_db(ctx: Context) -> str:\n \"\"\"Tool that uses initialized resources.\"\"\"\n db = 
ctx.request_context.lifespan_context.db\n return db.query()\n```\n\n_Full example: [examples/snippets/servers/lifespan_example.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lifespan_example.py)_\n<!-- /snippet-source -->\n\n### Resources\n\nResources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:\n\n<!-- snippet-source examples/snippets/servers/basic_resource.py -->\n```python\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(name=\"Resource Example\")\n\n\n@mcp.resource(\"file://documents/{name}\")\ndef read_document(name: str) -> str:\n \"\"\"Read a document by name.\"\"\"\n # This would normally read from disk\n return f\"Content of {name}\"\n\n\n@mcp.resource(\"config://settings\")\ndef get_settings() -> str:\n \"\"\"Get application settings.\"\"\"\n return \"\"\"{\n \"theme\": \"dark\",\n \"language\": \"en\",\n \"debug\": false\n}\"\"\"\n```\n\n_Full example: [examples/snippets/servers/basic_resource.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_resource.py)_\n<!-- /snippet-source -->\n\n### Tools\n\nTools let LLMs take actions through your server. 
Unlike resources, tools are expected to perform computation and have side effects:\n\n<!-- snippet-source examples/snippets/servers/basic_tool.py -->\n```python\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(name=\"Tool Example\")\n\n\n@mcp.tool()\ndef sum(a: int, b: int) -> int:\n \"\"\"Add two numbers together.\"\"\"\n return a + b\n\n\n@mcp.tool()\ndef get_weather(city: str, unit: str = \"celsius\") -> str:\n \"\"\"Get weather for a city.\"\"\"\n # This would normally call a weather API\n return f\"Weather in {city}: 22degrees{unit[0].upper()}\"\n```\n\n_Full example: [examples/snippets/servers/basic_tool.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_tool.py)_\n<!-- /snippet-source -->\n\n#### Structured Output\n\nTools will return structured results by default, if their return type\nannotation is compatible. Otherwise, they will return unstructured results.\n\nStructured output supports these return types:\n\n- Pydantic models (BaseModel subclasses)\n- TypedDicts\n- Dataclasses and other classes with type hints\n- `dict[str, T]` (where T is any JSON-serializable type)\n- Primitive types (str, int, float, bool, bytes, None) - wrapped in `{\"result\": value}`\n- Generic types (list, tuple, Union, Optional, etc.) - wrapped in `{\"result\": value}`\n\nClasses without type hints cannot be serialized for structured output. Only\nclasses with properly annotated attributes will be converted to Pydantic models\nfor schema generation and validation.\n\nStructured results are automatically validated against the output schema\ngenerated from the annotation. This ensures the tool returns well-typed,\nvalidated data that clients can easily process.\n\n**Note:** For backward compatibility, unstructured results are also\nreturned. 
These match the shape expected by previous versions\nof the MCP specification and are quirks-compatible with previous versions of\nFastMCP in the current version of the SDK.\n\n**Note:** In cases where a tool function's return type annotation\ncauses the tool to be classified as structured _and this is undesirable_,\nthe classification can be suppressed by passing `structured_output=False`\nto the `@tool` decorator.\n\n<!-- snippet-source examples/snippets/servers/structured_output.py -->\n```python\n\"\"\"Example showing structured output with tools.\"\"\"\n\nfrom typing import TypedDict\n\nfrom pydantic import BaseModel, Field\n\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(\"Structured Output Example\")\n\n\n# Using Pydantic models for rich structured data\nclass WeatherData(BaseModel):\n \"\"\"Weather information structure.\"\"\"\n\n temperature: float = Field(description=\"Temperature in Celsius\")\n humidity: float = Field(description=\"Humidity percentage\")\n condition: str\n wind_speed: float\n\n\n@mcp.tool()\ndef get_weather(city: str) -> WeatherData:\n \"\"\"Get weather for a city - returns structured data.\"\"\"\n # Simulated weather data\n return WeatherData(\n temperature=22.5,\n humidity=45.0,\n condition=\"sunny\",\n wind_speed=5.2,\n )\n\n\n# Using TypedDict for simpler structures\nclass LocationInfo(TypedDict):\n latitude: float\n longitude: float\n name: str\n\n\n@mcp.tool()\ndef get_location(address: str) -> LocationInfo:\n \"\"\"Get location coordinates\"\"\"\n return LocationInfo(latitude=51.5074, longitude=-0.1278, name=\"London, UK\")\n\n\n# Using dict[str, float] for flexible schemas\n@mcp.tool()\ndef get_statistics(data_type: str) -> dict[str, float]:\n \"\"\"Get various statistics\"\"\"\n return {\"mean\": 42.5, \"median\": 40.0, \"std_dev\": 5.2}\n\n\n# Ordinary classes with type hints work for structured output\nclass UserProfile:\n name: str\n age: int\n email: str | None = None\n\n def 
__init__(self, name: str, age: int, email: str | None = None):\n self.name = name\n self.age = age\n self.email = email\n\n\n@mcp.tool()\ndef get_user(user_id: str) -> UserProfile:\n \"\"\"Get user profile - returns structured data\"\"\"\n return UserProfile(name=\"Alice\", age=30, email=\"alice@example.com\")\n\n\n# Classes WITHOUT type hints cannot be used for structured output\nclass UntypedConfig:\n def __init__(self, setting1, setting2):\n self.setting1 = setting1\n self.setting2 = setting2\n\n\n@mcp.tool()\ndef get_config() -> UntypedConfig:\n \"\"\"This returns unstructured output - no schema generated\"\"\"\n return UntypedConfig(\"value1\", \"value2\")\n\n\n# Lists and other types are wrapped automatically\n@mcp.tool()\ndef list_cities() -> list[str]:\n \"\"\"Get a list of cities\"\"\"\n return [\"London\", \"Paris\", \"Tokyo\"]\n # Returns: {\"result\": [\"London\", \"Paris\", \"Tokyo\"]}\n\n\n@mcp.tool()\ndef get_temperature(city: str) -> float:\n \"\"\"Get temperature as a simple float\"\"\"\n return 22.5\n # Returns: {\"result\": 22.5}\n```\n\n_Full example: [examples/snippets/servers/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/structured_output.py)_\n<!-- /snippet-source -->\n\n### Prompts\n\nPrompts are reusable templates that help LLMs interact with your server effectively:\n\n<!-- snippet-source examples/snippets/servers/basic_prompt.py -->\n```python\nfrom mcp.server.fastmcp import FastMCP\nfrom mcp.server.fastmcp.prompts import base\n\nmcp = FastMCP(name=\"Prompt Example\")\n\n\n@mcp.prompt(title=\"Code Review\")\ndef review_code(code: str) -> str:\n return f\"Please review this code:\\n\\n{code}\"\n\n\n@mcp.prompt(title=\"Debug Assistant\")\ndef debug_error(error: str) -> list[base.Message]:\n return [\n base.UserMessage(\"I'm seeing this error:\"),\n base.UserMessage(error),\n base.AssistantMessage(\"I'll help debug that. 
What have you tried so far?\"),\n ]\n```\n\n_Full example: [examples/snippets/servers/basic_prompt.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/basic_prompt.py)_\n<!-- /snippet-source -->\n\n### Images\n\nFastMCP provides an `Image` class that automatically handles image data:\n\n<!-- snippet-source examples/snippets/servers/images.py -->\n```python\n\"\"\"Example showing image handling with FastMCP.\"\"\"\n\nimport io\n\nfrom PIL import Image as PILImage\n\nfrom mcp.server.fastmcp import FastMCP, Image\n\nmcp = FastMCP(\"Image Example\")\n\n\n@mcp.tool()\ndef create_thumbnail(image_path: str) -> Image:\n \"\"\"Create a thumbnail from an image\"\"\"\n img = PILImage.open(image_path)\n img.thumbnail((100, 100))\n # Encode as PNG; img.tobytes() would return raw pixel data, not PNG\n buffer = io.BytesIO()\n img.save(buffer, format=\"PNG\")\n return Image(data=buffer.getvalue(), format=\"png\")\n```\n\n_Full example: [examples/snippets/servers/images.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/images.py)_\n<!-- /snippet-source -->\n\n### Context\n\nThe Context object gives your tools and resources access to MCP capabilities:\n\n<!-- snippet-source examples/snippets/servers/tool_progress.py -->\n```python\nfrom mcp.server.fastmcp import Context, FastMCP\n\nmcp = FastMCP(name=\"Progress Example\")\n\n\n@mcp.tool()\nasync def long_running_task(task_name: str, ctx: Context, steps: int = 5) -> str:\n \"\"\"Execute a task with progress updates.\"\"\"\n await ctx.info(f\"Starting: {task_name}\")\n\n for i in range(steps):\n progress = (i + 1) / steps\n await ctx.report_progress(\n progress=progress,\n total=1.0,\n message=f\"Step {i + 1}/{steps}\",\n )\n await ctx.debug(f\"Completed step {i + 1}\")\n\n return f\"Task '{task_name}' completed\"\n```\n\n_Full example: [examples/snippets/servers/tool_progress.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/tool_progress.py)_\n<!-- /snippet-source -->\n\n### Completions\n\nMCP supports providing completion suggestions for prompt arguments 
and resource template parameters. With the context parameter, servers can provide completions based on previously resolved values:\n\nClient usage:\n\n<!-- snippet-source examples/snippets/clients/completion_client.py -->\n```python\n\"\"\"\ncd to the `examples/snippets` directory and run:\n uv run completion-client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom mcp.types import PromptReference, ResourceTemplateReference\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n command=\"uv\", # Using uv to run the server\n args=[\"run\", \"server\", \"completion\", \"stdio\"], # Server with completion support\n env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", \"\")},\n)\n\n\nasync def run():\n \"\"\"Run the completion client example.\"\"\"\n async with stdio_client(server_params) as (read, write):\n async with ClientSession(read, write) as session:\n # Initialize the connection\n await session.initialize()\n\n # List available resource templates\n templates = await session.list_resource_templates()\n print(\"Available resource templates:\")\n for template in templates.resourceTemplates:\n print(f\" - {template.uriTemplate}\")\n\n # List available prompts\n prompts = await session.list_prompts()\n print(\"\\nAvailable prompts:\")\n for prompt in prompts.prompts:\n print(f\" - {prompt.name}\")\n\n # Complete resource template arguments\n if templates.resourceTemplates:\n template = templates.resourceTemplates[0]\n print(f\"\\nCompleting arguments for resource template: {template.uriTemplate}\")\n\n # Complete without context\n result = await session.complete(\n ref=ResourceTemplateReference(type=\"ref/resource\", uri=template.uriTemplate),\n argument={\"name\": \"owner\", \"value\": \"model\"},\n )\n print(f\"Completions for 'owner' starting with 'model': {result.completion.values}\")\n\n # Complete with context - repo suggestions based on 
owner\n result = await session.complete(\n ref=ResourceTemplateReference(type=\"ref/resource\", uri=template.uriTemplate),\n argument={\"name\": \"repo\", \"value\": \"\"},\n context_arguments={\"owner\": \"modelcontextprotocol\"},\n )\n print(f\"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}\")\n\n # Complete prompt arguments\n if prompts.prompts:\n prompt_name = prompts.prompts[0].name\n print(f\"\\nCompleting arguments for prompt: {prompt_name}\")\n\n result = await session.complete(\n ref=PromptReference(type=\"ref/prompt\", name=prompt_name),\n argument={\"name\": \"style\", \"value\": \"\"},\n )\n print(f\"Completions for 'style' argument: {result.completion.values}\")\n\n\ndef main():\n \"\"\"Entry point for the completion client.\"\"\"\n asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n_Full example: [examples/snippets/clients/completion_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/completion_client.py)_\n<!-- /snippet-source -->\n### Elicitation\n\nRequest additional information from users. 
This example shows an Elicitation during a Tool Call:\n\n<!-- snippet-source examples/snippets/servers/elicitation.py -->\n```python\nfrom pydantic import BaseModel, Field\n\nfrom mcp.server.fastmcp import Context, FastMCP\n\nmcp = FastMCP(name=\"Elicitation Example\")\n\n\nclass BookingPreferences(BaseModel):\n \"\"\"Schema for collecting user preferences.\"\"\"\n\n checkAlternative: bool = Field(description=\"Would you like to check another date?\")\n alternativeDate: str = Field(\n default=\"2024-12-26\",\n description=\"Alternative date (YYYY-MM-DD)\",\n )\n\n\n@mcp.tool()\nasync def book_table(\n date: str,\n time: str,\n party_size: int,\n ctx: Context,\n) -> str:\n \"\"\"Book a table with date availability check.\"\"\"\n # Check if date is available\n if date == \"2024-12-25\":\n # Date unavailable - ask user for alternative\n result = await ctx.elicit(\n message=(f\"No tables available for {party_size} on {date}. Would you like to try another date?\"),\n schema=BookingPreferences,\n )\n\n if result.action == \"accept\" and result.data:\n if result.data.checkAlternative:\n return f\"[SUCCESS] Booked for {result.data.alternativeDate}\"\n return \"[CANCELLED] No booking made\"\n return \"[CANCELLED] Booking cancelled\"\n\n # Date available\n return f\"[SUCCESS] Booked for {date} at {time}\"\n```\n\n_Full example: [examples/snippets/servers/elicitation.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/elicitation.py)_\n<!-- /snippet-source -->\n\nThe `elicit()` method returns an `ElicitationResult` with:\n\n- `action`: \"accept\", \"decline\", or \"cancel\"\n- `data`: The validated response (only when accepted)\n- `validation_error`: Any validation error message\n\n### Sampling\n\nTools can interact with LLMs through sampling (generating text):\n\n<!-- snippet-source examples/snippets/servers/sampling.py -->\n```python\nfrom mcp.server.fastmcp import Context, FastMCP\nfrom mcp.types import SamplingMessage, 
TextContent\n\nmcp = FastMCP(name=\"Sampling Example\")\n\n\n@mcp.tool()\nasync def generate_poem(topic: str, ctx: Context) -> str:\n \"\"\"Generate a poem using LLM sampling.\"\"\"\n prompt = f\"Write a short poem about {topic}\"\n\n result = await ctx.session.create_message(\n messages=[\n SamplingMessage(\n role=\"user\",\n content=TextContent(type=\"text\", text=prompt),\n )\n ],\n max_tokens=100,\n )\n\n if result.content.type == \"text\":\n return result.content.text\n return str(result.content)\n```\n\n_Full example: [examples/snippets/servers/sampling.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/sampling.py)_\n<!-- /snippet-source -->\n\n### Logging and Notifications\n\nTools can send logs and notifications through the context:\n\n<!-- snippet-source examples/snippets/servers/notifications.py -->\n```python\nfrom mcp.server.fastmcp import Context, FastMCP\n\nmcp = FastMCP(name=\"Notifications Example\")\n\n\n@mcp.tool()\nasync def process_data(data: str, ctx: Context) -> str:\n \"\"\"Process data with logging.\"\"\"\n # Different log levels\n await ctx.debug(f\"Debug: Processing '{data}'\")\n await ctx.info(\"Info: Starting processing\")\n await ctx.warning(\"Warning: This is experimental\")\n await ctx.error(\"Error: (This is just a demo)\")\n\n # Notify about resource changes\n await ctx.session.send_resource_list_changed()\n\n return f\"Processed: {data}\"\n```\n\n_Full example: [examples/snippets/servers/notifications.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/notifications.py)_\n<!-- /snippet-source -->\n\n### Authentication\n\nAuthentication can be used by servers that want to expose tools accessing protected resources.\n\n`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). 
This follows the [MCP authorization specification](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization) and implements RFC 9728 (Protected Resource Metadata) for AS discovery.\n\nMCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol:\n\n<!-- snippet-source examples/snippets/servers/oauth_server.py -->\n```python\n\"\"\"\nRun from the repository root:\n uv run examples/snippets/servers/oauth_server.py\n\"\"\"\n\nfrom pydantic import AnyHttpUrl\n\nfrom mcp.server.auth.provider import AccessToken, TokenVerifier\nfrom mcp.server.auth.settings import AuthSettings\nfrom mcp.server.fastmcp import FastMCP\n\n\nclass SimpleTokenVerifier(TokenVerifier):\n \"\"\"Simple token verifier for demonstration.\"\"\"\n\n async def verify_token(self, token: str) -> AccessToken | None:\n pass # This is where you would implement actual token validation\n\n\n# Create FastMCP instance as a Resource Server\nmcp = FastMCP(\n \"Weather Service\",\n # Token verifier for authentication\n token_verifier=SimpleTokenVerifier(),\n # Auth settings for RFC 9728 Protected Resource Metadata\n auth=AuthSettings(\n issuer_url=AnyHttpUrl(\"https://auth.example.com\"), # Authorization Server URL\n resource_server_url=AnyHttpUrl(\"http://localhost:3001\"), # This server's URL\n required_scopes=[\"user\"],\n ),\n)\n\n\n@mcp.tool()\nasync def get_weather(city: str = \"London\") -> dict[str, str]:\n \"\"\"Get weather data for a city\"\"\"\n return {\n \"city\": city,\n \"temperature\": \"22\",\n \"condition\": \"Partly cloudy\",\n \"humidity\": \"65%\",\n }\n\n\nif __name__ == \"__main__\":\n mcp.run(transport=\"streamable-http\")\n```\n\n_Full example: [examples/snippets/servers/oauth_server.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/oauth_server.py)_\n<!-- /snippet-source -->\n\nFor a complete example with separate Authorization Server and Resource Server implementations, see 
[`examples/servers/simple-auth/`](examples/servers/simple-auth/).\n\n**Architecture:**\n\n- **Authorization Server (AS)**: Handles OAuth flows, user authentication, and token issuance\n- **Resource Server (RS)**: Your MCP server that validates tokens and serves protected resources\n- **Client**: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server\n\nSee [TokenVerifier](src/mcp/server/auth/provider.py) for more details on implementing token validation.\n\n## Running Your Server\n\n### Development Mode\n\nThe fastest way to test and debug your server is with the MCP Inspector:\n\n```bash\nuv run mcp dev server.py\n\n# Add dependencies\nuv run mcp dev server.py --with pandas --with numpy\n\n# Mount local code\nuv run mcp dev server.py --with-editable .\n```\n\n### Claude Desktop Integration\n\nOnce your server is ready, install it in Claude Desktop:\n\n```bash\nuv run mcp install server.py\n\n# Custom name\nuv run mcp install server.py --name \"My Analytics Server\"\n\n# Environment variables\nuv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...\nuv run mcp install server.py -f .env\n```\n\n### Direct Execution\n\nFor advanced scenarios like custom deployments:\n\n<!-- snippet-source examples/snippets/servers/direct_execution.py -->\n```python\n\"\"\"Example showing direct execution of an MCP server.\n\nThis is the simplest way to run an MCP server directly.\ncd to the `examples/snippets` directory and run:\n uv run direct-execution-server\n or\n python servers/direct_execution.py\n\"\"\"\n\nfrom mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(\"My App\")\n\n\n@mcp.tool()\ndef hello(name: str = \"World\") -> str:\n \"\"\"Say hello to someone.\"\"\"\n return f\"Hello, {name}!\"\n\n\ndef main():\n \"\"\"Entry point for the direct execution server.\"\"\"\n mcp.run()\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n_Full example: 
[examples/snippets/servers/direct_execution.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/direct_execution.py)_\n<!-- /snippet-source -->\n\nRun it with:\n\n```bash\npython servers/direct_execution.py\n# or\nuv run mcp run servers/direct_execution.py\n```\n\nNote that `uv run mcp run` and `uv run mcp dev` only support servers built with FastMCP, not the low-level server variant.\n\n### Streamable HTTP Transport\n\n> **Note**: Streamable HTTP transport is superseding SSE transport for production deployments.\n\n<!-- snippet-source examples/snippets/servers/streamable_config.py -->\n```python\n\"\"\"\nRun from the repository root:\n uv run examples/snippets/servers/streamable_config.py\n\"\"\"\n\nfrom mcp.server.fastmcp import FastMCP\n\n# Stateful server (maintains session state)\nmcp = FastMCP(\"StatefulServer\")\n\n# Other configuration options:\n# Stateless server (no session persistence)\n# mcp = FastMCP(\"StatelessServer\", stateless_http=True)\n\n# Stateless server (no session persistence, no SSE stream for clients that support JSON responses)\n# mcp = FastMCP(\"StatelessServer\", stateless_http=True, json_response=True)\n\n\n# Add a simple tool to demonstrate the server\n@mcp.tool()\ndef greet(name: str = \"World\") -> str:\n \"\"\"Greet someone by name.\"\"\"\n return f\"Hello, {name}!\"\n\n\n# Run server with streamable_http transport\nif __name__ == \"__main__\":\n mcp.run(transport=\"streamable-http\")\n```\n\n_Full example: [examples/snippets/servers/streamable_config.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_config.py)_\n<!-- /snippet-source -->\n\nYou can mount multiple FastMCP servers in a Starlette application:\n\n<!-- snippet-source examples/snippets/servers/streamable_starlette_mount.py -->\n```python\n\"\"\"\nRun from the repository root:\n uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload\n\"\"\"\n\nimport contextlib\n\nfrom 
starlette.applications import Starlette\nfrom starlette.routing import Mount\n\nfrom mcp.server.fastmcp import FastMCP\n\n# Create the Echo server\necho_mcp = FastMCP(name=\"EchoServer\", stateless_http=True)\n\n\n@echo_mcp.tool()\ndef echo(message: str) -> str:\n \"\"\"A simple echo tool\"\"\"\n return f\"Echo: {message}\"\n\n\n# Create the Math server\nmath_mcp = FastMCP(name=\"MathServer\", stateless_http=True)\n\n\n@math_mcp.tool()\ndef add_two(n: int) -> int:\n \"\"\"Tool to add two to the input\"\"\"\n return n + 2\n\n\n# Create a combined lifespan to manage both session managers\n@contextlib.asynccontextmanager\nasync def lifespan(app: Starlette):\n async with contextlib.AsyncExitStack() as stack:\n await stack.enter_async_context(echo_mcp.session_manager.run())\n await stack.enter_async_context(math_mcp.session_manager.run())\n yield\n\n\n# Create the Starlette app and mount the MCP servers\napp = Starlette(\n routes=[\n Mount(\"/echo\", echo_mcp.streamable_http_app()),\n Mount(\"/math\", math_mcp.streamable_http_app()),\n ],\n lifespan=lifespan,\n)\n```\n\n_Full example: [examples/snippets/servers/streamable_starlette_mount.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/streamable_starlette_mount.py)_\n<!-- /snippet-source -->\n\nFor low-level server implementations with Streamable HTTP, see:\n\n- Stateful server: [`examples/servers/simple-streamablehttp/`](examples/servers/simple-streamablehttp/)\n- Stateless server: [`examples/servers/simple-streamablehttp-stateless/`](examples/servers/simple-streamablehttp-stateless/)\n\nThe streamable HTTP transport supports:\n\n- Stateful and stateless operation modes\n- Resumability with event stores\n- JSON or SSE response formats\n- Better scalability for multi-node deployments\n\n### Mounting to an Existing ASGI Server\n\nBy default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. 
You can customize these paths using the methods described below.\n\nFor more information on mounting applications in Starlette, see the [Starlette documentation](https://www.starlette.io/routing/#submounting-routes).\n\n#### SSE servers\n\n> **Note**: SSE transport is being superseded by [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http).\n\nYou can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.\n\n```python\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount, Host\nfrom mcp.server.fastmcp import FastMCP\n\n\nmcp = FastMCP(\"My App\")\n\n# Mount the SSE server to the existing ASGI server\napp = Starlette(\n routes=[\n Mount('/', app=mcp.sse_app()),\n ]\n)\n\n# or dynamically mount as host\napp.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))\n```\n\nWhen mounting multiple MCP servers under different paths, you can configure the mount path in several ways:\n\n```python\nfrom starlette.applications import Starlette\nfrom starlette.routing import Mount\nfrom mcp.server.fastmcp import FastMCP\n\n# Create multiple MCP servers\ngithub_mcp = FastMCP(\"GitHub API\")\nbrowser_mcp = FastMCP(\"Browser\")\ncurl_mcp = FastMCP(\"Curl\")\nsearch_mcp = FastMCP(\"Search\")\n\n# Method 1: Configure mount paths via settings (recommended for persistent configuration)\ngithub_mcp.settings.mount_path = \"/github\"\nbrowser_mcp.settings.mount_path = \"/browser\"\n\n# Method 2: Pass mount path directly to sse_app (preferred for ad-hoc mounting)\n# This approach doesn't modify the server's settings permanently\n\n# Create Starlette app with multiple mounted servers\napp = Starlette(\n routes=[\n # Using settings-based configuration\n Mount(\"/github\", app=github_mcp.sse_app()),\n Mount(\"/browser\", app=browser_mcp.sse_app()),\n # Using direct mount path parameter\n 
Mount(\"/curl\", app=curl_mcp.sse_app(\"/curl\")),\n Mount(\"/search\", app=search_mcp.sse_app(\"/search\")),\n ]\n)\n\n# Method 3: For direct execution, you can also pass the mount path to run()\nif __name__ == \"__main__\":\n search_mcp.run(transport=\"sse\", mount_path=\"/search\")\n```\n\n## Advanced Usage\n\n### Low-Level Server\n\nFor more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:\n\n<!-- snippet-source examples/snippets/servers/lowlevel/lifespan.py -->\n```python\n\"\"\"\nRun from the repository root:\n uv run examples/snippets/servers/lowlevel/lifespan.py\n\"\"\"\n\nfrom collections.abc import AsyncIterator\nfrom contextlib import asynccontextmanager\n\nimport mcp.server.stdio\nimport mcp.types as types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\n\n# Mock database class for example\nclass Database:\n \"\"\"Mock database class for example.\"\"\"\n\n @classmethod\n async def connect(cls) -> \"Database\":\n \"\"\"Connect to database.\"\"\"\n print(\"Database connected\")\n return cls()\n\n async def disconnect(self) -> None:\n \"\"\"Disconnect from database.\"\"\"\n print(\"Database disconnected\")\n\n async def query(self, query_str: str) -> list[dict[str, str]]:\n \"\"\"Execute a query.\"\"\"\n # Simulate database query\n return [{\"id\": \"1\", \"name\": \"Example\", \"query\": query_str}]\n\n\n@asynccontextmanager\nasync def server_lifespan(_server: Server) -> AsyncIterator[dict]:\n \"\"\"Manage server startup and shutdown lifecycle.\"\"\"\n # Initialize resources on startup\n db = await Database.connect()\n try:\n yield {\"db\": db}\n finally:\n # Clean up 
on shutdown\n await db.disconnect()\n\n\n# Pass lifespan to server\nserver = Server(\"example-server\", lifespan=server_lifespan)\n\n\n@server.list_tools()\nasync def handle_list_tools() -> list[types.Tool]:\n \"\"\"List available tools.\"\"\"\n return [\n types.Tool(\n name=\"query_db\",\n description=\"Query the database\",\n inputSchema={\n \"type\": \"object\",\n \"properties\": {\"query\": {\"type\": \"string\", \"description\": \"SQL query to execute\"}},\n \"required\": [\"query\"],\n },\n )\n ]\n\n\n@server.call_tool()\nasync def query_db(name: str, arguments: dict) -> list[types.TextContent]:\n \"\"\"Handle database query tool call.\"\"\"\n if name != \"query_db\":\n raise ValueError(f\"Unknown tool: {name}\")\n\n # Access lifespan context\n ctx = server.request_context\n db = ctx.lifespan_context[\"db\"]\n\n # Execute query\n results = await db.query(arguments[\"query\"])\n\n return [types.TextContent(type=\"text\", text=f\"Query results: {results}\")]\n\n\nasync def run():\n \"\"\"Run the server with lifespan management.\"\"\"\n async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n await server.run(\n read_stream,\n write_stream,\n InitializationOptions(\n server_name=\"example-server\",\n server_version=\"0.1.0\",\n capabilities=server.get_capabilities(\n notification_options=NotificationOptions(),\n experimental_capabilities={},\n ),\n ),\n )\n\n\nif __name__ == \"__main__\":\n import asyncio\n\n asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/lifespan.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/lifespan.py)_\n<!-- /snippet-source -->\n\nThe lifespan API provides:\n\n- A way to initialize resources when the server starts and clean them up when it stops\n- Access to initialized resources through the request context in handlers\n- Type-safe context passing between lifespan and request handlers\n\n<!-- snippet-source 
examples/snippets/servers/lowlevel/basic.py -->\n```python\n\"\"\"\nRun from the repository root:\nuv run examples/snippets/servers/lowlevel/basic.py\n\"\"\"\n\nimport asyncio\n\nimport mcp.server.stdio\nimport mcp.types as types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\n# Create a server instance\nserver = Server(\"example-server\")\n\n\n@server.list_prompts()\nasync def handle_list_prompts() -> list[types.Prompt]:\n \"\"\"List available prompts.\"\"\"\n return [\n types.Prompt(\n name=\"example-prompt\",\n description=\"An example prompt template\",\n arguments=[types.PromptArgument(name=\"arg1\", description=\"Example argument\", required=True)],\n )\n ]\n\n\n@server.get_prompt()\nasync def handle_get_prompt(name: str, arguments: dict[str, str] | None) -> types.GetPromptResult:\n \"\"\"Get a specific prompt by name.\"\"\"\n if name != \"example-prompt\":\n raise ValueError(f\"Unknown prompt: {name}\")\n\n arg1_value = (arguments or {}).get(\"arg1\", \"default\")\n\n return types.GetPromptResult(\n description=\"Example prompt\",\n messages=[\n types.PromptMessage(\n role=\"user\",\n content=types.TextContent(type=\"text\", text=f\"Example prompt text with argument: {arg1_value}\"),\n )\n ],\n )\n\n\nasync def run():\n \"\"\"Run the basic low-level server.\"\"\"\n async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n await server.run(\n read_stream,\n write_stream,\n InitializationOptions(\n server_name=\"example\",\n server_version=\"0.1.0\",\n capabilities=server.get_capabilities(\n notification_options=NotificationOptions(),\n experimental_capabilities={},\n ),\n ),\n )\n\n\nif __name__ == \"__main__\":\n asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/basic.py)_\n<!-- /snippet-source -->\n\nCaution: The `uv run mcp run` and `uv 
run mcp dev` tools don't support the low-level server.\n\n#### Structured Output Support\n\nThe low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data. Tools can define an `outputSchema` to validate their structured output:\n\n<!-- snippet-source examples/snippets/servers/lowlevel/structured_output.py -->\n```python\n\"\"\"\nRun from the repository root:\n uv run examples/snippets/servers/lowlevel/structured_output.py\n\"\"\"\n\nimport asyncio\nfrom typing import Any\n\nimport mcp.server.stdio\nimport mcp.types as types\nfrom mcp.server.lowlevel import NotificationOptions, Server\nfrom mcp.server.models import InitializationOptions\n\nserver = Server(\"example-server\")\n\n\n@server.list_tools()\nasync def list_tools() -> list[types.Tool]:\n \"\"\"List available tools with structured output schemas.\"\"\"\n return [\n types.Tool(\n name=\"get_weather\",\n description=\"Get current weather for a city\",\n inputSchema={\n \"type\": \"object\",\n \"properties\": {\"city\": {\"type\": \"string\", \"description\": \"City name\"}},\n \"required\": [\"city\"],\n },\n outputSchema={\n \"type\": \"object\",\n \"properties\": {\n \"temperature\": {\"type\": \"number\", \"description\": \"Temperature in Celsius\"},\n \"condition\": {\"type\": \"string\", \"description\": \"Weather condition\"},\n \"humidity\": {\"type\": \"number\", \"description\": \"Humidity percentage\"},\n \"city\": {\"type\": \"string\", \"description\": \"City name\"},\n },\n \"required\": [\"temperature\", \"condition\", \"humidity\", \"city\"],\n },\n )\n ]\n\n\n@server.call_tool()\nasync def call_tool(name: str, arguments: dict[str, Any]) -> dict[str, Any]:\n \"\"\"Handle tool calls with structured output.\"\"\"\n if name == \"get_weather\":\n city = arguments[\"city\"]\n\n # Simulated weather data - in production, call a weather API\n weather_data = {\n \"temperature\": 22.5,\n \"condition\": \"partly cloudy\",\n 
\"humidity\": 65,\n \"city\": city, # Include the requested city\n }\n\n # The low-level server will validate structured output against the tool's\n # output schema, and additionally serialize it into a TextContent block\n # for backwards compatibility with pre-2025-06-18 clients.\n return weather_data\n else:\n raise ValueError(f\"Unknown tool: {name}\")\n\n\nasync def run():\n \"\"\"Run the structured output server.\"\"\"\n async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):\n await server.run(\n read_stream,\n write_stream,\n InitializationOptions(\n server_name=\"structured-output-example\",\n server_version=\"0.1.0\",\n capabilities=server.get_capabilities(\n notification_options=NotificationOptions(),\n experimental_capabilities={},\n ),\n ),\n )\n\n\nif __name__ == \"__main__\":\n asyncio.run(run())\n```\n\n_Full example: [examples/snippets/servers/lowlevel/structured_output.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/servers/lowlevel/structured_output.py)_\n<!-- /snippet-source -->\n\nTools can return data in three ways:\n\n1. **Content only**: Return a list of content blocks (default behavior before spec revision 2025-06-18)\n2. **Structured data only**: Return a dictionary that will be serialized to JSON (introduced in spec revision 2025-06-18)\n3. **Both**: Return a tuple of (content, structured_data) - the preferred option for backwards compatibility\n\nWhen an `outputSchema` is defined, the server automatically validates the structured output against the schema. 
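As a rough illustration of what that validation involves, here is a hand-rolled sketch using only the standard library (the `check_output` helper and its type map are hypothetical, not the SDK's actual validator, which handles the full JSON Schema vocabulary):

```python
from typing import Any

# Minimal mapping from the JSON Schema primitives used above to Python types.
_TYPES: dict[str, Any] = {"number": (int, float), "string": str, "object": dict}


def check_output(schema: dict[str, Any], data: dict[str, Any]) -> None:
    """Reject structured output that is missing required keys or has
    wrongly typed values. (Illustrative sketch, not the SDK's validator.)"""
    for key in schema.get("required", []):
        if key not in data:
            raise ValueError(f"missing required key: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in data and not isinstance(data[key], _TYPES[spec["type"]]):
            raise ValueError(f"wrong type for key: {key}")


weather_schema = {
    "type": "object",
    "properties": {
        "temperature": {"type": "number"},
        "condition": {"type": "string"},
        "humidity": {"type": "number"},
        "city": {"type": "string"},
    },
    "required": ["temperature", "condition", "humidity", "city"],
}

# Passes: all required keys present with matching types
check_output(
    weather_schema,
    {"temperature": 22.5, "condition": "partly cloudy", "humidity": 65, "city": "London"},
)
```

A payload with a missing key or a string where a number is expected would raise `ValueError` here, just as the real validator would reject it before the result reaches the client.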
This ensures type safety and helps catch errors early.\n\n### Writing MCP Clients\n\nThe SDK provides a high-level client interface for connecting to MCP servers using various [transports](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports):\n\n<!-- snippet-source examples/snippets/clients/stdio_client.py -->\n```python\n\"\"\"\ncd to the `examples/snippets/clients` directory and run:\n uv run client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom pydantic import AnyUrl\n\nfrom mcp import ClientSession, StdioServerParameters, types\nfrom mcp.client.stdio import stdio_client\nfrom mcp.shared.context import RequestContext\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n command=\"uv\", # Using uv to run the server\n args=[\"run\", \"server\", \"fastmcp_quickstart\", \"stdio\"], # We're already in snippets dir\n env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", \"\")},\n)\n\n\n# Optional: create a sampling callback\nasync def handle_sampling_message(\n context: RequestContext, params: types.CreateMessageRequestParams\n) -> types.CreateMessageResult:\n print(f\"Sampling request: {params.messages}\")\n return types.CreateMessageResult(\n role=\"assistant\",\n content=types.TextContent(\n type=\"text\",\n text=\"Hello, world! 
from model\",\n ),\n model=\"gpt-3.5-turbo\",\n stopReason=\"endTurn\",\n )\n\n\nasync def run():\n async with stdio_client(server_params) as (read, write):\n async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:\n # Initialize the connection\n await session.initialize()\n\n # List available prompts\n prompts = await session.list_prompts()\n print(f\"Available prompts: {[p.name for p in prompts.prompts]}\")\n\n # Get a prompt (greet_user prompt from fastmcp_quickstart)\n if prompts.prompts:\n prompt = await session.get_prompt(\"greet_user\", arguments={\"name\": \"Alice\", \"style\": \"friendly\"})\n print(f\"Prompt result: {prompt.messages[0].content}\")\n\n # List available resources\n resources = await session.list_resources()\n print(f\"Available resources: {[r.uri for r in resources.resources]}\")\n\n # List available tools\n tools = await session.list_tools()\n print(f\"Available tools: {[t.name for t in tools.tools]}\")\n\n # Read a resource (greeting resource from fastmcp_quickstart)\n resource_content = await session.read_resource(AnyUrl(\"greeting://World\"))\n content_block = resource_content.contents[0]\n if isinstance(content_block, types.TextResourceContents):\n print(f\"Resource content: {content_block.text}\")\n\n # Call a tool (add tool from fastmcp_quickstart)\n result = await session.call_tool(\"add\", arguments={\"a\": 5, \"b\": 3})\n result_unstructured = result.content[0]\n if isinstance(result_unstructured, types.TextContent):\n print(f\"Tool result: {result_unstructured.text}\")\n result_structured = result.structuredContent\n print(f\"Structured tool result: {result_structured}\")\n\n\ndef main():\n \"\"\"Entry point for the client script.\"\"\"\n asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n_Full example: [examples/snippets/clients/stdio_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/stdio_client.py)_\n<!-- /snippet-source 
-->\n\nClients can also connect using [Streamable HTTP transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports#streamable-http):\n\n<!-- snippet-source examples/snippets/clients/streamable_basic.py -->\n```python\n\"\"\"\nRun from the repository root:\n uv run examples/snippets/clients/streamable_basic.py\n\"\"\"\n\nimport asyncio\n\nfrom mcp import ClientSession\nfrom mcp.client.streamable_http import streamablehttp_client\n\n\nasync def main():\n # Connect to a streamable HTTP server\n async with streamablehttp_client(\"http://localhost:8000/mcp\") as (\n read_stream,\n write_stream,\n _,\n ):\n # Create a session using the client streams\n async with ClientSession(read_stream, write_stream) as session:\n # Initialize the connection\n await session.initialize()\n # List available tools\n tools = await session.list_tools()\n print(f\"Available tools: {[tool.name for tool in tools.tools]}\")\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n_Full example: [examples/snippets/clients/streamable_basic.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/streamable_basic.py)_\n<!-- /snippet-source -->\n\n### Client Display Utilities\n\nWhen building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts:\n\n<!-- snippet-source examples/snippets/clients/display_utilities.py -->\n```python\n\"\"\"\ncd to the `examples/snippets` directory and run:\n uv run display-utilities-client\n\"\"\"\n\nimport asyncio\nimport os\n\nfrom mcp import ClientSession, StdioServerParameters\nfrom mcp.client.stdio import stdio_client\nfrom mcp.shared.metadata_utils import get_display_name\n\n# Create server parameters for stdio connection\nserver_params = StdioServerParameters(\n command=\"uv\", # Using uv to run the server\n args=[\"run\", \"server\", \"fastmcp_quickstart\", \"stdio\"],\n env={\"UV_INDEX\": os.environ.get(\"UV_INDEX\", 
\"\")},\n)\n\n\nasync def display_tools(session: ClientSession):\n \"\"\"Display available tools with human-readable names\"\"\"\n tools_response = await session.list_tools()\n\n for tool in tools_response.tools:\n # get_display_name() returns the title if available, otherwise the name\n display_name = get_display_name(tool)\n print(f\"Tool: {display_name}\")\n if tool.description:\n print(f\" {tool.description}\")\n\n\nasync def display_resources(session: ClientSession):\n \"\"\"Display available resources with human-readable names\"\"\"\n resources_response = await session.list_resources()\n\n for resource in resources_response.resources:\n display_name = get_display_name(resource)\n print(f\"Resource: {display_name} ({resource.uri})\")\n\n templates_response = await session.list_resource_templates()\n for template in templates_response.resourceTemplates:\n display_name = get_display_name(template)\n print(f\"Resource Template: {display_name}\")\n\n\nasync def run():\n \"\"\"Run the display utilities example.\"\"\"\n async with stdio_client(server_params) as (read, write):\n async with ClientSession(read, write) as session:\n # Initialize the connection\n await session.initialize()\n\n print(\"=== Available Tools ===\")\n await display_tools(session)\n\n print(\"\\n=== Available Resources ===\")\n await display_resources(session)\n\n\ndef main():\n \"\"\"Entry point for the display utilities client.\"\"\"\n asyncio.run(run())\n\n\nif __name__ == \"__main__\":\n main()\n```\n\n_Full example: [examples/snippets/clients/display_utilities.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/display_utilities.py)_\n<!-- /snippet-source -->\n\nThe `get_display_name()` function implements the proper precedence rules for displaying names:\n\n- For tools: `title` > `annotations.title` > `name`\n- For other objects: `title` > `name`\n\nThis ensures your client UI shows the most user-friendly names that servers provide.\n\n### OAuth 
Authentication for Clients\n\nThe SDK includes [authorization support](https://modelcontextprotocol.io/specification/2025-03-26/basic/authorization) for connecting to protected MCP servers:\n\n<!-- snippet-source examples/snippets/clients/oauth_client.py -->\n```python\n\"\"\"\nBefore running, specify running MCP RS server URL.\nTo spin up RS server locally, see\n examples/servers/simple-auth/README.md\n\ncd to the `examples/snippets` directory and run:\n uv run oauth-client\n\"\"\"\n\nimport asyncio\nfrom urllib.parse import parse_qs, urlparse\n\nfrom pydantic import AnyUrl\n\nfrom mcp import ClientSession\nfrom mcp.client.auth import OAuthClientProvider, TokenStorage\nfrom mcp.client.streamable_http import streamablehttp_client\nfrom mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken\n\n\nclass InMemoryTokenStorage(TokenStorage):\n \"\"\"Demo In-memory token storage implementation.\"\"\"\n\n def __init__(self):\n self.tokens: OAuthToken | None = None\n self.client_info: OAuthClientInformationFull | None = None\n\n async def get_tokens(self) -> OAuthToken | None:\n \"\"\"Get stored tokens.\"\"\"\n return self.tokens\n\n async def set_tokens(self, tokens: OAuthToken) -> None:\n \"\"\"Store tokens.\"\"\"\n self.tokens = tokens\n\n async def get_client_info(self) -> OAuthClientInformationFull | None:\n \"\"\"Get stored client information.\"\"\"\n return self.client_info\n\n async def set_client_info(self, client_info: OAuthClientInformationFull) -> None:\n \"\"\"Store client information.\"\"\"\n self.client_info = client_info\n\n\nasync def handle_redirect(auth_url: str) -> None:\n print(f\"Visit: {auth_url}\")\n\n\nasync def handle_callback() -> tuple[str, str | None]:\n callback_url = input(\"Paste callback URL: \")\n params = parse_qs(urlparse(callback_url).query)\n return params[\"code\"][0], params.get(\"state\", [None])[0]\n\n\nasync def main():\n \"\"\"Run the OAuth client example.\"\"\"\n oauth_auth = OAuthClientProvider(\n 
server_url=\"http://localhost:8001\",\n client_metadata=OAuthClientMetadata(\n client_name=\"Example MCP Client\",\n redirect_uris=[AnyUrl(\"http://localhost:3000/callback\")],\n grant_types=[\"authorization_code\", \"refresh_token\"],\n response_types=[\"code\"],\n scope=\"user\",\n ),\n storage=InMemoryTokenStorage(),\n redirect_handler=handle_redirect,\n callback_handler=handle_callback,\n )\n\n async with streamablehttp_client(\"http://localhost:8001/mcp\", auth=oauth_auth) as (read, write, _):\n async with ClientSession(read, write) as session:\n await session.initialize()\n\n tools = await session.list_tools()\n print(f\"Available tools: {[tool.name for tool in tools.tools]}\")\n\n resources = await session.list_resources()\n print(f\"Available resources: {[r.uri for r in resources.resources]}\")\n\n\ndef run():\n asyncio.run(main())\n\n\nif __name__ == \"__main__\":\n run()\n```\n\n_Full example: [examples/snippets/clients/oauth_client.py](https://github.com/modelcontextprotocol/python-sdk/blob/main/examples/snippets/clients/oauth_client.py)_\n<!-- /snippet-source -->\n\nFor a complete working example, see [`examples/clients/simple-auth-client/`](examples/clients/simple-auth-client/).\n\n### Parsing Tool Results\n\nWhen calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. 
Understanding how to parse this result is essential for properly handling tool outputs.\n\n```python\n\"\"\"examples/snippets/clients/parsing_tool_results.py\"\"\"\n\nimport asyncio\n\nfrom mcp import ClientSession, StdioServerParameters, types\nfrom mcp.client.stdio import stdio_client\n\n\nasync def parse_tool_results():\n \"\"\"Demonstrates how to parse different types of content in CallToolResult.\"\"\"\n server_params = StdioServerParameters(\n command=\"python\", args=[\"path/to/mcp_server.py\"]\n )\n\n async with stdio_client(server_params) as (read, write):\n async with ClientSession(read, write) as session:\n await session.initialize()\n\n # Example 1: Parsing text content\n result = await session.call_tool(\"get_data\", {\"format\": \"text\"})\n for content in result.content:\n if isinstance(content, types.TextContent):\n print(f\"Text: {content.text}\")\n\n # Example 2: Parsing structured content from JSON tools\n result = await session.call_tool(\"get_user\", {\"id\": \"123\"})\n if hasattr(result, \"structuredContent\") and result.structuredContent:\n # Access structured data directly\n user_data = result.structuredContent\n print(f\"User: {user_data.get('name')}, Age: {user_data.get('age')}\")\n\n # Example 3: Parsing embedded resources\n result = await session.call_tool(\"read_config\", {})\n for content in result.content:\n if isinstance(content, types.EmbeddedResource):\n resource = content.resource\n if isinstance(resource, types.TextResourceContents):\n print(f\"Config from {resource.uri}: {resource.text}\")\n elif isinstance(resource, types.BlobResourceContents):\n print(f\"Binary data from {resource.uri}\")\n\n # Example 4: Parsing image content\n result = await session.call_tool(\"generate_chart\", {\"data\": [1, 2, 3]})\n for content in result.content:\n if isinstance(content, types.ImageContent):\n print(f\"Image ({content.mimeType}): {len(content.data)} bytes\")\n\n # Example 5: Handling errors\n result = await 
session.call_tool(\"failing_tool\", {})\n if result.isError:\n print(\"Tool execution failed!\")\n for content in result.content:\n if isinstance(content, types.TextContent):\n print(f\"Error: {content.text}\")\n\n\nasync def main():\n await parse_tool_results()\n\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n### MCP Primitives\n\nThe MCP protocol defines three core primitives that servers can implement:\n\n| Primitive | Control | Description | Example Use |\n|-----------|-----------------------|-----------------------------------------------------|------------------------------|\n| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |\n| Resources | Application-controlled| Contextual data managed by the client application | File contents, API responses |\n| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |\n\n### Server Capabilities\n\nMCP servers declare capabilities during initialization:\n\n| Capability | Feature Flag | Description |\n|--------------|------------------------------|------------------------------------|\n| `prompts` | `listChanged` | Prompt template management |\n| `resources` | `subscribe`<br/>`listChanged`| Resource exposure and updates |\n| `tools` | `listChanged` | Tool discovery and execution |\n| `logging` | - | Server logging configuration |\n| `completions`| - | Argument completion suggestions |\n\n## Documentation\n\n- [Model Context Protocol documentation](https://modelcontextprotocol.io)\n- [Model Context Protocol specification](https://spec.modelcontextprotocol.io)\n- [Officially supported servers](https://github.com/modelcontextprotocol/servers)\n\n## Contributing\n\nWe are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. 
See the [contributing guide](CONTRIBUTING.md) to get started.\n\n## License\n\nThis project is licensed under the MIT License - see the LICENSE file for details.\n",
"bugtrack_url": null,
"license": null,
"summary": "MCP server for extracting PPTX files to Marp format",
"version": "0.1.0",
"project_urls": null,
"split_keywords": [
"mcp",
" extractor",
" marp",
" pptx"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "33c39767297d85f37138b208e80ce41e2c095d242fa6c645b6f166567ae3100e",
"md5": "c3b8895ef347351c28563320d2822f9a",
"sha256": "c79c26067492cf58c5ac8b82e18a420c8fe90b420873b5a07fe90970a5502cfc"
},
"downloads": -1,
"filename": "pptx_extractor_mcp-0.1.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "c3b8895ef347351c28563320d2822f9a",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 15983,
"upload_time": "2025-07-20T14:30:31",
"upload_time_iso_8601": "2025-07-20T14:30:31.678717Z",
"url": "https://files.pythonhosted.org/packages/33/c3/9767297d85f37138b208e80ce41e2c095d242fa6c645b6f166567ae3100e/pptx_extractor_mcp-0.1.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "5a3378582017bcaf1c80ebb2ee654a53c98a907d7de84c0a505d8a89981eb83c",
"md5": "58938bb1af5e22e74e5d59deadd7119f",
"sha256": "620801323c52f41816cf3adfef72a870756e3632c8df4c35087f63e8c1444768"
},
"downloads": -1,
"filename": "pptx_extractor_mcp-0.1.0.tar.gz",
"has_sig": false,
"md5_digest": "58938bb1af5e22e74e5d59deadd7119f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 1856056,
"upload_time": "2025-07-20T14:30:35",
"upload_time_iso_8601": "2025-07-20T14:30:35.251544Z",
"url": "https://files.pythonhosted.org/packages/5a/33/78582017bcaf1c80ebb2ee654a53c98a907d7de84c0a505d8a89981eb83c/pptx_extractor_mcp-0.1.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-20 14:30:35",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "pptx-extractor-mcp"
}