# MediaLLM MCP Server
An MCP server for AI-powered media processing through natural language.
MediaLLM translates plain-language requests into precise FFmpeg commands and scans workspaces for media files.
**[Full Documentation](https://mediallm.arunbrahma.com/)**
## Installation
```bash
# Using pip
pip install mediallm-mcp
# Using uv (recommended)
uv add mediallm-mcp
```
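Because the editor configurations below launch the server with `uvx`, you can also run it ad hoc without installing it first:

```bash
# Fetch and run the latest published version in an isolated environment
uvx mediallm-mcp
```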
## Usage
```bash
# STDIO (default)
mediallm-mcp
# Streamable HTTP
mediallm-mcp --http --port 3001
# SSE
mediallm-mcp --sse --port 3001
```
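Any transport can be combined with the environment variables documented below. As a minimal sketch (paths here are illustrative, not defaults):

```bash
# Pin the media workspace and output directory, then serve over Streamable HTTP
MEDIALLM_WORKSPACE=~/Videos MEDIALLM_OUTPUT_DIR=~/Videos/output \
  mediallm-mcp --http --port 3001
```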
## Running in Docker
```bash
# Build image
cd packages/mediallm-mcp
docker build -t mediallm-mcp .
# Run with media directory mounted
docker run -it --rm \
-v /path/to/media:/workspace \
mediallm-mcp
```
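To use one of the HTTP transports from the container, publish the port and pass the flags through. This sketch assumes the image's entrypoint forwards its arguments to `mediallm-mcp` and that `/workspace` is the intended media root:

```bash
# Hedged sketch: Streamable HTTP on port 3001, media mounted at /workspace
docker run -it --rm \
  -p 3001:3001 \
  -v /path/to/media:/workspace \
  -e MEDIALLM_WORKSPACE=/workspace \
  mediallm-mcp --http --port 3001
```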
## Accessing from Claude Desktop
Add to `claude_desktop_config.json`:
```json
{
"mcpServers": {
"mediallm-mcp": {
"command": "uvx",
"args": ["mediallm-mcp"],
"env": {}
}
}
}
```
**Config file location:**
- **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
## Accessing from Claude Code
Add to `.mcp.json` in the project root:
```json
{
"mcpServers": {
"mediallm-mcp": {
"command": "uvx",
"args": ["mediallm-mcp"],
"env": {}
}
}
}
```
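Alternatively, the Claude Code CLI can register the server for you, assuming your installation provides the `claude mcp add` subcommand:

```bash
# Registers a stdio server named mediallm-mcp, launched via uvx
claude mcp add mediallm-mcp -- uvx mediallm-mcp
```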
## Accessing from Cursor
[Add to Cursor](https://cursor.com/en/install-mcp?name=mediallm-mcp&config=eyJjb21tYW5kIjogInV2eCIsICJhcmdzIjogWyJtZWRpYWxsbS1tY3AiXX0%3D)
Or manually add to `.cursor/mcp.json`:
```json
{
"mcpServers": {
"mediallm-mcp": {
"command": "uvx",
"args": ["mediallm-mcp"],
"env": {}
}
}
}
```
## Environment Variables (Optional)
All variables are optional and can be set in the shell or in the `env` block of an MCP configuration, as shown in the example after this list:
- `MEDIALLM_WORKSPACE` - Media directory to scan (default: current working directory)
- `MEDIALLM_MODEL` - LLM model to use (default: `llama3.1:latest`)
- `MEDIALLM_OLLAMA_HOST` - Ollama server URL (default: `http://localhost:11434`)
- `MEDIALLM_OUTPUT_DIR` - Directory for processed output files (default: current working directory)
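For example, a Claude Desktop entry that pins the workspace and model might look like this (the path and model name are illustrative):

```json
{
  "mcpServers": {
    "mediallm-mcp": {
      "command": "uvx",
      "args": ["mediallm-mcp"],
      "env": {
        "MEDIALLM_WORKSPACE": "/Users/me/Videos",
        "MEDIALLM_MODEL": "llama3.1:latest"
      }
    }
  }
}
```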
## Debugging
Use the MCP Inspector to test the connection:
```bash
npx @modelcontextprotocol/inspector mediallm-mcp
```