image-gen-mcp

Name: image-gen-mcp
Version: 1.4.0
Summary: MCP image generation server with modular engines and routing
Author email: Simon Choi <simon.choi034@gmail.com>
Homepage: https://github.com/simonChoi034/image-gen-mcp
Requires Python: >=3.12
License: MIT
Keywords: ai, dall-e, gemini, image-generation, imagen, mcp, model-context-protocol, openai, vertex-ai
Upload time: 2025-09-11 04:36:58

# 🎨 Image Gen MCP Server

> *"Fine. I'll do it myself."* — Thanos (and also me, after trying five different MCP servers that couldn't mix-and-match image models)  
> I wanted a single, **simple** MCP server that lets agents generate **and** edit images across OpenAI, Google (Gemini/Imagen), Azure, Vertex, and OpenRouter—without yak‑shaving. So… here it is.

[![PyPI version](https://img.shields.io/pypi/v/image-gen-mcp.svg)](https://pypi.org/project/image-gen-mcp/) ![Python 3.12+](https://img.shields.io/badge/python-3.12%2B-blue) ![license](https://img.shields.io/badge/license-Apache%202.0-blue)

A multi‑provider **Model Context Protocol** (MCP) server for image **generation** and **editing** with a unified, type‑safe API. It returns MCP `ImageContent` blocks plus compact structured JSON so your client can route, log, or inspect results cleanly.

> [!IMPORTANT]
> This `README.md` is the canonical reference for API, capabilities, and usage. Some `/docs` files may lag behind.

---

## 🗺️ Table of Contents

- [Why this exists](#-why-this-exists)
- [Features](#-features)
- [Quick start (users)](#-quick-start-users)
- [Quick start (developers)](#-quick-start-developers)
- [Configure `mcp.json`](#configure-mcpjson)
- [Tools API](#-tools-api)
  - [`generate_image`](#generate_image)
  - [`edit_image`](#edit_image)
  - [`get_model_capabilities`](#get_model_capabilities)
- [Providers & Models](#-providers--models)
- [Python client example](#-python-client-example)
- [Environment Variables](#-environment-variables)
- [Running via FastMCP CLI](#-running-via-fastmcp-cli)
- [Testing remarks](#-testing-remarks)
- [Contributing & Releases](#-contributing--releases)
- [License](#-license)

---

## 🧠 Why this exists

Because I couldn’t find an MCP server that spoke **multiple image providers** with **one sane schema**. Some only generated, some only edited, some required summoning three different CLIs at midnight.  
This one prioritizes:

- **One schema** across providers (AR & diffusion)
- **Minimal setup** (`uvx` or `pip`, drop a `mcp.json`, done)
- **Type‑safe I/O** with clear error shapes
- **Discoverability**: ask the server what models are live via `get_model_capabilities`

---

## ✨ Features

- **Unified tools**: `generate_image`, `edit_image`, `get_model_capabilities`
- **Providers**: OpenAI, Azure OpenAI, Google **Gemini**, **Vertex AI** (Imagen & Gemini), OpenRouter
- **Output**: MCP `ImageContent` blocks + small JSON metadata
- **Quality/size/orientation** normalization
- **Masking** support where engines allow it
- **Fail‑soft** errors with stable shape: `{ code, message, details? }`
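
As an illustration, a failed call returns structured JSON along these lines (the specific `code` string and `details` keys here are made up for the example; only the overall `{ code, message, details? }` shape is guaranteed):

```json
{
  "code": "provider_error",
  "message": "OpenAI rejected the request: unsupported image size",
  "details": {
    "provider": "openai",
    "model": "gpt-image-1"
  }
}
```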

---

## 🚀 Quick start (users)

Install and use as a published package.

```bash
# With uv (recommended)
uv add image-gen-mcp

# Or with pip
pip install image-gen-mcp
```

Then configure your MCP client.

### Configure `mcp.json`

Use `uvx` to run in an isolated env with correct deps:

```json
{
  "mcpServers": {
    "image-gen-mcp": {
      "command": "uvx",
      "args": ["--from", "image-gen-mcp", "image-gen-mcp"],
      "env": {
        "OPENAI_API_KEY": "your-key-here"
      }
    }
  }
}
```
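
The same pattern works for any provider; set only the keys you need (see [Environment Variables](#-environment-variables)). For example, a hypothetical Vertex AI config swaps in the GCP variables, with placeholder values:

```json
{
  "mcpServers": {
    "image-gen-mcp": {
      "command": "uvx",
      "args": ["--from", "image-gen-mcp", "image-gen-mcp"],
      "env": {
        "VERTEX_PROJECT": "your-gcp-project-id",
        "VERTEX_LOCATION": "us-central1",
        "VERTEX_CREDENTIALS_PATH": "/path/to/service-account.json"
      }
    }
  }
}
```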

### First call

```json
{
  "tool": "generate_image",
  "params": {
    "prompt": "A vibrant painting of a fox in a sunflower field",
    "provider": "openai",
    "model": "gpt-image-1"
  }
}
```

---

## 🧑‍💻 Quick start (developers)

Run from source for local development or contributions.

**Prereqs**
- Python **3.12+**
- `uv` (recommended)

**Install deps**

```bash
uv sync --all-extras --dev
```

**Environment**

```bash
cp .env.example .env
# Add your keys
```
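
As a rough sketch, a `.env` enabling OpenAI and OpenRouter could contain just the relevant keys (placeholder values; the full variable list is in [Environment Variables](#-environment-variables)):

```bash
# Only set the providers you actually use
OPENAI_API_KEY=your-openai-key
OPENROUTER_API_KEY=your-openrouter-key
```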

**Run the server**

```bash
# stdio (direct)
python -m image_gen_mcp.main

# via FastMCP CLI
fastmcp run image_gen_mcp/main.py:app
```

### Local VS Code `mcp.json` for testing

If you use a VS Code extension or local tooling that reads `.vscode/mcp.json`, here's a safe example to run the local server (do NOT commit secrets):

```json
{
  "servers": {
    "image-gen-mcp": {
      "command": "python",
      "args": ["-m", "image_gen_mcp.main"],
      "env": {
        "# NOTE": "Replace with your local keys for testing; do not commit.",
        "OPENROUTER_API_KEY": "__REPLACE_WITH_YOUR_KEY__"
      }
    }
  },
  "inputs": []
}
```

Use this to run the server from your workspace instead of installing the package from PyPI. For CI or shared repos, store secrets in the environment or a secret manager and avoid checking them into git.

**Dev tasks**

```bash
uv run pytest -v
uv run ruff check .
uv run black --check .
uv run pyright
```

---

## 🧰 Tools API

All tools take **named parameters**. Outputs include structured JSON (for metadata/errors) and MCP `ImageContent` blocks (for actual images).

### `generate_image`

Create one or more images from a text prompt.

**Example**

```json
{
  "prompt": "A vibrant painting of a fox in a sunflower field",
  "provider": "openai",
  "model": "gpt-image-1",
  "n": 2,
  "size": "M",
  "orientation": "landscape"
}
```

**Parameters**

| Field | Type | Description |
|---|---|---|
| `prompt` | str | **Required.** Text description. |
| `provider` | enum | **Required.** `openai` \| `openrouter` \| `azure` \| `vertex` \| `gemini`. |
| `model` | enum | **Required.** Model id (see matrix). |
| `n` | int | Optional. Default 1; provider limits apply. |
| `size` | enum | Optional. `S` \| `M` \| `L`. |
| `orientation` | enum | Optional. `square` \| `portrait` \| `landscape`. |
| `quality` | enum | Optional. `draft` \| `standard` \| `high`. |
| `background` | enum | Optional. `transparent` \| `opaque` (when supported). |
| `negative_prompt` | str | Optional. Used when provider supports it. |
| `directory` | str | Optional. Filesystem directory where the server should save generated images. If omitted, a unique temp directory is used. |
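
Putting the optional fields together, a request for a single high-quality, transparent, landscape image saved to a chosen directory might look like this (prompt and directory are placeholders; `background` only applies where the engine supports it):

```json
{
  "prompt": "A product shot of a ceramic mug on a clean background",
  "provider": "openai",
  "model": "gpt-image-1",
  "n": 1,
  "size": "L",
  "orientation": "landscape",
  "quality": "high",
  "background": "transparent",
  "directory": "/tmp/mcp-images"
}
```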

---

### `edit_image`

Edit an image with a prompt and optional mask.

**Example**

```json
{
  "prompt": "Remove the background and make the subject wear a red scarf",
  "provider": "openai",
  "model": "gpt-image-1",
  "images": ["data:image/png;base64,..."],
  "mask": null
}
```

**Parameters**

| Field | Type | Description |
|---|---|---|
| `prompt` | str | **Required.** Edit instruction. |
| `images` | list<str> | **Required.** One or more source images (base64, data URL, or https URL). Most models use only the first image. |
| `mask` | str | Optional. Mask as base64/data URL/https URL. |
| `provider` | enum | **Required.** See above. |
| `model` | enum | **Required.** Model id (see matrix). |
| `n` | int | Optional. Default 1; provider limits apply. |
| `size` | enum | Optional. `S` \| `M` \| `L`. |
| `orientation` | enum | Optional. `square` \| `portrait` \| `landscape`. |
| `quality` | enum | Optional. `draft` \| `standard` \| `high`. |
| `background` | enum | Optional. `transparent` \| `opaque`. |
| `negative_prompt` | str | Optional. Negative prompt. |
| `directory` | str | Optional. Filesystem directory where the server should save edited images. If omitted, a unique temp directory is used. |
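
As a fuller example, a masked edit (inpainting) request might look like the following; the data URLs are truncated placeholders, and the chosen model must support masks (see the model matrix below):

```json
{
  "prompt": "Fill the masked region with a bouquet of tulips",
  "provider": "openai",
  "model": "gpt-image-1",
  "images": ["data:image/png;base64,iVBORw0KGgo..."],
  "mask": "data:image/png;base64,iVBORw0KGgo...",
  "n": 1,
  "size": "M"
}
```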

---

### `get_model_capabilities`

Discover which providers/models are **actually** enabled based on your environment.

**Example**

```json
{ "provider": "openai" }
```

Call with no params to list **all** enabled providers/models.

**Output**: a `CapabilitiesResponse` with providers, models, and features.
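
The exact field names come from `CapabilitiesResponse` in `image_gen_mcp/schema.py`; purely as an illustrative sketch (not the real schema), the response conveys information along these lines:

```json
{
  "providers": [
    {
      "provider": "openai",
      "models": ["gpt-image-1", "dall-e-3"],
      "features": { "generate": true, "edit": true, "mask": true }
    }
  ]
}
```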

---

## 🧭 Providers & Models

Routing is handled by a `ModelFactory` that maps model → engine. A compact, curated list keeps things understandable.

### Model Matrix

| Model | Family | Providers | Generate | Edit | Mask |
|---|---|---|:---:|:---:|:---:|
| `gpt-image-1` | AR | `openai`, `azure` | ✅ | ✅ | ✅ (OpenAI/Azure) |
| `dall-e-3` | Diffusion | `openai`, `azure` | ✅ | ❌ | — |
| `gemini-2.5-flash-image-preview` | AR | `gemini`, `vertex` | ✅ | ✅ (maskless) | ❌ |
| `imagen-4.0-generate-001` | Diffusion | `vertex` | ✅ | ❌ | — |
| `imagen-3.0-generate-002` | Diffusion | `vertex` | ✅ | ❌ | — |
| `imagen-4.0-fast-generate-001` | Diffusion | `vertex` | ✅ | ❌ | — |
| `imagen-4.0-ultra-generate-001` | Diffusion | `vertex` | ✅ | ❌ | — |
| `imagen-3.0-capability-001` | Diffusion | `vertex` | ❌ | ✅ | ✅ (mask via mask config) |
| `google/gemini-2.5-flash-image-preview` | AR | `openrouter` | ✅ | ✅ (maskless) | ❌ |

### Provider Model Support

| Provider | Supported Models |
|---|---|
| `openai` | `gpt-image-1`, `dall-e-3` |
| `azure` | `gpt-image-1`, `dall-e-3` |
| `gemini` | `gemini-2.5-flash-image-preview` |
| `vertex` | `imagen-4.0-generate-001`, `imagen-4.0-fast-generate-001`, `imagen-4.0-ultra-generate-001`, `imagen-3.0-generate-002`, `imagen-3.0-capability-001`, `gemini-2.5-flash-image-preview` |
| `openrouter` | `google/gemini-2.5-flash-image-preview` |
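
For instance, the OpenRouter route (the one most exercised in testing; see the Testing remarks section below) uses the fully qualified model id:

```json
{
  "prompt": "A watercolor fox in a forest, soft light",
  "provider": "openrouter",
  "model": "google/gemini-2.5-flash-image-preview"
}
```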

---

## 🐍 Python client example

```python
import asyncio
from fastmcp import Client


async def main():
    # The client infers a stdio transport from the script path and launches the server itself (run from the repo root).
    async with Client("image_gen_mcp/main.py") as client:
        # 1) Capabilities
        caps = await client.call_tool("get_model_capabilities")
        print("Capabilities:", caps.structured_content or caps.text)

        # 2) Generate
        gen_result = await client.call_tool(
            "generate_image",
            {
                "prompt": "a watercolor fox in a forest, soft light",
                "provider": "openai",
                "model": "gpt-image-1",
            },
        )
        print("Generate Result:", gen_result.structured_content)
        print("Image blocks:", len(gen_result.content))


asyncio.run(main())
```
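
An `edit_image` call goes through the client the same way. A minimal sketch, assuming a local `input.png` and the maskless-edit-capable OpenRouter model from the model matrix above:

```python
import asyncio
import base64
from pathlib import Path

from fastmcp import Client


async def edit_example():
    # Encode a local file as a data URL, one of the accepted `images` formats.
    b64 = base64.b64encode(Path("input.png").read_bytes()).decode()

    async with Client("image_gen_mcp/main.py") as client:
        result = await client.call_tool(
            "edit_image",
            {
                "prompt": "Make the sky look like a warm sunset",
                "provider": "openrouter",
                "model": "google/gemini-2.5-flash-image-preview",
                "images": [f"data:image/png;base64,{b64}"],
            },
        )
        print("Edit Result:", result.structured_content)
        print("Image blocks:", len(result.content))


asyncio.run(edit_example())
```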

---

## 🔐 Environment variables

Set only what you need:

| Variable | Required for | Description |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | API key for OpenAI. |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI | Azure OpenAI key. |
| `AZURE_OPENAI_ENDPOINT` | Azure OpenAI | Azure endpoint URL. |
| `AZURE_OPENAI_API_VERSION` | Azure OpenAI | Optional; default `2024-02-15-preview`. |
| `GEMINI_API_KEY` | Gemini | Gemini Developer API key. |
| `OPENROUTER_API_KEY` | OpenRouter | OpenRouter API key. |
| `VERTEX_PROJECT` | Vertex AI | GCP project id. |
| `VERTEX_LOCATION` | Vertex AI | GCP region (e.g. `us-central1`). |
| `VERTEX_CREDENTIALS_PATH` | Vertex AI | Optional path to GCP JSON; ADC supported. |

---

## 🏃 Running via FastMCP CLI

The server supports multiple transports (a Python client sketch for the HTTP case follows the list):

- **stdio:** `fastmcp run image_gen_mcp/main.py:app`
- **SSE (HTTP):** `fastmcp run image_gen_mcp/main.py:app --transport sse --host 127.0.0.1 --port 8000`
- **HTTP:** `fastmcp run image_gen_mcp/main.py:app --transport http --host 127.0.0.1 --port 8000 --path /mcp`
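
When the server is exposed over HTTP, a FastMCP client can connect by URL instead of a script path. A minimal sketch; the URL assumes the `--host`/`--port`/`--path` values shown above:

```python
import asyncio

from fastmcp import Client


async def main():
    # Connect to the HTTP transport started with the command above.
    async with Client("http://127.0.0.1:8000/mcp") as client:
        caps = await client.call_tool("get_model_capabilities")
        print(caps.structured_content)


asyncio.run(main())
```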

**Design notes**

- **Schema:** public contract in `image_gen_mcp/schema.py` (Pydantic).
- **Engines:** modular adapters in `image_gen_mcp/engines/`, selected by `ModelFactory`.
- **Capabilities:** discovered dynamically via `image_gen_mcp/settings.py`.
- **Errors:** stable JSON error `{ code, message, details? }`.

---

## ⚠️ Testing remarks

I tested this project locally using the `openrouter`-backed model only. I could not access Gemini or OpenAI from my location (Hong Kong) due to regional restrictions — thanks, US government — so I couldn't fully exercise those providers.

Because of that limitation, the `gemini`/`vertex` and `openai` (including Azure) adapters may contain bugs or untested edge cases. If you use those providers and find issues, please open an issue or, even better, submit a pull request with a fix — contributions are welcome.

Suggested info to include when filing an issue:
- Your provider and model (e.g., `openai:gpt-image-1`, `vertex:imagen-4.0-generate-001`)
- Full stderr/server logs showing the error
- Minimal reproduction steps or a short test script

Thanks — and PRs welcome!

---

## 🤝 Contributing & Releases

PRs welcome! Please run tests and linters locally.

**Release process (GitHub Actions)**

1. **Automated (recommended)**
   - Actions → **Manual Release**
   - Pick version bump: patch / minor / major
   - The workflow tags, builds the changelog, and publishes to PyPI

2. **Manual**
   - `git tag vX.Y.Z`
   - `git push origin vX.Y.Z`
   - Create a GitHub Release from the tag

---

## 📄 License

Apache-2.0 — see `LICENSE`.

            
