pixelle

- Name: pixelle
- Version: 0.1.4
- Summary: Pixelle MCP: Convert ComfyUI workflows into MCP Tools with a single command, providing an MCP server and a Chainlit-based web UI.
- Upload time: 2025-09-03 14:25:56
- Author: AIDC-AI
- Requires Python: >=3.11
- License: MIT
- Keywords: aigc, chainlit, comfyui, llm, mcp
            <h1 align="center">🎨 Pixelle MCP - Omnimodal Agent Framework</h1>

<p align="center"><b>English</b> | <a href="README_CN.md">δΈ­ζ–‡</a></p>

<p align="center">✨ An AIGC solution based on the MCP protocol, seamlessly converting ComfyUI workflows into MCP tools with zero code, empowering LLM and ComfyUI integration.</p>

![](docs/readme-1.png)

https://github.com/user-attachments/assets/65422cef-96f9-44fe-a82b-6a124674c417


## πŸ“‹ Recent Updates

- βœ… **2025-09-03**: Architecture refactoring from three services to unified application; added CLI tool support; published to [PyPI](https://pypi.org/project/pixelle/)
- βœ… **2025-08-12**: Integrated the LiteLLM framework, adding multi-model support for Gemini, DeepSeek, Claude, Qwen, and more


## πŸš€ Features

- βœ… πŸ”„ **Full-modal Support**: Supports TISV (Text, Image, Sound/Speech, Video) full-modal conversion and generation
- βœ… 🧩 **ComfyUI Ecosystem**: Built on [ComfyUI](https://github.com/comfyanonymous/ComfyUI), inheriting all capabilities from the open ComfyUI ecosystem
- βœ… πŸ”§ **Zero-code Development**: Defines and implements the Workflow-as-MCP Tool solution, enabling zero-code development and dynamic addition of new MCP Tools
- βœ… πŸ—„οΈ **MCP Server**: Based on the [MCP](https://modelcontextprotocol.io/introduction) protocol, supporting integration with any MCP client (including but not limited to Cursor, Claude Desktop, etc.)
- βœ… 🌐 **Web Interface**: Developed based on the [Chainlit](https://github.com/Chainlit/chainlit) framework, inheriting Chainlit's UI controls and supporting integration with more MCP Servers
- βœ… πŸ“¦ **One-click Deployment**: Supports PyPI installation, CLI commands, Docker and other deployment methods, ready to use out of the box
- βœ… βš™οΈ **Simplified Configuration**: Uses environment variable configuration scheme, simple and intuitive configuration
- βœ… πŸ€– **Multi-LLM Support**: Supports multiple mainstream LLMs, including OpenAI, Ollama, Gemini, DeepSeek, Claude, Qwen, and more


## πŸ“ Project Architecture

Pixelle MCP adopts a **unified architecture design**, integrating MCP server, web interface, and file services into one application, providing:

- 🌐 **Web Interface**: Chainlit-based chat interface supporting multimodal interaction
- πŸ”Œ **MCP Endpoint**: For external MCP clients (such as Cursor, Claude Desktop) to connect
- πŸ“ **File Service**: Handles file upload, download, and storage
- πŸ› οΈ **Workflow Engine**: Automatically converts ComfyUI workflows into MCP tools

![](docs/%20mcp_structure.png)


## πŸƒβ€β™‚οΈ Quick Start

Choose the deployment method that best suits your needs, from simple to complex:

### 🎯 Method 1: One-click Experience

> πŸ’‘ **Zero configuration startup, perfect for quick experience and testing**

#### πŸš€ Temporary Run

```bash
# Start with one command, no system installation required
uvx pixelle@latest
```

#### πŸ“¦ Persistent Installation

```bash
# Install to system
pip install pixelle

# Start service
pixelle
```

After startup, Pixelle automatically enters the **configuration wizard**, which guides you through ComfyUI connection and LLM configuration.

### πŸ› οΈ Method 2: Local Development Deployment

> πŸ’‘ **Supports custom workflows and secondary development**

#### πŸ“₯ 1. Get Source Code

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP
```

#### πŸš€ 2. Start Service

```bash
# Interactive mode (recommended for first use, includes configuration wizard)
uv run pixelle

# Direct start (when already configured)
uv run pixelle start

# Background operation
uv run pixelle start --daemon

# Force start (terminate conflicting processes)
uv run pixelle start --force
```

#### πŸ”§ 3. Add Custom Workflows (Optional)

```bash
# Copy example workflows to data directory
cp -r workflows/* ~/.pixelle/data/custom_workflows/
```

**⚠️ Important**: Test workflows in ComfyUI first to make sure they run properly; otherwise, execution will fail.

### πŸŽ›οΈ CLI Commands

All installation methods support the same subcommands; only the way you invoke them differs:

#### πŸ“¦ pip install Method
```bash
# Enter interactive mode when no parameters
pixelle

# Service management commands
pixelle start
pixelle status
pixelle stop
pixelle logs
pixelle logs --follow
```

#### πŸš€ uvx Method
```bash
# Enter interactive mode when no parameters
uvx pixelle@latest

# Service management commands
uvx pixelle@latest start
uvx pixelle@latest status
uvx pixelle@latest stop
uvx pixelle@latest logs
uvx pixelle@latest logs --follow
```

#### πŸ› οΈ uv run Method
```bash
# Enter interactive mode when no parameters
uv run pixelle

# Service management commands
uv run pixelle start
uv run pixelle status
uv run pixelle stop
uv run pixelle logs
uv run pixelle logs --follow
```

**πŸ’‘ Tip**: All methods default to interactive mode when no subcommand is provided

### 🐳 Method 3: Docker Deployment

> πŸ’‘ **Suitable for production environments and containerized deployment**

#### πŸ“‹ 1. Prepare Configuration

```bash
git clone https://github.com/AIDC-AI/Pixelle-MCP.git
cd Pixelle-MCP

# Create environment configuration file
cp .env.example .env
# Edit .env file to configure your ComfyUI address and LLM settings
```

#### πŸš€ 2. Start Container

```bash
# Start all services in background
docker compose up -d

# View logs
docker compose logs -f
```

### 🌐 Access Services

Regardless of which method you use, the following endpoints are available after startup:

- **🌐 Web Interface**: http://localhost:9004  
  *The default username and password are both `dev` and can be changed after startup*
- **🔌 MCP Endpoint**: http://localhost:9004/pixelle/mcp  
  *For MCP clients such as Cursor and Claude Desktop to connect; see the example configuration below*

**💡 Port Configuration**: The default port is 9004 and can be customized via the `PORT` environment variable (e.g. `PORT=your_port`).

### βš™οΈ Initial Configuration

On first startup, the system automatically checks your configuration:

1. **🔧 ComfyUI Connection**: Ensure the ComfyUI service is running at `http://localhost:8188`
2. **🤖 LLM Configuration**: Configure at least one LLM provider (OpenAI, Ollama, etc.)
3. **📁 Workflow Directory**: The system automatically creates the necessary directory structure

**πŸ†˜ Need Help?** Join community groups for support (see Community section below)

## πŸ› οΈ Add Your Own MCP Tool

⚑ One workflow = One MCP Tool

![](docs/workflow_to_mcp_tool.png)

### 🎯 1. Add the Simplest MCP Tool

* πŸ“ Build a workflow in ComfyUI for image Gaussian blur ([Get it here](docs/i_blur_ui.json)), then set the `LoadImage` node's title to `$image.image!` as shown below:
![](docs/easy-workflow.png)

* πŸ“€ Export it as an API format file and rename it to `i_blur.json`. You can export it yourself or use our pre-exported version ([Get it here](docs/i_blur.json))

* πŸ“‹ Copy the exported API workflow file (must be API format), input it on the web page, and let the LLM add this Tool

  ![](docs/ready_to_send_en.png)

* ✨ After sending, the LLM will automatically convert this workflow into an MCP Tool

  ![](docs/added_mcp_en.png)

* 🎨 Now refresh the page and send any image to have the LLM apply Gaussian blur to it

  ![](docs/use_mcp_tool_en.png)

### πŸ”Œ 2. Add a Complex MCP Tool

The steps are the same as above; only the workflow differs (download the workflow in [UI format](docs/t2i_by_flux_turbo_ui.json) or [API format](docs/t2i_by_flux_turbo.json)).

![](docs/t2i_by_flux_turbo.png)


## πŸ”§ ComfyUI Workflow Custom Specification

### 🎨 Workflow Format
Design your workflow on the ComfyUI canvas and export it in API format, then use the special syntax described below in node titles to define parameters and outputs.

### πŸ“ Parameter Definition Specification

In the ComfyUI canvas, double-click a node title to edit it, then use the following DSL syntax to define parameters:

```
$<param_name>.[~]<field_name>[!][:<description>]
```

#### πŸ” Syntax Explanation:
- `param_name`: The parameter name for the generated MCP tool function
- `~`: Optional; marks the parameter as a URL whose content is downloaded and uploaded to ComfyUI, with the resulting relative path passed to the node
- `field_name`: The corresponding input field in the node
- `!`: Indicates this parameter is required
- `description`: Description of the parameter

#### πŸ’‘ Example:

**Required parameter example:**

- Set LoadImage node title to: `$image.image!:Input image URL`
- Meaning: Creates a required parameter named `image`, mapped to the node's `image` field

**URL upload processing example:**

- Set any node title to: `$image.~image!:Input image URL`
- Meaning: Creates a required parameter named `image`; the system automatically downloads the file from the URL, uploads it to ComfyUI, and passes the resulting relative path to the node

> πŸ“ Note: `LoadImage`, `VHS_LoadAudioUpload`, `VHS_LoadVideo` and other nodes have built-in functionality, no need to add `~` marker

**Optional parameter example:**

- Set EmptyLatentImage node title to: `$width.width:Image width, default 512`
- Meaning: Creates an optional parameter named `width`, mapped to the node's `width` field, default value is 512
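
For reference, here is a minimal sketch of how such titles appear in an exported API-format workflow, assuming a recent ComfyUI version that stores node titles under `_meta.title`; the node IDs and input values are illustrative:

```json
{
  "10": {
    "class_type": "LoadImage",
    "inputs": { "image": "example.png" },
    "_meta": { "title": "$image.image!:Input image URL" }
  },
  "11": {
    "class_type": "EmptyLatentImage",
    "inputs": { "width": 512, "height": 512, "batch_size": 1 },
    "_meta": { "title": "$width.width:Image width, default 512" }
  }
}
```

Here the current value of `width` (`512`) is what the type inference rules below would use to infer `int`.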

### 🎯 Type Inference Rules

The system automatically infers parameter types based on the current value of the node field:
- πŸ”’ `int`: Integer values (e.g. 512, 1024)
- πŸ“Š `float`: Floating-point values (e.g. 1.5, 3.14)
- βœ… `bool`: Boolean values (e.g. true, false)
- πŸ“ `str`: String values (default type)

### πŸ“€ Output Definition Specification

#### πŸ€– Method 1: Auto-detect Output Nodes
The system will automatically detect the following common output nodes:
- πŸ–ΌοΈ `SaveImage` - Image save node
- 🎬 `SaveVideo` - Video save node
- πŸ”Š `SaveAudio` - Audio save node
- πŸ“Ή `VHS_SaveVideo` - VHS video save node
- 🎡 `VHS_SaveAudio` - VHS audio save node

#### 🎯 Method 2: Manual Output Marking
> Usually used when a workflow has multiple outputs

Use `$output.var_name` in any node title to mark an output:
- Set node title to: `$output.result`
- The system will use this node's output as the tool's return value
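
As an illustration, a hedged sketch of an output-marked node in the exported API format (the node ID, `class_type`, and input links are placeholders):

```json
{
  "25": {
    "class_type": "VAEDecode",
    "inputs": { "samples": ["24", 0], "vae": ["14", 2] },
    "_meta": { "title": "$output.result" }
  }
}
```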


### πŸ“„ Tool Description Configuration (Optional)

You can add a node titled `MCP` in the workflow to provide a tool description:

1. Add a `String (Multiline)` or similar text node (it must expose a single string property, and the field name should be one of `value`, `text`, or `string`)
2. Set the node title to: `MCP`
3. Enter a detailed tool description in the value field
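
A minimal sketch of such a node in the exported API format, assuming a text node whose single string field is named `value` (the `class_type` and description text are placeholders):

```json
{
  "30": {
    "class_type": "String (Multiline)",
    "inputs": { "value": "Apply Gaussian blur to the input image and return the blurred result." },
    "_meta": { "title": "MCP" }
  }
}
```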


### ⚠️ Important Notes

1. **🔒 Parameter Validation**: Optional parameters (without `!`) must have default values set in the node
2. **🔗 Node Connections**: Fields already connected to other nodes will not be parsed as parameters
3. **🏷️ Tool Naming**: The exported file name is used as the tool name, so use a meaningful English name
4. **📋 Detailed Descriptions**: Provide detailed parameter descriptions for a better user experience
5. **🎯 Export Format**: You must export in API format, not UI format


## πŸ’¬ Community

Scan the QR codes below to join our communities for the latest updates and technical support:

|                      Discord Community                       |                         WeChat Group                         |
| :----------------------------------------------------------: | :----------------------------------------------------------: |
| <img src="docs/discord.png" alt="Discord Community" width="250" /> | <img src="docs/wechat.png" alt="WeChat Group" width="250" /> |

## 🀝 How to Contribute

We welcome all forms of contribution! Whether you're a developer, designer, or user, you can participate in the project in the following ways:

### πŸ› Report Issues
* πŸ“‹ Submit bug reports on the [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues) page
* πŸ” Please search for similar issues before submitting
* πŸ“ Describe the reproduction steps and environment in detail

### πŸ’‘ Feature Suggestions
* πŸš€ Submit feature requests in [Issues](https://github.com/AIDC-AI/Pixelle-MCP/issues)
* πŸ’­ Describe the feature you want and its use case
* 🎯 Explain how it improves user experience

### πŸ”§ Code Contributions

#### πŸ“‹ Contribution Process
1. 🍴 Fork this repo to your GitHub account
2. 🌿 Create a feature branch: `git checkout -b feature/your-feature-name`
3. πŸ’» Develop and add corresponding tests
4. πŸ“ Commit changes: `git commit -m "feat: add your feature"`
5. πŸ“€ Push to your repo: `git push origin feature/your-feature-name`
6. πŸ”„ Create a Pull Request to the main repo

#### 🎨 Code Style
* 🐍 Python code follows [PEP 8](https://pep8.org/) style guide
* πŸ“– Add appropriate documentation and comments for new features

### 🧩 Contribute Workflows
* πŸ“¦ Share your ComfyUI workflows with the community
* πŸ› οΈ Submit tested workflow files
* πŸ“š Add usage instructions and examples for workflows

## πŸ™ Acknowledgements

❀️ Sincere thanks to the following organizations, projects, and teams for supporting the development and implementation of this project.

* 🧩 [ComfyUI](https://github.com/comfyanonymous/ComfyUI)
* πŸ’¬ [Chainlit](https://github.com/Chainlit/chainlit)

* πŸ”Œ [MCP](https://modelcontextprotocol.io/introduction)
* 🎬 [WanVideo](https://github.com/Wan-Video/Wan2.1)
* ⚑ [Flux](https://github.com/black-forest-labs/flux)
* πŸ€– [LiteLLM](https://github.com/BerriAI/litellm)

## License
This project is released under the MIT License ([LICENSE](LICENSE), SPDX-License-Identifier: MIT).

            
