# Hanzo Live
[Join our Discord](https://discord.gg/mnfGR4Fjhp)

Hanzo Live is a tool for running and customizing real-time, interactive generative AI pipelines and models.
🚧 Here be dragons! This project is currently in **alpha**. 🚧
## Features
- Autoregressive video diffusion models
  - [StreamDiffusionV2](./pipelines/streamdiffusionv2/docs/usage.md)
  - [LongLive](./pipelines/longlive/docs/usage.md)
- WebRTC real-time streaming
- Low latency async video processing pipelines
- Interactive UI with text prompting, model parameter controls and video/camera/text input modes
...and more to come!
## System Requirements
Hanzo Live currently supports the following operating systems:
- Linux
- Windows
- macOS (Apple Silicon with MLX support)
### GPU Requirements
**NVIDIA GPUs (Linux/Windows):**
- Requires an NVIDIA GPU with >= 24GB VRAM
- We recommend a driver that supports CUDA >= 12.8
- RTX 3090/4090/5090 recommended (newer generations will support higher FPS throughput and lower latency)
- If you do not have access to a GPU with these specs, we recommend installing on [RunPod](#runpod)
**Apple Silicon (macOS):**
- Supported on M1/M2/M3/M4 Macs with unified memory
- Automatically uses MLX (Apple's machine learning framework) with Metal backend
- No special flags are needed; Apple Silicon acceleration is auto-detected
## Install
### Manual Installation
Install [uv](https://docs.astral.sh/uv/getting-started/installation/), which is needed to run the server, and [Node.js](https://nodejs.org/en/download), which is needed to build the frontend.
#### Clone
```bash
git clone git@github.com:hanzoai/live.git
cd live
```
#### Build
This will build the frontend files which will be served by the Hanzo Live server.
```bash
uv run build
```
#### Run
> [!IMPORTANT]
> If you are running the server in a cloud environment, make sure to read the [Firewalls](#firewalls) section.
This will start the server; on the first run, it will also download the required model weights. By default, model weights are stored in `~/.hanzo-live/models`.
```bash
uv run hanzo-live
```
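For reference, resolving the default weights directory can be sketched as follows. This is an illustrative sketch only; the `HANZO_LIVE_MODELS_DIR` override shown here is hypothetical, not a documented option:

```python
import os
from pathlib import Path

def resolve_models_dir() -> Path:
    """Return the directory where model weights are stored.

    Defaults to ~/.hanzo-live/models, as documented above. The
    HANZO_LIVE_MODELS_DIR environment variable is a hypothetical
    override used here for illustration.
    """
    override = os.environ.get("HANZO_LIVE_MODELS_DIR")
    models_dir = Path(override) if override else Path.home() / ".hanzo-live" / "models"
    # Created on first run, before any weights are downloaded into it.
    models_dir.mkdir(parents=True, exist_ok=True)
    return models_dir
```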
The application will automatically detect your hardware:
- **NVIDIA GPU** (Linux/Windows) → Uses CUDA acceleration
- **Apple Silicon** (macOS) → Uses MLX/Metal acceleration
- **CPU fallback** → Use the `--cpu` flag for testing without a GPU
After the server starts up, the frontend will be available at `http://localhost:8000`.
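The detection order above can be sketched roughly like this (an illustrative sketch, not Hanzo Live's actual implementation):

```python
import platform

def detect_backend() -> str:
    """Pick an acceleration backend: CUDA, then MLX, then CPU fallback."""
    try:
        import torch  # optional in this sketch; skipped if unavailable
        if torch.cuda.is_available():
            return "cuda"  # NVIDIA GPU on Linux/Windows
    except ImportError:
        pass
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mlx"  # Apple Silicon -> MLX with Metal backend
    return "cpu"  # fallback, comparable to running with --cpu
```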
### RunPod
Use our RunPod template to quickly set up Hanzo Live in the cloud. This is the easiest way to get started if you don't have a compatible local GPU.
> [!IMPORTANT]
> Follow the instructions in [Firewalls](#firewalls) to get a HuggingFace access token.
**Deployment Steps:**
1. **Click the RunPod template link**: [Template](https://console.runpod.io/deploy?template=aca8mw9ivw&ref=5k8hxjq3)
2. **Select your GPU**: Choose a GPU that meets the [system requirements](#system-requirements).
3. **Configure environment variables**:
- Click "Edit Template"
- Add an environment variable:
- Set name to `HF_TOKEN`
- Set value to your HuggingFace access token
- Click "Set Overrides"
4. **Deploy**: Click "Deploy On-Demand"
5. **Access the app**: Wait for deployment to complete, then open the app at port 8000
The template will automatically download model weights and configure everything needed.
## Firewalls
If you run Hanzo Live in a cloud environment with restrictive firewall settings (e.g., RunPod), Hanzo Live supports using [TURN servers](https://webrtc.org/getting-started/turn-server) to establish a connection between your browser and the streaming server.
The easiest way to enable this feature is to create a HuggingFace account and a `read` [access token](https://huggingface.co/docs/hub/en/security-tokens). You can then set an environment variable before starting Hanzo Live:
```bash
# You should set this to your HuggingFace access token
export HF_TOKEN=your_token_here
```
When you start Hanzo Live, it will automatically use Cloudflare's TURN servers and you'll have 10GB of free streaming per month:
```bash
uv run hanzo-live
```
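For context, the credentials a TURN provider issues plug into the standard `iceServers` configuration from the WebRTC spec. A minimal sketch of that shape (the URL and credentials below are placeholders, not Cloudflare's actual values):

```python
def turn_ice_servers(turn_url: str, username: str, credential: str) -> list[dict]:
    """Build a standard WebRTC iceServers entry for a TURN server.

    Field names follow the RTCIceServer dictionary from the WebRTC spec.
    The credentials come from the TURN provider (e.g. Cloudflare), not
    from your HF_TOKEN directly.
    """
    return [{
        "urls": [turn_url],
        "username": username,
        "credential": credential,
    }]
```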
## Contributing
Read the [contribution guide](./docs/contributing.md).
## License
The alpha version of this project is licensed under [CC BY-NC-SA 4.0](./LICENSE).
You may use, modify, and share the code for non-commercial purposes only, provided that proper attribution is given.
We will consider re-licensing future versions under a more permissive license if/when non-commercial dependencies are refactored or replaced.
---
Copyright © 2025 Hanzo AI Inc. All rights reserved.