# Video Generation API v1.0
[Chinese Version (中文版)](https://github.com/preangelleo/video-generation-docker/blob/main/README_CN.md) | [PyPI](https://pypi.org/project/video-generation-api/)
🎬 A powerful Docker-based API for intelligent video generation with professional effects and subtitles.
## 📦 Installation
### Option 1: Install from PyPI (Recommended for Python users)
```bash
pip install video-generation-api
```
### Option 2: Docker (Recommended for production)
```bash
# Pull the Docker image
docker pull betashow/video-generation-api:latest
# Run the container
docker run -d \
  --name video-api \
  -p 5000:5000 \
  betashow/video-generation-api:latest
```
## 🚀 Quick Start
### Python Client (if installed via pip)
```python
from video_generation_api import VideoGenerationClient
# Initialize client
client = VideoGenerationClient("http://localhost:5000")
# Create video
result = client.create_video(
image_path="image.jpg",
audio_path="audio.mp3",
subtitle_path="subtitles.srt",
effects=["zoom_in"],
output_path="output.mp4"
)
```
### API Server
If using Docker, the API will be available at `http://localhost:5000`.
If installed via pip, start the server with:
```bash
video-generation-api
```
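Either way, you can confirm the server is reachable before sending work to it. A minimal check in Python, assuming the default port 5000 and no authentication key set (the `/health` endpoint is documented under "Other Endpoints" below):

```python
import requests

# Query the health endpoint to confirm the API is up
response = requests.get("http://localhost:5000/health")
print(response.status_code)  # expect 200
print(response.json())       # API status, FFmpeg version, available endpoints
```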
## 🚀 Want to Deploy This on AWS?
Check out our recommended deployment solution: **[CloudBurst Fargate](https://github.com/preangelleo/cloudburst-fargate)**
CloudBurst Fargate is the next generation of our CloudBurst project, offering serverless deployment on AWS:
- 🚀 **Serverless Architecture** - No servers to manage
- 💰 **Pay Per Second** - Only pay for actual processing time
- ⚡ **Auto-scaling** - Handle any workload automatically
- 🔧 **Zero Maintenance** - AWS manages all infrastructure
- 📊 **Better Cost Efficiency** - More efficient than EC2 instances
For legacy EC2 instance deployment, see the original [CloudBurst](https://github.com/preangelleo/cloudburst) project.
Perfect for production use cases where you need to generate videos on-demand without managing servers.
## 📖 API Documentation
### Core Endpoint: `/create_video_onestep`
A single intelligent endpoint that automatically handles all video creation scenarios based on your input parameters.
#### Request Format
**URL**: `POST http://your-server:5000/create_video_onestep`
**Headers**:
```json
{
"Content-Type": "application/json",
"X-Authentication-Key": "your-key-if-required"
}
```
**Body Parameters**:
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `input_image` | string | Yes | Base64 encoded image (JPG/PNG) |
| `input_audio` | string | Yes | Base64 encoded audio (MP3/WAV) |
| `subtitle` | string | No | Base64 encoded SRT subtitle file |
| `effects` | array | No | Effects to apply. Available: `"zoom_in"`, `"zoom_out"`, `"pan_left"`, `"pan_right"`, `"random"` |
| `language` | string | No | Subtitle language: `"chinese"` or `"english"` (default: chinese) |
| `background_box` | boolean | No | Show subtitle background (default: true) |
| `background_opacity` | float | No | Subtitle background transparency 0-1 (default: 0.2) **[See important note below](#subtitle-background-transparency)** |
| `font_size` | integer | No | Subtitle font size in pixels (default: auto-calculated based on video size) |
| `outline_color` | string | No | Subtitle outline color in ASS format (default: "&H00000000" - black) |
| `is_portrait` | boolean | No | Force portrait orientation (default: auto-detect) |
| `watermark` | string | No | Base64 encoded watermark image |
| `output_filename` | string | No | Preferred output filename |
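For reference, a payload combining the common parameters might look like the sketch below. The values are illustrative placeholders and the `b64` helper is just local shorthand; only `input_image` and `input_audio` are required.

```python
import base64

def b64(path):
    """Read a local file and return its Base64 string."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Illustrative payload; only input_image and input_audio are required
payload = {
    "input_image": b64("image.jpg"),      # required, JPG/PNG
    "input_audio": b64("audio.mp3"),      # required, MP3/WAV
    "subtitle": b64("subtitles.srt"),     # optional SRT file
    "effects": ["zoom_in"],               # optional effects list
    "language": "english",                # "chinese" (default) or "english"
    "background_box": True,
    "background_opacity": 0.2,
    "font_size": 48,
    "outline_color": "&H00000000",        # black outline (ASS format)
    "output_filename": "my_video.mp4",
}
```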
#### Processing Scenarios
The API automatically detects and optimizes for 4 scenarios:
| Scenario | Effects | Subtitles | Description |
|----------|---------|-----------|-------------|
| **Baseline** | ❌ | ❌ | Simple image + audio merge (fastest) |
| **Subtitles Only** | ❌ | ✅ | Basic video with professional subtitles |
| **Effects Only** | ✅ | ❌ | Cinematic zoom/pan effects |
| **Full Featured** | ✅ | ✅ | Effects + professional subtitles |
#### Response Format
```json
{
"success": true,
"file_id": "f47ac10b-58cc-4372-a567-0e02b2c3d479",
"download_endpoint": "/download/f47ac10b-58cc-4372-a567-0e02b2c3d479",
"filename": "output.mp4",
"size": 15728640,
"scenario": "full_featured"
}
```
#### Complete Examples
**1. Baseline (Simplest)**
```python
import requests
import base64
def encode_file(filepath):
    with open(filepath, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')

# Prepare inputs
image_b64 = encode_file('image.jpg')
audio_b64 = encode_file('audio.mp3')

# Make request
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64
    }
)

result = response.json()
if result['success']:
    # Download the video
    download_url = f"http://localhost:5000{result['download_endpoint']}"
    video = requests.get(download_url)
    with open('output.mp4', 'wb') as f:
        f.write(video.content)
```
**2. With Chinese Subtitles**
```python
subtitle_b64 = encode_file('subtitles.srt')

response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'subtitle': subtitle_b64,
        'language': 'chinese',
        'background_box': True,
        'background_opacity': 0.2
    }
)
```
**3. With Effects**
```python
# Zoom effects (randomly picks one)
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'effects': ['zoom_in', 'zoom_out']  # Randomly chooses zoom_in OR zoom_out
    }
)

# Pan effects
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'effects': ['pan_left']  # Pan from right to center
    }
)

# Let the system choose randomly from all effects
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'effects': ['random']  # System picks any available effect
    }
)
```
**4. Full Featured (Effects + Subtitles)**
```python
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'subtitle': subtitle_b64,
        'effects': ['zoom_in', 'zoom_out'],
        'language': 'chinese'
    }
)
```
**5. Advanced Subtitle Customization**
```python
response = requests.post('http://localhost:5000/create_video_onestep',
    json={
        'input_image': image_b64,
        'input_audio': audio_b64,
        'subtitle': subtitle_b64,
        'language': 'chinese',
        'font_size': 48,                # Custom font size
        'outline_color': '&H00FF0000',  # Blue outline (ASS &HAABBGGRR format)
        'background_box': True,         # Show background
        'background_opacity': 0.3       # 30% transparent (dark background)
    }
)
```
### Other Endpoints
#### Health Check
```bash
GET /health
```
Returns API status, FFmpeg version, and available endpoints.
#### Download Video
```bash
GET /download/{file_id}
```
Download the generated video file. Files expire after 1 hour.
#### Cleanup Expired Files
```bash
GET /cleanup
```
Manually trigger cleanup of expired files.
## 🔧 Authentication
The API supports two modes:
### Default Mode (No Authentication)
By default, the API is open and requires no authentication.
### Secure Mode
Set the `AUTHENTICATION_KEY` environment variable to enable authentication:
```bash
docker run -d \
  -e AUTHENTICATION_KEY=your-secure-uuid-here \
  -p 5000:5000 \
  betashow/video-generation-api:latest
```
Then include the key in your requests:
```python
headers = {
    'Content-Type': 'application/json',
    'X-Authentication-Key': 'your-secure-uuid-here'
}
```
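A complete authenticated request then just passes these headers along. A minimal sketch, reusing the `encode_file` helper from the examples above:

```python
import base64
import requests

def encode_file(filepath):
    with open(filepath, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

headers = {
    "Content-Type": "application/json",
    "X-Authentication-Key": "your-secure-uuid-here",  # must match AUTHENTICATION_KEY
}

response = requests.post(
    "http://localhost:5000/create_video_onestep",
    json={
        "input_image": encode_file("image.jpg"),
        "input_audio": encode_file("audio.mp3"),
    },
    headers=headers,
)
response.raise_for_status()
print(response.json())
```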
## 🎯 Features
- **Intelligent Processing**: Automatically optimizes based on input parameters
- **Professional Subtitles**: High-quality subtitle rendering (not FFmpeg filters)
- **Auto-Orientation**: Detects portrait/landscape videos automatically
- **Cinematic Effects**: Hollywood-style zoom and pan effects
- **Multi-Language**: Supports Chinese and English with proper fonts
- **GPU Acceleration**: Automatic GPU detection and usage when available
## 🎨 Advanced Subtitle Styling
### Subtitle Background Transparency
⚠️ **IMPORTANT**: The `background_opacity` parameter controls **transparency**, not opacity!
| Value | Visual Result | Description |
|-------|--------------|-------------|
| **0.0** | Solid black | Completely opaque background |
| **0.2** | Dark background | **Default** - Good readability |
| **0.5** | Semi-transparent | 50% see-through |
| **0.7** | Very transparent | Old default - quite see-through |
| **1.0** | No background | Completely transparent |
**Examples**:
- For **darker, more readable** subtitles: Use **lower** values (0.0 - 0.3)
- For **more transparent** subtitles: Use **higher** values (0.5 - 1.0)
- Recommended: **0.2** (the new default) provides excellent readability
```python
# Dark, readable background (recommended)
'background_opacity': 0.2
# Solid black background
'background_opacity': 0.0
# Very transparent (hard to read)
'background_opacity': 0.8
```
### Color Format (ASS/SSA Style)
The `outline_color` parameter uses ASS subtitle format: `&HAABBGGRR` where:
- AA = Alpha (transparency): 00 = opaque, FF = transparent
- BB = Blue component (00-FF)
- GG = Green component (00-FF)
- RR = Red component (00-FF)
**Common Colors**:
- `&H00000000` - Black (default)
- `&H00FFFFFF` - White
- `&H000000FF` - Red
- `&H0000FF00` - Green
- `&H00FF0000` - Blue
- `&H0000FFFF` - Yellow
- `&H00FF00FF` - Magenta
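If you prefer to think in RGB, a small helper can build the `&HAABBGGRR` string for you. This is a sketch; the `ass_color` name is ours, not part of the API:

```python
def ass_color(r: int, g: int, b: int, alpha: int = 0) -> str:
    """Build an ASS/SSA color string (&HAABBGGRR); alpha 0x00 = opaque, 0xFF = transparent."""
    return "&H{:02X}{:02X}{:02X}{:02X}".format(alpha, b, g, r)

# Spot checks against the common colors listed above
assert ass_color(255, 255, 255) == "&H00FFFFFF"  # white
assert ass_color(255, 0, 0) == "&H000000FF"      # red
assert ass_color(0, 0, 255) == "&H00FF0000"      # blue
```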
### Font Size Guidelines
If not specified, font size is auto-calculated based on video resolution:
- **1080p Landscape**: ~45px for Chinese, ~60px for English
- **1080p Portrait**: ~21px for Chinese, ~30px for English
- **4K Videos**: Proportionally larger
## 📋 Requirements
- Docker
- 2GB+ RAM (4GB recommended)
- 10GB+ free disk space
- GPU (optional, for faster processing)
## 🎬 Output Examples
See what this API can generate:
**English Example**:
[Watch on YouTube](https://www.youtube.com/watch?v=JiWsyuyw1ao)
**Chinese Example**:
[Watch on YouTube](https://www.youtube.com/watch?v=WYFyUAk9F6k)
**Features Demonstrated**:
- ✅ Professional subtitles with semi-transparent background
- ✅ Smooth zoom effects (Ken Burns effect)
- ✅ Perfect audio-visual synchronization
- ✅ High-quality 1080p video output
- ✅ Support for both English and Chinese
Both examples were generated using the "Full Featured" mode with subtitles and effects enabled.
## 🐳 Docker Image Details
The image includes:
- Ubuntu 22.04 base
- FFmpeg with GPU support
- Python 3.10
- Chinese fonts (LXGW WenKai Bold)
- All required video processing libraries
## 📝 Notes
- All file inputs must be Base64 encoded
- Generated videos expire after 1 hour
- The API returns relative download paths, not full URLs
- This is designed for on-demand, disposable container usage
## 🚨 Important
This Docker image is designed for temporary, on-demand usage. The container can be destroyed and recreated as needed - all paths are relative and no persistent storage is required.
---
**Ready to generate amazing videos? Start the container and make your first request!**