# Veotools
A concise Python SDK and MCP server for generating and extending videos with Google Veo.
## Features
- Video generation from text, image seed, or continuation from an existing video
- Seamless extension workflow (extract last-second frame → generate → stitch with trim)
- MCP tools with progress streaming (start/get/cancel, continue_video) and recent videos resource
- Model discovery (local registry + remote list, cached)
- Accurate metadata via ffprobe/OpenCV; outputs under project `output/` (override with `VEO_OUTPUT_DIR`)
## Install
```bash
pip install veotools
# Or install from source
pip install -e .
pip install "veotools[mcp]" # optional MCP CLI
# Set your API key
export GEMINI_API_KEY="your-api-key"
# Or create a .env file with:
# GEMINI_API_KEY=your-api-key
```
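To confirm the key is being picked up, a minimal check from Python (a sketch; `init` reads the key from the environment or a `.env` file as described above, and `preflight` is documented under "MCP tools" below):
```python
import veotools as veo

veo.init()                # picks up GEMINI_API_KEY from the environment or a .env file
report = veo.preflight()  # environment report: keys documented in the MCP tools section
print(report["gemini_api_key"], report["ffmpeg"])
```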
## SDK quick start
### Simple Video Generation
```python
import veotools as veo
# Initialize
veo.init()
# Generate video from text
result = veo.generate_from_text(
    "A serene mountain landscape at sunset",
    model="veo-3.0-fast-generate-preview"
)
print(f"Generated: {result.path}")
```
### Continue and stitch
```python
# Continue from an existing video (like one from your phone)
result = veo.generate_from_video(
    "my_dog.mp4",
    "the dog discovers a treasure chest",
    extract_at=-1.0  # Use the last frame
)
# Stitch them together seamlessly
final = veo.stitch_videos(
    ["my_dog.mp4", result.path],
    overlap=1.0  # Trim 1 second of overlap
)
```
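### Generate from an image
Generation from an image seed follows the same pattern; a short sketch using `generate_from_image` (listed under Core functions below):
```python
# Animate a still image into a clip (parameters mirror generate_from_text)
result = veo.generate_from_image(
    "sunrise.jpg",
    "clouds drifting across the sky",
    model="veo-3.0-fast-generate-preview"
)
print(f"Generated: {result.path}")
```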
## CLI
Installing the package exposes the `veo` command. Use `-h/--help` with any subcommand.
```bash
# Basics
veo preflight
veo list-models --remote
# Generate from text
veo generate --prompt "cat riding a hat" --model veo-3.0-fast-generate-preview
# Continue a video and stitch seamlessly
veo continue --video dog.mp4 --prompt "the dog finds a treasure chest" --overlap 1.0
# Help
veo --help
veo generate --help
```
### Create a Story with Bridge
```python
# Chain operations together
bridge = veo.Bridge("my_story")
final_video = (bridge
    .add_media("sunrise.jpg")
    .generate("sunrise coming to life")
    .add_media("my_video.mp4")
    .generate("continuing the adventure")
    .stitch(overlap=1.0)
    .save("my_story.mp4")
)
```
## Core functions
### Generation
- `generate_from_text(prompt, model, **kwargs)` - Generate video from text
- `generate_from_image(image_path, prompt, model, **kwargs)` - Generate video from image
- `generate_from_video(video_path, prompt, extract_at, model, **kwargs)` - Continue video
### Processing
- `extract_frame(video_path, time_offset)` - Extract single frame
- `extract_frames(video_path, times)` - Extract multiple frames
- `get_video_info(video_path)` - Get video metadata
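A short sketch combining the processing helpers; the return values are not specified here, so treat their shapes as assumptions:
```python
import veotools as veo

veo.init()

info = veo.get_video_info("my_dog.mp4")   # metadata via ffprobe/OpenCV
# Negative offsets count from the end, as with extract_at above (assumption)
frame = veo.extract_frame("my_dog.mp4", time_offset=-1.0)
frames = veo.extract_frames("my_dog.mp4", times=[0.0, 2.5, 5.0])
```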
### Stitching
- `stitch_videos(video_paths, overlap)` - Stitch videos with overlap trimming
- `stitch_with_transitions(videos, transitions)` - Stitch with transition videos
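For transition clips, a hedged sketch (how transition videos are paired with the surrounding clips is an assumption, not documented here):
```python
final = veo.stitch_with_transitions(
    ["clip_a.mp4", "clip_b.mp4", "clip_c.mp4"],  # clips in order
    ["fade_ab.mp4", "fade_bc.mp4"]               # one transition between each pair (assumed)
)
```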
### Workflow
- `Bridge()` - Create workflow chains
- `VideoResult` - Web-ready result objects
- `ProgressTracker` - Progress callback handling
## MCP tools
These functions are designed for integration with MCP servers and return deterministic JSON-friendly dicts.
### System
```python
import veotools as veo
veo.preflight()
# -> { ok: bool, gemini_api_key: bool, ffmpeg: {installed, version}, write_permissions: bool, base_path: str }
veo.version()
# -> { veotools: str | None, dependencies: {...}, ffmpeg: str | None }
```
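A typical use is gating generation on the preflight report; this sketch uses only the keys shown above:
```python
report = veo.preflight()
if not report["ok"]:
    if not report["gemini_api_key"]:
        raise RuntimeError("GEMINI_API_KEY is not set")
    if not report["ffmpeg"]["installed"]:
        raise RuntimeError("ffmpeg is required for stitching")
```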
### Non-blocking generation jobs
```python
import veotools as veo
# Start a job immediately
start = veo.generate_start({
    "prompt": "A serene mountain landscape at sunset",
    "model": "veo-3.0-fast-generate-preview"
})
job_id = start["job_id"]
# Poll status
status = veo.generate_get(job_id)
# -> { job_id, status, progress, message, kind, remote_operation_id?, result?, error_code?, error_message? }
# Request cancellation (cooperative)
veo.generate_cancel(job_id)
```
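A simple polling loop might look like the following; only the field names are documented above, so the terminal status values are assumptions:
```python
import time

while True:
    status = veo.generate_get(job_id)
    print(status["progress"], status["message"])
    if status["status"] in ("complete", "failed", "cancelled"):  # assumed terminal states
        break
    time.sleep(5)

if status.get("result"):
    print("Video ready:", status["result"])
```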
### Cursor MCP configuration
Add an entry in `~/.cursor/mcp.json` pointing to the installed `veo-mcp` (or your venv path):
```json
{
  "mcpServers": {
    "veotools": {
      "command": "/Users/you/.venv/bin/veo-mcp",
      "args": [],
      "env": {
        "GEMINI_API_KEY": "your-api-key",
        "VEO_OUTPUT_DIR": "/Users/you/projects/output"
      },
      "disabled": false
    }
  }
}
```
Alternatively, use Python directly:
```json
{
  "mcpServers": {
    "veotools": {
      "command": "/Users/you/.venv/bin/python",
      "args": ["-m", "veotools.mcp_server"],
      "env": { "GEMINI_API_KEY": "your-api-key" },
      "disabled": false
    }
  }
}
```
## Model discovery
```python
import veotools as veo

models = veo.list_models(include_remote=True)
print([m["id"] for m in models["models"] if m["id"].startswith("veo-")])
```
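The discovered ids can be fed straight back into generation (a sketch; which models are available depends on your account):
```python
veo_ids = [m["id"] for m in models["models"] if m["id"].startswith("veo-")]
result = veo.generate_from_text("sunset over ocean", model=veo_ids[0])
```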
## Progress Tracking
```python
def my_progress(message: str, percent: int):
    print(f"{message}: {percent}%")

result = veo.generate_from_text(
    "sunset over ocean",
    on_progress=my_progress
)
```
## Web-ready results
All results are JSON-serializable for API integration:
```python
result = veo.generate_from_text("sunset")
# Convert to dictionary
data = result.to_dict()
# Ready for JSON API
import json
json_response = json.dumps(data)
```
## Examples
See the `examples/` folder for complete examples:
- `examples/text_to_video.py`
- `examples/video_to_video.py`
- `examples/chained_workflow.py`
- `examples/all_functions.py`
## Layout
```
.
├── __init__.py
├── bridge.py
├── core.py
├── generate
│   ├── __init__.py
│   └── video.py
├── mcp_api.py
├── models.py
├── process
│   ├── __init__.py
│   └── extractor.py
└── stitch
    ├── __init__.py
    └── seamless.py
```
## Key Concepts
### VideoResult
Web-ready result object with metadata, progress, and JSON serialization.
### Bridge Pattern
Chain operations together for complex workflows:
```python
bridge.add_media().generate().stitch().save()
```
### Progress Callbacks
Track long-running operations:
```python
on_progress=lambda msg, pct: print(f"{msg}: {pct}%")
```
### Storage Manager
Organized file management (local now, cloud-ready for future).
## Notes
- Generation usually takes 1–3 minutes
- Veo model access may require an allowlisted account
## License
MIT
## Contributing
Pull requests welcome!
## Support
For issues and questions, please use GitHub Issues.