# Veotools
Concise Python SDK and MCP server for generating and extending videos with Google Veo.
## Features
- Video generation from text, image seed, or continuation from an existing video
- Seamless extension workflow (extract last-second frame → generate → stitch with trim)
- MCP tools with progress streaming (start/get/cancel, continue_video) and recent videos resource
- Model discovery (local registry + remote list, cached)
- Accurate metadata via ffprobe/OpenCV; outputs under project `output/` (override with `VEO_OUTPUT_DIR`)
- Safety settings pass-through for generation (best-effort)
- Context caching helpers and `cached_content` support
## Install
```bash
pip install veotools
# Or install from source
pip install -e .
# With MCP server support
pip install "veotools[mcp]"
# For development (includes testing tools)
pip install -e ".[dev,mcp]"
# Set your API key
export GEMINI_API_KEY="your-api-key"
# Or create a .env file with:
# GEMINI_API_KEY=your-api-key
```
## SDK quick start
### Simple Video Generation
```python
import veotools as veo
# Initialize
veo.init()
# Generate video from text
result = veo.generate_from_text(
    "A serene mountain landscape at sunset",
    model="veo-3.0-fast-generate-preview"
)
print(f"Generated: {result.path}")
```
### Continue and stitch
```python
# Continue from an existing video (like one from your phone)
result = veo.generate_from_video(
    "my_dog.mp4",
    "the dog discovers a treasure chest",
    extract_at=-1.0  # Use last frame
)
# Stitch them together seamlessly
final = veo.stitch_videos(
    ["my_dog.mp4", result.path],
    overlap=1.0  # Trim 1 second overlap
)
```
## CLI
Installing the package exposes the `veo` command. Use `-h/--help` on any subcommand.
```bash
# Basics
veo preflight
veo list-models --remote
# Generate from text (optional safety + cached content)
veo generate --prompt "cat riding a hat" --model veo-3.0-fast-generate-preview \
  --safety-json "[{\"category\":\"HARM_CATEGORY_HARASSMENT\",\"threshold\":\"BLOCK_ONLY_HIGH\"}]" \
  --cached-content "caches/your-cache-name"
# Continue a video and stitch seamlessly
veo continue --video dog.mp4 --prompt "the dog finds a treasure chest" --overlap 1.0 \
  --safety-json "[{\"category\":\"HARM_CATEGORY_HARASSMENT\",\"threshold\":\"BLOCK_ONLY_HIGH\"}]"
# Help
veo --help
veo generate --help
```
### Create a Story with Bridge
```python
# Chain operations together
bridge = veo.Bridge("my_story")
final_video = (bridge
    .add_media("sunrise.jpg")
    .generate("sunrise coming to life")
    .add_media("my_video.mp4")
    .generate("continuing the adventure")
    .stitch(overlap=1.0)
    .save("my_story.mp4")
)
```
## Core functions
### Generation
- `generate_from_text(prompt, model, **kwargs)` - Generate video from text
- `generate_from_image(image_path, prompt, model, **kwargs)` - Generate video from image
- `generate_from_video(video_path, prompt, extract_at, model, **kwargs)` - Continue video
Optional config supported (best-effort pass-through):
- `aspect_ratio` (model-dependent)
- `negative_prompt`
- `person_generation` (validated per Veo model and mode)
- `safety_settings` (list of {category, threshold} or `types.SafetySetting`)
- `cached_content` (cache name string)
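The optional config above is passed through as keyword arguments. A minimal sketch of assembling it (the specific values here are illustrative assumptions, not defaults; `generate_with_options` is a hypothetical wrapper, not part of the SDK):

```python
import json

# Keys mirror the optional-config list above; values are illustrative.
options = {
    "aspect_ratio": "16:9",
    "negative_prompt": "blurry, low quality",
    "person_generation": "allow_adult",
    "safety_settings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}
    ],
}

def generate_with_options(veo_module, prompt: str):
    """Forward the optional config to any of the generate_* functions."""
    return veo_module.generate_from_text(
        prompt,
        model="veo-3.0-fast-generate-preview",
        **options,
    )

# The same safety list is what the CLI's --safety-json flag expects, serialized:
safety_json = json.dumps(options["safety_settings"])
```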
### Processing
- `extract_frame(video_path, time_offset)` - Extract single frame
- `extract_frames(video_path, times)` - Extract multiple frames
- `get_video_info(video_path)` - Get video metadata
### Stitching
- `stitch_videos(video_paths, overlap)` - Stitch videos with overlap trimming
- `stitch_with_transitions(videos, transitions)` - Stitch with transition videos
### Workflow
- `Bridge()` - Create workflow chains
- `VideoResult` - Web-ready result objects
- `ProgressTracker` - Progress callback handling
## MCP tools
These functions are designed for integration with MCP servers and return deterministic JSON-friendly dicts.
### System
```python
import veotools as veo
veo.preflight()
# -> { ok: bool, gemini_api_key: bool, ffmpeg: {installed, version}, write_permissions: bool, base_path: str }
veo.version()
# -> { veotools: str | None, dependencies: {...}, ffmpeg: str | None }
```
### Non-blocking generation jobs
```python
import veotools as veo
# Start a job immediately
start = veo.generate_start({
    "prompt": "A serene mountain landscape at sunset",
    "model": "veo-3.0-fast-generate-preview"
})
job_id = start["job_id"]
# Poll status
status = veo.generate_get(job_id)
# -> { job_id, status, progress, message, kind, remote_operation_id?, result?, error_code?, error_message? }
# Request cancellation (cooperative)
veo.generate_cancel(job_id)
```
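Since `generate_get` is a plain status call, callers typically wrap it in a polling loop. A sketch of such a loop, written against any status callable so it is not tied to the SDK (the terminal status names here are assumptions; check the actual `status` values your jobs report):

```python
import time

# Assumed terminal states; adjust to the status strings generate_get returns.
TERMINAL_STATES = {"complete", "failed", "cancelled"}

def poll_until_done(get_status, job_id, interval=5.0, timeout=600.0):
    """Call get_status(job_id) until it reports a terminal status.

    get_status is any callable with the shape of veo.generate_get.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status(job_id)
        if status.get("status") in TERMINAL_STATES:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"job {job_id} still running after {timeout}s")
        time.sleep(interval)
```

Usage would be `poll_until_done(veo.generate_get, job_id)`.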
### Caching helpers
Programmatic usage via MCP-friendly APIs:
```python
import veotools as veo
# Create a cache from files
cache = veo.cache_create_from_files(
    model="gemini-1.5-flash-001",
    files=["media/a11.txt"],
    system_instruction="You are an expert analyzing transcripts."
)
# Use cached content in generation
start = veo.generate_start({
    "prompt": "Summarize the transcript",
    "model": "veo-3.0-fast-generate-preview",
    "options": {"cached_content": cache.get("name")}
})
```
Manage cached content:
```python
import veotools as veo
# List caches (metadata only)
listing = veo.cache_list()
for c in listing.get("caches", []):
    print(c.get("name"), c.get("display_name"), c.get("expire_time"))
# Get single cache metadata
meta = veo.cache_get(name="caches/abc123")
# Update TTL or expiry time
veo.cache_update(name="caches/abc123", ttl_seconds=600) # set TTL to 10 minutes
# or
veo.cache_update(name="caches/abc123", expire_time_iso="2025-01-27T16:02:36.473528+00:00")
# Delete cache
veo.cache_delete(name="caches/abc123")
```
### Cursor MCP configuration
Add an entry in `~/.cursor/mcp.json` pointing to the installed `veo-mcp` (or your venv path):
```json
{
  "mcpServers": {
    "veotools": {
      "command": "/Users/you/.venv/bin/veo-mcp",
      "args": [],
      "env": {
        "GEMINI_API_KEY": "your-api-key",
        "VEO_OUTPUT_DIR": "/Users/you/projects/output"
      },
      "disabled": false
    }
  }
}
```
Alternatively, use Python directly:
```json
{
  "mcpServers": {
    "veotools": {
      "command": "/Users/you/.venv/bin/python",
      "args": ["-m", "veotools.mcp_server"],
      "env": { "GEMINI_API_KEY": "your-api-key" },
      "disabled": false
    }
  }
}
```
## Model discovery
```python
import veotools

models = veotools.list_models(include_remote=True)
print([m["id"] for m in models["models"] if m["id"].startswith("veo-")])
```
## Progress Tracking
```python
def my_progress(message: str, percent: int):
    print(f"{message}: {percent}%")
result = veo.generate_from_text(
"sunset over ocean",
on_progress=my_progress
)
```
## Web-ready results
All results are JSON-serializable for API integration:
```python
result = veo.generate_from_text("sunset")
# Convert to dictionary
data = result.to_dict()
# Ready for JSON API
import json
json_response = json.dumps(data)
```
## Examples
See the `examples/` folder for complete examples:
- `examples/text_to_video.py`
- `examples/video_to_video.py`
- `examples/chained_workflow.py`
- `examples/all_functions.py`
## Project Structure
```
src/veotools/
├── __init__.py          # Package initialization and exports
├── core.py              # Core client and storage management
├── models.py            # Data models and result objects
├── cli.py               # Command-line interface
├── api/
│   ├── bridge.py        # Workflow orchestration API
│   └── mcp_api.py       # MCP-friendly wrapper functions
├── generate/
│   └── video.py         # Video generation functions
├── process/
│   └── extractor.py     # Frame extraction and metadata
├── stitch/
│   └── seamless.py      # Video stitching capabilities
└── server/
    └── mcp_server.py    # MCP server implementation

tests/                   # Test suite (mirrors src structure)
├── conftest.py          # Shared fixtures and configuration
├── test_core.py
├── test_models.py
├── test_api/
├── test_generate/
├── test_process/
└── test_stitch/
```
## Key Concepts
### VideoResult
Web-ready result object with metadata, progress, and JSON serialization.
### Bridge Pattern
Chain operations together for complex workflows:
```python
bridge.add_media().generate().stitch().save()
```
### Progress Callbacks
Track long-running operations:
```python
on_progress=lambda msg, pct: print(f"{msg}: {pct}%")
```
### Storage Manager
Organized file management (local now, cloud-ready for future).
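The output-location rule from the feature list (project `output/` by default, overridden by `VEO_OUTPUT_DIR`) can be sketched as a small resolver; `resolve_output_dir` is illustrative, not the SDK's internal function:

```python
import os
from pathlib import Path

def resolve_output_dir(default: str = "output") -> Path:
    """VEO_OUTPUT_DIR, when set, overrides the project-local output/ folder."""
    override = os.environ.get("VEO_OUTPUT_DIR")
    return Path(override) if override else Path(default)
```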
## Notes
- Generation usually takes 1–3 minutes
- Veo access may require allowlist
- Person generation constraints per Veo docs:
- Veo 3: text→video allows `allow_all`; image/video-seeded allows `allow_adult`
- Veo 2: text→video allows `allow_all`, `allow_adult`, `dont_allow`; image/video-seeded allows `allow_adult`, `dont_allow`
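The constraint table above can be expressed as a lookup, which is roughly what the SDK's per-model validation enforces (the key names `"veo-3"`/`"veo-2"` and mode labels are illustrative shorthand, not SDK identifiers):

```python
# (model family, seeding mode) -> allowed person_generation values,
# transcribed from the notes above.
ALLOWED_PERSON_GENERATION = {
    ("veo-3", "text"): {"allow_all"},
    ("veo-3", "image"): {"allow_adult"},
    ("veo-3", "video"): {"allow_adult"},
    ("veo-2", "text"): {"allow_all", "allow_adult", "dont_allow"},
    ("veo-2", "image"): {"allow_adult", "dont_allow"},
    ("veo-2", "video"): {"allow_adult", "dont_allow"},
}

def is_valid_person_generation(model_family: str, mode: str, value: str) -> bool:
    """Check a person_generation value against the constraint table."""
    return value in ALLOWED_PERSON_GENERATION.get((model_family, mode), set())
```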
## License
MIT
## Development
### Setting Up Development Environment
```bash
# Clone the repository
git clone https://github.com/frontboat/veotools.git
cd veotools
# Install in development mode with all dependencies
pip install -e ".[dev,mcp]"
# Set up pre-commit hooks (optional)
pre-commit install
```
### Running Tests
```bash
# Run all tests
pytest
# Run only unit tests (fast, no external dependencies)
pytest -m unit
# Run integration tests
pytest -m integration
# Run with coverage report
pytest --cov=veotools --cov-report=html
# Run tests in parallel
pytest -n auto
# Using Make commands
make test # Run all tests
make test-unit # Run only unit tests
make test-coverage # Run with coverage report
```
### Testing Guidelines
- Tests are organized to mirror the source code structure
- All tests use pytest and follow the AAA pattern (Arrange-Act-Assert)
- External dependencies (API calls, ffmpeg) are mocked in unit tests
- Fixtures are defined in `tests/conftest.py`
- Mark tests appropriately: `@pytest.mark.unit`, `@pytest.mark.integration`, `@pytest.mark.slow`
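A minimal sketch of these guidelines in one test, with the AAA structure and a unit marker; the helper and test names are illustrative, not taken from the actual suite:

```python
import pytest

def parse_ffprobe_duration(output: str) -> float:
    """Illustrative helper under test: pull a duration out of ffprobe-style output."""
    return float(output.strip())

@pytest.mark.unit
def test_parse_ffprobe_duration():
    # Arrange: canned ffprobe output instead of shelling out (external deps are mocked)
    fake_output = "8.008000\n"
    # Act
    duration = parse_ffprobe_duration(fake_output)
    # Assert
    assert duration == pytest.approx(8.008)
```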
### Building and Publishing
```bash
# Build the package
python -m build
# Check the package
twine check dist/*
# Upload to PyPI (requires credentials)
twine upload dist/*
```
## Contributing
Pull requests welcome! Please ensure:
- All tests pass (`make test`)
- Code follows existing style conventions
- New features include appropriate tests
- Documentation is updated as needed
## Support
For issues and questions, please use GitHub Issues.