# MCP Veo 3 Video Generation Server
A Model Context Protocol (MCP) server that provides video generation capabilities using Google's Veo 3 API through the Gemini API. Generate high-quality videos from text prompts or images with realistic motion and audio.
## Features
- 🎬 **Text-to-Video**: Generate videos from descriptive text prompts
- 🖼️ **Image-to-Video**: Animate static images with motion prompts
- 🎵 **Audio Generation**: Native audio generation with Veo 3 models
- 🎨 **Multiple Models**: Support for Veo 3, Veo 3 Fast, and Veo 2
- 📐 **Aspect Ratios**: Widescreen (16:9) and portrait (9:16) support
- ❌ **Negative Prompts**: Specify what to avoid in generated videos
- 📁 **File Management**: List and manage generated videos
- ⚡ **Async Processing**: Non-blocking video generation with progress tracking
## Supported Models
| Model | Description | Speed | Quality | Audio |
|-------|-------------|-------|---------|-------|
| `veo-3.0-generate-preview` | Latest Veo 3 with highest quality | Slower | Highest | ✅ |
| `veo-3.0-fast-generate-preview` | Optimized for speed and business use | Faster | High | ✅ |
| `veo-2.0-generate-001` | Previous generation model | Medium | Good | ❌ |
## 📦 Installation Options
```bash
# Run without installing (recommended)
uvx mcp-veo3 --output-dir ~/Videos/Generated
# Install globally
pip install mcp-veo3
# Development install
git clone https://github.com/dayongd1/mcp-veo3 && cd mcp-veo3 && uv sync
```
## Installation
### Option 1: Direct Usage (Recommended)
```bash
# No installation needed - run directly with uvx
uvx mcp-veo3 --output-dir ~/Videos/Generated
```
### Option 2: Development Setup
1. **Clone the repository**:
```bash
git clone https://github.com/dayongd1/mcp-veo3
cd mcp-veo3
```
2. **Install with uv**:
```bash
uv sync
```
Or use the automated setup:
```bash
python setup.py
```
3. **Set up API key**:
- Get your Gemini API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
- Create `.env` file: `cp env_example.txt .env`
- Edit `.env` and add your `GEMINI_API_KEY`
- Or set environment variable: `export GEMINI_API_KEY='your_key'`
## Configuration
### Environment Variables
Create a `.env` file with the following variables:
```bash
# Required
GEMINI_API_KEY=your_gemini_api_key_here
# Optional
DEFAULT_OUTPUT_DIR=generated_videos
DEFAULT_MODEL=veo-3.0-generate-preview
DEFAULT_ASPECT_RATIO=16:9
PERSON_GENERATION=dont_allow
POLL_INTERVAL=10
MAX_POLL_TIME=600
```
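For orientation, here is a minimal sketch of how these settings might be loaded at startup, assuming `python-dotenv` is available; the variable names come from the list above, while the loader itself is illustrative and not the server's actual code:

```python
# Illustrative config loading (assumes python-dotenv); names match the variables above.
import os

from dotenv import load_dotenv

load_dotenv()  # pick up a local .env file, if present

GEMINI_API_KEY = os.getenv("GEMINI_API_KEY")  # required
OUTPUT_DIR = os.getenv("DEFAULT_OUTPUT_DIR", "generated_videos")
MODEL = os.getenv("DEFAULT_MODEL", "veo-3.0-generate-preview")
ASPECT_RATIO = os.getenv("DEFAULT_ASPECT_RATIO", "16:9")
PERSON_GENERATION = os.getenv("PERSON_GENERATION", "dont_allow")
POLL_INTERVAL = int(os.getenv("POLL_INTERVAL", "10"))   # seconds between status checks
MAX_POLL_TIME = int(os.getenv("MAX_POLL_TIME", "600"))  # overall timeout in seconds

if not GEMINI_API_KEY:
    raise RuntimeError("GEMINI_API_KEY is not set; see the setup instructions above")
```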
### MCP Client Configuration
#### Option 1: Using uvx (Recommended)
```json
{
"mcpServers": {
"veo3": {
"command": "uvx",
"args": ["mcp-veo3", "--output-dir", "~/Videos/Generated"],
"env": {
"GEMINI_API_KEY": "your_api_key_here"
}
}
}
}
```
#### Option 2: Using uv run (Development)
```json
{
"mcpServers": {
"veo3": {
"command": "uv",
"args": ["run", "--directory", "/path/to/mcp-veo3", "mcp-veo3", "--output-dir", "~/Videos/Generated"],
"env": {
"GEMINI_API_KEY": "your_api_key_here"
}
}
}
}
```
#### Option 3: Direct Python
```json
{
"mcpServers": {
"veo3": {
"command": "python",
"args": ["/path/to/mcp-veo3/mcp_veo3.py", "--output-dir", "~/Videos/Generated"],
"env": {
"GEMINI_API_KEY": "your_api_key_here"
}
}
}
}
```
**CLI Arguments:**
- `--output-dir` (required): Directory to save generated videos
- `--api-key` (optional): Gemini API key (overrides environment variable)
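For reference, a minimal sketch of parsing these two arguments with `argparse`; only the option names are taken from the list above, everything else is illustrative:

```python
# Illustrative CLI parsing consistent with the arguments documented above.
import argparse
import os


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(prog="mcp-veo3")
    parser.add_argument("--output-dir", required=True,
                        help="Directory to save generated videos")
    parser.add_argument("--api-key", default=None,
                        help="Gemini API key (overrides GEMINI_API_KEY)")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    api_key = args.api_key or os.getenv("GEMINI_API_KEY")
    output_dir = os.path.expanduser(args.output_dir)
    print(f"Using output directory: {output_dir}")
```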
## Available Tools
### 1. generate_video
Generate a video from a text prompt.
**Parameters:**
- `prompt` (required): Text description of the video
- `model` (optional): Model to use (default: veo-3.0-generate-preview)
- `negative_prompt` (optional): What to avoid in the video
- `aspect_ratio` (optional): 16:9 or 9:16 (default: 16:9)
- `output_dir` (optional): Directory to save videos (default: generated_videos)
**Example:**
```json
{
"prompt": "A close up of two people staring at a cryptic drawing on a wall, torchlight flickering. A man murmurs, 'This must be it. That's the secret code.' The woman looks at him and whispering excitedly, 'What did you find?'",
"model": "veo-3.0-generate-preview",
"aspect_ratio": "16:9"
}
```
### 2. generate_video_from_image
Generate a video from a starting image and motion prompt.
**Parameters:**
- `prompt` (required): Text description of the desired motion/action
- `image_path` (required): Path to the starting image file
- `model` (optional): Model to use (default: veo-3.0-generate-preview)
- `negative_prompt` (optional): What to avoid in the video
- `aspect_ratio` (optional): 16:9 or 9:16 (default: 16:9)
- `output_dir` (optional): Directory to save videos (default: generated_videos)
**Example:**
```json
{
"prompt": "The person in the image starts walking forward with a confident stride",
"image_path": "./images/person_standing.jpg",
"model": "veo-3.0-generate-preview"
}
```
### 3. list_generated_videos
List all generated videos in the output directory.
**Parameters:**
- `output_dir` (optional): Directory to list videos from (default: generated_videos)
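**Example** (the directory shown is illustrative):
```json
{
  "output_dir": "~/Videos/Generated"
}
```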
### 4. get_video_info
Get detailed information about a video file.
**Parameters:**
- `video_path` (required): Path to the video file
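**Example** (the file name shown is illustrative):
```json
{
  "video_path": "generated_videos/waterfall_forest.mp4"
}
```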
## Usage Examples
### Basic Text-to-Video Generation
```python
# Through MCP client
result = await mcp_client.call_tool("generate_video", {
"prompt": "A majestic waterfall in a lush forest with sunlight filtering through the trees",
"model": "veo-3.0-generate-preview"
})
```
### Image-to-Video with Negative Prompt
```python
result = await mcp_client.call_tool("generate_video_from_image", {
"prompt": "The ocean waves gently crash against the shore",
"image_path": "./beach_scene.jpg",
"negative_prompt": "people, buildings, artificial structures",
"aspect_ratio": "16:9"
})
```
### Creative Animation
```python
result = await mcp_client.call_tool("generate_video", {
"prompt": "A stylized animation of a paper airplane flying through a colorful abstract landscape",
"model": "veo-3.0-fast-generate-preview",
"aspect_ratio": "16:9"
})
```
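The snippets above assume an MCP client object that is already connected to the server. One way such a client might be created, sketched with the `Client` class that recent FastMCP releases expose; the server script path and prompt are placeholders, not part of this project's documented API:

```python
# Hypothetical client setup; the server script path is a placeholder.
import asyncio

from fastmcp import Client


async def main() -> None:
    async with Client("mcp_veo3.py") as mcp_client:  # or point at an installed server
        result = await mcp_client.call_tool(
            "generate_video",
            {"prompt": "A hot air balloon drifting over snowy mountains at dawn"},
        )
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```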
## Prompt Writing Tips
### Effective Prompts
- **Be specific**: Include details about lighting, mood, camera angles
- **Describe motion**: Specify the type of movement you want
- **Set the scene**: Include environment and atmospheric details
- **Mention style**: Cinematic, realistic, animated, etc.
### Example Prompts
**Cinematic Realism:**
```
A tracking drone view of a red convertible driving through Palm Springs in the 1970s, warm golden hour sunlight, long shadows, cinematic camera movement
```
**Creative Animation:**
```
A stylized animation of a large oak tree with leaves blowing vigorously in strong wind, peaceful countryside setting, warm lighting
```
**Dialogue Scene:**
```
Close-up of two people having an intense conversation in a dimly lit room, dramatic lighting, one person gesturing emphatically while speaking
```
### Negative Prompts
Describe what you **don't** want to see:
- ❌ Don't use "no" or "don't": `"no cars"`
- ✅ Do describe unwanted elements: `"cars, vehicles, traffic"`
## Limitations
- **Generation Time**: 11 seconds to 6 minutes depending on complexity
- **Video Length**: 8 seconds maximum
- **Resolution**: 720p output
- **Storage**: Videos are stored on Google's servers for 2 days only
- **Regional Restrictions**: Person generation defaults to "dont_allow" in EU/UK/CH/MENA
- **Watermarking**: All videos include SynthID watermarks
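Because generation is long-running and Google keeps the result for only two days, a server like this one has to poll the operation and download the file as soon as it finishes. Below is a rough sketch of that flow with the `google-genai` SDK; treat it as an assumption about the implementation rather than the server's actual code, and note that SDK method names may differ between versions:

```python
# Sketch of a poll-and-download loop for a Veo operation (google-genai SDK).
# POLL_INTERVAL / MAX_POLL_TIME mirror the variables in the Configuration section.
import os
import time

from google import genai

POLL_INTERVAL = 10   # seconds between status checks
MAX_POLL_TIME = 600  # give up after ten minutes

client = genai.Client(api_key=os.getenv("GEMINI_API_KEY"))

operation = client.models.generate_videos(
    model="veo-3.0-fast-generate-preview",
    prompt="A paper boat drifting down a rain-filled gutter, macro shot",
)

elapsed = 0
while not operation.done and elapsed < MAX_POLL_TIME:
    time.sleep(POLL_INTERVAL)
    elapsed += POLL_INTERVAL
    operation = client.operations.get(operation)  # refresh operation status

if not operation.done:
    raise TimeoutError("Video generation did not finish within MAX_POLL_TIME")

video = operation.response.generated_videos[0]
client.files.download(file=video.video)           # fetch from Google's servers
video.video.save("generated_videos/example.mp4")  # persist before the 2-day expiry
```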
## 🚨 Troubleshooting
**"API key not found"**
```bash
# Set your Gemini API key
export GEMINI_API_KEY='your_api_key_here'
# Or add to .env file
echo "GEMINI_API_KEY=your_api_key_here" >> .env
```
**"Output directory not accessible"**
```bash
# Ensure the output directory exists and is writable
mkdir -p ~/Videos/Generated
chmod 755 ~/Videos/Generated
```
**"Video generation timeout"**
```bash
# Try using the fast model for testing
uvx mcp-veo3 --output-dir ~/Videos
# Then use: model="veo-3.0-fast-generate-preview"
```
**"Import errors"**
```bash
# Install/update dependencies
uv sync
# Or with pip
pip install -r requirements.txt
```
## Error Handling
The server handles common errors gracefully:
- **Invalid API Key**: Clear error message with setup instructions
- **File Not Found**: Validation for image paths in image-to-video
- **Generation Timeout**: Configurable timeout with progress updates
- **Model Errors**: Fallback error handling with detailed messages
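A hedged sketch of what that validation could look like before a generation call; the helper and its messages are hypothetical, not taken from the server's source:

```python
# Hypothetical pre-flight validation mirroring the error categories above.
from pathlib import Path


def validate_request(api_key: str | None, image_path: str | None = None) -> None:
    """Fail fast with a descriptive message before any generation work starts."""
    if not api_key:
        raise ValueError(
            "Invalid API key: set GEMINI_API_KEY or pass --api-key "
            "(see the Configuration section)."
        )
    if image_path is not None and not Path(image_path).is_file():
        raise FileNotFoundError(f"Starting image not found: {image_path}")
```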
## Development
### Running Tests
```bash
# Install test dependencies
pip install pytest pytest-asyncio
# Run tests
pytest tests/
```
### Code Formatting
```bash
# Format code
black mcp_veo3.py
# Check linting
flake8 mcp_veo3.py
# Type checking
mypy mcp_veo3.py
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## 📚 Links
- **PyPI**: https://pypi.org/project/mcp-veo3/
- **GitHub**: https://github.com/dayongd1/mcp-veo3
- **MCP Docs**: https://modelcontextprotocol.io/
- **Veo 3 API**: https://ai.google.dev/gemini-api/docs/video
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Support
- **Documentation**: [Google Veo 3 API Docs](https://ai.google.dev/gemini-api/docs/video)
- **API Key**: [Get your Gemini API key](https://makersuite.google.com/app/apikey)
- **Issues**: Report bugs and feature requests in the GitHub issues
## Changelog
### v1.0.1
- **🔧 API Fix**: Updated to match official Veo 3 API specification
- **Removed unsupported parameters**: aspect_ratio, negative_prompt, person_generation
- **Simplified API calls**: Now using only model and prompt parameters as per official docs
- **Fixed video generation errors**: Resolved "unexpected keyword argument" issues
- **Updated documentation**: Added notes about current API limitations
### v1.0.0
- Initial release
- Support for Veo 3, Veo 3 Fast, and Veo 2 models
- Text-to-video and image-to-video generation
- FastMCP framework with progress tracking
- Comprehensive error handling and logging
- File management utilities
- uv/uvx support for easy installation
---
**Built with FastMCP** | **Python 3.10+** | **MIT License**