# Replicate Batch Process
**[中文版 README](https://github.com/preangelleo/replicate_batch_process/blob/main/README_CN.md)** | **English** | **[PyPI Package](https://pypi.org/project/replicate-batch-process/)**
[![PyPI version](https://badge.fury.io/py/replicate-batch-process.svg)](https://badge.fury.io/py/replicate-batch-process)
[![Python](https://img.shields.io/pypi/pyversions/replicate-batch-process.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
Intelligent batch processing tool for Replicate models with **automatic fallback mechanisms** and concurrent processing.
## ✨ Key Features

- 🔄 **Intelligent Fallback System** - Automatic model switching on incompatibility
- ⚡ **Smart Concurrency Control** - Adaptive rate limiting and batch processing
- 🎯 **Three Usage Modes** - Single, batch same-model, and mixed-model processing
- 📝 **Custom File Naming** - Ordered output with correspondence control
- 🛡️ **Error Resilience** - Comprehensive retry and recovery mechanisms
- ✅ **Model Validation** - Automatic detection of unsupported models with clear error messages
## 📋 Requirements
### System Requirements
- **Python**: 3.8, 3.9, 3.10, 3.11, or 3.12
- **Operating System**: Windows, macOS, Linux
- **Memory**: Minimum 4GB RAM recommended
### Dependencies
```txt
replicate>=0.15.0 # Replicate API client
requests>=2.25.0 # HTTP library
asyncio-throttle>=1.0.2 # Rate limiting for async operations
python-dotenv>=0.19.0 # Environment variable management
```
### API Requirements
- **Replicate API Token**: Required ([Get one here](https://replicate.com/account/api-tokens))
- **Network**: Stable internet connection for API calls
- **Rate Limits**: 600 requests/minute (shared across all models)
## 📦 Installation
```bash
pip install replicate-batch-process
```
## 🚀 Quick Start
### 1. Set up API Token
```bash
# Option 1: Interactive setup
replicate-init
# Option 2: Manual setup
export REPLICATE_API_TOKEN="your-token-here"
# Option 3: .env file
echo "REPLICATE_API_TOKEN=your-token-here" > .env
```
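If you go with the `.env` option, the token can be loaded at runtime with `python-dotenv` (already listed under Dependencies); a minimal sketch:

```python
import os

try:
    from dotenv import load_dotenv  # python-dotenv, listed under Dependencies
    load_dotenv()                   # reads REPLICATE_API_TOKEN from a local .env
except ImportError:
    pass  # fall back to a token exported in the shell environment

token = os.environ.get("REPLICATE_API_TOKEN", "")
print("token configured:", bool(token))
```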
### 2. Single Image Generation
```python
from replicate_batch_process import replicate_model_calling
file_paths = replicate_model_calling(
    prompt="A beautiful sunset over mountains",
    model_name="qwen/qwen-image",  # Use a supported model
    output_filepath="output/sunset.jpg"
)
```
### 3. Batch Processing (Async Required)
#### Basic Batch Processing
```python
import asyncio
from replicate_batch_process import intelligent_batch_process
async def main():
    # Process multiple prompts with the same model
    files = await intelligent_batch_process(
        prompts=["sunset", "city", "forest"],
        model_name="qwen/qwen-image",
        max_concurrent=8,
        output_filepath=["output/sunset.png", "output/city.png", "output/forest.png"]
    )

    print(f"Generated {len(files)} images:")
    for file in files:
        print(f"  - {file}")

    return files

# Run the async function
if __name__ == "__main__":
    results = asyncio.run(main())
```
#### Advanced Mixed-Model Batch Processing
```python
import asyncio
from replicate_batch_process import IntelligentBatchProcessor, BatchRequest
async def advanced_batch():
    processor = IntelligentBatchProcessor()

    # Create requests with different models and parameters
    requests = [
        BatchRequest(
            prompt="A futuristic city",
            model_name="qwen/qwen-image",
            output_filepath="output/city.png",
            kwargs={"aspect_ratio": "16:9"}
        ),
        BatchRequest(
            prompt="A magical forest",
            model_name="google/imagen-4-ultra",
            output_filepath="output/forest.png",
            kwargs={"output_quality": 90}
        ),
        BatchRequest(
            prompt="Character with red hair",
            model_name="black-forest-labs/flux-kontext-max",
            output_filepath="output/character.png",
            kwargs={"input_image": "reference.jpg"}
        )
    ]

    # Process all requests concurrently
    results = await processor.process_intelligent_batch(requests, max_concurrent=5)

    # Handle results
    successful = [r for r in results if r.success]
    failed = [r for r in results if not r.success]

    print(f"✅ Success: {len(successful)}/{len(results)}")
    for result in failed:
        print(f"❌ Failed: {result.error}")

    return results

# Run with proper async context
if __name__ == "__main__":
    asyncio.run(advanced_batch())
```
## 📋 Supported Models

### Image Generation Models

| Model | Price | Specialization | Reference Image Support |
|-------|-------|----------------|-------------------------|
| **black-forest-labs/flux-dev** | $0.025 | Fast generation, minimal censorship | ❌ |
| **black-forest-labs/flux-kontext-max** | $0.08 | Image editing, character consistency | ✅ |
| **qwen/qwen-image** | $0.025 | Text rendering, cover images | ❌ |
| **google/imagen-4-ultra** | $0.06 | High-quality detailed images | ❌ |

### Video Generation Models

| Model | Price | Specialization | Reference Image Support |
|-------|-------|----------------|-------------------------|
| **google/veo-3-fast** | $3.32/call | Fast video with audio | ✅ |
| **kwaivgi/kling-v2.1-master** | $0.28/sec | 1080p video, 5-10 second duration | ✅ |

> ⚠️ **Note**: Using an unsupported model returns a clear error message: "Model '{model_name}' is not supported. Please use one of the supported models listed above."
## 🔄 Intelligent Fallback System
**Automatic model switching when issues arise:**
### Reference Image Auto-Detection
```python
# User provides reference image to non-supporting model
replicate_model_calling(
    prompt="Generate based on this image",
    model_name="black-forest-labs/flux-dev",  # Doesn't support reference images
    input_image="path/to/image.jpg"           # → Auto-switches to flux-kontext-max
)
```
### Parameter Compatibility Handling
```python
# Unsupported parameters automatically cleaned and model switched
replicate_model_calling(
    prompt="Generate image",
    model_name="black-forest-labs/flux-kontext-max",
    guidance=3.5,  # Unsupported parameter
    num_outputs=2  # → Auto-switches to compatible model
)
```
### API Error Recovery
Automatic fallback chain: `Flux Dev` → `Qwen Image` → `Imagen 4 Ultra`

## 📋 Usage Scenarios
| Mode | Use Case | Command |
|------|----------|---------|
| **Single** | One-off generation, testing | `replicate_model_calling()` |
| **Batch Same** | Multiple prompts, same model | `intelligent_batch_process()` |
| **Mixed Models** | Different models/parameters | `IntelligentBatchProcessor()` |
## 🧠 Smart Processing Strategies

The system automatically selects the optimal processing strategy:

- **Immediate Processing**: Tasks ≤ available quota → Full concurrency
- **Window Processing**: Tasks ≤ 600 but > current quota → Wait, then batch
- **Dynamic Queue**: Tasks > 600 → Continuous processing with queue management
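The threshold logic above can be sketched roughly as follows; `choose_strategy` and its parameters are illustrative, not part of the package API:

```python
def choose_strategy(num_tasks: int, available_quota: int, window_limit: int = 600) -> str:
    """Illustrative sketch of the strategy selection described above."""
    if num_tasks <= available_quota:
        return "immediate"      # run everything at full concurrency now
    if num_tasks <= window_limit:
        return "window"         # wait for quota to refill, then batch
    return "dynamic_queue"      # feed tasks continuously through a queue

print(choose_strategy(10, available_quota=50))    # immediate
print(choose_strategy(300, available_quota=50))   # window
print(choose_strategy(1000, available_quota=50))  # dynamic_queue
```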
## ⚙️ Configuration
### API Keys
Get your Replicate API token: [replicate.com/account/api-tokens](https://replicate.com/account/api-tokens)
### Custom Fallback Rules
Modify `config.py`:
```python
FALLBACK_MODELS = {
    'your-model': {
        'fail': {
            'fallback_model': 'backup-model',
            'condition': 'api_error'
        }
    }
}
```
## ⚠️ Common Pitfalls

1. **Async/Await**: Batch functions must be called within an async context
2. **Model Names**: Use the exact model names from the supported list above
3. **File Paths**: Ensure output directories exist before processing
4. **Rate Limits**: Keep `max_concurrent` ≤ 15 to avoid 429 errors
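Pitfall 3 is easy to guard against before kicking off a batch; a small sketch (the paths here are placeholders):

```python
import os
import tempfile

# Ensure every output directory exists before processing.
# A temp dir stands in for your real project paths in this demo.
base = tempfile.mkdtemp()
output_paths = [os.path.join(base, "output", name) for name in ("sunset.png", "city.png")]

for path in output_paths:
    os.makedirs(os.path.dirname(path), exist_ok=True)

print(all(os.path.isdir(os.path.dirname(p)) for p in output_paths))  # True
```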
## 📊 Performance Benchmarks
### Real-World Test Results (v1.0.7)
*Tested on: Python 3.9.16, macOS, Replicate API*
| Task | Model | Time | Success Rate | Notes |
|------|-------|------|-------------|-------|
| **Single Image** | qwen/qwen-image | 11.7s | 100% | Fastest for single generation |
| **Batch (3 images)** | qwen/qwen-image | 23.2s | 100% | ~7.7s per image with concurrency |
| **Batch (10 images)** | qwen/qwen-image | 45s | 100% | Optimal with max_concurrent=8 |
| **Mixed Models (3)** | Various | 28s | 100% | Parallel processing advantage |
| **With Fallback** | flux-kontext → qwen | 15s | 100% | Includes fallback overhead |
| **Large Batch (50)** | qwen/qwen-image | 3.5min | 98% | 2% retry on rate limits |
### Concurrency Performance
| Concurrent Tasks | Time per Image | Efficiency | Recommended For |
|-----------------|----------------|------------|-----------------|
| 1 (Sequential) | 12s | Baseline | Testing/Debug |
| 5 | 4.8s | 250% faster | Conservative usage |
| 8 | 3.2s | 375% faster | **Optimal balance** |
| 12 | 2.8s | 428% faster | Aggressive, risk of 429 |
| 15+ | Variable | Diminishing returns | Not recommended |
## 📊 Rate Limiting
- **Replicate API**: 600 requests/minute (shared across all models)
- **Recommended Concurrency**: 5-8 (conservative) to 12 (aggressive)
- **Auto-Retry**: Built-in 429 error handling with exponential backoff
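The library handles 429 retries internally; for reference, the exponential-backoff-with-jitter idea can be sketched like this (the `with_backoff` helper is hypothetical, not part of the package):

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` on 429-style errors with exponential backoff plus jitter.
    Illustrative sketch only; the package applies this internally."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError as exc:  # stand-in for an HTTP 429 error
            if "429" not in str(exc) or attempt == max_retries - 1:
                raise
            # delay doubles each attempt: base, 2*base, 4*base, ...
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```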
## ⚠️ Known Issues & Workarounds

### Fixed in v1.0.7

✅ **FileOutput Handling Bug** (v1.0.2-1.0.6)
- **Issue**: Kontext Max model created 814 empty files when returning a single image
- **Root Cause**: Incorrect iteration over bytes instead of the file object
- **Fix**: Added intelligent output type detection with a `hasattr(output, 'read')` check
- **Status**: ✅ Fully resolved

✅ **Parameter Routing Bug** (v1.0.2-1.0.6)
- **Issue**: `output_filepath` incorrectly placed in kwargs for batch processing
- **Fix**: Corrected parameter assignment in BatchRequest
- **Status**: ✅ Fully resolved
### Current Limitations (v1.0.7)

⚠️ **Kontext Max Input Image**
- **Issue**: The input image parameter sometimes fails with the Kontext Max model
- **Workaround**: Automatic fallback to the Qwen model (transparent to the user)
- **Impact**: Minor - generation still succeeds via fallback
- **Fix Timeline**: Investigating for v1.0.8

ℹ️ **Rate Limiting on Large Batches**
- **Issue**: Batches >50 may hit rate limits even with throttling
- **Workaround**: Use a chunking strategy (see Best Practices)
- **Impact**: Minor - automatic retry handles most cases
### Reporting Issues
Found a bug? Please report at: https://github.com/preangelleo/replicate_batch_process/issues
## 📦 Migration Guide

### Upgrading from v1.0.6 → v1.0.7
```bash
pip install --upgrade replicate-batch-process==1.0.7
```
**Changes:**
- ✅ Fixed FileOutput handling bug (814 empty files issue)
- ✅ Enhanced README documentation
- ✅ No API changes - drop-in replacement

**Action Required:** None - fully backward compatible
### Upgrading from v1.0.2-1.0.5 → v1.0.7
```bash
pip install --upgrade replicate-batch-process==1.0.7
```
**Critical Fixes:**
1. **intelligent_batch_process parameter bug** - Now correctly handles output_filepath
2. **FileOutput compatibility** - No more empty file creation
3. **Model validation** - Clear error messages for unsupported models
**Code Changes Needed:** None, but review error handling for better messages
### Upgrading from v0.x → v1.0.7
**Breaking Changes:**
- Package renamed from `replicate_batch` to `replicate-batch-process`
- New import structure:
```python
# Old (v0.x)
from replicate_batch import process_batch
# New (v1.0.7)
from replicate_batch_process import intelligent_batch_process
```
**Migration Steps:**
1. Uninstall old package: `pip uninstall replicate_batch`
2. Install new package: `pip install replicate-batch-process`
3. Update imports in your code
4. Test with small batches first
### Version History
| Version | Release Date | Key Changes |
|---------|-------------|-------------|
| v1.0.7 | 2025-01-05 | FileOutput fix, README improvements |
| v1.0.6 | 2025-01-05 | Bug fixes, model validation |
| v1.0.5 | 2025-01-04 | Parameter handling improvements |
| v1.0.4 | 2025-01-04 | Model support documentation |
| v1.0.3 | 2025-01-03 | Initial stable release |
## 💡 Best Practices
```python
# For large batches, use chunking
async def process_large_batch(prompts, model_name, chunk_size=50):
    for i in range(0, len(prompts), chunk_size):
        chunk = prompts[i:i + chunk_size]
        files = await intelligent_batch_process(chunk, model_name=model_name)
        yield files
# Error handling with complete example
import asyncio
from replicate_batch_process import IntelligentBatchProcessor, BatchRequest

async def batch_with_error_handling():
    processor = IntelligentBatchProcessor()
    requests = [
        BatchRequest(prompt="sunset", model_name="qwen/qwen-image", output_filepath="output/sunset.png"),
        BatchRequest(prompt="city", model_name="qwen/qwen-image", output_filepath="output/city.png"),
    ]
    results = await processor.process_intelligent_batch(requests)

    for result in results:
        if result.success:
            print(f"✅ Generated: {result.file_paths}")
        else:
            print(f"❌ Failed: {result.error}")

asyncio.run(batch_with_error_handling())
```
## 🏗️ Project Structure
```
replicate-batch-process/
├── main.py                         # Single image generation
├── intelligent_batch_processor.py  # Batch processing engine
├── config.py                       # Model configurations & fallbacks
├── init_environment.py             # Environment setup
└── example_usage.py                # Complete examples
```
## 🔧 Development
```bash
# Clone repository
git clone https://github.com/preangelleo/replicate_batch_process.git
# Install in development mode
pip install -e .
# Run examples
python example_usage.py
```
## 📄 License
MIT License - see [LICENSE](LICENSE) file for details.
## 🤝 Contributing
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
## 🔗 Links
- **PyPI**: https://pypi.org/project/replicate-batch-process/
- **GitHub**: https://github.com/preangelleo/replicate_batch_process
- **Issues**: https://github.com/preangelleo/replicate_batch_process/issues
---
**Made with ❤️ for the AI community**