| Name | LaraQueue |
| Version | 1.2.0 |
| download | |
| home_page | None |
| Summary | Simple and lightweight queue synchronization between Python and Laravel using Redis with Horizon support |
| upload_time | 2025-10-18 11:44:04 |
| maintainer | None |
| docs_url | None |
| author | Anton Mashkovtsev |
| requires_python | >=3.7 |
| license | MIT |
| keywords | laravel, queue, redis, python, job, worker |
| VCS | https://github.com/bat0n/lara-queue |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
## LaraQueue
Simple and lightweight queue synchronization between Python and Laravel using Redis. Process Laravel jobs in Python and vice versa.
> **Fork Notice:** This package is a fork of the original [python-laravel-queue](https://github.com/sinanbekar/python-laravel-queue) by [@sinanbekar](https://github.com/sinanbekar). This version includes critical bug fixes, comprehensive tests, and updated compatibility with newer dependencies.
**🚀 NEW in v1.0.0: Full Async Support with asyncio for high-performance applications!**
**NOTE: This package is now stable and production-ready with both synchronous and asynchronous APIs.**
### ✨ New Features
#### 🚀 Async Support (v1.0.0)
**Full asyncio support for high-load applications:**
- **Asynchronous processing** - use `AsyncQueue` for maximum performance
- **Parallel processing** - configurable number of concurrent tasks
- **AsyncIOEventEmitter** - asynchronous event handlers
- **High performance** - scales to 50+ concurrent tasks
- **asyncio compatibility** - full integration with Python async/await ecosystem
```python
import asyncio
import aioredis
from lara_queue import AsyncQueue

async def main():
    # Create async Redis client
    redis_client = await aioredis.from_url("redis://localhost:6379")

    # Create async queue
    queue = AsyncQueue(
        client=redis_client,
        queue='async_worker',
        max_concurrent_jobs=20,  # 20 concurrent tasks
        enable_metrics=True
    )

    # Async handler
    @queue.handler
    async def process_email(data):
        job_data = data.get('data', {})
        await asyncio.sleep(0.1)  # Async work
        print(f"Email sent: {job_data.get('to')}")

    # Add tasks asynchronously
    for i in range(100):
        await queue.push('App\\Jobs\\EmailJob', {
            'to': f'user{i}@example.com',
            'subject': f'Email {i}'
        })

    # Start processing
    await queue.listen()

# Run
asyncio.run(main())
```
#### 🛡️ Robust Error Handling (v0.0.3)
The package now includes a comprehensive error handling system:
- **Automatic reconnection** to Redis when connection is lost
- **Retry logic** with smart delays
- **Detailed logging** of all operations and errors
- **Protection against invalid data** - the worker keeps running when it encounters problematic messages (see the client-hardening sketch below)
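Reconnection and retries are handled inside the worker, but the Redis client itself can also be hardened. A minimal sketch using standard redis-py connection options; the values and the `python_worker` queue name are illustrative:
```python
import logging

from redis import Redis
from lara_queue import Queue

# Verbose logging makes the package's reconnection/retry behaviour visible.
logging.basicConfig(level=logging.INFO)

# Standard redis-py resilience options; tune the values for your environment.
r = Redis(
    host='localhost',
    port=6379,
    db=0,
    socket_connect_timeout=5,   # fail fast when Redis is unreachable
    socket_timeout=10,          # avoid hanging reads
    retry_on_timeout=True,      # retry blocking commands that time out
    health_check_interval=30,   # ping idle connections before reuse
)

queue = Queue(r, queue='python_worker')

@queue.handler
def handle_job(data):
    print(f"Processing job: {data['name']}")

queue.listen()  # the worker reconnects and keeps running on transient errors
```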
#### 🔄 Graceful Shutdown (v0.0.3)
Advanced signal handling for clean worker termination:
- **Signal handlers** for SIGINT (Ctrl+C) and SIGTERM (kill)
- **Current job completion** - waits for job to finish before stopping
- **Automatic registration** - handlers are set up when you call `listen()`
- **Manual shutdown** - programmatically trigger shutdown with `queue.shutdown()`
- **No job loss** - ensures current job completes successfully
#### 💀 Dead Letter Queue (v0.0.4)
Advanced job failure handling with retry mechanisms:
- **Automatic retry** with exponential backoff (5s, 10s, 20s, 40s, max 60s)
- **Configurable max retries** (default: 3 attempts)
- **Dead letter queue** for permanently failed jobs
- **Job reprocessing** from dead letter queue
- **Comprehensive failure tracking** with error details and timestamps
#### 🔄 Advanced Retry Mechanism (v0.0.5)
Powerful and flexible retry system with multiple strategies:
- **Multiple retry strategies**: Exponential, Linear, Fixed, Custom
- **Configurable retry parameters**: delays, max attempts, jitter
- **Exception-based retry control**: retry only for specific error types
- **Retry statistics and monitoring**: track success rates and performance
- **Runtime configuration updates**: change retry settings without restart
- **Jitter support**: prevent thundering herd problems
#### 📊 Metrics & Monitoring (v0.0.5)
Comprehensive metrics collection and performance monitoring:
- **Real-time metrics**: track processed, successful, and failed jobs
- **Performance analytics**: average processing time, throughput, min/max times
- **Job type breakdown**: metrics per job type with success rates
- **Error tracking**: detailed error counts and types
- **Historical data**: configurable history size for trend analysis
- **Memory efficient**: automatic cleanup of old metrics data
```python
# Create queue with Dead Letter Queue
queue = Queue(
    redis_client,
    queue='email_worker',
    dead_letter_queue='email_failed',  # Custom DLQ name
    max_retries=3  # Retry failed jobs 3 times
)

# Get failed jobs
failed_jobs = queue.get_dead_letter_jobs(limit=100)

# Reprocess a failed job
queue.reprocess_dead_letter_job(failed_jobs[0])

# Clear all failed jobs
queue.clear_dead_letter_queue()
```
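Dead-lettered jobs carry the failure details mentioned above, so you can inspect them before reprocessing. A minimal sketch; the dictionary keys used here (`name`, `error`, `failed_at`) are assumptions for illustration and may differ between versions:
```python
# Look at what failed and why before requeueing anything.
failed_jobs = queue.get_dead_letter_jobs(limit=50)

for job in failed_jobs:
    # Key names are assumed for illustration; inspect a real entry in your setup.
    print(job.get('name'), job.get('error'), job.get('failed_at'))

# Hypothetical filter: reprocess only jobs that failed with transient-looking errors.
for job in failed_jobs:
    if 'timeout' in str(job.get('error', '')).lower():
        queue.reprocess_dead_letter_job(job)
```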
#### 🔄 Advanced Retry Configuration
```python
from lara_queue import Queue, RetryStrategy

# Exponential backoff strategy (default)
queue_exponential = Queue(
    redis_client,
    queue='email_worker',
    max_retries=5,
    retry_strategy=RetryStrategy.EXPONENTIAL,
    retry_delay=2,                 # Initial delay: 2s
    retry_max_delay=60,            # Max delay: 60s
    retry_backoff_multiplier=2.0,  # Multiply by 2 each time
    retry_jitter=True,             # Add randomness to prevent thundering herd
    retry_exceptions=[ValueError, ConnectionError]  # Only retry these exceptions
)

# Linear retry strategy
queue_linear = Queue(
    redis_client,
    queue='notification_worker',
    max_retries=4,
    retry_strategy=RetryStrategy.LINEAR,
    retry_delay=5,      # Each retry: 5s, 10s, 15s, 20s
    retry_jitter=False  # No randomness for predictable delays
)

# Fixed delay strategy
queue_fixed = Queue(
    redis_client,
    queue='report_worker',
    max_retries=3,
    retry_strategy=RetryStrategy.FIXED,
    retry_delay=10,    # Always 10 seconds between retries
    retry_jitter=True  # Add some randomness
)

# Custom retry function
def fibonacci_retry_delay(attempt: int) -> int:
    """Fibonacci-based retry delay: 1, 1, 2, 3, 5, 8, 13..."""
    if attempt <= 1:
        return 1
    elif attempt == 2:
        return 1
    else:
        a, b = 1, 1
        for _ in range(attempt - 2):
            a, b = b, a + b
        return min(b, 20)  # Cap at 20 seconds

queue_custom = Queue(
    redis_client,
    queue='analytics_worker',
    max_retries=6,
    retry_strategy=RetryStrategy.CUSTOM,
    retry_custom_function=fibonacci_retry_delay,
    retry_exceptions=[Exception]  # Retry for all exceptions
)

# Monitor retry statistics
stats = queue_exponential.get_retry_statistics()
print(f"Total retries: {stats['total_retries']}")
print(f"Success rate: {stats['success_rate']:.1f}%")
print(f"Dead letter jobs: {stats['dead_letter_jobs']}")

# Update retry configuration at runtime
queue_exponential.update_retry_config(
    max_retries=7,
    retry_delay=1,
    retry_strategy=RetryStrategy.LINEAR
)

# Reset retry statistics
queue_exponential.reset_retry_statistics()
```
#### 📊 Metrics Configuration
```python
from lara_queue import Queue, MetricsCollector

# Create queue with metrics enabled
queue = Queue(
    redis_client,
    queue='monitored_worker',
    enable_metrics=True,       # Enable metrics collection
    metrics_history_size=1000  # Keep last 1000 jobs in history
)

# Get comprehensive metrics
metrics = queue.get_metrics()
print(f"Total processed: {metrics['general']['total_processed']}")
print(f"Success rate: {metrics['general']['success_rate']:.1f}%")
print(f"Throughput: {metrics['performance']['throughput_per_second']:.2f} jobs/sec")
print(f"Avg processing time: {metrics['performance']['avg_processing_time']:.3f}s")

# Get metrics for specific job type
email_metrics = queue.get_job_type_metrics('App\\Jobs\\EmailJob')
if email_metrics:
    print(f"Email jobs: {email_metrics['total']} total, {email_metrics['success_rate']:.1f}% success")

# Get recent job history
recent_jobs = queue.get_recent_jobs(limit=10)
for job in recent_jobs:
    status = "✅" if job['success'] else "❌"
    print(f"{status} {job['name']} - {job['processing_time']:.3f}s")

# Get performance summary
summary = queue.get_performance_summary()
print(f"Uptime: {summary['general']['uptime_seconds']:.1f}s")
print(f"Total retries: {summary['general']['total_retries']}")

# Reset metrics
queue.reset_metrics()

# Disable metrics for better performance
queue_no_metrics = Queue(
    redis_client,
    queue='high_performance_worker',
    enable_metrics=False  # Disable metrics collection
)
```
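For long-running workers it can be useful to report these metrics on an interval. A minimal sketch that relies only on the `get_metrics()` call shown above and runs the reporter in a background thread:
```python
import threading
import time

def report_metrics(q, interval=60):
    """Print a one-line metrics summary every `interval` seconds (sketch)."""
    while True:
        time.sleep(interval)
        m = q.get_metrics()
        print(
            f"processed={m['general']['total_processed']} "
            f"success_rate={m['general']['success_rate']:.1f}% "
            f"throughput={m['performance']['throughput_per_second']:.2f} jobs/sec"
        )

# Start the reporter, then block on the worker loop.
threading.Thread(target=report_metrics, args=(queue,), daemon=True).start()
queue.listen()
```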
#### 🏷️ Type Hints (v0.0.4)
Complete type annotations for better IDE support and code safety:
- **Full type coverage** for all methods and parameters
- **IDE autocompletion** and type checking
- **Runtime type safety** with proper annotations
- **Optional parameters** with `Optional[T]` types
- **Generic types** for collections and data structures
```python
from typing import Dict, List, Any, Optional
from lara_queue import Queue

# Typed queue creation
queue: Queue = Queue(
    client=redis_client,
    queue='typed_worker',
    dead_letter_queue='typed_failed',
    max_retries=3
)

# Typed job processing
@queue.handler
def process_email(data: Dict[str, Any]) -> None:
    email_type: str = data.get('type', 'unknown')
    recipient: str = data.get('recipient', 'unknown')
    subject: Optional[str] = data.get('subject')

    # Type-safe processing
    if 'invalid' in recipient.lower():
        raise ValueError(f"Invalid email address: {recipient}")

    print(f"Email sent to {recipient}")

# Typed DLQ operations
failed_jobs: List[Dict[str, Any]] = queue.get_dead_letter_jobs(limit=100)
success: bool = queue.reprocess_dead_letter_job(failed_jobs[0])
cleared_count: int = queue.clear_dead_letter_queue()
```
```python
import logging
# Enable logging for debugging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger('lara_queue')
logger.setLevel(logging.DEBUG)
```
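To keep those debug logs in a file with timestamps, standard `logging` configuration is enough; a sketch (the file name is arbitrary):
```python
import logging

# Write the package's logs to a file with timestamps.
file_handler = logging.FileHandler('laraqueue_worker.log')
file_handler.setFormatter(
    logging.Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s')
)

logger = logging.getLogger('lara_queue')
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
```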
### Installation
```bash
pip install LaraQueue
```
### Usage
#### 🚀 Async Usage (Recommended for High Performance)
For high-performance applications, use the async API:
```python
import asyncio
import aioredis
from lara_queue import AsyncQueue, RetryStrategy

async def main():
    # Create async Redis client
    redis_client = await aioredis.from_url("redis://localhost:6379")

    # Create async queue with high performance settings
    queue = AsyncQueue(
        client=redis_client,
        queue='async_worker',
        max_concurrent_jobs=20,  # Process 20 jobs simultaneously
        enable_metrics=True,
        retry_strategy=RetryStrategy.EXPONENTIAL,
        max_retries=3
    )

    # Async job handler
    @queue.handler
    async def process_email(data):
        job_data = data.get('data', {})

        # Simulate async work (API calls, database operations, etc.)
        await asyncio.sleep(0.1)

        print(f"Email sent to: {job_data.get('to')}")

    # Add jobs asynchronously
    for i in range(100):
        await queue.push('App\\Jobs\\EmailJob', {
            'to': f'user{i}@example.com',
            'subject': f'Welcome Email {i}',
            'body': 'Welcome to our service!'
        })

    # Start processing
    await queue.listen()

# Run the async application
asyncio.run(main())
```
#### High-Performance Async Example
```python
import asyncio
import aioredis
from lara_queue import AsyncQueue

async def high_performance_worker():
    redis_client = await aioredis.from_url("redis://localhost:6379")

    # High-performance queue configuration
    queue = AsyncQueue(
        client=redis_client,
        queue='high_perf_worker',
        max_concurrent_jobs=50,  # 50 concurrent jobs
        enable_metrics=True,
        metrics_history_size=10000
    )

    @queue.handler
    async def fast_processor(data):
        job_data = data.get('data', {})

        # Fast async processing
        await asyncio.sleep(0.05)  # 50ms processing time

        # Your business logic here
        result = await process_business_logic(job_data)
        return result

    # Process thousands of jobs efficiently
    await queue.listen()

async def process_business_logic(data):
    # Simulate business logic
    await asyncio.sleep(0.02)
    return f"Processed: {data.get('id')}"

# Run high-performance worker
asyncio.run(high_performance_worker())
```
#### Async with Laravel Integration
```python
import asyncio
import time

import aioredis
from lara_queue import AsyncQueue

async def laravel_async_integration():
    redis_client = await aioredis.from_url("redis://localhost:6379")

    # Queue for processing Laravel jobs
    queue = AsyncQueue(
        client=redis_client,
        queue='python_worker',  # Queue name Laravel sends to
        max_concurrent_jobs=10
    )

    @queue.handler
    async def handle_laravel_email(data):
        job_data = data.get('data', {})

        # Process Laravel email job
        await send_email_async(
            to=job_data.get('to'),
            subject=job_data.get('subject'),
            body=job_data.get('body')
        )

    @queue.handler
    async def handle_laravel_notification(data):
        job_data = data.get('data', {})

        # Process Laravel notification
        await send_notification_async(
            user_id=job_data.get('user_id'),
            message=job_data.get('message')
        )

    # Send jobs to Laravel
    laravel_queue = AsyncQueue(
        client=redis_client,
        queue='laravel_worker'  # Queue name Laravel listens to
    )

    await laravel_queue.push('App\\Jobs\\UpdateUserJob', {
        'user_id': 123,
        'data': {'last_login': time.time()}
    })

    # Start processing
    await queue.listen()

async def send_email_async(to, subject, body):
    # Your async email sending logic
    await asyncio.sleep(0.1)
    print(f"Email sent to {to}")

async def send_notification_async(user_id, message):
    # Your async notification logic
    await asyncio.sleep(0.05)
    print(f"Notification sent to user {user_id}")

# Run Laravel integration
asyncio.run(laravel_async_integration())
```
#### Synchronous Usage (Legacy)
#### Listen for jobs in Python
```python
from lara_queue import Queue
from redis import Redis

r = Redis(host='localhost', port=6379, db=0)
queue_python = Queue(r, queue='python')

@queue_python.handler
def handle(data):
    name = data['name']      # job name
    job_data = data['data']  # job data
    print('Processing: ' + job_data['a'] + ' ' + job_data['b'] + ' ' + job_data['c'])

queue_python.listen()
```
#### Send jobs from Laravel
```php
<?php
$job = new \App\Jobs\TestJob('hi', 'send to', 'python');
dispatch($job)->onQueue('python');
```
#### Send jobs to Laravel from Python
```python
from lara_queue import Queue
from redis import Redis
r = Redis(host='localhost', port=6379, db=0)
queue_laravel = Queue(r, queue='laravel')
queue_laravel.push('App\\Jobs\\TestJob', {'a': 'hello', 'b': 'send to', 'c': 'laravel'})
```
#### TestJob in Laravel
```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Log;

class TestJob implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $a, $b, $c;

    /**
     * Create a new job instance.
     *
     * @return void
     */
    public function __construct($a, $b, $c)
    {
        $this->a = $a;
        $this->b = $b;
        $this->c = $c;
    }

    /**
     * Execute the job.
     *
     * @return void
     */
    public function handle()
    {
        Log::info('TEST: ' . $this->a . ' ' . $this->b . ' ' . $this->c);
    }
}
```
#### Process jobs in Laravel
Run `php artisan queue:listen` (or `queue:work`) on the queue name you push to from Python so that Laravel handles those jobs.
```bash
php artisan queue:listen --queue=laravel
```
### Graceful Shutdown Example
```python
import logging
import time

from lara_queue import Queue
from redis import Redis

# Setup logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

r = Redis(host='localhost', port=6379, db=0)
queue = Queue(r, queue='python_worker')

@queue.handler
def handle_job(data):
    logger.info(f"Processing job: {data['name']}")
    # Simulate some work
    time.sleep(5)
    logger.info("Job completed!")

logger.info("Worker starting...")
logger.info("Press Ctrl+C to trigger graceful shutdown")
logger.info("Current job will complete before stopping")

try:
    queue.listen()  # Signal handlers auto-registered
except KeyboardInterrupt:
    logger.info("Worker stopped gracefully")
```
### Manual Shutdown Example
```python
queue = Queue(r, queue='test')

@queue.handler
def handle_job(data):
    # Process job
    process_data(data)

    # Trigger shutdown programmatically
    if should_stop():
        queue.shutdown()

queue.listen()
```
### Error Handling Example
```python
from lara_queue import Queue
from redis import Redis
from redis.exceptions import ConnectionError

try:
    r = Redis(host='localhost', port=6379, db=0)
    queue = Queue(r, queue='python_worker')

    @queue.handler
    def handle_job(data):
        print(f"Processing job: {data['name']}")

    queue.listen()  # Worker is now resilient to Redis errors!

except ConnectionError as e:
    print(f"Failed to connect to Redis: {e}")
except KeyboardInterrupt:
    print("Worker stopped gracefully")
```
### Retry Strategy Recommendations
| Strategy | Use Case | Example |
|----------|----------|---------|
| **Exponential** | Network/DB temporary failures | API calls, database connections |
| **Linear** | Predictable resource limits | Rate-limited APIs, queue backpressure |
| **Fixed** | Simple retry scenarios | File processing, simple validations |
| **Custom** | Complex business logic | Fibonacci delays, circuit breaker patterns |
**Best Practices** (combined in the sketch after this list):
- Use **retry_jitter=True** to prevent thundering herd problems
- Set **retry_exceptions** to only retry recoverable errors
- Monitor **retry statistics** to optimize your retry strategy
- Use **dead letter queues** for permanently failed jobs
- Consider **retry_max_delay** limits to prevent excessive wait times
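Putting these practices together, a minimal sketch using only the parameters documented earlier (the queue names and exception list are examples, not recommendations for your workload):
```python
from redis import Redis
from redis.exceptions import ConnectionError, TimeoutError
from lara_queue import Queue, RetryStrategy

r = Redis(host='localhost', port=6379, db=0)

# Example configuration following the best practices above.
queue = Queue(
    r,
    queue='payments_worker',                           # example queue name
    max_retries=4,
    retry_strategy=RetryStrategy.EXPONENTIAL,
    retry_delay=2,
    retry_max_delay=30,                                # cap the wait between attempts
    retry_jitter=True,                                 # avoid thundering herd
    retry_exceptions=[ConnectionError, TimeoutError],  # retry only recoverable errors
    dead_letter_queue='payments_failed'                # park permanently failed jobs
)

# Watch the statistics to see whether the strategy fits your failure pattern.
stats = queue.get_retry_statistics()
print(f"Success rate so far: {stats['success_rate']:.1f}%")
```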
### Features
- ✅ **Async Support (v1.0.0)** - Full asyncio support for high-performance applications
- ✅ **Concurrent Processing** - Configurable concurrent job processing (up to 50+ jobs)
- ✅ **Redis driver support** - Queue communication between Python and Laravel
- ✅ **Bidirectional job processing** - Send and receive jobs in both directions
- ✅ **PHP object serialization** - Compatible with Laravel's job serialization format
- ✅ **Event-driven architecture** - Simple decorator-based job handlers (sync & async)
- ✅ **Automatic reconnection** - Resilient to network issues
- ✅ **Comprehensive error handling** - Detailed logging and error recovery
- ✅ **Graceful shutdown** - Signal handling (SIGINT, SIGTERM) with job completion
- ✅ **Advanced retry mechanisms** - Multiple strategies with full configurability
- ✅ **Retry statistics and monitoring** - Track performance and success rates
- ✅ **Comprehensive metrics collection** - Real-time performance monitoring
- ✅ **Production ready** - Battle-tested with extensive test coverage
- ✅ **Tested** - 100+ unit and integration tests included (sync + async)
### Requirements
- Python 3.7+
- Redis 4.0+
- Laravel 8+ (for Laravel side)
- aioredis 2.0+ (for async support)
### Performance Recommendations
#### Async vs Sync Performance
| Feature | Sync Queue | Async Queue | Performance Gain |
|---------|------------|-------------|------------------|
| **Concurrent Jobs** | 1 | 1-50+ | 10-50x faster |
| **Throughput** | ~100 jobs/sec | ~1000+ jobs/sec | 10x+ faster |
| **Memory Usage** | Lower | Slightly higher | ~20% more |
| **CPU Usage** | Higher | Lower | ~30% less |
| **I/O Efficiency** | Blocking | Non-blocking | Much better |
#### Recommended Settings
```python
# High Performance Async Configuration
queue = AsyncQueue(
    client=redis_client,
    queue='high_perf',
    max_concurrent_jobs=20,  # Adjust based on your system
    enable_metrics=True,
    retry_strategy=RetryStrategy.EXPONENTIAL,
    max_retries=3
)

# For CPU-intensive tasks
queue = AsyncQueue(
    client=redis_client,
    queue='cpu_intensive',
    max_concurrent_jobs=4,  # Match CPU cores
    enable_metrics=True
)

# For I/O-intensive tasks (API calls, DB operations)
queue = AsyncQueue(
    client=redis_client,
    queue='io_intensive',
    max_concurrent_jobs=50,  # High concurrency
    enable_metrics=True
)
```
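One way to pick `max_concurrent_jobs` is to derive it from the machine, as the comments above suggest; a sketch (the 10x oversubscription factor for I/O-bound work is an arbitrary starting point, not a benchmark result):
```python
import os

# redis_client and AsyncQueue as in the examples above.
cpu_count = os.cpu_count() or 1

# CPU-bound handlers: roughly one concurrent job per core.
cpu_bound_jobs = cpu_count

# I/O-bound handlers: oversubscribe, since jobs mostly wait on the network.
io_bound_jobs = min(cpu_count * 10, 50)

queue = AsyncQueue(
    client=redis_client,
    queue='io_intensive',
    max_concurrent_jobs=io_bound_jobs,
    enable_metrics=True
)
```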
### Development
```bash
# Install development dependencies
pip install -e .
pip install -r requirements-dev.txt
# Run tests
pytest tests/ -v
# Run async tests
pytest tests/test_async_queue.py -v
# Run specific test file
pytest tests/test_error_handling.py -v
```
### Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
### License
MIT License - see LICENSE file for details.
### Credits
- Original package: [python-laravel-queue](https://github.com/sinanbekar/python-laravel-queue) by [@sinanbekar](https://github.com/sinanbekar)
- This fork is maintained with critical bug fixes and improvements
Raw data
{
"_id": null,
"home_page": null,
"name": "LaraQueue",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "laravel, queue, redis, python, job, worker",
"author": "Anton Mashkovtsev",
"author_email": "Mashkovtsev Anton <mashkovtsev@protonmail.com>",
"download_url": "https://files.pythonhosted.org/packages/c1/78/f085cd1f3781b9b1896d83c4d927cce0a64feb60de1515dabb486feeba57/laraqueue-1.2.0.tar.gz",
"platform": null,
"description": "## LaraQueue\r\n\r\nSimple and lightweight queue synchronization between Python and Laravel using Redis. Process Laravel jobs in Python and vice versa.\r\n\r\n> **Fork Notice:** This package is a fork of the original [python-laravel-queue](https://github.com/sinanbekar/python-laravel-queue) by [@sinanbekar](https://github.com/sinanbekar). This version includes critical bug fixes, comprehensive tests, and updated compatibility with newer dependencies.\r\n\r\n**\ud83d\ude80 NEW in v1.0.0: Full Async Support with asyncio for high-performance applications!**\r\n\r\n**NOTE: This package is now stable and production-ready with both synchronous and asynchronous APIs.**\r\n\r\n### \u2728 New Features\r\n\r\n#### \ud83d\ude80 Async Support (v1.0.0)\r\n\r\n**Full asyncio support for high loads:**\r\n\r\n- **Asynchronous processing** - use `AsyncQueue` for maximum performance\r\n- **Parallel processing** - configurable number of concurrent tasks\r\n- **AsyncIOEventEmitter** - asynchronous event handlers\r\n- **High performance** - up to 50+ concurrent tasks\r\n- **asyncio compatibility** - full integration with Python async/await ecosystem\r\n\r\n```python\r\nimport asyncio\r\nimport aioredis\r\nfrom lara_queue import AsyncQueue\r\n\r\nasync def main():\r\n # Create async Redis client\r\n redis_client = await aioredis.from_url(\"redis://localhost:6379\")\r\n \r\n # Create async queue\r\n queue = AsyncQueue(\r\n client=redis_client,\r\n queue='async_worker',\r\n max_concurrent_jobs=20, # 20 concurrent tasks\r\n enable_metrics=True\r\n )\r\n \r\n # Async handler\r\n @queue.handler\r\n async def process_email(data):\r\n job_data = data.get('data', {})\r\n await asyncio.sleep(0.1) # Async work\r\n print(f\"Email sent: {job_data.get('to')}\")\r\n \r\n # Add tasks asynchronously\r\n for i in range(100):\r\n await queue.push('App\\\\Jobs\\\\EmailJob', {\r\n 'to': f'user{i}@example.com',\r\n 'subject': f'Email {i}'\r\n })\r\n \r\n # Start processing\r\n await queue.listen()\r\n\r\n# Run\r\nasyncio.run(main())\r\n```\r\n\r\n#### \ud83d\udee1\ufe0f Robust Error Handling (v0.0.3)\r\n\r\nThe package now includes a comprehensive error handling system:\r\n\r\n- **Automatic reconnection** to Redis when connection is lost\r\n- **Retry logic** with smart delays\r\n- **Detailed logging** of all operations and errors\r\n- **Protection against invalid data** - worker continues running when encountering problematic messages\r\n\r\n#### \ud83d\udd04 Graceful Shutdown (v0.0.3)\r\n\r\nAdvanced signal handling for clean worker termination:\r\n\r\n- **Signal handlers** for SIGINT (Ctrl+C) and SIGTERM (kill)\r\n- **Current job completion** - waits for job to finish before stopping\r\n- **Automatic registration** - handlers are set up when you call `listen()`\r\n- **Manual shutdown** - programmatically trigger shutdown with `queue.shutdown()`\r\n- **No job loss** - ensures current job completes successfully\r\n\r\n#### \ud83d\udc80 Dead Letter Queue (v0.0.4)\r\n\r\nAdvanced job failure handling with retry mechanisms:\r\n\r\n- **Automatic retry** with exponential backoff (5s, 10s, 20s, 40s, max 60s)\r\n- **Configurable max retries** (default: 3 attempts)\r\n- **Dead letter queue** for permanently failed jobs\r\n- **Job reprocessing** from dead letter queue\r\n- **Comprehensive failure tracking** with error details and timestamps\r\n\r\n#### \ud83d\udd04 Advanced Retry Mechanism (v0.0.5)\r\n\r\nPowerful and flexible retry system with multiple strategies:\r\n\r\n- **Multiple retry strategies**: Exponential, Linear, 
Fixed, Custom\r\n- **Configurable retry parameters**: delays, max attempts, jitter\r\n- **Exception-based retry control**: retry only for specific error types\r\n- **Retry statistics and monitoring**: track success rates and performance\r\n- **Runtime configuration updates**: change retry settings without restart\r\n- **Jitter support**: prevent thundering herd problems\r\n\r\n#### \ud83d\udcca Metrics & Monitoring (v0.0.5)\r\n\r\nComprehensive metrics collection and performance monitoring:\r\n\r\n- **Real-time metrics**: track processed, successful, and failed jobs\r\n- **Performance analytics**: average processing time, throughput, min/max times\r\n- **Job type breakdown**: metrics per job type with success rates\r\n- **Error tracking**: detailed error counts and types\r\n- **Historical data**: configurable history size for trend analysis\r\n- **Memory efficient**: automatic cleanup of old metrics data\r\n\r\n```python\r\n# Create queue with Dead Letter Queue\r\nqueue = Queue(\r\n redis_client, \r\n queue='email_worker',\r\n dead_letter_queue='email_failed', # Custom DLQ name\r\n max_retries=3 # Retry failed jobs 3 times\r\n)\r\n\r\n# Get failed jobs\r\nfailed_jobs = queue.get_dead_letter_jobs(limit=100)\r\n\r\n# Reprocess a failed job\r\nqueue.reprocess_dead_letter_job(failed_jobs[0])\r\n\r\n# Clear all failed jobs\r\nqueue.clear_dead_letter_queue()\r\n```\r\n\r\n#### \ud83d\udd04 Advanced Retry Configuration\r\n\r\n```python\r\nfrom lara_queue import Queue, RetryStrategy\r\n\r\n# Exponential backoff strategy (default)\r\nqueue_exponential = Queue(\r\n redis_client,\r\n queue='email_worker',\r\n max_retries=5,\r\n retry_strategy=RetryStrategy.EXPONENTIAL,\r\n retry_delay=2, # Initial delay: 2s\r\n retry_max_delay=60, # Max delay: 60s\r\n retry_backoff_multiplier=2.0, # Multiply by 2 each time\r\n retry_jitter=True, # Add randomness to prevent thundering herd\r\n retry_exceptions=[ValueError, ConnectionError] # Only retry these exceptions\r\n)\r\n\r\n# Linear retry strategy\r\nqueue_linear = Queue(\r\n redis_client,\r\n queue='notification_worker',\r\n max_retries=4,\r\n retry_strategy=RetryStrategy.LINEAR,\r\n retry_delay=5, # Each retry: 5s, 10s, 15s, 20s\r\n retry_jitter=False # No randomness for predictable delays\r\n)\r\n\r\n# Fixed delay strategy\r\nqueue_fixed = Queue(\r\n redis_client,\r\n queue='report_worker',\r\n max_retries=3,\r\n retry_strategy=RetryStrategy.FIXED,\r\n retry_delay=10, # Always 10 seconds between retries\r\n retry_jitter=True # Add some randomness\r\n)\r\n\r\n# Custom retry function\r\ndef fibonacci_retry_delay(attempt: int) -> int:\r\n \"\"\"Fibonacci-based retry delay: 1, 1, 2, 3, 5, 8, 13...\"\"\"\r\n if attempt <= 1:\r\n return 1\r\n elif attempt == 2:\r\n return 1\r\n else:\r\n a, b = 1, 1\r\n for _ in range(attempt - 2):\r\n a, b = b, a + b\r\n return min(b, 20) # Cap at 20 seconds\r\n\r\nqueue_custom = Queue(\r\n redis_client,\r\n queue='analytics_worker',\r\n max_retries=6,\r\n retry_strategy=RetryStrategy.CUSTOM,\r\n retry_custom_function=fibonacci_retry_delay,\r\n retry_exceptions=[Exception] # Retry for all exceptions\r\n)\r\n\r\n# Monitor retry statistics\r\nstats = queue_exponential.get_retry_statistics()\r\nprint(f\"Total retries: {stats['total_retries']}\")\r\nprint(f\"Success rate: {stats['success_rate']:.1f}%\")\r\nprint(f\"Dead letter jobs: {stats['dead_letter_jobs']}\")\r\n\r\n# Update retry configuration at runtime\r\nqueue_exponential.update_retry_config(\r\n max_retries=7,\r\n retry_delay=1,\r\n 
retry_strategy=RetryStrategy.LINEAR\r\n)\r\n\r\n# Reset retry statistics\r\nqueue_exponential.reset_retry_statistics()\r\n```\r\n\r\n#### \ud83d\udcca Metrics Configuration\r\n\r\n```python\r\nfrom lara_queue import Queue, MetricsCollector\r\n\r\n# Create queue with metrics enabled\r\nqueue = Queue(\r\n redis_client,\r\n queue='monitored_worker',\r\n enable_metrics=True, # Enable metrics collection\r\n metrics_history_size=1000 # Keep last 1000 jobs in history\r\n)\r\n\r\n# Get comprehensive metrics\r\nmetrics = queue.get_metrics()\r\nprint(f\"Total processed: {metrics['general']['total_processed']}\")\r\nprint(f\"Success rate: {metrics['general']['success_rate']:.1f}%\")\r\nprint(f\"Throughput: {metrics['performance']['throughput_per_second']:.2f} jobs/sec\")\r\nprint(f\"Avg processing time: {metrics['performance']['avg_processing_time']:.3f}s\")\r\n\r\n# Get metrics for specific job type\r\nemail_metrics = queue.get_job_type_metrics('App\\\\Jobs\\\\EmailJob')\r\nif email_metrics:\r\n print(f\"Email jobs: {email_metrics['total']} total, {email_metrics['success_rate']:.1f}% success\")\r\n\r\n# Get recent job history\r\nrecent_jobs = queue.get_recent_jobs(limit=10)\r\nfor job in recent_jobs:\r\n status = \"\u2705\" if job['success'] else \"\u274c\"\r\n print(f\"{status} {job['name']} - {job['processing_time']:.3f}s\")\r\n\r\n# Get performance summary\r\nsummary = queue.get_performance_summary()\r\nprint(f\"Uptime: {summary['general']['uptime_seconds']:.1f}s\")\r\nprint(f\"Total retries: {summary['general']['total_retries']}\")\r\n\r\n# Reset metrics\r\nqueue.reset_metrics()\r\n\r\n# Disable metrics for better performance\r\nqueue_no_metrics = Queue(\r\n redis_client,\r\n queue='high_performance_worker',\r\n enable_metrics=False # Disable metrics collection\r\n)\r\n```\r\n\r\n#### \ud83c\udff7\ufe0f Type Hints (v0.0.4)\r\n\r\nComplete type annotations for better IDE support and code safety:\r\n\r\n- **Full type coverage** for all methods and parameters\r\n- **IDE autocompletion** and type checking\r\n- **Runtime type safety** with proper annotations\r\n- **Optional parameters** with `Optional[T]` types\r\n- **Generic types** for collections and data structures\r\n\r\n```python\r\nfrom typing import Dict, List, Any, Optional\r\nfrom lara_queue import Queue\r\n\r\n# Typed queue creation\r\nqueue: Queue = Queue(\r\n client=redis_client,\r\n queue='typed_worker',\r\n dead_letter_queue='typed_failed',\r\n max_retries=3\r\n)\r\n\r\n# Typed job processing\r\n@queue.handler\r\ndef process_email(data: Dict[str, Any]) -> None:\r\n email_type: str = data.get('type', 'unknown')\r\n recipient: str = data.get('recipient', 'unknown')\r\n subject: Optional[str] = data.get('subject')\r\n \r\n # Type-safe processing\r\n if 'invalid' in recipient.lower():\r\n raise ValueError(f\"Invalid email address: {recipient}\")\r\n \r\n print(f\"Email sent to {recipient}\")\r\n\r\n# Typed DLQ operations\r\nfailed_jobs: List[Dict[str, Any]] = queue.get_dead_letter_jobs(limit=100)\r\nsuccess: bool = queue.reprocess_dead_letter_job(failed_jobs[0])\r\ncleared_count: int = queue.clear_dead_letter_queue()\r\n```\r\n\r\n```python\r\nimport logging\r\n\r\n# Enable logging for debugging\r\nlogging.basicConfig(level=logging.INFO)\r\nlogger = logging.getLogger('lara_queue')\r\nlogger.setLevel(logging.DEBUG)\r\n```\r\n\r\n### Installation\r\n\r\n```bash\r\npip install LaraQueue\r\n```\r\n\r\n### Usage\r\n\r\n#### \ud83d\ude80 Async Usage (Recommended for High Performance)\r\n\r\nFor high-performance applications, use the async 
API:\r\n\r\n```python\r\nimport asyncio\r\nimport aioredis\r\nfrom lara_queue import AsyncQueue, RetryStrategy\r\n\r\nasync def main():\r\n # Create async Redis client\r\n redis_client = await aioredis.from_url(\"redis://localhost:6379\")\r\n \r\n # Create async queue with high performance settings\r\n queue = AsyncQueue(\r\n client=redis_client,\r\n queue='async_worker',\r\n max_concurrent_jobs=20, # Process 20 jobs simultaneously\r\n enable_metrics=True,\r\n retry_strategy=RetryStrategy.EXPONENTIAL,\r\n max_retries=3\r\n )\r\n \r\n # Async job handler\r\n @queue.handler\r\n async def process_email(data):\r\n job_data = data.get('data', {})\r\n \r\n # Simulate async work (API calls, database operations, etc.)\r\n await asyncio.sleep(0.1)\r\n \r\n print(f\"Email sent to: {job_data.get('to')}\")\r\n \r\n # Add jobs asynchronously\r\n for i in range(100):\r\n await queue.push('App\\\\Jobs\\\\EmailJob', {\r\n 'to': f'user{i}@example.com',\r\n 'subject': f'Welcome Email {i}',\r\n 'body': 'Welcome to our service!'\r\n })\r\n \r\n # Start processing\r\n await queue.listen()\r\n\r\n# Run the async application\r\nasyncio.run(main())\r\n```\r\n\r\n#### High-Performance Async Example\r\n\r\n```python\r\nimport asyncio\r\nimport aioredis\r\nfrom lara_queue import AsyncQueue\r\n\r\nasync def high_performance_worker():\r\n redis_client = await aioredis.from_url(\"redis://localhost:6379\")\r\n \r\n # High-performance queue configuration\r\n queue = AsyncQueue(\r\n client=redis_client,\r\n queue='high_perf_worker',\r\n max_concurrent_jobs=50, # 50 concurrent jobs\r\n enable_metrics=True,\r\n metrics_history_size=10000\r\n )\r\n \r\n @queue.handler\r\n async def fast_processor(data):\r\n job_data = data.get('data', {})\r\n \r\n # Fast async processing\r\n await asyncio.sleep(0.05) # 50ms processing time\r\n \r\n # Your business logic here\r\n result = await process_business_logic(job_data)\r\n return result\r\n \r\n # Process thousands of jobs efficiently\r\n await queue.listen()\r\n\r\nasync def process_business_logic(data):\r\n # Simulate business logic\r\n await asyncio.sleep(0.02)\r\n return f\"Processed: {data.get('id')}\"\r\n\r\n# Run high-performance worker\r\nasyncio.run(high_performance_worker())\r\n```\r\n\r\n#### Async with Laravel Integration\r\n\r\n```python\r\nimport asyncio\r\nimport aioredis\r\nfrom lara_queue import AsyncQueue\r\n\r\nasync def laravel_async_integration():\r\n redis_client = await aioredis.from_url(\"redis://localhost:6379\")\r\n \r\n # Queue for processing Laravel jobs\r\n queue = AsyncQueue(\r\n client=redis_client,\r\n queue='python_worker', # Queue name Laravel sends to\r\n max_concurrent_jobs=10\r\n )\r\n \r\n @queue.handler\r\n async def handle_laravel_email(data):\r\n job_data = data.get('data', {})\r\n \r\n # Process Laravel email job\r\n await send_email_async(\r\n to=job_data.get('to'),\r\n subject=job_data.get('subject'),\r\n body=job_data.get('body')\r\n )\r\n \r\n @queue.handler\r\n async def handle_laravel_notification(data):\r\n job_data = data.get('data', {})\r\n \r\n # Process Laravel notification\r\n await send_notification_async(\r\n user_id=job_data.get('user_id'),\r\n message=job_data.get('message')\r\n )\r\n \r\n # Send jobs to Laravel\r\n laravel_queue = AsyncQueue(\r\n client=redis_client,\r\n queue='laravel_worker' # Queue name Laravel listens to\r\n )\r\n \r\n await laravel_queue.push('App\\\\Jobs\\\\UpdateUserJob', {\r\n 'user_id': 123,\r\n 'data': {'last_login': time.time()}\r\n })\r\n \r\n # Start processing\r\n await 
queue.listen()\r\n\r\nasync def send_email_async(to, subject, body):\r\n # Your async email sending logic\r\n await asyncio.sleep(0.1)\r\n print(f\"Email sent to {to}\")\r\n\r\nasync def send_notification_async(user_id, message):\r\n # Your async notification logic\r\n await asyncio.sleep(0.05)\r\n print(f\"Notification sent to user {user_id}\")\r\n\r\n# Run Laravel integration\r\nasyncio.run(laravel_async_integration())\r\n```\r\n\r\n#### Synchronous Usage (Legacy)\r\n\r\n#### Listen for jobs in Python\r\n\r\n```python\r\nfrom lara_queue import Queue\r\nfrom redis import Redis\r\n\r\nr = Redis(host='localhost', port=6379, db=0)\r\nqueue_python = Queue(r, queue='python')\r\n\r\n@queue_python.handler\r\ndef handle(data):\r\n name = data['name'] # job name\r\n job_data = data['data'] # job data\r\n print('Processing: ' + job_data['a'] + ' ' + job_data['b'] + ' ' + job_data['c'])\r\n\r\nqueue_python.listen()\r\n```\r\n\r\n#### Send jobs from Laravel\r\n\r\n```php\r\n<?php\r\n$job = new \\App\\Jobs\\TestJob('hi', 'send to', 'python');\r\ndispatch($job)->onQueue('python');\r\n```\r\n\r\n#### Send jobs to Laravel from Python\r\n\r\n```python\r\nfrom lara_queue import Queue\r\nfrom redis import Redis\r\n\r\nr = Redis(host='localhost', port=6379, db=0)\r\nqueue_laravel = Queue(r, queue='laravel')\r\nqueue_laravel.push('App\\\\Jobs\\\\TestJob', {'a': 'hello', 'b': 'send to', 'c': 'laravel'})\r\n```\r\n\r\n#### TestJob in Laravel\r\n\r\n```php\r\n<?php\r\n\r\nnamespace App\\Jobs;\r\n\r\nuse Illuminate\\Bus\\Queueable;\r\nuse Illuminate\\Contracts\\Queue\\ShouldQueue;\r\nuse Illuminate\\Foundation\\Bus\\Dispatchable;\r\nuse Illuminate\\Queue\\InteractsWithQueue;\r\nuse Illuminate\\Queue\\SerializesModels;\r\nuse Illuminate\\Support\\Facades\\Log;\r\n\r\nclass TestJob implements ShouldQueue\r\n{\r\n use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;\r\n\r\n public $a, $b, $c;\r\n\r\n /**\r\n * Create a new job instance.\r\n *\r\n * @return void\r\n */\r\n public function __construct($a, $b, $c)\r\n {\r\n $this->a = $a;\r\n $this->b = $b;\r\n $this->c = $c;\r\n }\r\n\r\n /**\r\n * Execute the job.\r\n *\r\n * @return void\r\n */\r\n public function handle()\r\n {\r\n Log::info('TEST: ' . $this->a . ' ' . $this->b . ' ' . 
$this->c);\r\n }\r\n}\r\n```\r\n\r\n#### Process jobs in Laravel\r\n\r\nYou need to `:listen` (or `:work`) the preferred queue name to handle jobs sent from Python in Laravel.\r\n\r\n```bash\r\nphp artisan queue:listen --queue=laravel\r\n```\r\n\r\n### Graceful Shutdown Example\r\n\r\n```python\r\nimport logging\r\nfrom lara_queue import Queue\r\nfrom redis import Redis\r\n\r\n# Setup logging\r\nlogging.basicConfig(level=logging.INFO)\r\nlogger = logging.getLogger(__name__)\r\n\r\nr = Redis(host='localhost', port=6379, db=0)\r\nqueue = Queue(r, queue='python_worker')\r\n\r\n@queue.handler\r\ndef handle_job(data):\r\n logger.info(f\"Processing job: {data['name']}\")\r\n # Simulate some work\r\n import time\r\n time.sleep(5)\r\n logger.info(\"Job completed!\")\r\n\r\nlogger.info(\"Worker starting...\")\r\nlogger.info(\"Press Ctrl+C to trigger graceful shutdown\")\r\nlogger.info(\"Current job will complete before stopping\")\r\n\r\ntry:\r\n queue.listen() # Signal handlers auto-registered\r\nexcept KeyboardInterrupt:\r\n logger.info(\"Worker stopped gracefully\")\r\n```\r\n\r\n### Manual Shutdown Example\r\n\r\n```python\r\nqueue = Queue(r, queue='test')\r\n\r\n@queue.handler\r\ndef handle_job(data):\r\n # Process job\r\n process_data(data)\r\n \r\n # Trigger shutdown programmatically\r\n if should_stop():\r\n queue.shutdown()\r\n\r\nqueue.listen()\r\n```\r\n\r\n### Error Handling Example\r\n\r\n```python\r\nfrom lara_queue import Queue\r\nfrom redis import Redis\r\nfrom redis.exceptions import ConnectionError\r\n\r\ntry:\r\n r = Redis(host='localhost', port=6379, db=0)\r\n queue = Queue(r, queue='python_worker')\r\n \r\n @queue.handler\r\n def handle_job(data):\r\n print(f\"Processing job: {data['name']}\")\r\n \r\n queue.listen() # Worker is now resilient to Redis errors!\r\n \r\nexcept ConnectionError as e:\r\n print(f\"Failed to connect to Redis: {e}\")\r\nexcept KeyboardInterrupt:\r\n print(\"Worker stopped gracefully\")\r\n```\r\n\r\n### Retry Strategy Recommendations\r\n\r\n| Strategy | Use Case | Example |\r\n|----------|----------|---------|\r\n| **Exponential** | Network/DB temporary failures | API calls, database connections |\r\n| **Linear** | Predictable resource limits | Rate-limited APIs, queue backpressure |\r\n| **Fixed** | Simple retry scenarios | File processing, simple validations |\r\n| **Custom** | Complex business logic | Fibonacci delays, circuit breaker patterns |\r\n\r\n**Best Practices:**\r\n- Use **jitter=True** to prevent thundering herd problems\r\n- Set **retry_exceptions** to only retry recoverable errors\r\n- Monitor **retry statistics** to optimize your retry strategy\r\n- Use **dead letter queues** for permanently failed jobs\r\n- Consider **max_delay** limits to prevent excessive wait times\r\n\r\n### Features\r\n\r\n- \u2705 **Async Support (v1.0.0)** - Full asyncio support for high-performance applications\r\n- \u2705 **Concurrent Processing** - Configurable concurrent job processing (up to 50+ jobs)\r\n- \u2705 **Redis driver support** - Queue communication between Python and Laravel\r\n- \u2705 **Bidirectional job processing** - Send and receive jobs in both directions\r\n- \u2705 **PHP object serialization** - Compatible with Laravel's job serialization format\r\n- \u2705 **Event-driven architecture** - Simple decorator-based job handlers (sync & async)\r\n- \u2705 **Automatic reconnection** - Resilient to network issues\r\n- \u2705 **Comprehensive error handling** - Detailed logging and error recovery\r\n- \u2705 **Graceful shutdown** - Signal 
handling (SIGINT, SIGTERM) with job completion\r\n- \u2705 **Advanced retry mechanisms** - Multiple strategies with full configurability\r\n- \u2705 **Retry statistics and monitoring** - Track performance and success rates\r\n- \u2705 **Comprehensive metrics collection** - Real-time performance monitoring\r\n- \u2705 **Production ready** - Battle-tested with extensive test coverage\r\n- \u2705 **Tested** - 100+ unit and integration tests included (sync + async)\r\n\r\n### Requirements\r\n\r\n- Python 3.7+\r\n- Redis 4.0+\r\n- Laravel 8+ (for Laravel side)\r\n- aioredis 2.0+ (for async support)\r\n\r\n### Performance Recommendations\r\n\r\n#### Async vs Sync Performance\r\n\r\n| Feature | Sync Queue | Async Queue | Performance Gain |\r\n|---------|------------|-------------|------------------|\r\n| **Concurrent Jobs** | 1 | 1-50+ | 10-50x faster |\r\n| **Throughput** | ~100 jobs/sec | ~1000+ jobs/sec | 10x+ faster |\r\n| **Memory Usage** | Lower | Slightly higher | ~20% more |\r\n| **CPU Usage** | Higher | Lower | ~30% less |\r\n| **I/O Efficiency** | Blocking | Non-blocking | Much better |\r\n\r\n#### Recommended Settings\r\n\r\n```python\r\n# High Performance Async Configuration\r\nqueue = AsyncQueue(\r\n client=redis_client,\r\n queue='high_perf',\r\n max_concurrent_jobs=20, # Adjust based on your system\r\n enable_metrics=True,\r\n retry_strategy=RetryStrategy.EXPONENTIAL,\r\n max_retries=3\r\n)\r\n\r\n# For CPU-intensive tasks\r\nqueue = AsyncQueue(\r\n client=redis_client,\r\n queue='cpu_intensive',\r\n max_concurrent_jobs=4, # Match CPU cores\r\n enable_metrics=True\r\n)\r\n\r\n# For I/O-intensive tasks (API calls, DB operations)\r\nqueue = AsyncQueue(\r\n client=redis_client,\r\n queue='io_intensive',\r\n max_concurrent_jobs=50, # High concurrency\r\n enable_metrics=True\r\n)\r\n```\r\n\r\n### Development\r\n\r\n```bash\r\n# Install development dependencies\r\npip install -e .\r\npip install -r requirements-dev.txt\r\n\r\n# Run tests\r\npytest tests/ -v\r\n\r\n# Run async tests\r\npytest tests/test_async_queue.py -v\r\n\r\n# Run specific test file\r\npytest tests/test_error_handling.py -v\r\n```\r\n\r\n### Contributing\r\n\r\nContributions are welcome! Please feel free to submit a Pull Request.\r\n\r\n### License\r\n\r\nMIT License - see LICENSE file for details.\r\n\r\n### Credits\r\n\r\n- Original package: [python-laravel-queue](https://github.com/sinanbekar/python-laravel-queue) by [@sinanbekar](https://github.com/sinanbekar)\r\n- This fork maintained with critical bug fixes and improvements\r\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Simple and lightweight queue synchronization between Python and Laravel using Redis with Horizon support",
"version": "1.2.0",
"project_urls": {
"Bug Tracker": "https://github.com/bat0n/lara-queue/issues",
"Documentation": "https://github.com/bat0n/lara-queue#readme",
"Homepage": "https://github.com/bat0n/lara-queue",
"Repository": "https://github.com/bat0n/lara-queue"
},
"split_keywords": [
"laravel",
" queue",
" redis",
" python",
" job",
" worker"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "7e498b5f7b744d1624ccda7edf52ce79ed435a9f66b2b0aca0b21a94b29a6e45",
"md5": "04fc820221b1b261d05cb40e9852c8ef",
"sha256": "b7d1147fe6afacc3e02626fb0e91b59253792b1d197a2c8b4e9b2eb988b80c42"
},
"downloads": -1,
"filename": "laraqueue-1.2.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "04fc820221b1b261d05cb40e9852c8ef",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 40307,
"upload_time": "2025-10-18T11:44:03",
"upload_time_iso_8601": "2025-10-18T11:44:03.441678Z",
"url": "https://files.pythonhosted.org/packages/7e/49/8b5f7b744d1624ccda7edf52ce79ed435a9f66b2b0aca0b21a94b29a6e45/laraqueue-1.2.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "c178f085cd1f3781b9b1896d83c4d927cce0a64feb60de1515dabb486feeba57",
"md5": "46013c17eba052d91229449ba07affc8",
"sha256": "02544c7733c3a56b3448d77f748bc58bf1d2842fe92b3c3e020c8cb95a71e863"
},
"downloads": -1,
"filename": "laraqueue-1.2.0.tar.gz",
"has_sig": false,
"md5_digest": "46013c17eba052d91229449ba07affc8",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 91192,
"upload_time": "2025-10-18T11:44:04",
"upload_time_iso_8601": "2025-10-18T11:44:04.557432Z",
"url": "https://files.pythonhosted.org/packages/c1/78/f085cd1f3781b9b1896d83c4d927cce0a64feb60de1515dabb486feeba57/laraqueue-1.2.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-10-18 11:44:04",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "bat0n",
"github_project": "lara-queue",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "laraqueue"
}