# Surreal Commands
A distributed task queue system similar to Celery, built with Python, SurrealDB, and LangChain. This system allows you to define, submit, and execute asynchronous commands/tasks with real-time processing capabilities.
## Features
- **Real-time Processing**: Uses SurrealDB LIVE queries for instant command pickup
- **Concurrent Execution**: Configurable concurrent task execution with semaphore controls
- **Type Safety**: Pydantic models for input/output validation
- **LangChain Integration**: Commands are LangChain Runnables for maximum flexibility
- **Dynamic CLI**: Auto-generates CLI from registered commands
- **Status Tracking**: Track command status through lifecycle (new → running → completed/failed)
- **Persistent Queue**: Commands persist in SurrealDB across worker restarts
- **Comprehensive Logging**: Built-in logging with loguru
## Architecture Overview
```mermaid
graph TD
    CLI[CLI Interface] --> SurrealDB[(SurrealDB Queue)]
    SurrealDB --> Worker[Worker Process]
    Worker --> Registry[Command Registry]
    Registry --> Commands[Registered Commands]
    Worker --> |Execute| Commands
    Commands --> |Results| SurrealDB
```
## Installation
Install the package using pip:
```bash
pip install surreal-commands
```
Set up environment variables in `.env`:
```env
# SurrealDB Configuration
SURREAL_URL=ws://localhost:8000/rpc
SURREAL_USER=root
SURREAL_PASSWORD=root
SURREAL_NAMESPACE=test
SURREAL_DATABASE=test
```
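The worker reads these variables from the environment. A quick, standard-library-only way to double-check what it will see (the defaults mirror the `.env` above; the password is deliberately omitted from the printout):

```python
import os

# Resolve the SurrealDB connection settings the way the worker would,
# falling back to the defaults from the .env example above.
config = {
    "url": os.environ.get("SURREAL_URL", "ws://localhost:8000/rpc"),
    "user": os.environ.get("SURREAL_USER", "root"),
    "namespace": os.environ.get("SURREAL_NAMESPACE", "test"),
    "database": os.environ.get("SURREAL_DATABASE", "test"),
}
print(config)
```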
Ensure SurrealDB is running:
```bash
# Using Docker
docker run --rm -p 8000:8000 surrealdb/surrealdb:latest start --user root --pass root
# Or install locally
# See: https://surrealdb.com/install
```
## Quick Start
### 1. Define Commands
Create your commands using the `@command` decorator:
```python
# my_app/tasks.py
from surreal_commands import command, submit_command
from pydantic import BaseModel

class ProcessInput(BaseModel):
    message: str
    uppercase: bool = False

class ProcessOutput(BaseModel):
    result: str
    length: int

@command("process_text")  # Auto-detects app name as "my_app"
def process_text(input_data: ProcessInput) -> ProcessOutput:
    result = input_data.message.upper() if input_data.uppercase else input_data.message
    return ProcessOutput(result=result, length=len(result))

# Alternative: explicit app name
@command("analyze", app="analytics")
def analyze_data(input_data: ProcessInput) -> ProcessOutput:
    return ProcessOutput(result=f"Analyzed: {input_data.message}", length=len(input_data.message))
```
### 2. Submit and Monitor Commands
```python
from surreal_commands import submit_command, wait_for_command_sync

# Submit a command
cmd_id = submit_command("my_app", "process_text", {
    "message": "hello world",
    "uppercase": True
})
print(f"Submitted command: {cmd_id}")

# Wait for completion
result = wait_for_command_sync(cmd_id, timeout=30)
if result.is_success():
    print(f"Result: {result.result}")
```
### 3. Start the Worker
```bash
# Start the worker process (import modules from environment variable)
export SURREAL_COMMANDS_MODULES="tasks"
surreal-commands-worker
# Or specify modules directly via CLI
surreal-commands-worker --import-modules "tasks"
# With debug logging
surreal-commands-worker --debug --import-modules "tasks"
# With custom concurrent task limit
surreal-commands-worker --max-tasks 10 --import-modules "tasks"
# Import multiple modules
surreal-commands-worker --import-modules "tasks,my_app.commands"
```
### 4. Monitor with CLI Tools
```bash
# View command dashboard
surreal-commands-dashboard
# View real-time logs
surreal-commands-logs
```
## Library Structure
```
surreal-commands/
├── apps/                    # Your command applications
│   └── text_utils/          # Example app
│       ├── __init__.py
│       └── commands.py      # Command definitions
├── cli/                     # CLI components
│   ├── __init__.py
│   ├── launcher.py          # Dynamic CLI generator
│   ├── dashboard.py         # (Future) Dashboard UI
│   └── logs.py              # (Future) Log viewer
├── commands/                # Core command system
│   ├── __init__.py
│   ├── command_service.py   # Command lifecycle management
│   ├── executor.py          # Command execution engine
│   ├── loader.py            # Command discovery
│   ├── registry.py          # Command registry (singleton)
│   ├── registry_types.py    # Type definitions
│   └── worker.py            # Worker process
├── repository/              # Database layer
│   └── __init__.py          # SurrealDB helpers
├── cli.py                   # CLI entry point
├── run_worker.py            # Worker entry point
└── .env                     # Environment configuration
```
## Core Components
### Command Registry
- Singleton pattern for global command management
- Stores commands as LangChain Runnables
- Organizes commands by app namespace
### Command Service
- Manages command lifecycle
- Validates arguments against schemas
- Updates command status in real-time
### Worker
- Long-running process polling SurrealDB
- Processes existing commands on startup
- Listens for new commands via LIVE queries
- Configurable concurrency limits
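The concurrency limit can be pictured with a plain `asyncio.Semaphore`. This is only an illustrative sketch of the pattern (the function name `run_with_limit` and the fake command are invented here, not library code):

```python
import asyncio

async def run_with_limit(coros, max_tasks: int):
    # Bound the number of in-flight executions, mirroring --max-tasks.
    sem = asyncio.Semaphore(max_tasks)

    async def guarded(coro):
        async with sem:
            return await coro

    # gather() preserves submission order in its results
    return await asyncio.gather(*(guarded(c) for c in coros))

# Example: ten short "commands", at most three running concurrently
async def fake_command(i):
    await asyncio.sleep(0.01)
    return i

results = asyncio.run(run_with_limit([fake_command(i) for i in range(10)], max_tasks=3))
```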
### Executor
- Handles sync/async command execution
- Type conversion and validation
- Streaming support for long-running tasks
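The validation step can be sketched with plain Pydantic: raw args from the queue are parsed into the command's input model before the command body runs. This is a sketch of the idea, not the executor's actual code (`ProcessInput` here mirrors the Quick Start model):

```python
from pydantic import BaseModel, ValidationError

class ProcessInput(BaseModel):
    message: str
    uppercase: bool = False

# Raw args as they might arrive from the queue
parsed = ProcessInput.model_validate({"message": "hello world", "uppercase": True})

# Invalid args (missing required "message") fail before the command runs
try:
    ProcessInput.model_validate({"uppercase": "maybe"})
    failed = False
except ValidationError:
    failed = True
```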
## Advanced Usage
### Custom Command with Complex Types
```python
import statistics
from typing import List, Optional
from pydantic import BaseModel, Field

class AnalysisInput(BaseModel):
    data: List[float]
    method: str = Field(default="mean", description="Analysis method")
    threshold: Optional[float] = None

class AnalysisOutput(BaseModel):
    result: float
    method_used: str
    items_processed: int
    warnings: List[str] = []

def analyze_data(input_data: AnalysisInput) -> AnalysisOutput:
    # Example analysis logic: dispatch on the requested method
    fn = {"mean": statistics.mean, "median": statistics.median}[input_data.method]
    result = fn(input_data.data)
    warnings = []
    if input_data.threshold is not None and result > input_data.threshold:
        warnings.append(f"result {result} exceeds threshold {input_data.threshold}")
    return AnalysisOutput(
        result=result,
        method_used=input_data.method,
        items_processed=len(input_data.data),
        warnings=warnings,
    )
```
### Async Commands
```python
from langchain_core.runnables import RunnableLambda

async def async_process(input_data: MyInput) -> MyOutput:
    # Async processing
    await some_async_operation()
    return MyOutput(...)

# LangChain handles both sync and async
command = RunnableLambda(async_process)
```
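How a runner can treat sync and async commands uniformly can be sketched with the standard library alone (purely illustrative; the library presumably delegates this to LangChain's Runnable machinery):

```python
import asyncio
import inspect

async def invoke(fn, arg):
    # Await coroutine functions; call plain functions directly.
    if inspect.iscoroutinefunction(fn):
        return await fn(arg)
    return fn(arg)

def shout(text: str) -> str:
    return text.upper()

async def echo(text: str) -> str:
    await asyncio.sleep(0)
    return text

async def main():
    return await invoke(shout, "hi"), await invoke(echo, "there")

sync_result, async_result = asyncio.run(main())
```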
### Working with Execution Context
Commands can access execution metadata (command_id, execution time, etc.) using the **CommandInput** and **CommandOutput** base classes. This is the recommended approach that works with all registration methods.
#### Using CommandInput and CommandOutput
```python
from surreal_commands import command, CommandInput, CommandOutput, ExecutionContext

# Input that can access execution context
class ProcessInput(CommandInput):
    message: str
    uppercase: bool = False

# Output that includes execution metadata
class ProcessOutput(CommandOutput):
    result: str
    # command_id, execution_time, and execution_metadata are inherited

@command("process_with_context")
def process_with_context(input_data: ProcessInput) -> ProcessOutput:
    # Access execution context from input
    ctx = input_data.execution_context

    if ctx:
        command_id = ctx.command_id
        app_name = ctx.app_name
        user_context = ctx.user_context or {}
        user_id = user_context.get("user_id", "anonymous")
    else:
        command_id = "unknown"
        user_id = "anonymous"

    # Process the message
    result = input_data.message.upper() if input_data.uppercase else input_data.message
    result = f"Processed by {user_id}: {result}"

    # Return output - framework automatically populates:
    # - command_id (from execution context)
    # - execution_time (measured by framework)
    # - execution_metadata (additional context info)
    return ProcessOutput(result=result)
```
#### Benefits of the New Pattern
- **Works with all registration methods** (decorator and direct registry.register())
- **Type-safe** with full IDE support
- **Automatic metadata population** in outputs
- **Backward compatible** - existing commands continue to work
- **Clean API** that aligns with LangChain's expectations
#### Base Class Usage Options
1. **CommandInput only**: Access execution context in your command
2. **CommandOutput only**: Get automatic execution metadata in outputs
3. **Both**: Full execution context support with automatic metadata
4. **Neither**: Regular commands without execution context (backward compatible)
```python
# Option 1: Access context only
class MyInput(CommandInput):
    data: str

@command("context_input_only")
def my_command(input_data: MyInput) -> RegularOutput:
    ctx = input_data.execution_context
    # ... use context ...
    return RegularOutput(result="done")

# Option 2: Output metadata only
class MyOutput(CommandOutput):
    result: str

@command("context_output_only")
def my_command(input_data: RegularInput) -> MyOutput:
    # ... process ...
    return MyOutput(result="done")  # Gets auto-populated metadata

# Option 3: Full context support
class MyInput(CommandInput):
    data: str

class MyOutput(CommandOutput):
    result: str

@command("full_context")
def my_command(input_data: MyInput) -> MyOutput:
    # ... access context and get metadata ...
    return MyOutput(result="done")
```
#### ExecutionContext Properties
- `command_id`: Database ID of the command
- `execution_started_at`: When execution began
- `app_name`: Application name
- `command_name`: Command name
- `user_context`: Optional CLI context (user_id, scope, etc.)
#### CommandOutput Auto-populated Fields
- `command_id`: Automatically set from execution context
- `execution_time`: Measured execution time in seconds
- `execution_metadata`: Additional execution information
#### Migration from Old Pattern
If you have commands using the old pattern with `execution_context` as a parameter:
```python
# OLD (causes errors):
def old_command(input_data: MyInput, execution_context: ExecutionContext):
    command_id = execution_context.command_id
    # ...

# NEW (recommended):
class MyInput(CommandInput):
    ...  # your fields here

def new_command(input_data: MyInput):
    command_id = input_data.execution_context.command_id if input_data.execution_context else None
    # ...
```
See `examples/migration_example.py` for detailed migration examples.
## Monitoring
### Check Command Status
```python
# check_results.py
import asyncio
from repository import db_connection

async def check_status():
    async with db_connection() as db:
        commands = await db.query(
            "SELECT * FROM command ORDER BY created DESC LIMIT 10"
        )
        for cmd in commands:
            print(f"{cmd['id']}: {cmd['status']}")

asyncio.run(check_status())
```
### View Logs
```bash
# Worker logs with debug info
uv run python run_worker.py --debug
# Filter logs by level
LOGURU_LEVEL=INFO uv run python run_worker.py
```
## Database Schema
Commands are stored in SurrealDB with the following structure:
```javascript
{
    id: "command:unique_id",
    app: "app_name",
    name: "command_name",
    args: { /* command arguments */ },
    context: { /* optional context */ },
    status: "new" | "running" | "completed" | "failed",
    result: { /* command output */ },
    error_message: "error details if failed",
    created: "2024-01-07T10:30:00Z",
    updated: "2024-01-07T10:30:05Z"
}
```
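The `status` field follows the lifecycle from the Features section (new → running → completed/failed). A small, purely illustrative guard for that state machine (not part of the library):

```python
# Allowed transitions for the command status lifecycle
TRANSITIONS = {
    "new": {"running"},
    "running": {"completed", "failed"},
    "completed": set(),   # terminal
    "failed": set(),      # terminal
}

def can_transition(current: str, target: str) -> bool:
    """Return True if a command may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())
```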
## Development
### Adding New Commands
1. Create a new app directory under `apps/`
2. Define your command models and logic
3. Register with the command registry
4. Restart the worker to pick up new commands
### Running Tests
```bash
# Run tests (when implemented)
uv run pytest
# With coverage
uv run pytest --cov=commands
```
### Debugging
Use the debug mode to see detailed logs:
```bash
# Debug CLI
LOGURU_LEVEL=DEBUG uv run python cli.py text_utils uppercase --text "test"
# Debug Worker
uv run python run_worker.py --debug
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## Future Enhancements
- [ ] Web dashboard for monitoring
- [ ] Command scheduling (cron-like)
- [ ] Priority queues
- [ ] Result callbacks
- [ ] Retry mechanisms
- [ ] Command chaining/workflows
- [ ] Metrics and monitoring
- [ ] REST API endpoint
- [ ] Command result TTL
- [ ] Dead letter queue
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- Inspired by Celery's design patterns
- Built with SurrealDB for real-time capabilities
- Leverages LangChain for flexible command definitions