# Gleitzeit
A workflow orchestration system for coordinating LLM tasks, Python code execution, and tool integrations.
Supports parallel task execution, dependency management, and batch file processing.
## Quick Start
Get up and running with Gleitzeit in 5 minutes!
### Prerequisites
- Python 3.8 or higher
- Ollama installed (for LLM features)
- Redis (optional, for production persistence)
- Docker (optional, for isolated Python execution)
### Installation
```bash
git clone https://github.com/leifmarkthaler/gleitzeit.git
cd gleitzeit
uv pip install -e .   # or: pip install -e .
```
### Step 1: Start Ollama
```bash
# Start Ollama server
ollama serve
# In another terminal, pull a model
ollama pull llama3.2
```
### Step 2: Create Your First Workflow
Create `hello_workflow.yaml`:
```yaml
name: "Hello World Workflow"
tasks:
- id: "greeting"
method: "llm/chat"
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "Say hello and tell me an interesting fact!"
- id: "followup"
method: "llm/chat"
dependencies: ["greeting"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "That's interesting! Now tell me more about: ${greeting.response}"
```
### Step 3: Run the Workflow
**Using CLI**
```bash
gleitzeit run hello_workflow.yaml
```
**Or using Python**
```python
import asyncio
from gleitzeit import GleitzeitClient

async def main():
    async with GleitzeitClient() as client:
        result = await client.run_workflow("hello_workflow.yaml")

asyncio.run(main())
```
## Core Concepts
### Protocols & Providers
- **Protocols**: Define standardized interfaces (LLM, Python, MCP)
- **Providers**: Implement protocol methods (OllamaProvider, PythonProvider, MCPHubProvider)
- **Registry**: Maps methods to providers and validates calls
### Resource Management
- **Hubs**: Manage compute resources (OllamaHub for LLM servers, DockerHub for containers)
- **ResourceManager**: Orchestrates multiple hubs and allocates resources
- **Auto-discovery**: Automatically finds available Ollama instances (see the config sketch after this list)
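Auto-discovery can be tuned through the `ollama` section of `~/.gleitzeit/config.yaml`; a minimal sketch using the same keys shown in the Configuration section below:

```yaml
# ~/.gleitzeit/config.yaml — Ollama auto-discovery (same keys as the Configuration section)
ollama:
  discovery_ports: [11434, 11435, 11436]   # ports scanned for running Ollama servers
  auto_discover: true                       # set to false to skip scanning
```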
### Workflow Execution
- **ExecutionEngine**: Central orchestrator for workflow execution
- **TaskQueue**: Manages task scheduling with dependency resolution
- **Parallel Execution**: Independent tasks run concurrently
- **Parameter Substitution**: Pass results between tasks using `${task_id.field}` (see the example after this list)
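For example, a downstream task can substitute an upstream task's response directly into its own parameters:

```yaml
tasks:
  - id: "summarize"
    method: "llm/chat"
    parameters:
      model: "llama3.2"
      messages:
        - role: "user"
          content: "Summarize the history of Unix in two sentences"

  - id: "translate"
    method: "llm/chat"
    dependencies: ["summarize"]      # runs only after "summarize" completes
    parameters:
      model: "llama3.2"
      messages:
        - role: "user"
          content: "Translate this into German: ${summarize.response}"
```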
### Persistence
Gleitzeit includes a unified persistence layer with automatic fallback (see the snippet after this list):
- **Redis** (if available) - High performance
- **SQLite** (fallback) - Local database
- **Memory** (last resort) - In-process storage
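To pin a specific backend instead of relying on the fallback order, set the persistence environment variables described in the Configuration section; a minimal sketch:

```bash
# Pin the persistence backend (auto|redis|sql|memory) instead of auto-fallback
export GLEITZEIT_PERSISTENCE_TYPE=redis
export GLEITZEIT_REDIS_URL=redis://localhost:6379
```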
## Python Client
### Using GleitzeitClient
```python
from gleitzeit import GleitzeitClient

async def examples():
    # Auto-detects API or native mode
    async with GleitzeitClient() as client:
        result = await client.run_workflow("workflow.yaml")

    # Force API mode (uses the REST API)
    async with GleitzeitClient(mode="api") as client:
        ...

    # Force native mode (direct execution engine)
    async with GleitzeitClient(mode="native") as client:
        ...
```
### How It Works Internally
The `GleitzeitClient` handles all the complexity for you. When you use it in `native` mode, it automatically:
1. Creates and configures the ExecutionEngine
2. Registers all necessary providers
3. Starts the engine (no manual start needed!)
4. Submits your workflow
5. Handles cleanup on exit
Here's what happens under the hood:
```python
# This is what GleitzeitClient does internally (you don't need to do this!)
async with GleitzeitClient(mode="native") as client:
    # The client automatically:
    # - creates the ExecutionEngine
    # - registers providers (Ollama, Python, MCP, etc.)
    # - starts the engine
    # Now you just submit workflows:
    result = await client.run_workflow("workflow.yaml")
    # The engine is already running, so the workflow executes automatically.
```
### Available Client Methods
```python
# Run workflows
result = await client.run_workflow("workflow.yaml")
result = await client.run_workflow(workflow_dict)

# Chat with LLMs (via Ollama)
response = await client.chat("Hello", model="llama3.2")

# Execute Python scripts
result = await client.execute_python_script("script.py", args={"key": "value"})

# Batch process files
results = await client.batch_process(
    directory="docs",
    pattern="*.txt",
    prompt="Summarize",
    model="llama3.2"
)

# Direct task execution
task_result = await client.execute_task(task)
```
### Creating and Submitting Tasks Programmatically
```python
from gleitzeit import GleitzeitClient

async with GleitzeitClient() as client:
    # Submit an individual task
    result = await client.execute_task({
        "method": "llm/chat",
        "parameters": {
            "model": "llama3.2",
            "messages": [{"role": "user", "content": "Hello!"}]
        }
    })

    # Or create a workflow programmatically
    workflow = {
        "name": "My Dynamic Workflow",
        "tasks": [
            {
                "id": "task1",
                "method": "llm/chat",
                "parameters": {
                    "model": "llama3.2",
                    "messages": [{"role": "user", "content": "Write a haiku"}]
                }
            },
            {
                "id": "task2",
                "method": "python/execute",
                "dependencies": ["task1"],
                "parameters": {
                    "code": "print('Task 1 result:', '${task1.response}')"
                }
            }
        ]
    }

    # Submit the workflow
    results = await client.run_workflow(workflow)
```
## Workflow Examples
### Basic Workflow with Dependencies
```yaml
name: "Analysis Pipeline"
tasks:
- id: "load_data"
method: "python/execute"
parameters:
script: "scripts/load_data.py"
args:
input: "data.csv"
- id: "analyze"
method: "llm/chat"
dependencies: ["load_data"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "Analyze this data: ${load_data.result}"
- id: "save_results"
method: "python/execute"
dependencies: ["analyze"]
parameters:
script: "scripts/save_results.py"
args:
content: "${analyze.response}"
output: "report.md"
```
### Chain Task Results
Create a story by chaining LLM responses:
```yaml
name: "Story Chain"
tasks:
- id: "character"
method: "llm/chat"
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "Create a unique character for a story in one sentence"
- id: "setting"
method: "llm/chat"
dependencies: ["character"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "Create a setting for this character: ${character.response}"
- id: "plot"
method: "llm/chat"
dependencies: ["character", "setting"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: |
Write a short story plot with:
Character: ${character.response}
Setting: ${setting.response}
```
### MCP (Model Context Protocol) Integration
Use external MCP server tools (requires server configuration):
```yaml
name: "MCP Tools Example"
tasks:
# Read file using filesystem MCP server
- id: "read_config"
method: "mcp/tool.fs.read"
parameters:
path: "./config.json"
# Write file using filesystem MCP server
- id: "save_output"
method: "mcp/tool.fs.write"
dependencies: ["read_config"]
parameters:
path: "./output.json"
content: "Processed: ${read_config.content}"
# Combine with LLM for analysis
- id: "analyze"
method: "llm/chat"
dependencies: ["read_config"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: "Analyze this configuration: ${read_config.content}"
```
### Multi-Model Workflow
Use different models for different tasks:
```yaml
name: "Multi-Model Analysis"
tasks:
- id: "fast_response"
method: "llm/chat"
parameters:
model: "llama3.2:1b" # Fast small model
messages:
- role: "user"
content: "Quick summary of quantum computing"
- id: "detailed_response"
method: "llm/chat"
parameters:
model: "llama3.2:7b" # Larger model for detail
messages:
- role: "user"
content: "Explain quantum computing in detail with examples"
- id: "combine"
method: "llm/chat"
dependencies: ["fast_response", "detailed_response"]
parameters:
model: "llama3.2"
messages:
- role: "user"
content: |
Combine these two explanations into one comprehensive summary:
Quick: ${fast_response.response}
Detailed: ${detailed_response.response}
```
## Supported Protocols
### LLM Protocol (`llm/v1`)
**Provider**: OllamaProvider

**Methods**:
- `llm/chat` - Text generation with conversation history
- `llm/vision` - Image analysis with vision models
- `llm/generate` - Direct text generation
- `llm/embeddings` - Generate text embeddings
**Models**: Any Ollama model (llama3.2, mistral, codellama, llava, etc.)
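The chat method is shown throughout this README; a vision task presumably uses the same task shape, but the parameter for attaching images is not documented here, so the `images` key in this sketch is an assumption:

```yaml
# Sketch of an llm/vision task — the "images" parameter name is assumed, not confirmed
tasks:
  - id: "describe_image"
    method: "llm/vision"
    parameters:
      model: "llava"               # vision-capable Ollama model
      images: ["photo.jpg"]        # assumed parameter name; check the provider docs
      messages:
        - role: "user"
          content: "Describe what is shown in this image"
```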
### Python Protocol (`python/v1`)
**Provider**: PythonProvider

**Methods**:
- `python/execute` - Execute Python script files
- `python/validate` - Validate Python syntax
- `python/info` - Get provider information
**Security**: Scripts run in subprocess isolation or Docker containers
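A short sketch of both methods: the `python/execute` task with `script` and `args` matches the Analysis Pipeline example above, while the `python/validate` parameter shown here (`code`) is an assumption:

```yaml
tasks:
  # Run a script file with arguments (same shape as the Analysis Pipeline example)
  - id: "run_script"
    method: "python/execute"
    parameters:
      script: "scripts/load_data.py"
      args:
        input: "data.csv"

  # Check syntax without running anything — the "code" parameter name is assumed
  - id: "check_syntax"
    method: "python/validate"
    parameters:
      code: "print('hello')"
```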
### MCP Protocol (`mcp/v1`)
**Provider**: MCPHubProvider

**Methods**:
- `mcp/tool.*` - Execute MCP tools from registered servers
- `mcp/tools/list` - List available tools
- `mcp/servers` - List MCP servers
- `mcp/ping` - Health check
**External Servers**: Any MCP-compliant server (stdio/websocket/HTTP)

**Note**: Configure servers in `~/.gleitzeit/config.yaml` or via environment variables
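Besides the `mcp/tool.*` calls shown in the MCP workflow example, the discovery and health methods can be called as ordinary tasks; a sketch assuming they take no parameters:

```yaml
tasks:
  # List all tools exposed by registered MCP servers (assumed to take no parameters)
  - id: "available_tools"
    method: "mcp/tools/list"
    parameters: {}

  # Health-check the MCP hub (assumed to take no parameters)
  - id: "hub_health"
    method: "mcp/ping"
    parameters: {}
```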
## CLI Commands
```bash
# Run workflows
gleitzeit run workflow.yaml
gleitzeit run workflow.yaml --local # Force native mode
gleitzeit run workflow.yaml --watch # Watch for changes
# Check status
gleitzeit status
gleitzeit status --resources
# Batch processing
gleitzeit batch documents --pattern "*.txt" --prompt "Summarize"
# Configuration
gleitzeit config show
gleitzeit config set default_model llama3.2
# Start API server
gleitzeit serve --port 8000
```
## Resource Hubs
### OllamaHub
Manages Ollama LLM server instances:
- Auto-discovers running instances on configurable ports
- Health monitoring and metrics collection
- Model-aware load balancing
- Connection pooling for performance
### DockerHub (Optional)
Manages Docker containers for isolated Python execution:
- Container lifecycle management
- Resource limits enforcement
- Security isolation
### MCPHub
Manages MCP (Model Context Protocol) server instances:
- Supports stdio, WebSocket, and HTTP connections
- Automatic tool discovery and registration
- Health monitoring and auto-restart
- Tool routing and load balancing
- Configurable via YAML or environment variables
## Deployment Modes
### Development Mode
```python
# Direct execution engine, no server needed
client = GleitzeitClient(mode="native")
```
### Production Mode
```bash
# Start API server
gleitzeit serve --port 8000
```

```python
# Client connects to the API server
client = GleitzeitClient(mode="api", api_host="localhost", api_port=8000)
```
### Auto Mode (Default)
```python
# Automatically uses API if available, otherwise native
client = GleitzeitClient() # mode="auto" is default
```
## Configuration
### Config File (`~/.gleitzeit/config.yaml`)
```yaml
default_model: llama3.2
ollama:
  discovery_ports: [11434, 11435, 11436]
  auto_discover: true
persistence:
  type: auto
  redis:
    url: redis://localhost:6379
batch:
  max_concurrent: 5
  max_file_size: 1048576
mcp:
  auto_discover: true
  servers:
    - name: "filesystem"
      connection_type: "stdio"
      command: ["npx", "-y", "@modelcontextprotocol/server-filesystem"]
      tool_prefix: "fs."
```
### Environment Variables
```bash
# Ollama settings
export GLEITZEIT_OLLAMA_URL=http://localhost:11434
export GLEITZEIT_DEFAULT_MODEL=llama3.2
# Persistence
export GLEITZEIT_PERSISTENCE_TYPE=auto # auto|redis|sql|memory
export GLEITZEIT_REDIS_URL=redis://localhost:6379
export GLEITZEIT_SQL_DB_PATH=~/.gleitzeit/workflows.db
# API server
export GLEITZEIT_API_HOST=0.0.0.0
export GLEITZEIT_API_PORT=8000
```
## Advanced Features
### Parallel Task Execution
Tasks without dependencies run concurrently:
```yaml
tasks:
  - id: "task1"       # Runs immediately
    method: "llm/chat"
  - id: "task2"       # Runs in parallel with task1
    method: "llm/chat"
  - id: "combine"     # Waits for both
    dependencies: ["task1", "task2"]
    method: "python/execute"
```
### Batch Processing
Process multiple files in parallel:
#### Create Test Files
```bash
mkdir documents
echo "Python is a great language" > documents/python.txt
echo "JavaScript powers the web" > documents/javascript.txt
echo "Rust is fast and safe" > documents/rust.txt
```
#### Using CLI
```bash
gleitzeit batch documents \
--pattern "*.txt" \
--prompt "Summarize this file and rate the programming language mentioned from 1-10"
```
#### Using Python API
```python
results = await client.batch_process(
    directory="documents",
    pattern="**/*.txt",   # Recursive
    prompt="Extract key points",
    model="llama3.2",
    max_concurrent=10
)
```
#### Batch Workflow
```yaml
name: "Batch Document Analysis"
type: "batch"
batch:
directory: "documents"
pattern: "*.txt"
template:
method: "llm/chat"
model: "llama3.2"
messages:
- role: "user"
content: "Analyze this document and provide a summary"
```
### Dynamic Workflows with Python
Create workflows programmatically:
```python
import asyncio
from gleitzeit import GleitzeitClient

async def dynamic_workflow():
    async with GleitzeitClient() as client:
        # Generate a question
        question = await client.execute_task({
            "method": "llm/chat",
            "parameters": {
                "model": "llama3.2",
                "messages": [
                    {"role": "user", "content": "Generate a random question about science"}
                ]
            }
        })

        # Answer the generated question
        answer = await client.execute_task({
            "method": "llm/chat",
            "parameters": {
                "model": "llama3.2",
                "messages": [
                    {"role": "user", "content": f"Answer this: {question['response']}"}
                ]
            }
        })

        # Fact-check the answer
        verification = await client.execute_task({
            "method": "llm/chat",
            "parameters": {
                "model": "llama3.2",
                "messages": [
                    {"role": "user",
                     "content": f"Is this answer correct? {answer['response']}"}
                ]
            }
        })

        return {
            "question": question['response'],
            "answer": answer['response'],
            "verification": verification['response']
        }

result = asyncio.run(dynamic_workflow())
print(result)
```
### Error Handling & Retries
```yaml
tasks:
  - id: "resilient_task"
    method: "llm/chat"
    retry:
      max_attempts: 3
      delay: 2
      exponential_backoff: true
    parameters:
      timeout: 30
```
## Testing
```bash
# Run all tests
pytest
# Run specific test suites
pytest tests/unit/
pytest tests/integration/
pytest tests/workflows/
# Test with real execution
python tests/workflow_test_suite.py --execute
```
## Common Issues & Solutions
### Ollama Connection Issues
```bash
# Check if Ollama is running
curl http://localhost:11434/api/tags
# Restart Ollama
killall ollama
ollama serve
```
### Workflow Debugging
```bash
# Enable debug mode
export GLEITZEIT_DEBUG=true
gleitzeit run workflow.yaml
# Check task details
gleitzeit status --verbose
```
### Performance Tips
- Use the `--local` flag to force native mode during development
- Configure Redis for production persistence
- Adjust `max_concurrent` for batch processing based on available resources (see the sketch after this list)
- Use smaller models (e.g., `llama3.2:1b`) for simple tasks
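For example, batch processing with a small model and a capped concurrency level, using the `batch_process` call shown earlier (the values here are illustrative, not recommendations):

```python
# Illustrative batch settings: small model, capped concurrency
results = await client.batch_process(
    directory="documents",
    pattern="*.txt",
    prompt="Summarize in one sentence",
    model="llama3.2:1b",   # smaller model for simple per-file tasks
    max_concurrent=4,      # tune to available CPU/GPU capacity
)
```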
## Documentation
- [Installation](docs/installation.md) - Detailed installation guide
- [Core Concepts](docs/concepts.md) - Understand the architecture
- [Workflows](docs/workflows.md) - Creating complex workflows
- [MCP Integration](docs/mcp.md) - Model Context Protocol support
- [CLI Reference](docs/cli.md) - Command-line interface
- [Python API](docs/api.md) - Complete API reference
- [Providers](docs/providers.md) - Available providers and creating custom ones
- [Configuration](docs/configuration.md) - Configuration options
- [Troubleshooting](docs/troubleshooting.md) - Common issues and solutions
## Requirements
- Python 3.8+
- Ollama (for LLM operations)
- Redis (optional, for persistence)
- Docker (optional, for isolated Python execution)
## License
MIT