# Pulsar Workflow Engine v0.1
A declarative workflow engine for orchestrating AI agents in YAML-defined workflows. Supports multi-provider AI models with conditional logic, template rendering, and async execution.
## Features
- **YAML-based Workflows**: Define complex AI agent orchestrations using simple YAML syntax
- **Multi-Provider Support**: OpenAI, Anthropic Claude, and local Ollama models
- **Conditional Branching**: Dynamic workflow paths based on agent responses
- **Template Rendering**: Jinja2-based variable substitution and dynamic prompts
- **Async Execution**: Concurrent agent execution with progress tracking
- **State Management**: Dot-notation access with history tracking
- **CLI Interface**: Rich terminal UI with history, validation, and status commands
- **Plugin System**: Extensible architecture for custom step handlers
- **Error Handling**: Comprehensive retry logic with exponential backoff
## Installation
### Option 1: Standalone Executable (Recommended)
Download and run the standalone executable; no Python installation or other dependencies are required:
```bash
# Download and run the installer
curl -fsSL https://raw.githubusercontent.com/lsalihi/pulsar-compose/main/install-executable.sh | bash
# Or download manually and run
wget https://github.com/lsalihi/pulsar-compose/releases/latest/download/pulsar-compose-linux-x64.tar.gz
tar -xzf pulsar-compose-linux-x64.tar.gz
sudo cp -r pulsar /usr/local/bin/
sudo chmod +x /usr/local/bin/pulsar
```
**Available executables:**
- Linux x64: `pulsar-compose-linux-x64.tar.gz`
- macOS x64: `pulsar-compose-macos-x64.tar.gz`
- macOS ARM64: `pulsar-compose-macos-arm64.tar.gz`
### Option 2: Docker Container
Run Pulsar Compose in a container:
```bash
# Pull and run
docker run -it --rm lsalihi/pulsar-compose:latest pulsar --help
# Mount workflows
docker run -v $(pwd):/workflows -it --rm lsalihi/pulsar-compose:latest pulsar run /workflows/workflow.yml
```
### Option 3: PyPI Package
Install from PyPI (requires Python 3.9+):
```bash
pip install pulsar-compose
```
### Option 4: From Source
For development or custom builds:
#### Prerequisites
- Python 3.9+
- Poetry (for dependency management)
- Ollama (for local AI models, optional)
#### Setup
```bash
# Clone and install
git clone https://github.com/lsalihi/pulsar-compose.git
cd pulsar-compose
poetry install
# Install Ollama (optional, for local models)
curl -fsSL https://ollama.ai/install.sh | sh
ollama pull llama2 # or your preferred model
```
## Quick Start
### 1. Create a Simple Workflow
Create `examples/simple_chain.yml`:
```yaml
version: "0.1"
name: "Simple AI Chain"
description: "A basic workflow with two AI agents"
steps:
  - name: "analyze"
    type: "agent"
    agent:
      provider: "local"
      model: "llama2"
      prompt: "Analyze this topic: {{input}}"
    output: "analysis"

  - name: "summarize"
    type: "agent"
    agent:
      provider: "local"
      model: "llama2"
      prompt: "Summarize this analysis: {{analysis}}"
    output: "summary"
```
### 2. Run the Workflow
```bash
# Validate the workflow
pulsar validate examples/simple_chain.yml
# Execute with input
pulsar run examples/simple_chain.yml --input "artificial intelligence"
# View execution history
pulsar logs
# Check status
pulsar status
```
## Workflow Specification
### Basic Structure
```yaml
version: "0.1"
name: "My Workflow"
description: "Workflow description"
variables:            # optional global variables
  temperature: 0.7
steps:
  - name: "step1"
    type: "agent"
    # ... step configuration
```
### Step Types
#### Agent Steps
Execute AI models with custom prompts:
```yaml
- name: "generate_ideas"
  type: "agent"
  agent:
    provider: "openai"    # or "anthropic" or "local"
    model: "gpt-4"        # provider-specific model name
    prompt: "Generate ideas for: {{input}}"
    temperature: 0.8      # optional
    max_tokens: 1000      # optional
  output: "ideas"         # variable to store the result
```
#### Conditional Steps
Branch workflow based on conditions:
```yaml
- name: "check_quality"
  type: "condition"
  condition: "{{len(ideas.split()) > 10}}"
  then: "high_quality"
  else: "low_quality"
```
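The `condition` string is a template expression evaluated against the workflow state. As a rough standalone illustration (not the engine's actual evaluator, which per the feature list is Jinja2-based), the branch choice can be sketched with a hypothetical `evaluate_condition` helper that strips the `{{ }}` markers and evaluates the inner expression in a restricted namespace:

```python
# Illustrative sketch only: evaluate a {{...}} condition against state.
def evaluate_condition(expr: str, state: dict) -> bool:
    inner = expr.strip().removeprefix("{{").removesuffix("}}").strip()
    # Expose the state variables plus a few safe builtins to the expression.
    namespace = {"len": len, **state}
    return bool(eval(inner, {"__builtins__": {}}, namespace))

state = {"ideas": "one two three four five six seven eight nine ten eleven"}
branch = ("high_quality"
          if evaluate_condition("{{len(ideas.split()) > 10}}", state)
          else "low_quality")
print(branch)  # high_quality
```

Here the eleven-word `ideas` string makes the condition true, so execution would jump to the `high_quality` step.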
### Variables and Templates
Use Jinja2 templating for dynamic content:
```yaml
variables:
  system_prompt: "You are a helpful assistant"
  user_name: "Alice"

steps:
  - name: "greet"
    type: "agent"
    agent:
      provider: "local"
      model: "llama2"
      prompt: |
        {{system_prompt}}

        Hello {{user_name}}, how can I help you today?
```
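The rendering step itself is simple to picture. This minimal sketch approximates `{{variable}}` substitution with a regex; the real engine uses Jinja2, which also supports filters and expressions, so treat this as an illustration of the behavior rather than the implementation:

```python
# Approximate {{variable}} substitution for plain variable names.
import re

def render(template: str, variables: dict) -> str:
    def sub(match: re.Match) -> str:
        name = match.group(1)
        # Leave unknown placeholders untouched rather than erroring.
        return str(variables.get(name, match.group(0)))
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

prompt = "{{system_prompt}}\n\nHello {{user_name}}, how can I help you today?"
print(render(prompt, {"system_prompt": "You are a helpful assistant",
                      "user_name": "Alice"}))
```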
### State Access
Access workflow state using dot notation:
```yaml
# Access nested data
condition: "{{results.agent1.score > 0.8}}"
# Access lists
condition: "{{len(history.steps) > 5}}"
# Access previous outputs
prompt: "Previous result: {{previous_step.output}}"
```
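Assuming the state is held as nested dictionaries, a dot-notation path like `results.agent1.score` resolves by walking one key per segment. The `get_path` helper below is illustrative, not part of the Pulsar API:

```python
# Sketch: resolve a dot-notation path over nested workflow state.
from typing import Any

def get_path(state: dict, path: str) -> Any:
    value: Any = state
    for part in path.split("."):
        value = value[part]  # raises KeyError if the path is missing
    return value

state = {"results": {"agent1": {"score": 0.92}},
         "history": {"steps": ["analyze", "summarize"]}}
print(get_path(state, "results.agent1.score"))  # 0.92
print(len(get_path(state, "history.steps")))    # 2
```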
## Configuration
### Environment Variables
Set API keys and configuration:
```bash
# OpenAI
export OPENAI_API_KEY="your-key-here"
# Anthropic
export ANTHROPIC_API_KEY="your-key-here"
# Ollama (local)
export OLLAMA_BASE_URL="http://localhost:11434"
```
### CLI Configuration
The CLI supports various commands:
```bash
# Initialize a new workflow
pulsar workflow init my-workflow.yml
# List available workflows
pulsar workflow list
# Run with custom variables
pulsar run workflow.yml --var temperature=0.9 --var model=gpt-4
# Get help
pulsar --help
```
## Advanced Features
### Error Handling
Built-in retry logic with exponential backoff:
```yaml
steps:
  - name: "unreliable_agent"
    type: "agent"
    agent:
      provider: "openai"
      model: "gpt-4"
      prompt: "Process: {{input}}"
    retry:
      attempts: 3
      backoff: 2.0
    output: "result"
```
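The `attempts` and `backoff` fields map onto a standard retry loop: on each failure the delay before the next attempt is multiplied by the backoff factor. A minimal async sketch of that behavior (illustrative, not the engine's actual code; `base_delay` is an assumed starting delay):

```python
# Sketch: retry with exponential backoff (3 attempts, factor 2.0).
import asyncio

async def run_with_retry(call, attempts: int = 3, backoff: float = 2.0,
                         base_delay: float = 1.0):
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return await call()
        except Exception:
            if attempt == attempts:
                raise                    # out of attempts: surface the error
            await asyncio.sleep(delay)   # wait, then try again
            delay *= backoff             # 1s, 2s, 4s, ...

async def demo():
    calls = {"n": 0}
    async def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient")
        return "ok"
    return await run_with_retry(flaky, attempts=3, backoff=2.0, base_delay=0.01)

print(asyncio.run(demo()))  # ok
```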
### Async Execution
Steps can run concurrently when independent:
```yaml
steps:
  - name: "parallel_task_1"
    type: "agent"
    # ... config
    depends_on: []    # no dependencies

  - name: "parallel_task_2"
    type: "agent"
    # ... config
    depends_on: []    # no dependencies

  - name: "combine_results"
    type: "agent"
    depends_on: ["parallel_task_1", "parallel_task_2"]
    # ... combine the results
```
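The `depends_on` scheduling above can be sketched with plain `asyncio`: each step waits for its dependencies' completion events, so the two independent tasks run concurrently and the combiner runs last. The step bodies are stand-ins for agent calls, and this is an illustration of the pattern rather than Pulsar's scheduler:

```python
# Sketch: dependency-ordered concurrent execution of workflow steps.
import asyncio

async def run_steps(steps: dict):
    """steps maps name -> (list of dependency names, simulated duration)."""
    done = {name: asyncio.Event() for name in steps}
    order = []

    async def run(name: str):
        deps, duration = steps[name]
        # Block until every declared dependency has finished.
        await asyncio.gather(*(done[d].wait() for d in deps))
        await asyncio.sleep(duration)   # stand-in for an agent call
        order.append(name)
        done[name].set()

    await asyncio.gather(*(run(name) for name in steps))
    return order

steps = {
    "parallel_task_1": ([], 0.02),
    "parallel_task_2": ([], 0.01),
    "combine_results": (["parallel_task_1", "parallel_task_2"], 0.0),
}
order = asyncio.run(run_steps(steps))
print(order[-1])  # combine_results
```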
### Plugin System
Extend functionality with custom step handlers:
```python
from step_handlers.base import BaseStepHandler


class CustomHandler(BaseStepHandler):
    async def execute(self, step, state, context):
        # Custom logic here
        pass
```
## Examples
See the `examples/` directory for complete workflows:
- `simple_chain.yml`: Basic agent chaining
- `conditional_workflow.yml`: Branching logic example
## Development
### Running Tests
```bash
# Unit tests
poetry run pytest tests/ -v
# Integration tests (requires Ollama)
poetry run python test_ollama.py
```
### Project Structure
```
pulsar-compose/
├── models/ # Pydantic models and validation
├── agents/ # AI provider implementations
├── engine/ # Workflow execution engine
├── step_handlers/ # Step type handlers
├── cli/ # Command-line interface
├── tests/ # Unit and integration tests
└── examples/ # Sample workflows
```
### Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
MIT License; see the LICENSE file for details.
## Support
For issues and questions:
- Check the examples in `examples/`
- Review the test files for usage patterns
- Open an issue on GitHub