<img src="https://raw.githubusercontent.com/dialectus-ai/dialectus-engine/main/assets/logo.png" alt="Dialectus Engine" width="350">
<br />
# Dialectus Engine
A Python library for orchestrating AI-powered debates with multi-provider model support.



> **Ready-to-Use CLI:** Want to run debates right away? Check out the [dialectus-cli](https://github.com/dialectus-ai/dialectus-cli) - a command-line interface that uses this engine to run debates locally with a beautiful terminal UI.
## Overview
The Dialectus Engine is a standalone Python library that provides core debate orchestration logic, including participant coordination, turn management, AI judge integration, and multi-provider model support. It's designed to be imported and used by other applications to build debate systems.
**Applications using this engine:**
- **[dialectus-cli](https://github.com/dialectus-ai/dialectus-cli)** - Command-line interface with Rich terminal UI and SQLite storage
- **dialectus-web** - Web application with React frontend and FastAPI backend (private repository)
## Components
- **Core Engine** (`debate_engine/`) - Main debate orchestration logic
- **Models** (`models/`) - AI model provider integrations (Ollama, OpenRouter, Anthropic)
- **Configuration** (`config/`) - System configuration management
- **Judges** (`judges/`) - AI judge implementations with ensemble support
- **Formats** (`formats/`) - Debate format definitions (Oxford, Parliamentary, Socratic, Public Forum)
- **Moderation** (`moderation/`) - Optional content safety system for debate topics
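For orientation, these are the import paths the rest of this README uses (all taken from the Quick Start and Usage Examples below):

```python
# Entry points used throughout this README's examples.
from debate_engine.core import DebateEngine        # debate orchestration
from models.manager import ModelManager            # provider integrations and model registry
from config.settings import AppConfig              # validated system configuration
from judges.factory import JudgeFactory            # single and ensemble judges
from formats.registry import format_registry       # Oxford, Parliamentary, Socratic, Public Forum
from dialectus.engine.moderation import ModerationManager  # optional topic moderation
```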
## Installation
### From PyPI
**Using uv (recommended):**
```bash
uv pip install dialectus-engine
```
**Using pip:**
```bash
pip install dialectus-engine
```
### From Source
**Using uv (recommended, faster):**
```bash
# Clone the repository
git clone https://github.com/dialectus-ai/dialectus-engine.git
cd dialectus-engine
# Install in development mode with all dev dependencies
uv sync
# Or install without dev dependencies
uv pip install -e .
```
**Using pip:**
```bash
# Clone the repository
git clone https://github.com/dialectus-ai/dialectus-engine.git
cd dialectus-engine
# Install in development mode
pip install -e .
# Or install with dev dependencies
pip install -e ".[dev]"
```
### As a Dependency
Add to your `pyproject.toml`:
```toml
[project]
dependencies = [
"dialectus-engine>=0.1.0",
]
```
Or install directly from git:
```bash
# Using uv
uv pip install git+https://github.com/dialectus-ai/dialectus-engine.git@main
# Using pip
pip install git+https://github.com/dialectus-ai/dialectus-engine.git@main
```
## Quick Start
```python
import asyncio

from debate_engine.core import DebateEngine
from models.manager import ModelManager
from config.settings import AppConfig

async def run_debate():
    # Load configuration
    config = AppConfig.from_json_file("debate_config.json")

    # Set up model manager
    model_manager = ModelManager()

    # Register models
    for model_id, model_config in config.models.items():
        model_manager.register_model(model_id, model_config)

    # Create debate engine
    engine = DebateEngine(
        config=config,
        model_manager=model_manager,
    )

    # Run debate
    transcript = await engine.run_debate()
    print(transcript)

asyncio.run(run_debate())
```
## Configuration
The engine uses `debate_config.json` for system configuration. To get started:
```bash
# Linux/Mac: Copy the example configuration
cp debate_config.example.json debate_config.json
# Windows (PowerShell):
# copy debate_config.example.json debate_config.json
# Edit with your settings and API keys
# Linux/Mac: nano debate_config.json
# Windows: notepad debate_config.json
# Or use your preferred editor (VS Code, vim, etc.)
```
Key configuration sections:
- **Models**: Define debate participants with provider, personality, and parameters
- **Providers**: Configure Ollama (local), OpenRouter (cloud), and Anthropic (cloud) settings
- **Judging**: Set evaluation criteria and judge models
- **Debate**: Default topic, format, and word limits
- **Moderation** (optional): Content safety for user-provided topics
For detailed configuration documentation, see [CONFIG_GUIDE.md](CONFIG_GUIDE.md).
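Loading the file yields a validated `AppConfig`, as in the Quick Start. A minimal sketch for sanity-checking your setup — it assumes the per-model fields mirror the JSON keys shown under Provider Setup below:

```python
from config.settings import AppConfig

# Pydantic validation rejects malformed or missing fields at load time.
config = AppConfig.from_json_file("debate_config.json")

# List the configured debate participants (field names assumed from the
# JSON keys in Provider Setup: "name" and "provider").
for model_id, model_cfg in config.models.items():
    print(f"{model_id}: {model_cfg.name} via {model_cfg.provider}")
```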
## Development Workflows
### Running Tests and Type Checking
**Using uv (recommended):**
```bash
# Run tests
uv run pytest
# Type check with Pyright
uv run pyright
# Lint with ruff
uv run ruff check .
# Format with ruff
uv run ruff format .
```
**Using pip:**
```bash
# Ensure dev dependencies are installed
pip install -e ".[dev]"
# Run tests
pytest
# Type check with Pyright
pyright
# Lint and format
ruff check .
ruff format .
```
### Building Distribution
**Using uv:**
```bash
# Build wheel and sdist
uv build
# Install locally from wheel
uv pip install dist/dialectus_engine-*.whl
```
**Using pip:**
```bash
# Build wheel and sdist
python -m build
# Install locally
pip install dist/dialectus_engine-*.whl
```
### Managing Dependencies
**Using uv:**
```bash
# Add a new dependency
# 1. Edit pyproject.toml [project.dependencies] section
# 2. Update lock file and sync environment:
uv lock && uv sync
# Upgrade all dependencies (within version constraints)
uv lock --upgrade
# Upgrade specific package
uv lock --upgrade-package httpx
# Add dev dependency
# 1. Edit pyproject.toml [project.optional-dependencies.dev]
# 2. Run:
uv sync
```
**Using pip:**
```bash
# Add a new dependency
# 1. Edit pyproject.toml dependencies
# 2. Reinstall:
pip install -e ".[dev]"
```
### Why uv?
- **10-100x faster** than pip for installs and resolution
- **Reproducible builds** via `uv.lock` (cross-platform, includes hashes)
- **Python 3.14 ready** - Resolves and installs for new interpreters as they ship, including free-threaded builds
- **Single source of truth** - Dependencies in `pyproject.toml`, lock file auto-generated
- **Compatible** - `pip` still works perfectly with `pyproject.toml`
## Features
### Multi-Provider Model Support
- **Ollama**: Local model management with hardware optimization
- **OpenRouter**: Cloud model access to 100+ models
- **Anthropic**: Direct access to Claude models (3.5 Sonnet, Haiku, Opus, etc.)
- **Async streaming**: Chunk-by-chunk response generation for all providers (see the sketch after this list)
- **Auto-discovery**: Dynamic model listing from all configured providers
- **Caching**: In-memory cache with TTL for model metadata
- **Cost tracking**: Token usage and cost calculation for cloud providers
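The streaming interface sketched below is an assumption, not the confirmed API: it imagines the manager exposing an async generator (here called `stream_response`) that yields text chunks. Check the provider modules in `models/` for the actual method name and signature.

```python
import asyncio
from models.manager import ModelManager

async def stream_demo() -> None:
    manager = ModelManager()
    # Hypothetical async-generator method; the real name and signature may differ.
    async for chunk in manager.stream_response(
        model_id="model_a",
        prompt="Opening statement on: Should AI be regulated?",
    ):
        print(chunk, end="", flush=True)
    print()

asyncio.run(stream_demo())
```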
### Debate Formats
- **Oxford**: Classic opening/rebuttal/closing structure
- **Parliamentary**: British-style government vs. opposition
- **Socratic**: Question-driven dialogue format
- **Public Forum**: American high school debate style
### AI Judge System
- **LLM-based evaluation**: Detailed criterion scoring
- **Ensemble judging**: Aggregate decisions from multiple judges
- **Structured decisions**: JSON-serializable judge results
- **Configurable criteria**: Logic, evidence, persuasiveness, etc.
### Content Moderation (Optional)
- **Multi-provider support**: Ollama (local), OpenRouter, OpenAI moderation API
- **Safety categories**: Harassment, hate speech, violence, sexual content, dangerous activities
- **Flexible deployment**: Enable for production APIs, disable for trusted environments
- **Graceful error handling**: Provider-specific rate limit handling and retry logic
## Architecture
Key architectural principles:
- **Library-first**: Designed to be imported by other applications
- **Provider agnostic**: Support for multiple AI model sources
- **Async by default**: All model interactions are async
- **Type-safe**: Strict Pyright configuration with modern type hints
- **Pydantic everywhere**: All config and data models use Pydantic v2
- **Configurable**: JSON-based configuration with validation
### Technology Stack
- **Python 3.13+** with modern type hints (`X | None`, `list[T]`, `dict[K, V]`)
- **Pydantic v2** for data validation and settings management (see the sketch after this list)
- **OpenAI SDK** for OpenRouter API integration (streaming support)
- **httpx** for async HTTP requests (Ollama provider)
- **asyncio** for concurrent debate operations
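As an illustration of that style (not the engine's actual class definitions), here is a participant config written the way this stack implies — Pydantic v2 with built-in generics and `X | None` unions:

```python
from pydantic import BaseModel, Field

# Illustrative only: a Pydantic v2 model in the project's style, not the
# engine's real ModelConfig. Field constraints validate at construction time.
class ParticipantConfig(BaseModel):
    name: str
    provider: str  # "ollama", "openrouter", or "anthropic"
    personality: str | None = None
    max_tokens: int = Field(default=300, gt=0)
    temperature: float = Field(default=0.7, ge=0.0, le=2.0)

cfg = ParticipantConfig(name="llama3.2:3b", provider="ollama", personality="analytical")
print(cfg.model_dump())  # Pydantic v2 serialization to a plain dict
```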
## Usage Examples
### Listing Available Models
```python
import asyncio
from models.manager import ModelManager

async def list_models():
    manager = ModelManager()
    models = await manager.get_all_models()
    for model_id, model_info in models.items():
        print(f"{model_id}: {model_info.description}")

asyncio.run(list_models())
```
### Running a Custom Format
```python
from formats.registry import format_registry
# Get available formats
formats = format_registry.list_formats()
# Load a specific format
oxford = format_registry.get_format("oxford")
phases = oxford.phases()
```
### Ensemble Judging
```python
from judges.factory import JudgeFactory

# Assumes `config` and `model_manager` from the Quick Start, plus a debate
# `context` from a completed engine run.

# Create judge with multiple models
config.judging.judge_models = ["openthinker:7b", "llama3.2:3b", "qwen2.5:3b"]
judge = JudgeFactory.create_judge(config.judging, model_manager)

# Get aggregated decision (run inside an async context)
decision = await judge.judge_debate(context)
```
### Content Moderation
```python
from dialectus.engine.moderation import ModerationManager, TopicRejectedError

# Create moderation manager
manager = ModerationManager(config.moderation, config.system)

# Validate user-provided topic
user_topic = "Should AI be regulated?"

try:
    result = await manager.moderate_topic(user_topic)
    # Topic is safe, proceed with debate
    print(f"Topic approved with confidence: {result.confidence}")
except TopicRejectedError as e:
    # Topic violates content policy
    print(f"Topic rejected: {e.reason}")
    print(f"Violated categories: {', '.join(e.categories)}")
```
For comprehensive moderation testing and setup instructions, see [MODERATION_TESTING.md](MODERATION_TESTING.md).
## Provider Setup
### Anthropic (Claude Models)
To use Anthropic's Claude models, you'll need an API key:
1. **Get an API key**: Sign up at [console.anthropic.com](https://console.anthropic.com/)
2. **Set your API key** (choose one method):
**Environment variable (recommended):**
```bash
export ANTHROPIC_API_KEY="sk-ant-api03-..."
```
**Or in `debate_config.json`:**
```json
{
  "system": {
    "anthropic": {
      "api_key": "sk-ant-api03-...",
      "base_url": "https://api.anthropic.com/v1",
      "max_retries": 3,
      "timeout": 60
    }
  }
}
```
3. **Configure a model**:
```json
{
  "models": {
    "model_a": {
      "name": "claude-3-5-sonnet-20241022",
      "provider": "anthropic",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  }
}
```
**Available Claude models:**
- `claude-3-5-sonnet-20241022` - Latest, most intelligent (best for debates)
- `claude-3-5-haiku-20241022` - Fastest and most economical
- `claude-3-opus-20240229` - Most capable Claude 3 model
- `claude-3-sonnet-20240229` - Balanced performance
- `claude-3-haiku-20240307` - Budget-friendly option
### OpenRouter
To use OpenRouter's model marketplace:
1. **Get an API key**: Sign up at [openrouter.ai](https://openrouter.ai/)
2. **Set your API key**:
```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
```
3. **Configure a model**:
```json
{
  "models": {
    "model_a": {
      "name": "anthropic/claude-3.5-sonnet",
      "provider": "openrouter",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  }
}
```
### Ollama (Local Models)
To use local models via Ollama:
1. **Install Ollama**: Download from [ollama.com](https://ollama.com/)
2. **Pull models**:
```bash
ollama pull llama3.2:3b
ollama pull qwen2.5:7b
```
3. **Configure**:
```json
{
  "models": {
    "model_a": {
      "name": "llama3.2:3b",
      "provider": "ollama",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  },
  "system": {
    "ollama_base_url": "http://localhost:11434"
  }
}
```
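Before launching a debate against local models, you can verify that the server at `ollama_base_url` is reachable. This sketch uses httpx (the same client the engine's Ollama provider uses) against Ollama's standard `/api/tags` endpoint, which lists pulled models:

```python
import asyncio
import httpx

async def check_ollama(base_url: str = "http://localhost:11434") -> None:
    # GET /api/tags returns {"models": [{"name": ...}, ...]} on a healthy server.
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{base_url}/api/tags")
        resp.raise_for_status()
        names = [m["name"] for m in resp.json().get("models", [])]
        print("Ollama is up; local models:", ", ".join(names) or "(none pulled yet)")

asyncio.run(check_ollama())
```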