<img src="https://raw.githubusercontent.com/dialectus-ai/dialectus-engine/main/assets/logo.png" alt="Dialectus CLI" width="350">
# Dialectus CLI
Command-line interface for the Dialectus AI debate system. Run AI debates locally with Ollama or cloud models via OpenRouter, Anthropic, or OpenAI.
> **Related Project:** This CLI uses the [dialectus-engine](https://github.com/dialectus-ai/dialectus-engine) library for all debate orchestration. Check out the engine repository for the core debate logic, API documentation, and library usage examples.
<img src="https://github.com/user-attachments/assets/fba4d1f8-9561-4971-a2fa-ec24f01865a8" alt="CLI" width=700>
## Installation
### From PyPI
**Using uv (recommended):**
```bash
uv pip install dialectus-cli
```
**Using pip:**
```bash
pip install dialectus-cli
```
### From Source
**Using uv (recommended, faster):**
```bash
# Clone the repository
git clone https://github.com/Dialectus-AI/dialectus-cli
cd dialectus-cli
# Install in development mode with all dev dependencies
uv sync
# Or install without dev dependencies
uv pip install -e .
```
**Using pip:**
```bash
# Clone the repository
git clone https://github.com/Dialectus-AI/dialectus-cli
cd dialectus-cli
# Install in development mode
pip install -e .
# Or install with dev dependencies
pip install -e ".[dev]"
```
## Requirements
- **Python 3.12+**
- **uv** (recommended): Fast Python package manager - [Install uv](https://docs.astral.sh/uv/getting-started/installation/)
- **Ollama** (if using local models): Running at `http://localhost:11434`
- **API keys** (if using cloud models): Set via environment variables
- **OpenAI**: For GPT-4.1, GPT-4o, GPT-4o Mini, etc.
- **Anthropic**: For Claude models (3.5 Sonnet, Haiku, etc.)
- **OpenRouter**: For access to 100+ models including Claude, GPT-4, Llama, etc.
### Environment Variables
```bash
# Linux/macOS
export OPENAI_API_KEY="sk-your-openai-key"
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export OPENROUTER_API_KEY="sk-or-v1-..."
# Windows PowerShell
$env:OPENAI_API_KEY="sk-your-openai-key"
$env:ANTHROPIC_API_KEY="sk-ant-api03-..."
$env:OPENROUTER_API_KEY="sk-or-v1-..."
# Windows CMD
set OPENAI_API_KEY=sk-your-openai-key
set ANTHROPIC_API_KEY=sk-ant-api03-...
set OPENROUTER_API_KEY=sk-or-v1-...
```
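Before starting a debate, you can confirm which keys are actually visible to the CLI. This is a small shell check (Linux/macOS) using the variable names from the list above:

```shell
# Print which provider API keys are set in the current shell.
# An empty value counts as unset.
for var in OPENAI_API_KEY ANTHROPIC_API_KEY OPENROUTER_API_KEY; do
  if [ -n "$(printenv "$var")" ]; then
    echo "$var is set"
  else
    echo "$var is NOT set"
  fi
done
```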
## Quick Start
After installation, the `dialectus` command is available:
```bash
# Copy example config
cp debate_config.example.json debate_config.json
# Edit with your preferred models and API keys
nano debate_config.json # or your favorite editor
# Run a debate
dialectus debate
```
## Configuration
Edit `debate_config.json` to configure:
- **Models**: Debate participants (Ollama, OpenRouter, Anthropic, or OpenAI)
- **Ollama** (local): `"provider": "ollama"`, `"name": "llama3.2:3b"`
- **OpenRouter** (cloud): `"provider": "openrouter"`, `"name": "anthropic/claude-3.5-sonnet"`
- **Anthropic** (direct): `"provider": "anthropic"`, `"name": "claude-3-5-sonnet-20241022"`
- **OpenAI** (direct): `"provider": "openai"`, `"name": "gpt-4o-mini"`
- **Judging**: AI judge models and evaluation criteria
- Use a single judge: `"judge_models": ["openthinker:7b"]`
- Use ensemble judging with multiple judges: `"judge_models": ["openthinker:7b", "llama3.2:3b", "qwen2.5:3b"]`
- The engine aggregates multiple judges using majority voting with consensus analysis
- **System**: Provider settings (Ollama/OpenRouter/Anthropic/OpenAI), topic generation, logging
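Putting the pieces together, a minimal all-local `debate_config.json` might look like this. The field names below are drawn from the examples in this README; the full schema (see `debate_config.example.json`) may support additional options, and the exact placement of `judge_models` is an assumption:

```json
{
  "models": {
    "model_a": {
      "name": "llama3.2:3b",
      "provider": "ollama",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    },
    "model_b": {
      "name": "qwen2.5:7b",
      "provider": "ollama",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  },
  "judge_models": ["openthinker:7b", "llama3.2:3b", "qwen2.5:3b"],
  "system": {
    "ollama_base_url": "http://localhost:11434"
  }
}
```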
## Commands
All commands work identically across platforms. The examples below use `uv run` for a source checkout; if you installed from PyPI, invoke `dialectus` directly:
### Start a Debate
```bash
uv run dialectus debate
uv run dialectus debate --topic "Should AI be regulated?"
uv run dialectus debate --format oxford
uv run dialectus debate --interactive
```
### List Available Models
```bash
uv run dialectus list-models
```
### View Saved Transcripts
```bash
uv run dialectus transcripts
uv run dialectus transcripts --limit 50
```
## Database
Transcripts are saved to a SQLite database at `~/.dialectus/debates.db`.
## Provider Setup
### OpenAI (GPT Models)
**Use OpenAI's native API for GPT-4.1, GPT-4o, GPT-4o Mini, and more:**
1. **Get an API key**: Create one at [platform.openai.com](https://platform.openai.com/)
2. **Set your API key** (choose one method):
**Environment variable (recommended):**
```bash
export OPENAI_API_KEY="sk-your-openai-key"
```
**Or in `debate_config.json`:**
```json
{
  "system": {
    "openai": {
      "api_key": "sk-your-openai-key",
      "base_url": "https://api.openai.com/v1",
      "max_retries": 3,
      "timeout": 60
    }
  }
}
```
3. **Configure models** using OpenAI model IDs:
```json
{
  "models": {
    "model_a": {
      "name": "gpt-4o-mini",
      "provider": "openai",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  }
}
```
**Popular OpenAI models:**
- `gpt-4.1` – frontier reasoning with multimodal support
- `gpt-4.1-mini` – cost-efficient GPT-4.1 variant
- `gpt-4o` – balanced quality and speed
- `gpt-4o-mini` – fast, low-cost assistant model
### Anthropic (Claude Models)
**Direct access to Claude models with official Anthropic API:**
1. **Get an API key**: Sign up at [console.anthropic.com](https://console.anthropic.com/)
2. **Set your API key** (choose one method):
**Environment variable (recommended):**
```bash
export ANTHROPIC_API_KEY="sk-ant-api03-..."
```
**Or in `debate_config.json`:**
```json
{
  "system": {
    "anthropic": {
      "api_key": "sk-ant-api03-...",
      "base_url": "https://api.anthropic.com/v1",
      "max_retries": 3,
      "timeout": 60
    }
  }
}
```
3. **Configure models** using official model names:
```json
{
  "models": {
    "model_a": {
      "name": "claude-3-5-sonnet-20241022",
      "provider": "anthropic",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  }
}
```
**Available Claude models:**
- `claude-3-5-sonnet-20241022` - Latest, most intelligent (best for debates)
- `claude-3-5-haiku-20241022` - Fastest and most economical
- `claude-3-opus-20240229` - Most capable Claude 3 model
- `claude-3-sonnet-20240229` - Balanced performance
- `claude-3-haiku-20240307` - Budget-friendly option
### OpenRouter
**Access to 100+ models including Claude, GPT-4, Llama, and more:**
1. **Get an API key**: Sign up at [openrouter.ai](https://openrouter.ai/)
2. **Set your API key**:
```bash
export OPENROUTER_API_KEY="sk-or-v1-..."
```
3. **Configure models** using OpenRouter's naming:
```json
{
  "models": {
    "model_a": {
      "name": "anthropic/claude-3.5-sonnet",
      "provider": "openrouter",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  }
}
```
### Ollama (Local Models)
**Run models locally without any API keys:**
1. **Install Ollama**: Download from [ollama.com](https://ollama.com/)
2. **Pull models**:
```bash
ollama pull llama3.2:3b
ollama pull qwen2.5:7b
```
3. **Configure**:
```json
{
  "models": {
    "model_a": {
      "name": "llama3.2:3b",
      "provider": "ollama",
      "personality": "analytical",
      "max_tokens": 300,
      "temperature": 0.7
    }
  },
  "system": {
    "ollama_base_url": "http://localhost:11434"
  }
}
```
## Architecture
```
CLI → DebateRunner → DebateEngine → Rich Console
            ↓
     SQLite Database
```
- **No API layer** - Imports [dialectus-engine](https://github.com/dialectus-ai/dialectus-engine) directly as a Python library
- **Local-first** - Runs completely offline with Ollama
- **SQLite storage** - Simple, portable database
For more details on the core engine implementation, see the [dialectus-engine repository](https://github.com/dialectus-ai/dialectus-engine).
## Development
### Running Tests and Type Checking
**Using uv (recommended):**
```bash
# Run tests
uv run pytest
# Run tests with verbose output
uv run pytest -v
# Run with coverage
uv run pytest --cov=dialectus
# Type check with Pyright
uv run pyright
# Lint with ruff
uv run ruff check .
# Format with ruff
uv run ruff format .
```
**Using pip:**
```bash
# Ensure dev dependencies are installed
pip install -e ".[dev]"
# Run tests
pytest
# Type check with Pyright
pyright
# Lint and format
ruff check .
ruff format .
```
### Building Distribution
**Using uv:**
```bash
# Build wheel and sdist
uv build
# Install locally from wheel
uv pip install dist/dialectus_cli-*.whl
```
**Using pip:**
```bash
# Build wheel and sdist
python -m build
# Install locally
pip install dist/dialectus_cli-*.whl
```
### Managing Dependencies
**Using uv:**
```bash
# Add a new dependency
# 1. Edit pyproject.toml [project.dependencies] section
# 2. Update lock file and sync environment:
uv lock && uv sync
# Upgrade all dependencies (within version constraints)
uv lock --upgrade
# Upgrade specific package
uv lock --upgrade-package rich
# Add dev dependency
# 1. Edit pyproject.toml [project.optional-dependencies.dev]
# 2. Run:
uv sync
```
**Using pip:**
```bash
# Add a new dependency
# 1. Edit pyproject.toml dependencies
# 2. Reinstall:
pip install -e ".[dev]"
```
### Why uv?
- **10-100x faster** than pip for installs and resolution
- **Reproducible builds** via `uv.lock` (cross-platform, includes hashes)
- **Python 3.14 ready** - Resolves and installs the newest CPython releases, including free-threaded builds
- **Single source of truth** - Dependencies in `pyproject.toml`, lock file auto-generated
- **Compatible** - `pip` still works perfectly with `pyproject.toml`
## License
MIT (open source)