# Cadence 🤖 Multi-agents AI Framework
A plugin-based multi-agent conversational AI framework built on FastAPI, designed for building intelligent chatbot
systems with an extensible, modular plugin architecture.

## 🚀 Features
- **Multi-Agent Orchestration**: Intelligent routing and coordination between AI agents
- **Plugin System**: Extensible architecture for custom agents and tools
- **Parallel Tool Execution**: Concurrent tool calls for improved performance and efficiency
- **Multi-LLM Support**: OpenAI, Anthropic, Google AI, and more
- **Flexible Storage**: PostgreSQL, Redis, MongoDB, and in-memory backends
- **REST API**: FastAPI-based API with automatic documentation
- **Streamlit UI**: Built-in web interface for testing and management
- **Docker Support**: Containerized deployment with Docker Compose
## 📦 Installation & Usage
### 🎯 For End Users (Quick Start)
**Install the package:**
```bash
pip install cadence-py
```
**Verify installation:**
```bash
# Check if cadence is available
python -m cadence --help
# Should show available commands and options
```
**Run the application:**
```bash
# Start the API server
python -m cadence start api
# Start with custom host/port
python -m cadence start api --host 0.0.0.0 --port 8000
# Start the Streamlit UI
python -m cadence start ui
# Start both API and UI
python -m cadence start all
```
**Available commands:**
```bash
# Show help
python -m cadence --help
# Show status
python -m cadence status
# Manage plugins
python -m cadence plugins
# Show configuration
python -m cadence config
# Health check
python -m cadence health
```
### 🛠️ For Developers (Build from Source)
If you want to contribute, develop plugins, or customize the framework:
#### Prerequisites
- Python 3.13+
- Poetry (for dependency management)
- Docker (optional, for containerized deployment)
#### Development Setup
1. **Clone the repository**
```bash
git clone https://github.com/jonaskahn/cadence.git
cd cadence
```
2. **Install dependencies**
```bash
poetry install
poetry install --with local # Include local SDK development
```
3. **Set up environment variables**
```bash
cp .env.example .env
# Edit .env with your API keys and configuration
```
4. **Run the application**
```bash
poetry run python -m cadence start api
```
## ⚙️ Configuration
### Environment Variables
All configuration is done through environment variables with the `CADENCE_` prefix:
```bash
# LLM Provider Configuration
CADENCE_DEFAULT_LLM_PROVIDER=openai
CADENCE_OPENAI_API_KEY=your-openai-key
CADENCE_ANTHROPIC_API_KEY=your-claude-key
CADENCE_GOOGLE_API_KEY=your-gemini-key
# Storage Configuration
CADENCE_CONVERSATION_STORAGE_BACKEND=memory # or postgresql
CADENCE_POSTGRES_URL=postgresql://user:pass@localhost/cadence
# Plugin Configuration
CADENCE_PLUGINS_DIR=["./plugins/src/cadence_plugins"]
# Server Configuration
CADENCE_API_HOST=0.0.0.0
CADENCE_API_PORT=8000
CADENCE_DEBUG=true
# Advanced Configuration
CADENCE_MAX_AGENT_HOPS=25
CADENCE_GRAPH_RECURSION_LIMIT=50
# Session Management
CADENCE_SESSION_TIMEOUT=3600
CADENCE_MAX_SESSION_HISTORY=100
```
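If you want to inspect which `CADENCE_`-prefixed variables are set, a minimal sketch using only the standard library (this helper is illustrative, not part of the Cadence API):

```python
import os


def cadence_settings(environ=None) -> dict:
    """Collect CADENCE_-prefixed variables into a plain dict,
    stripping the prefix and lowercasing the key."""
    environ = os.environ if environ is None else environ
    prefix = "CADENCE_"
    return {
        key[len(prefix):].lower(): value
        for key, value in environ.items()
        if key.startswith(prefix)
    }


# Example with a fake environment:
fake_env = {"CADENCE_API_PORT": "8000", "CADENCE_DEBUG": "true", "PATH": "/usr/bin"}
print(cadence_settings(fake_env))  # {'api_port': '8000', 'debug': 'true'}
```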
### Configuration File
You can also use a `.env` file for local development:
```bash
# .env
CADENCE_DEFAULT_LLM_PROVIDER=openai
CADENCE_OPENAI_API_KEY=your_actual_openai_api_key_here
CADENCE_ANTHROPIC_API_KEY=your_actual_claude_api_key_here
CADENCE_GOOGLE_API_KEY=your_actual_gemini_api_key_here
CADENCE_APP_NAME="Cadence 🤖 Multi-agents AI Framework"
CADENCE_DEBUG=false
CADENCE_PLUGINS_DIR=./plugins/src/cadence_example_plugins
CADENCE_API_HOST=0.0.0.0
CADENCE_API_PORT=8000
# For production, you might want to use PostgreSQL
CADENCE_CONVERSATION_STORAGE_BACKEND=postgresql
CADENCE_POSTGRES_URL=postgresql://user:pass@localhost/cadence
# For development, you can use the built-in UI
CADENCE_UI_HOST=0.0.0.0
CADENCE_UI_PORT=8501
# Agent Graph Configuration
CADENCE_MAX_AGENT_HOPS=25
CADENCE_GRAPH_RECURSION_LIMIT=50
# Parallel Tool Calls Configuration
# Individual agents can control parallel tool execution in their constructor:
# super().__init__(metadata, parallel_tool_calls=True) # Enable (default)
# super().__init__(metadata, parallel_tool_calls=False) # Disable
```
## 🚀 Usage
### Command Line Interface
Cadence provides a comprehensive CLI for management tasks:
```bash
# Start the server
python -m cadence start api --host 0.0.0.0 --port 8000
# Show status
python -m cadence status
# Manage plugins
python -m cadence plugins
# Show configuration
python -m cadence config
# Health check
python -m cadence health
```
### API Usage
The framework exposes a REST API for programmatic access:
```python
import requests
# Send a message
response = requests.post(
    "http://localhost:8000/api/v1/chat",
    json={
        "message": "Hello, how are you?",
        "user_id": "user123",
        "org_id": "org456",
    },
)

print(response.json())
```
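For robustness you may want basic error handling around the call; a hedged sketch using only the standard library (the `/api/v1/chat` endpoint and payload fields are taken from the example above):

```python
import json
import urllib.error
import urllib.request


def build_chat_payload(message: str, user_id: str, org_id: str) -> dict:
    """Assemble the request body expected by /api/v1/chat."""
    return {"message": message, "user_id": user_id, "org_id": org_id}


def send_chat(base_url: str, payload: dict) -> dict:
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.URLError as exc:
        raise RuntimeError(f"Cadence API unreachable: {exc}") from exc


payload = build_chat_payload("Hello, how are you?", "user123", "org456")
# send_chat("http://localhost:8000", payload)  # requires a running API server
```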
### Plugin Development
Create custom agents and tools using the Cadence SDK with enhanced routing capabilities:
```python
from cadence_sdk import BaseAgent, BasePlugin, PluginMetadata, tool


class MyPlugin(BasePlugin):
    @staticmethod
    def get_metadata() -> PluginMetadata:
        return PluginMetadata(
            name="my_agent",
            version="1.0.0",
            description="My custom AI agent",
            capabilities=["custom_task"],
            agent_type="specialized",
            dependencies=["cadence_sdk>=1.0.2,<2.0.0"],
        )

    @staticmethod
    def create_agent() -> BaseAgent:
        return MyAgent(MyPlugin.get_metadata())


class MyAgent(BaseAgent):
    def __init__(self, metadata: PluginMetadata):
        super().__init__(metadata)

    def get_tools(self):
        from .tools import my_custom_tool
        return [my_custom_tool]

    def get_system_prompt(self) -> str:
        return "You are a helpful AI assistant."

    @staticmethod
    def should_continue(state: dict) -> str:
        """Routing decision: continue with tool execution or return to the coordinator.

        This mirrors the actual Cadence SDK implementation: it checks whether the
        agent's last response contains tool calls and routes accordingly.
        """
        last_msg = state.get("messages", [])[-1] if state.get("messages") else None
        if not last_msg:
            return "back"

        tool_calls = getattr(last_msg, "tool_calls", None)
        return "continue" if tool_calls else "back"


# Parallel Tool Calls Support
# BaseAgent supports parallel tool execution for improved performance
class MyParallelAgent(BaseAgent):
    def __init__(self, metadata: PluginMetadata):
        # Enable parallel tool calls (default: True)
        super().__init__(metadata, parallel_tool_calls=True)

    def get_tools(self):
        return [my_tool1, my_tool2, my_tool3]

    def get_system_prompt(self) -> str:
        return "You are an agent that can execute multiple tools in parallel."


@tool
def my_custom_tool(input_data: str) -> str:
    """A custom tool for specific operations."""
    return f"Processed: {input_data}"
```
**Enhanced Features:**
- **Intelligent Routing**: Agents automatically decide when to use tools or return to coordinator
- **Fake Tool Calls**: Consistent routing flow even when agents answer directly
- **No Circular Routing**: Eliminated infinite loops through proper edge configuration
- **Better Debugging**: Clear routing decisions and comprehensive logging
**Key Implementation Details:**
- **`should_continue` is a static method**: Uses `@staticmethod` decorator
- **Automatic fake tool calls**: The SDK automatically creates fake "back" tool calls when agents answer directly
- **Consistent routing**: All responses go through the same flow regardless of whether tools are used
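The routing contract described above can be exercised without a running server. A self-contained sketch that reimplements `should_continue` with the same logic as the snippet (stand-in message objects, no `cadence_sdk` import):

```python
from types import SimpleNamespace


def should_continue(state: dict) -> str:
    """Same routing logic as the SDK method shown above."""
    last_msg = state.get("messages", [])[-1] if state.get("messages") else None
    if not last_msg:
        return "back"
    tool_calls = getattr(last_msg, "tool_calls", None)
    return "continue" if tool_calls else "back"


# A response with pending tool calls keeps the agent loop running...
with_tools = SimpleNamespace(tool_calls=[{"name": "my_custom_tool"}])
assert should_continue({"messages": [with_tools]}) == "continue"

# ...while a direct answer (or an empty state) routes back to the coordinator.
direct_answer = SimpleNamespace(tool_calls=None)
assert should_continue({"messages": [direct_answer]}) == "back"
assert should_continue({}) == "back"
print("routing checks passed")
```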
## 🐳 Docker Deployment
### Quick Start with Docker Compose
```bash
# Start all services
docker-compose -f docker/compose.yaml up -d
# View logs
docker-compose -f docker/compose.yaml logs -f
# Stop services
docker-compose -f docker/compose.yaml down
```
### Custom Docker Build
```bash
# Build the image
./build.sh
# Run the container
docker run -p 8000:8000 ifelsedotone/cadence:latest
```
## 🧪 Testing
Run the test suite to ensure everything works correctly:
```bash
# Install test dependencies
poetry install --with dev
# Run tests
poetry run pytest
# Run with coverage
poetry run pytest --cov=src/cadence
# Run specific test categories
poetry run pytest -m "unit"
poetry run pytest -m "integration"
```
## 📚 Documentation
- [Quick Start Guide](docs/getting-started/quick-start.md)
- [Architecture Overview](docs/concepts/architecture.md)
- [Plugin Development](docs/plugins/overview.md)
- [API Reference](docs/api/)
- [Deployment Guide](docs/deployment/)
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](docs/contributing/development.md) for details.
### Development Setup
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 🙏 Acknowledgments
- Built on [FastAPI](https://fastapi.tiangolo.com/) for high-performance APIs
- Powered by [LangChain](https://langchain.com/) and [LangGraph](https://langchain.com/langgraph) for AI orchestration
- UI built with [Streamlit](https://streamlit.io/) for rapid development
- Containerized with [Docker](https://www.docker.com/) for easy deployment
## 📞 Support
- **Issues**: [GitHub Issues](https://github.com/jonaskahn/cadence/issues)
- **Discussions**: [GitHub Discussions](https://github.com/jonaskahn/cadence/discussions)
- **Documentation**: [Read the Docs](https://cadence.readthedocs.io/)
---
**Made with ❤️ by the Cadence AI Team**