Name | liteswarm |
Version | 0.1.1 |
home_page | None |
Summary | A lightweight framework for building AI agent systems |
upload_time | 2024-12-09 23:26:49 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.11 |
license | MIT License Copyright (c) 2024 GlyphyAI Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | ai, agents, llm, swarm, multi-agent, agent-systems, agent-orchestration |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
# LiteSwarm
LiteSwarm is a lightweight, extensible framework for building AI agent systems. It provides a minimal yet powerful foundation for creating both simple chatbots and complex agent teams, with customization possible at every level.
The framework is LLM-agnostic and supports 100+ language models through [litellm](https://github.com/BerriAI/litellm), including:
- OpenAI
- Anthropic (Claude)
- Google (Gemini)
- Azure OpenAI
- AWS Bedrock
- And many more
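When targeting different providers, litellm typically routes requests by the model string, often using a provider prefix. The identifiers below are illustrative assumptions and should be checked against litellm's provider documentation:

```python
# Illustrative litellm-style model identifiers. These exact names and
# prefixes are assumptions to verify against litellm's docs; they are
# not part of LiteSwarm's own API.
MODEL_EXAMPLES = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-5-haiku-20241022",
    "google": "gemini/gemini-1.5-pro",
    "azure": "azure/<your-deployment-name>",
    "bedrock": "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
}
```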
## Quick Navigation
- [Installation](#installation)
- [Requirements](#requirements)
- [Key Features](#key-features)
- [Basic Usage](#basic-usage)
- [Advanced Features](#advanced-features)
- [Key Concepts](#key-concepts)
- [Best Practices](#best-practices)
- [Examples](#examples)
- [Contributing](#contributing)
- [License](#license)
## Installation
Choose your preferred installation method:
Using pip:
```bash
pip install liteswarm
```
Using uv (recommended for faster installation):
```bash
uv pip install liteswarm
```
Using poetry:
```bash
poetry add liteswarm
```
Using pipx (for CLI tools):
```bash
pipx install liteswarm
```
## Requirements
- Python 3.11 or higher
- Async support (asyncio)
- A valid API key for your chosen LLM provider
### API Keys
You can provide your API key in three ways:
1. Through environment variables:
```bash
# For OpenAI
export OPENAI_API_KEY=sk-...
# For Anthropic
export ANTHROPIC_API_KEY=sk-ant-...
# For Google
export GOOGLE_API_KEY=...
```
or using os.environ:
```python
import os
# For OpenAI
os.environ["OPENAI_API_KEY"] = "sk-..."
# For Anthropic
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
# For Google
os.environ["GOOGLE_API_KEY"] = "..."
```
2. Using a `.env` file:
```env
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...
```
3. Using the `LLM` class:
```python
from liteswarm.types import LLM
llm = LLM(
model="gpt-4o",
api_key="sk-...", # or api_base, api_version, etc.
)
```
See [litellm's documentation](https://docs.litellm.ai/docs/providers) for a complete list of supported providers and their environment variables.
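Note that variables in a `.env` file still have to be loaded into the process environment before the provider client can see them; the `python-dotenv` package is a common choice for this. As a rough stdlib-only sketch of what such loading does (illustrative, not part of LiteSwarm):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Read KEY=VALUE lines from a .env-style file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Do not overwrite variables already set in the environment
            os.environ.setdefault(key.strip(), value.strip())
```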
## Key Features
- **Lightweight Core**: Minimal base implementation that's easy to understand and extend
- **LLM Agnostic**: Support for 100+ language models through litellm
- **Flexible Agent System**: Create agents with custom instructions and capabilities
- **Tool Integration**: Easy integration of Python functions as agent tools
- **Structured Outputs**: Built-in support for validating and parsing agent responses
- **Multi-Agent Teams**: Coordinate multiple specialized agents for complex tasks
- **Streaming Support**: Real-time response streaming with customizable handlers
- **Context Management**: Smart handling of conversation history and context
- **Cost Tracking**: Optional tracking of token usage and API costs
## Basic Usage
### Simple Agent
```python
from liteswarm.core import Swarm
from liteswarm.types import LLM, Agent
# Create an agent
agent = Agent(
id="assistant",
instructions="You are a helpful AI assistant.",
llm=LLM(
model="claude-3-5-haiku-20241022",
temperature=0.7,
),
)
# Create swarm and execute
swarm = Swarm()
result = await swarm.execute(
agent=agent,
prompt="Hello!",
)
print(result.content)
```
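The snippets in this README use top-level `await` for brevity (as in a notebook or an async REPL). In a plain Python script, wrap them in an async entry point; this is standard asyncio usage, not LiteSwarm-specific:

```python
import asyncio


async def main() -> None:
    # Place the `await swarm.execute(...)` snippets from this README here,
    # e.g.: result = await swarm.execute(agent=agent, prompt="Hello!")
    ...


if __name__ == "__main__":
    asyncio.run(main())
```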
### Agent with Tools
```python
from liteswarm.core import Swarm
from liteswarm.types import LLM, Agent
def calculate_sum(a: int, b: int) -> int:
"""Calculate the sum of two numbers."""
return a + b
agent = Agent(
id="math_agent",
instructions="Use tools for calculations. Never calculate yourself.",
llm=LLM(
model="claude-3-5-haiku-20241022",
tools=[calculate_sum],
tool_choice="auto",
),
)
# Create swarm and execute
swarm = Swarm()
result = await swarm.execute(
agent=agent,
prompt="What is 2 + 2?",
)
print(result.content)
```
## Advanced Features
### Agent Switching
Agents can dynamically switch to other agents during execution:
```python
from liteswarm.core import Swarm
from liteswarm.types import LLM, Agent, ToolResult
# Create specialized agents
math_agent = Agent(
id="math",
instructions="You are a math expert.",
llm=LLM(model="gpt-4o"),
)
def switch_to_math() -> ToolResult:
"""Switch to math agent for calculations."""
return ToolResult(
content="Switching to math expert",
agent=math_agent,
)
# Create main agent with switching capability
main_agent = Agent(
id="assistant",
instructions="Help users and switch to math agent for calculations.",
llm=LLM(
model="gpt-4o",
tools=[switch_to_math],
tool_choice="auto",
),
)
# Agent will automatically switch when needed
swarm = Swarm()
result = await swarm.execute(
agent=main_agent,
prompt="What is 234 * 567?",
)
```
### Agent Teams
The SwarmTeam class (from `liteswarm.experimental`) provides an experimental framework for orchestrating complex agent workflows with automated planning. It follows a two-phase process:
1. **Planning Phase**:
- Analyzes the prompt to create a structured plan
- Breaks down work into specific tasks with dependencies
- Supports interactive feedback loop for plan refinement
- Validates task types and team capabilities
2. **Execution Phase**:
- Executes tasks in dependency order
- Assigns tasks to capable team members
- Tracks progress and maintains execution state
- Produces an artifact with results and updates
Here's a complete example:
```python
from liteswarm.core import Swarm
from liteswarm.experimental import SwarmTeam
from liteswarm.types import (
LLM,
Agent,
ArtifactStatus,
ContextVariables,
Plan,
PlanFeedbackHandler,
Task,
TaskDefinition,
TeamMember,
)
# 1. Define task types
class ReviewTask(Task):
pr_url: str
review_type: str # "security", "performance", etc.
class ImplementTask(Task):
feature_name: str
requirements: list[str]
# 2. Create task definitions with instructions
review_def = TaskDefinition(
task_schema=ReviewTask,
task_instructions="Review {task.pr_url} focusing on {task.review_type} aspects.",
)
implement_def = TaskDefinition(
task_schema=ImplementTask,
task_instructions="Implement {task.feature_name} following requirements:\n{task.requirements}",
)
# 3. Create specialized agents
review_agent = Agent(
id="reviewer",
instructions="You are a code reviewer focusing on quality and security.",
llm=LLM(model="gpt-4o"),
)
dev_agent = Agent(
id="developer",
instructions="You are a developer implementing new features.",
llm=LLM(model="gpt-4o"),
)
# 4. Create team members with capabilities
team_members = [
TeamMember(
id="senior-reviewer",
agent=review_agent,
task_types=[ReviewTask],
),
TeamMember(
id="backend-dev",
agent=dev_agent,
task_types=[ImplementTask],
),
]
# 5. Create the team
swarm = Swarm(include_usage=True)
team = SwarmTeam(
swarm=swarm,
members=team_members,
task_definitions=[review_def, implement_def],
)
# 6. Optional: Add plan feedback handler
class InteractiveFeedback(PlanFeedbackHandler):
async def handle(
self,
plan: Plan,
prompt: str,
context: ContextVariables | None,
) -> tuple[str, ContextVariables | None] | None:
"""Allow user to review and modify the plan before execution."""
print("\nProposed plan:")
for task in plan.tasks:
print(f"- {task.title}")
if input("\nApprove? [y/N]: ").lower() != "y":
return "Please revise the plan", context
else:
return None
# 7. Execute workflow with planning
artifact = await team.execute(
prompt="Implement a login feature and review it for security",
context=ContextVariables(
pr_url="github.com/org/repo/123",
security_checklist=["SQL injection", "XSS", "CSRF"],
),
feedback_handler=InteractiveFeedback(),
)
# 8. Check results
if artifact.status == ArtifactStatus.COMPLETED:
print("Tasks completed:")
for result in artifact.task_results:
print(f"- {result.task.title}: {result.task.status}")
```
The SwarmTeam will:
1. Create a plan with appropriate tasks and dependencies
2. Allow plan review/modification through feedback handler
3. Execute tasks in correct order using capable team members
4. Produce an artifact containing all results and updates
See `examples/software_team/run.py` for a complete implementation of a development team.
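The "execute tasks in correct order" step amounts to a topological ordering of the plan's task dependency graph. As a standalone illustration of the idea (a sketch, not LiteSwarm's actual scheduler):

```python
from collections import deque


def execution_order(deps: dict[str, list[str]]) -> list[str]:
    """Return task ids ordered so every dependency precedes its dependents."""
    # Number of unmet dependencies per task
    pending = {task: len(d) for task, d in deps.items()}
    # Reverse edges: which tasks are unblocked when a task finishes
    dependents: dict[str, list[str]] = {task: [] for task in deps}
    for task, d in deps.items():
        for dep in d:
            dependents[dep].append(task)

    ready = deque(task for task, n in pending.items() if n == 0)
    order: list[str] = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in dependents[task]:
            pending[nxt] -= 1
            if pending[nxt] == 0:
                ready.append(nxt)

    if len(order) != len(deps):
        raise ValueError("Cyclic task dependencies")
    return order
```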
### Streaming Responses
```python
async for response in swarm.stream(
agent=agent,
prompt="Generate a long response...",
):
print(response.content, end="", flush=True)
```
### Context Variables
```python
result = await swarm.execute(
agent=agent,
prompt="Greet the user",
context_variables=ContextVariables(
user_name="Alice",
language="en",
),
)
```
### Structured Outputs
LiteSwarm provides two layers of structured output handling:
1. **LLM-level Response Format**:
- Set via `response_format` in `LLM` class
- Provider-specific structured output support
- For OpenAI/Anthropic: Direct JSON schema enforcement
- For other providers: Manual prompt engineering
2. **Framework-level Response Format**:
- Set in `TaskDefinition` and `PlanningAgent`
- Provider-agnostic parsing and validation
- Supports both Pydantic models and custom parsers
- Handles response repair and validation
Using Swarm directly with LLM-level response format:
```python
from pydantic import BaseModel
from liteswarm.core.swarm import Swarm
from liteswarm.types import LLM, Agent
class ReviewOutput(BaseModel):
issues: list[str]
approved: bool
agent = Agent(
id="reviewer",
instructions="Review code and provide structured feedback",
llm=LLM(
model="gpt-4o",
response_format=ReviewOutput, # Direct OpenAI JSON schema support
),
)
code = """
def calculate_sum(a: int, b: int) -> int:
\"\"\"Calculate the sum of two numbers.\"\"\"
return a - b
"""
swarm = Swarm()
result = await swarm.execute(
agent=agent,
prompt=f"Review the code and provide structured feedback:\n{code}",
)
# Currently, the content is the raw JSON output from the LLM,
# so we need to parse it manually using a response_format Pydantic model.
output = ReviewOutput.model_validate_json(result.content)
if output.issues:
print("Issues:")
for issue in output.issues:
print(f"- {issue}")
print(f"\nApproved: {output.approved}")
```
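Since parsing is manual here, it can help to defensively strip common wrappers (such as markdown code fences or surrounding prose) before validating. This helper is an illustrative sketch, not part of LiteSwarm; its output dict could then be passed to `ReviewOutput.model_validate`:

```python
import json


def extract_json(content: str) -> dict:
    """Strip common wrappers around a JSON object and parse it."""
    text = content.strip()
    if text.startswith("```"):
        # Drop the opening ```json line and the trailing fence
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    # Fall back to the outermost braces if prose surrounds the object
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("No JSON object found in response")
    return json.loads(text[start : end + 1])
```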
Using SwarmTeam with both layers (recommended for complex workflows):
```python
from typing import Literal
from pydantic import BaseModel
from liteswarm.core import Swarm
from liteswarm.experimental import LitePlanningAgent, SwarmTeam
from liteswarm.types import (
LLM,
Agent,
ArtifactStatus,
ContextVariables,
Plan,
Task,
TaskDefinition,
TeamMember,
)
# Define output schema for code reviews
class CodeReviewOutput(BaseModel):
issues: list[str]
approved: bool
suggested_fixes: list[str]
# Define task type with literal constraints
class ReviewTask(Task):
type: Literal["code-review"]
code: str
language: str
review_type: Literal["general", "security", "performance"]
# Define plan schema for planning agent
class CodeReviewPlan(Plan):
tasks: list[ReviewTask]
# Create dynamic task instructions
def build_review_task_instructions(task: ReviewTask, context: ContextVariables) -> str:
prompt = (
"Review the provided code focusing on {task.review_type} aspects.\n"
"Code to review:\n{task.code}"
)
return prompt.format(task=task)
# Create task definition with response format
review_def = TaskDefinition(
task_schema=ReviewTask,
task_instructions=build_review_task_instructions,
# Framework-level: Used to parse and validate responses
task_response_format=CodeReviewOutput,
)
# Create review agent with LLM-level response format
review_agent = Agent(
id="code-reviewer",
instructions="You are an expert code reviewer.",
llm=LLM(
model="gpt-4o",
# LLM-level: Direct OpenAI JSON schema support
response_format=CodeReviewOutput,
),
)
# Create planning agent with LLM-level response format
planning_agent = Agent(
id="planning-agent",
instructions="You are a planning agent that creates plans for code review tasks.",
llm=LLM(
model="gpt-4o",
# LLM-level: Direct OpenAI JSON schema support
response_format=CodeReviewPlan,
),
)
# Create dynamic planning prompt
PLANNING_PROMPT_TEMPLATE = """
User Request:
<request>{PROMPT}</request>
Code Context:
<code language="{LANGUAGE}" review_type="{REVIEW_TYPE}">
{CODE}
</code>
Please create a review plan consisting of 1 task.
""".strip()
def build_planning_prompt_template(prompt: str, context: ContextVariables) -> str:
code = context.get("code", "")
language = context.get("language", "")
review_type = context.get("review_type", "")
return PLANNING_PROMPT_TEMPLATE.format(
PROMPT=prompt,
CODE=code,
LANGUAGE=language,
REVIEW_TYPE=review_type,
)
# Create team with both layers of structured outputs
swarm = Swarm()
team = SwarmTeam(
swarm=swarm,
members=[
TeamMember(
id="senior-reviewer",
agent=review_agent,
task_types=[ReviewTask],
),
],
task_definitions=[review_def],
planning_agent=LitePlanningAgent(
swarm=swarm,
agent=planning_agent,
prompt_template=build_planning_prompt_template,
task_definitions=[review_def],
# Framework-level: Used to parse planning responses
response_format=CodeReviewPlan,
),
)
# Execute review
code = """
def calculate_sum(a: int, b: int) -> int:
\"\"\"Calculate the sum of two numbers.\"\"\"
    return a - b
"""
artifact = await team.execute(
prompt="Review this Python code",
context=ContextVariables(
code=code,
language="python",
review_type="general",
),
)
# Access structured output
if artifact.status == ArtifactStatus.COMPLETED:
for result in artifact.task_results:
# Output is automatically parsed into CodeReviewOutput
output = result.output
if not isinstance(output, CodeReviewOutput):
raise TypeError(f"Unexpected output type: {type(output)}")
print(f"\nReview by: {result.assignee.id}")
print("\nIssues found:")
for issue in output.issues:
print(f"- {issue}")
print("\nSuggested fixes:")
for fix in output.suggested_fixes:
print(f"- {fix}")
print(f"\nApproved: {output.approved}")
```
This example demonstrates:
1. **LLM-level Format** (Provider-specific):
- `response_format=CodeReviewOutput` in review agent's LLM
- `response_format=CodeReviewPlan` in planning agent's LLM
- OpenAI will enforce JSON schema at generation time
2. **Framework-level Format** (Provider-agnostic):
- `task_response_format=CodeReviewOutput` in task definition
- `response_format=CodeReviewPlan` in planning agent
- Framework handles parsing, validation, and repair
The two-layer approach ensures:
- Structured outputs work with any LLM provider
- Automatic parsing and validation
- Consistent interface across providers
- Fallback to prompt-based formatting
- Response repair capabilities
See `examples/structured_outputs/run.py` for more examples of different structured output strategies.
> **Note about OpenAI Structured Outputs**
>
> OpenAI's JSON schema support has certain limitations:
> - No default values in Pydantic models
> - No `oneOf` in union types (must use discriminated unions)
> - Some advanced Pydantic features may not be supported
>
> While LiteSwarm's base `Task` and `Plan` types are designed to be OpenAI-compatible, this compatibility must be maintained by users when subclassing these types. For example:
>
> ```python
> # OpenAI-compatible task type
> class ReviewTask(Task):
> type: Literal["code-review"] # Discriminator field
> code: str # Required field, no default
> language: str # Required field, no default
>
> # Not OpenAI-compatible - has default value
> review_type: str = "general" # Will work with other providers
> ```
>
> We provide utilities to help maintain compatibility:
> - `liteswarm.utils.pydantic` module contains helpers for:
> - Converting Pydantic schemas to OpenAI format
> - Restoring objects from OpenAI responses
> - Handling schema transformations
>
> See `examples/structured_outputs/strategies/openai_pydantic.py` for practical examples of using these utilities.
>
> Remember: Base `Task` and `Plan` are OpenAI-compatible, but maintaining compatibility in subclasses is the user's responsibility if OpenAI structured outputs are needed.
## Key Concepts
1. **Agent**: An AI entity with specific instructions and capabilities
2. **Tool**: A Python function that an agent can call
3. **Swarm**: Orchestrator for agent interactions and conversations
4. **SwarmTeam**: Coordinator for multiple specialized agents
5. **Context Variables**: Dynamic data passed to agents and tools
6. **Stream Handler**: Interface for real-time response processing
## Best Practices
1. Use `ToolResult` for wrapping tool return values:
```python
def my_tool() -> ToolResult:
return ToolResult(
content="Result",
context_variables=ContextVariables(...)
)
```
2. Implement proper error handling:
```python
try:
result = await team.execute_task(task)
except TaskExecutionError as e:
logger.error(f"Task failed: {e}")
```
3. Use context variables for dynamic behavior:
```python
def get_instructions(context: ContextVariables) -> str:
return f"Help {context['user_name']} with {context['task']}"
```
4. Leverage streaming for real-time feedback:
```python
class MyStreamHandler(SwarmStreamHandler):
async def on_stream(self, delta: Delta, agent: Agent) -> None:
print(delta.content, end="")
```
## Examples
The framework includes several example applications in the `examples/` directory:
- **Basic REPL** (`examples/repl/run.py`): Simple interactive chat interface showing basic agent usage
- **Calculator** (`examples/calculator/run.py`): Tool usage and agent switching with a math-focused agent
- **Mobile App Team** (`examples/mobile_app/run.py`): Complex team of agents (PM, Designer, Engineer, QA) building a Flutter app
- **Parallel Research** (`examples/parallel_research/run.py`): Parallel tool execution for efficient data gathering
- **Structured Outputs** (`examples/structured_outputs/run.py`): Different strategies for parsing structured agent responses
- **Software Team** (`examples/software_team/run.py`): Complete development team with planning, review, and implementation capabilities
Each example demonstrates different aspects of the framework:
```bash
# Run the REPL example
python -m examples.repl.run
# Try the mobile app team
python -m examples.mobile_app.run
# Experiment with structured outputs
python -m examples.structured_outputs.run
```
## Contributing
We welcome contributions to LiteSwarm! We're particularly interested in:
1. **Adding Tests**: We currently have minimal test coverage and welcome contributions to:
- Add unit tests for core functionality
- Add integration tests for agent interactions
- Add example-based tests for common use cases
- Set up testing infrastructure and CI
2. **Bug Reports**: Open an issue describing:
- Steps to reproduce the bug
- Expected vs actual behavior
- Your environment details
- Any relevant code snippets
3. **Feature Requests**: Open an issue describing:
- The use case for the feature
- Expected behavior
- Example code showing how it might work
4. **Code Contributions**:
- Fork the repository
- Create a new branch for your feature
- Include tests for new functionality
- Submit a pull request with a clear description
- Ensure CI passes and code follows our style guide
### Development Setup
```bash
# Clone the repository
git clone https://github.com/your-org/liteswarm.git
cd liteswarm
# Create virtual environment (choose one)
python -m venv .venv
# or
poetry install
# or
uv venv
# Install development dependencies
uv pip install -e ".[dev]"
# or
poetry install --with dev
# Run existing tests (if any)
pytest
# Run type checking
mypy .
# Run linting
ruff check .
```
### Code Style
- We use ruff for linting and formatting
- Type hints are required for all functions
- Docstrings should follow Google style
- New features should include tests
### Testing Guidelines
We're building our test suite and welcome contributions that:
- Add pytest-based tests
- Include both unit and integration tests
- Cover core functionality
- Demonstrate real-world usage
- Help improve test coverage
- Set up testing infrastructure
### Commit Messages
Follow the [Conventional Commits](https://www.conventionalcommits.org/) specification:
- `feat:` New features
- `fix:` Bug fixes
- `docs:` Documentation changes
- `test:` Adding or updating tests
- `refactor:` Code changes that neither fix bugs nor add features
## Citation
If you use LiteSwarm in your research or project, please cite our work:
```bibtex
@software{liteswarm2024,
title = {LiteSwarm: A Lightweight Framework for Building AI Agent Systems},
author = {Evgenii Mozharovskii and {GlyphyAI}},
year = {2024},
url = {https://github.com/glyphyai/liteswarm},
note = {An extensible framework for building AI agent systems with a focus on
structured interactions, LLM-agnostic design, and composable architecture.
Features include multi-agent orchestration, structured outputs, and
provider-agnostic response parsing.}
}
```
## License
MIT License - see LICENSE file for details.
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "liteswarm",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.11",
    "maintainer_email": null,
    "keywords": "ai, agents, llm, swarm, multi-agent, agent-systems, agent-orchestration",
    "author": null,
    "author_email": "Evgenii Mozharovskii <eugene@glyphy.ai>",
    "download_url": "https://files.pythonhosted.org/packages/97/14/f805d0101af28b015778aa6df138116eab8658999f23a132daec9a0e1cdb/liteswarm-0.1.1.tar.gz",
    "platform": null
}
review_type = context.get(\"review_type\", \"\")\n\n return PLANNING_PROMPT_TEMPLATE.format(\n PROMPT=prompt,\n CODE=code,\n LANGUAGE=language,\n REVIEW_TYPE=review_type,\n )\n\n\n# Create team with both layers of structured outputs\nswarm = Swarm()\nteam = SwarmTeam(\n swarm=swarm,\n members=[\n TeamMember(\n id=\"senior-reviewer\",\n agent=review_agent,\n task_types=[ReviewTask],\n ),\n ],\n task_definitions=[review_def],\n planning_agent=LitePlanningAgent(\n swarm=swarm,\n agent=planning_agent,\n prompt_template=build_planning_prompt_template,\n task_definitions=[review_def],\n # Framework-level: Used to parse planning responses\n response_format=CodeReviewPlan,\n ),\n)\n\n# Execute review\ncode = \"\"\"\ndef calculate_sum(a: int, b: int) -> int:\n \\\"\\\"\\\"Calculate the sum of two numbers.\\\"\\\"\\\"\n return a - b\n\"\"\"\n\nartifact = await team.execute(\n prompt=\"Review this Python code\",\n context=ContextVariables(\n code=code,\n language=\"python\",\n review_type=\"general\",\n ),\n)\n\n# Access structured output\nif artifact.status == ArtifactStatus.COMPLETED:\n for result in artifact.task_results:\n # Output is automatically parsed into CodeReviewOutput\n output = result.output\n if not isinstance(output, CodeReviewOutput):\n raise TypeError(f\"Unexpected output type: {type(output)}\")\n\n print(f\"\\nReview by: {result.assignee.id}\")\n print(\"\\nIssues found:\")\n for issue in output.issues:\n print(f\"- {issue}\")\n print(\"\\nSuggested fixes:\")\n for fix in output.suggested_fixes:\n print(f\"- {fix}\")\n print(f\"\\nApproved: {output.approved}\")\n```\n\nThis example demonstrates:\n\n1. **LLM-level Format** (Provider-specific):\n - `response_format=CodeReviewOutput` in review agent's LLM\n - `response_format=CodeReviewPlan` in planning agent's LLM\n - OpenAI will enforce JSON schema at generation time\n\n2. 
**Framework-level Format** (Provider-agnostic):\n - `task_response_format=CodeReviewOutput` in task definition\n - `response_format=CodeReviewPlan` in planning agent\n - Framework handles parsing, validation, and repair\n\nThe two-layer approach ensures:\n- Structured outputs work with any LLM provider\n- Automatic parsing and validation\n- Consistent interface across providers\n- Fallback to prompt-based formatting\n- Response repair capabilities\n\nSee `examples/structured_outputs/run.py` for more examples of different structured output strategies.\n\n> **Note about OpenAI Structured Outputs**\n> \n> OpenAI's JSON schema support has certain limitations:\n> - No default values in Pydantic models\n> - No `oneOf` in union types (must use discriminated unions)\n> - Some advanced Pydantic features may not be supported\n>\n> While LiteSwarm's base `Task` and `Plan` types are designed to be OpenAI-compatible, this compatibility must be maintained by users when subclassing these types. For example:\n>\n> ```python\n> # OpenAI-compatible task type\n> class ReviewTask(Task):\n> type: Literal[\"code-review\"] # Discriminator field\n> code: str # Required field, no default\n> language: str # Required field, no default\n> \n> # Not OpenAI-compatible - has default value\n> review_type: str = \"general\" # Will work with other providers\n> ```\n>\n> We provide utilities to help maintain compatibility:\n> - `liteswarm.utils.pydantic` module contains helpers for:\n> - Converting Pydantic schemas to OpenAI format\n> - Restoring objects from OpenAI responses\n> - Handling schema transformations\n>\n> See `examples/structured_outputs/strategies/openai_pydantic.py` for practical examples of using these utilities.\n>\n> Remember: Base `Task` and `Plan` are OpenAI-compatible, but maintaining compatibility in subclasses is the user's responsibility if OpenAI structured outputs are needed.\n\n## Key Concepts\n\n1. **Agent**: An AI entity with specific instructions and capabilities\n2. 
**Tool**: A Python function that an agent can call\n3. **Swarm**: Orchestrator for agent interactions and conversations\n4. **SwarmTeam**: Coordinator for multiple specialized agents\n5. **Context Variables**: Dynamic data passed to agents and tools\n6. **Stream Handler**: Interface for real-time response processing\n\n## Best Practices\n\n1. Use `ToolResult` for wrapping tool return values:\n ```python\n def my_tool() -> ToolResult:\n return ToolResult(\n content=\"Result\",\n context_variables=ContextVariables(...)\n )\n ```\n\n2. Implement proper error handling:\n ```python\n try:\n result = await team.execute_task(task)\n except TaskExecutionError as e:\n logger.error(f\"Task failed: {e}\")\n ```\n\n3. Use context variables for dynamic behavior:\n ```python\n def get_instructions(context: ContextVariables) -> str:\n return f\"Help {context['user_name']} with {context['task']}\"\n ```\n\n4. Leverage streaming for real-time feedback:\n ```python\n class MyStreamHandler(SwarmStreamHandler):\n async def on_stream(self, delta: Delta, agent: Agent) -> None:\n print(delta.content, end=\"\")\n ```\n\n## Examples\n\nThe framework includes several example applications in the `examples/` directory:\n\n- **Basic REPL** (`examples/repl/run.py`): Simple interactive chat interface showing basic agent usage\n- **Calculator** (`examples/calculator/run.py`): Tool usage and agent switching with a math-focused agent\n- **Mobile App Team** (`examples/mobile_app/run.py`): Complex team of agents (PM, Designer, Engineer, QA) building a Flutter app\n- **Parallel Research** (`examples/parallel_research/run.py`): Parallel tool execution for efficient data gathering\n- **Structured Outputs** (`examples/structured_outputs/run.py`): Different strategies for parsing structured agent responses\n- **Software Team** (`examples/software_team/run.py`): Complete development team with planning, review, and implementation capabilities\n\nEach example demonstrates different aspects of the 
framework:\n```bash\n# Run the REPL example\npython -m examples.repl.run\n\n# Try the mobile app team\npython -m examples.mobile_app.run\n\n# Experiment with structured outputs\npython -m examples.structured_outputs.run\n```\n\n## Contributing\n\nWe welcome contributions to LiteSwarm! We're particularly interested in:\n\n1. **Adding Tests**: We currently have minimal test coverage and welcome contributions to:\n - Add unit tests for core functionality\n - Add integration tests for agent interactions\n - Add example-based tests for common use cases\n - Set up testing infrastructure and CI\n\n2. **Bug Reports**: Open an issue describing:\n - Steps to reproduce the bug\n - Expected vs actual behavior\n - Your environment details\n - Any relevant code snippets\n\n3. **Feature Requests**: Open an issue describing:\n - The use case for the feature\n - Expected behavior\n - Example code showing how it might work\n\n4. **Code Contributions**:\n - Fork the repository\n - Create a new branch for your feature\n - Include tests for new functionality\n - Submit a pull request with a clear description\n - Ensure CI passes and code follows our style guide\n\n### Development Setup\n\n```bash\n# Clone the repository\ngit clone https://github.com/GlyphyAI/liteswarm.git\ncd liteswarm\n\n# Create virtual environment (choose one)\npython -m venv .venv\n# or\npoetry install\n# or\nuv venv\n\n# Install development dependencies\nuv pip install -e \".[dev]\"\n# or\npoetry install --with dev\n\n# Run existing tests (if any)\npytest\n\n# Run type checking\nmypy .\n\n# Run linting\nruff check .\n```\n\n### Code Style\n- We use ruff for linting and formatting\n- Type hints are required for all functions\n- Docstrings should follow Google style\n- New features should include tests\n\n### Testing Guidelines\nWe're building our test suite and welcome contributions that:\n- Add pytest-based tests\n- Include both unit and integration tests\n- Cover core functionality\n- Demonstrate real-world 
usage\n- Help improve test coverage\n- Set up testing infrastructure\n\n### Commit Messages\nFollow the [Conventional Commits](https://www.conventionalcommits.org/) specification:\n- `feat:` New features\n- `fix:` Bug fixes\n- `docs:` Documentation changes\n- `test:` Adding or updating tests\n- `refactor:` Code changes that neither fix bugs nor add features\n\n## Citation\n\nIf you use LiteSwarm in your research or project, please cite our work:\n\n```bibtex\n@software{liteswarm2024,\n title = {LiteSwarm: A Lightweight Framework for Building AI Agent Systems},\n author = {Evgenii Mozharovskii and {GlyphyAI}},\n year = {2024},\n url = {https://github.com/glyphyai/liteswarm},\n note = {An extensible framework for building AI agent systems with a focus on \n structured interactions, LLM-agnostic design, and composable architecture. \n Features include multi-agent orchestration, structured outputs, and \n provider-agnostic response parsing.}\n}\n```\n\n## License\n\nMIT License - see LICENSE file for details.\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2024 GlyphyAI Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "A lightweight framework for building AI agent systems",
"version": "0.1.1",
"project_urls": {
"bug-tracker": "https://github.com/GlyphyAI/liteswarm/issues",
"changelog": "https://github.com/GlyphyAI/liteswarm/blob/main/CHANGELOG.md",
"documentation": "https://github.com/GlyphyAI/liteswarm#readme",
"homepage": "https://github.com/GlyphyAI/liteswarm",
"repository": "https://github.com/GlyphyAI/liteswarm"
},
"split_keywords": [
"ai",
" agents",
" llm",
" swarm",
" multi-agent",
" agent-systems",
" agent-orchestration"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "bcb9f3573456454af00e0a8c546a64303780f21ff1b6b8d251adc655f64198f5",
"md5": "8656083a78891a31e8f3543a8a61ca30",
"sha256": "2b0fd46325720f081d5261f11ed737ed2fabb1751a0baf73ea6276ad939afa12"
},
"downloads": -1,
"filename": "liteswarm-0.1.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "8656083a78891a31e8f3543a8a61ca30",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 108167,
"upload_time": "2024-12-09T23:26:46",
"upload_time_iso_8601": "2024-12-09T23:26:46.775306Z",
"url": "https://files.pythonhosted.org/packages/bc/b9/f3573456454af00e0a8c546a64303780f21ff1b6b8d251adc655f64198f5/liteswarm-0.1.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "9714f805d0101af28b015778aa6df138116eab8658999f23a132daec9a0e1cdb",
"md5": "5b06e5e7c415273fda40b8a424037329",
"sha256": "1ec6acd72fd61d48e30ba02459bb8235f4c7cf3b571eef2a4718562d49247901"
},
"downloads": -1,
"filename": "liteswarm-0.1.1.tar.gz",
"has_sig": false,
"md5_digest": "5b06e5e7c415273fda40b8a424037329",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 97801,
"upload_time": "2024-12-09T23:26:49",
"upload_time_iso_8601": "2024-12-09T23:26:49.358500Z",
"url": "https://files.pythonhosted.org/packages/97/14/f805d0101af28b015778aa6df138116eab8658999f23a132daec9a0e1cdb/liteswarm-0.1.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-09 23:26:49",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "GlyphyAI",
"github_project": "liteswarm",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "liteswarm"
}