# just-agents
[![Tests](https://github.com/longevity-genie/just-agents/actions/workflows/run_tests.yaml/badge.svg)](https://github.com/longevity-genie/just-agents/actions/workflows/run_tests.yaml)
[![PyPI version](https://badge.fury.io/py/just-agents-core.svg)](https://badge.fury.io/py/just-agents-core)
A lightweight, straightforward library for LLM agents - no over-engineering, just simplicity!
## Quick Start
```bash
pip install just-agents-core
```
## 🎯 Motivation
Most existing agentic libraries are severely over-engineered, either directly or through the over-engineered libraries they use under the hood, such as langchain and llamaindex.
In reality, interactions with LLMs are mostly about strings, and you can write your own templates with plain f-strings and Python's native string templates.
There is no need for complicated chain-like classes and other abstractions; in fact, popular libraries create complexity partly to sell you paid monitoring services for LLM calls, because otherwise it is extremely hard to understand what exactly is sent to the LLM.
It is far easier to reason about your code if you keep your prompts in separate, easily readable files (such as YAML files) instead of mixing them into the Python code.
We wrote this library out of frustration with that complexity; we wanted something controlled and simple.
You might point out that we lack an ecosystem of, for example, tools and loaders. In reality, most langchain tools are just simple functions wrapped in classes; you can always take a quick look at one and write a plain function that does the same thing, which just-agents will pick up easily.
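For instance, the kind of string-based templating described above needs nothing beyond the standard library; the template text and names here are purely illustrative:

```python
from string import Template

# A prompt kept as a plain, readable template -- no framework classes needed.
PROMPT = Template("You are $role. Answer the following question briefly: $question")

def render_prompt(role: str, question: str) -> str:
    """Fill the template with concrete values before sending it to an LLM."""
    return PROMPT.substitute(role=role, question=question)

print(render_prompt("a debate moderator", "What is the key issue today?"))
# -> You are a debate moderator. Answer the following question briefly: What is the key issue today?
```

Templates like this can live in a YAML file and be loaded at startup, keeping prompts out of the code entirely.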
## ✨ Key Features
- 🪶 Lightweight and simple implementation
- 📝 Easy-to-understand agent interactions
- 🔧 Customizable prompts using YAML files
- 🤖 Support for various LLM models through litellm, including DeepSeek R1 and OpenAI o3-mini (see [full list here](https://models.litellm.ai/))
- 🔄 Chain of Thought reasoning with function calls
## 📚 Documentation & Tutorials
### Interactive Tutorials (Google Colab)
- [Basic Agents Tutorial](https://github.com/longevity-genie/just-agents/blob/main/examples/notebooks/01_just_agents_colab.ipynb)
- [Database Agent Tutorial](https://github.com/longevity-genie/just-agents/blob/main/examples/notebooks/02_sqlite_example.ipynb)
- [Coding Agent Tutorial](https://github.com/longevity-genie/just-agents/blob/main/examples/notebooks/03_coding_agent.ipynb)
Note: tutorials are updated less often than the code, so check the code for the most recent examples.
### Example Code
Browse our [examples](https://github.com/longevity-genie/just-agents/tree/main/examples) directory for:
- 🔰 Basic usage examples
- 💻 Code generation and execution
- 🛠️ Tool integration examples
- 👥 Multi-agent interactions
## 🚀 Installation
### Quick Install
```bash
pip install just-agents-core
```
### Development Setup
1. Clone the repository:
```bash
git clone git@github.com:longevity-genie/just-agents.git
cd just-agents
```
2. Set up the environment:
We use Poetry for dependency management. First, [install Poetry](https://python-poetry.org/docs/#installation) if you haven't already.
```bash
# Install dependencies using Poetry
poetry install
# Activate the virtual environment
poetry shell
```
3. Configure API keys:
```bash
cp .env.example .env
# Edit .env with your API keys:
# OPENAI_API_KEY=your_key_here
# GROQ_API_KEY=your_key_here
```
## 🏗️ Architecture
### Core Components
1. **BaseAgent**: A thin wrapper around litellm for LLM interactions
2. **ChatAgent**: An agent that wraps BaseAgent and adds a role, goal, and task
3. **ChainOfThoughtAgent**: Extended agent with reasoning capabilities
4. **WebAgent**: An agent designed to be served as an OpenAI-compatible REST API endpoint
### ChatAgent
The `ChatAgent` class represents an agent with a specific role, goal, and task. Here's an example of a moderated debate between political figures:
```python
from just_agents.base_agent import ChatAgent
from just_agents.llm_options import LLAMA3_3, LLAMA3_3_specdec

# Initialize agents with different roles
Harris = ChatAgent(
    llm_options=LLAMA3_3,
    role="You are Kamala Harris in a presidential debate",
    goal="Win the debate with clear, concise responses",
    task="Respond briefly and effectively to debate questions"
)

Trump = ChatAgent(
    llm_options=LLAMA3_3_specdec,
    role="You are Donald Trump in a presidential debate",
    goal="Win the debate with your signature style",
    task="Respond briefly and effectively to debate questions"
)

Moderator = ChatAgent(
    llm_options={
        "model": "groq/mixtral-8x7b-32768",
        "api_base": "https://api.groq.com/openai/v1",
        "temperature": 0.0,
        "tools": []
    },
    role="You are a neutral debate moderator",
    goal="Ensure a fair and focused debate",
    task="Generate clear, specific questions about key political issues"
)

exchanges = 2

# Run the debate
for _ in range(exchanges):
    question = Moderator.query("Generate a concise debate question about a current political issue.")
    print(f"\nMODERATOR: {question}\n")

    trump_reply = Trump.query(question)
    print(f"TRUMP: {trump_reply}\n")

    harris_reply = Harris.query(f"Question: {question}\nTrump's response: {trump_reply}")
    print(f"HARRIS: {harris_reply}\n")

# Get debate summary
debate = str(Harris.memory.messages)
summary = Moderator.query(f"Summarise the following debate in less than 30 words: {debate}")
print(f"SUMMARY:\n{summary}")
```
This example demonstrates how multiple agents can interact in a structured debate format, each with their own role,
goal, and task. The moderator agent guides the conversation while two political figures engage in a debate.
All prompts that we use are stored in YAML files that you can easily override.
### WebAgent
With the single command `run-agent`, you can instantly serve one or several agents as an OpenAI-compatible REST API endpoint:
```bash
# Basic usage
run-agent agent_profiles.yaml
```
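A profile file of the kind `run-agent` consumes might look like the sketch below; the exact schema is defined by the library, so treat the field names here as illustrative rather than authoritative:

```yaml
# agent_profiles.yaml -- illustrative sketch, field names are assumptions
agent_profiles:
  my_assistant:
    class: WebAgent
    llm_options:
      model: groq/llama-3.3-70b-versatile
      temperature: 0.0
    system_prompt: |
      You are a helpful assistant that answers briefly and precisely.
```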
This lets you expose an agent as if it were a regular LLM model, which is especially useful when you want to use the agent in a web application or a chat interface.
We recently released [just-chat](https://github.com/antonkulaga/just-chat), which lets you set up a chat interface around your WebAgent with a single command.
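Because the endpoint speaks the OpenAI chat-completions protocol, any OpenAI client can talk to it. A stdlib-only sketch of building such a request (the port and agent name are assumptions; check your `run-agent` output):

```python
import json
import urllib.request

def chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for the served agent."""
    payload = {
        "model": model,  # the agent name acts as the model name
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# The port and agent name below are assumptions for illustration.
req = chat_request("http://localhost:8088", "my_assistant", "Hello!")
# urllib.request.urlopen(req)  # uncomment once the server is running
```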
## Chain of Thought Agent
The `ChainOfThoughtAgent` class extends the capabilities of our agents by allowing them to use reasoning steps.
Here's an example:
```python
from just_agents.patterns.chain_of_throught import ChainOfThoughtAgent
from just_agents import llm_options

def count_letters(character: str, word: str) -> str:
    """Returns the number of character occurrences in the word."""
    count = 0
    for char in word:
        if char == character:
            count += 1
    print(f"Function: {character} occurs in {word} {count} times.")
    return str(count)

# Initialize agent with tools and LLM options
agent = ChainOfThoughtAgent(
    tools=[count_letters],
    llm_options=llm_options.LLAMA3_3
)

# Optional: Add callback to see all messages
agent.memory.add_on_message(lambda message: print(message))

# Get result and reasoning chain
result, chain = agent.think("Count the number of occurrences of the letter 'L' in the word - 'LOLLAPALOOZA'.")
```
This example shows how a Chain of Thought agent can use a custom function to count letter occurrences in a word. The agent can
reason about the problem and use the provided tool to solve it.
## 📦 Package Structure
- `just_agents`: Core library
- `just_agents_coding`: Sandbox containers and code execution agents
- `just_agents_examples`: Usage examples
- `just_agents_tools`: Reusable agent tools
- `just_agents_web`: OpenAI-compatible REST API endpoints
## 🔒 Sandbox Execution
The `just_agents_coding` package provides secure containers for code execution:
- 📦 Sandbox container
- 🧬 Biosandbox container
- 🌐 Websandbox container
Mount `/input` and `/output` directories to easily manage data flow and monitor generated code.
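A typical invocation under that convention might look like the sketch below; the image name is an assumption, so check the `just_agents_coding` package for the actual container images:

```shell
# Host directories for exchanging data with the sandbox (paths are illustrative)
mkdir -p input output

# Run the sandbox with the two mounts; the image name below is an assumption.
# docker run --rm \
#   -v "$(pwd)/input:/input" \
#   -v "$(pwd)/output:/output" \
#   ghcr.io/longevity-genie/sandbox:latest
```

Generated code then reads from `/input` and writes results to `/output`, so everything it produces is visible on the host.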
## 🌐 Web Deployment Features
### Quick API Deployment
With a single command `run-agent`, you can instantly serve any just-agents agent as an OpenAI-compatible REST API endpoint. This means:
- 🔌 Instant OpenAI-compatible endpoint
- 🔄 Works with any OpenAI client library
- 🛠️ Simple configuration through YAML files
- 🚀 Ready for production use
### Full Chat UI Deployment
Using the `deploy-agent` command, you can deploy a complete chat interface with all necessary infrastructure:
- 💬 Modern Hugging Face-style chat UI
- 🔄 LiteLLM proxy for model management
- 💾 MongoDB for conversation history
- ⚡ Redis for response caching
- 🐳 Complete Docker environment
### Benefits
1. **Quick Time-to-Production**: Deploy agents from development to production in minutes
2. **Standard Compatibility**: OpenAI-compatible API ensures easy integration with existing tools
3. **Scalability**: Docker-based deployment provides consistent environments
4. **Security**: Proper isolation of services and configuration
5. **Flexibility**: Easy customization through YAML configurations
## Acknowledgments
This project is supported by:
[HEALES - Healthy Life Extension Society](https://heales.org/)
and
[IBIMA - Institute for Biostatistics and Informatics in Medicine and Ageing Research](https://ibima.med.uni-rostock.de/)