# Argentic
Microframework for building and running local AI agents.
[![Python application](https://github.com/angkira/argentic/actions/workflows/python-app.yml/badge.svg)](https://github.com/angkira/argentic/actions/workflows/python-app.yml)
Argentic provides a lightweight, configurable framework designed to simplify the setup and operation of local AI agents. It integrates with various Large Language Model (LLM) backends and utilizes a messaging protocol (currently MQTT) for flexible communication between the core agent, tools, and clients.
## Features
- **Modular Design**: Core components include an `Agent`, a `Messager` for communication, and an `LLMProvider` for interacting with language models.
- **Multiple LLM Backends**: Supports various LLMs through a factory pattern, including:
- Ollama (via `ollama` Python library)
- Llama.cpp (via HTTP server or direct CLI interaction)
- Google Gemini (via API)
- **Configuration Driven**: Easily configure LLM providers, messaging brokers (MQTT), communication topics, and logging via `config.yaml`.
- **Command-Line Interface**: Start different components (agent, example tools, CLI client) using `start.sh`. Configure config path and log level via CLI arguments (`--config-path`, `--log-level`) or environment variables (`CONFIG_PATH`, `LOG_LEVEL`).
- **Messaging Protocol**: Uses MQTT for decoupled communication between the agent and potential tools or clients. Includes message classes for defined interactions (e.g., `AskQuestionMessage`).
- **Extensible Tool System**: Designed to integrate external tools via messaging. Includes an example RAG (Retrieval-Augmented Generation) tool (`src/services/rag_tool_service.py`) demonstrating this capability.
- **CLI Client**: A simple command-line client (`src/cli_client.py`) for interacting with the agent.
- **Graceful Shutdown**: Handles termination signals for proper cleanup.
## Getting Started
1. **Clone the repository:**
```bash
git clone https://github.com/angkira/argentic.git
cd argentic
```
2. **Set up Python environment:**
You have three options:
**Option 1: Using the installation script**
```bash
# This will create a virtual environment and install the package in development mode
./install.sh
source .venv/bin/activate
```
**Option 2: Manual setup**
It's recommended to use a virtual environment. The project uses `uv` (or `pip`) and `pyproject.toml`.
```bash
# Using uv
uv venv
uv sync
source .venv/bin/activate # Or your environment's activation script
# Or using pip
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
**Option 3: Installing from GitHub**
You can install Argentic directly from its GitHub repository:
```bash
# Using uv
uv pip install git+https://github.com/angkira/argentic.git#egg=argentic
# Using pip
pip install git+https://github.com/angkira/argentic.git#egg=argentic
```
3. **Configure:**
- If a `config.yaml.example` file is provided, copy it to `config.yaml`; otherwise edit `config.yaml` directly.
- Set up your desired LLM provider (`llm` section).
- Configure the MQTT broker details (`messaging` section).
- Set any required API keys or environment variables (e.g., `GOOGLE_GEMINI_API_KEY` if using Gemini). Refer to `.env.example` if provided.
4. **Run Components:**
Use the `start.sh` script:
- Run the main agent:
```bash
./start.sh agent [--config-path path/to/config.yaml] [--log-level DEBUG]
```
- Run the example RAG tool service (optional, in a separate terminal):
```bash
./start.sh rag
```
- Run the CLI client to interact (optional, in a separate terminal):
```bash
./start.sh cli
```
## Running as Python Module
Alternatively, you can run Argentic as a Python module using the modern `python -m` interface. This method provides the same functionality as the shell scripts but integrates better with Python packaging conventions.
### Module Command Interface
After installation (either via `./install.sh` or manual setup), you can use:
```bash
# Run the main agent
python -m argentic agent --config-path config.yaml --log-level INFO
# Run the RAG tool service
python -m argentic rag --config-path config.yaml
# Run the environment tool service
python -m argentic environment --config-path config.yaml
# Run the CLI client
python -m argentic cli --config-path config.yaml
# Get help
python -m argentic --help
python -m argentic agent --help
```
### Console Script (After Installation)
When installed in a Python environment, you can also use the shorter console command:
```bash
# All the same commands work without 'python -m'
argentic agent --config-path config.yaml --log-level INFO
argentic rag --config-path config.yaml
argentic environment --config-path config.yaml
argentic cli --config-path config.yaml
argentic --help
```
### Configuration Options
Both interfaces support the same global options (default resolution sketched below):
- `--config-path`: Path to configuration file (default: `config.yaml` or `$CONFIG_PATH`)
- `--log-level`: Logging level - DEBUG, INFO, WARNING, ERROR, CRITICAL (default: `INFO` or `$LOG_LEVEL`)
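The implied resolution order (CLI flag, else environment variable, else built-in default) can be pictured with a small Python sketch. This is illustrative only, not Argentic's actual CLI code:

```python
# Illustrative only: how the documented defaults could be resolved.
import argparse
import os

parser = argparse.ArgumentParser(prog="argentic")
parser.add_argument("--config-path", default=os.environ.get("CONFIG_PATH", "config.yaml"))
parser.add_argument("--log-level", default=os.environ.get("LOG_LEVEL", "INFO"))
args = parser.parse_args()
print(f"config: {args.config_path}, log level: {args.log_level}")
```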
### Available Subcommands
- **`agent`**: Start the main AI agent service
- **`rag`**: Start the RAG (Retrieval-Augmented Generation) tool service
- **`environment`**: Start the environment tool service
- **`cli`**: Start the interactive command-line client
### Examples
```bash
# Start agent with custom config and debug logging
python -m argentic agent --config-path prod-config.yaml --log-level DEBUG
# Start RAG service with default settings
python -m argentic rag
# Interactive CLI session
python -m argentic cli
```
## Using as a Python Package
After installation, you can import the Argentic components in your Python code using simplified imports:
```python
# Option 1: Import directly from the main package
from argentic import Agent, Messager, LLMFactory
# Option 2: Import from specific modules with reduced nesting
from argentic.core import Agent, Messager, LLMFactory
# Option 3: Import specific tools
from argentic.tools import BaseTool, ToolManager
```
You can also create custom tools or extend the core functionality by subclassing the base classes.
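For instance, here is a minimal sketch of a custom tool, assuming the `BaseTool` contract described in the Tools section below (a Pydantic argument schema plus an `_execute` method). The attribute names and signatures are assumptions, not the verified API:

```python
from pydantic import BaseModel, Field

from argentic.tools import BaseTool


class EchoArgs(BaseModel):
    """Pydantic schema the agent uses to validate call arguments."""
    text: str = Field(description="Text to echo back")


class EchoTool(BaseTool):
    """Hypothetical minimal tool; the real BaseTool may require more setup."""

    name = "echo"
    description = "Echoes the given text back to the agent."
    args_schema = EchoArgs

    async def _execute(self, text: str) -> str:
        # Core logic run when a TaskMessage arrives (see Tools section below).
        return text
```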
## Configuration (`config.yaml`)
The `config.yaml` file controls the application's behavior:
- `llm`: Defines the LLM provider to use and its specific settings. Set the `provider` key to one of the supported names below:
- `provider: ollama`
- `ollama_model_name`: (Required) The name of the model served by Ollama (e.g., `gemma3:12b-it-qat`).
- `ollama_use_chat_model`: (Optional, boolean, default: `true`) Whether to use Ollama's chat completion endpoint.
- `ollama_parameters`: (Optional) Advanced parameters for fine-tuning model behavior. See [Advanced LLM Configuration](advanced-llm-configuration.md) for details.
- `provider: llama_cpp_server`
- `llama_cpp_server_binary`: (Optional) Path to the `llama-server` executable (needed if `auto_start` is true).
- `llama_cpp_server_args`: (Optional, list) Arguments to pass when auto-starting the server (e.g., model path, host, port).
- `llama_cpp_server_host`: (Required) Hostname or IP address of the running llama.cpp server (e.g., `127.0.0.1`).
- `llama_cpp_server_port`: (Required) Port number of the running llama.cpp server (e.g., `5000`).
- `llama_cpp_server_auto_start`: (Optional, boolean, default: `false`) Whether Argentic should try to start the `llama-server` process itself.
- `llama_cpp_server_parameters`: (Optional) Advanced parameters for HTTP requests. See [Advanced LLM Configuration](advanced-llm-configuration.md) for details.
- `provider: llama_cpp_cli`
- `llama_cpp_cli_binary`: (Required) Path to the `llama.cpp` main CLI executable (e.g., `~/llama.cpp/build/bin/llama-gemma3-cli`).
- `llama_cpp_cli_model_path`: (Required) Path to the GGUF model file.
- `llama_cpp_cli_args`: (Optional, list) Additional arguments to pass to the CLI (e.g., `--temp 0.7`, `--n-predict 128`).
- `llama_cpp_cli_parameters`: (Optional) Advanced parameters automatically converted to CLI arguments. See [Advanced LLM Configuration](advanced-llm-configuration.md) for details.
- `provider: google_gemini`
- `google_gemini_api_key`: (Required) Your Google Gemini API key. **It is strongly recommended to set this via the `GOOGLE_GEMINI_API_KEY` environment variable instead of directly in the file.** Argentic uses `python-dotenv` to load variables from a `.env` file (see the sketch after this list).
- `google_gemini_model_name`: (Required) The specific Gemini model to use (e.g., `gemini-2.0-flash`).
- `google_gemini_parameters`: (Optional) Advanced parameters including safety settings and structured output. See [Advanced LLM Configuration](advanced-llm-configuration.md) for details.
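As noted for `provider: google_gemini`, secrets belong in the environment rather than in `config.yaml`. A minimal sketch of the standard `python-dotenv` pattern referred to above (plain `dotenv` usage, independent of Argentic internals):

```python
# Assumes a .env file in the working directory containing:
#   GOOGLE_GEMINI_API_KEY=your-key-here
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
api_key = os.getenv("GOOGLE_GEMINI_API_KEY")
if not api_key:
    raise RuntimeError("GOOGLE_GEMINI_API_KEY is not set")
```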
### Advanced LLM Configuration
For detailed information about fine-tuning LLM parameters for performance, quality, and behavior, see the [Advanced LLM Configuration Guide](advanced-llm-configuration.md). This includes:
- Provider-specific parameter reference
- Performance vs quality trade-offs
- GPU acceleration settings
- Memory optimization techniques
- Example configurations for different use cases
- Troubleshooting guide
## Tools
Argentic supports interaction with external tools via the configured messaging system. Tools run as independent services and communicate with the main agent.
**Tool Registration Process:**
1. **Tool-Side (`BaseTool`):**
- A tool service (like `rag_tool_service.py`) instantiates a tool class derived from `core.tools.tool_base.BaseTool`.
- It calls the `tool.register()` method, providing the relevant messaging topics from the configuration (`register`, `status`, `call`, `response_base`); a startup sketch follows this list.
- The tool publishes a `RegisterToolMessage` (containing its name, description/manual, and Pydantic schema for arguments) to the agent's registration topic (e.g., `agent/tools/register`).
- The tool simultaneously subscribes to the agent's status topic (e.g., `agent/status/info`) to await a `ToolRegisteredMessage` confirmation.
2. **Agent-Side (`ToolManager`):**
- The `ToolManager` (within the main agent) listens on the registration topic.
- Upon receiving a `RegisterToolMessage`, it generates a unique `tool_id` for the tool.
- It stores the tool's metadata (ID, name, description, API schema).
- The `ToolManager` subscribes to the tool's specific result topic (e.g., `agent/tools/response/<generated_tool_id>`) to listen for task outcomes.
- It publishes the `ToolRegisteredMessage` (including the `tool_id`) back to the agent's status topic, confirming registration with the tool.
3. **Tool-Side (Confirmation):**
- The tool receives the `ToolRegisteredMessage` and stores its assigned `tool_id`.
- It then subscribes to its dedicated task topic (e.g., `agent/tools/call/<generated_tool_id>`) to listen for incoming tasks.
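Putting the tool-side steps together, a tool service's startup might look like the sketch below. The topic names come from the examples above; the constructor arguments and the `register()` keyword names mirror the configuration keys mentioned in step 1 but are assumptions, not the verified signature:

```python
import asyncio

from argentic.core import Messager


async def main() -> None:
    messager = Messager()      # assumed default constructor; real code reads config.yaml
    tool = EchoTool(messager)  # assumption: tools are handed the Messager they publish on
                               # (EchoTool is the sketch from "Using as a Python Package")

    # Step 1: publish RegisterToolMessage and await ToolRegisteredMessage,
    # then (step 3) subscribe to agent/tools/call/<tool_id> for incoming tasks.
    await tool.register(
        register="agent/tools/register",
        status="agent/status/info",
        call="agent/tools/call",
        response_base="agent/tools/response",
    )
    await asyncio.Event().wait()  # keep the service alive to handle TaskMessages


if __name__ == "__main__":
    asyncio.run(main())
```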
**Task Execution Flow:**
1. **Agent Needs Tool:** The agent (likely prompted by the LLM) decides to use a tool.
2. **Agent Executes Task (`ToolManager.execute_tool`):**
- The agent calls `tool_manager.execute_tool(tool_name_or_id, arguments)` (see the sketch below).
- The `ToolManager` creates a `TaskMessage` (containing a unique `task_id`, the `tool_id`, and the arguments).
- It publishes this `TaskMessage` to the specific tool's task topic (e.g., `agent/tools/call/<tool_id>`).
- It waits asynchronously for a response message associated with the `task_id` on the tool's result topic.
3. **Tool Executes Task (`BaseTool._handle_task_message`):**
- The tool service receives the `TaskMessage` on its task topic.
- It validates the arguments using the tool's Pydantic schema.
- It executes the tool's core logic (`_execute` method).
- It creates a `TaskResultMessage` (on success) or `TaskErrorMessage` (on failure), including the original `task_id`.
- It publishes this result message to its result topic (e.g., `agent/tools/response/<tool_id>`).
4. **Agent Receives Result (`ToolManager._handle_result_message`):**
- The `ToolManager` receives the result message on the tool's result topic.
- It matches the `task_id` to the pending asynchronous task and delivers the result (or error) back to the agent's logic that initiated the call.
An example `rag_tool_service.py` demonstrates how a tool (`KnowledgeBaseTool`) can be built and run independently, registering and communicating with the agent using this messaging pattern.
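On the agent side, the whole exchange in step 2 collapses into one awaited call. A hedged sketch, assuming `execute_tool` is an awaitable that returns the task outcome:

```python
# Hypothetical agent-side usage of ToolManager.execute_tool (step 2 above).
async def ask_knowledge_base(tool_manager, query: str):
    # Publishes a TaskMessage to agent/tools/call/<tool_id> and waits for the
    # matching TaskResultMessage or TaskErrorMessage keyed by task_id.
    return await tool_manager.execute_tool(
        "knowledge_base",   # tool name or tool_id
        {"query": query},   # arguments validated against the tool's Pydantic schema
    )
```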
## Testing
The project includes a comprehensive test suite organized into categories:
### Test Structure
- **Unit Tests**: Located in `tests/core/messager/unit/`, these tests verify individual components in isolation.
- **Integration Tests**: Located in `tests/core/messager/test_messager_integration.py`, these tests verify how components work together.
- **End-to-End Tests**: Located in `tests/core/messager/e2e/`, these tests verify the system behavior using actual message brokers via Docker.
### Running Tests
Several scripts are available in the `bin/` directory to run different types of tests:
- **All Tests**: Run the complete test suite with the main test script:
```bash
./bin/run_tests.sh
```
- **Unit Tests Only**: Run only the unit tests:
```bash
./bin/run_unit_tests.sh
```
- **E2E Tests Only**: Run only the end-to-end tests (requires Docker):
```bash
./bin/run_e2e_tests.sh
```
The E2E test script supports Docker container management:
```bash
# Start Docker containers before running tests
./bin/run_e2e_tests.sh --start-docker
# Start Docker, run tests, and stop containers afterward
./bin/run_e2e_tests.sh --start-docker --stop-docker
# Only start Docker containers without running tests
./bin/run_e2e_tests.sh --docker-only --start-docker
# Only stop Docker containers
./bin/run_e2e_tests.sh --docker-only --stop-docker
# Pass additional arguments to pytest after --
./bin/run_e2e_tests.sh --start-docker -- -v
```
- **Integration Tests Only**: Run only the integration tests:
```bash
./bin/run_integration_tests.sh
```
Each script accepts additional pytest arguments. For example, to run tests with higher verbosity:
```bash
./bin/run_tests.sh -v
```