aurite

Name: aurite
Version: 0.3.27
Home page: https://publish.obsidian.md/aurite/HOME
Summary: Aurite Agents is an agent development and runtime framework.
Upload time: 2025-07-21 17:40:26
Author: Ryan W
Requires Python: >=3.11
License: MIT License, Copyright (c) 2025 Aurite Inc.
Keywords: ai, agent, mcp, framework, llm, anthropic
# Aurite Agents Framework

**Aurite Agents** is a Python framework designed for building and orchestrating AI agents. These agents can interact with a variety of external tools, prompts, and resources through the Model Context Protocol (MCP), enabling them to perform complex tasks.

Whether you're looking to create sophisticated AI assistants, automate processes, or experiment with agentic workflows, Aurite Agents provides the building blocks and infrastructure to get you started.

## Getting Started

To install the framework as a Python package, see the [Package Installation Guide](docs/package_installation_guide.md).
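If you just want the published package from PyPI, installation is typically a single `pip` command (a minimal example; see the guide above for optional extras and environment setup):

```bash
# Requires Python 3.11 or newer.
pip install aurite
```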

## Core Concepts for Users

Understanding these concepts will help you configure and use the Aurite Agents framework effectively.

### 1. Projects

A **Project** in Aurite Agents is defined by a JSON configuration file (e.g., `aurite_config.json`). This file acts as a central manifest for your agentic application, specifying:
*   The name and description of the project.
*   Which LLM configurations to use (`llms`).
*   Which MCP Servers to connect to (`mcp_servers`).
*   Which Agents, Simple Workflows, and Custom Workflows are part of this project.

The active project configuration tells the `Aurite` class what components to load and make available.
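As an illustration, a minimal project file might look like the sketch below. The top-level keys follow the list above; the exact shape of each entry (inline objects versus references to separate config files) depends on your setup, so treat this as a rough sketch rather than a definitive schema:

```json
{
  "name": "my_project",
  "description": "Example project manifest (illustrative only).",
  "llms": [{ "llm_id": "default_claude" }],
  "mcp_servers": [{ "name": "weather_server" }],
  "agents": [{ "name": "Weather Agent" }],
  "simple_workflows": [],   // key name assumed for simple workflows
  "custom_workflows": []
}
```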

For more information on project configuration files, see [Projects](docs/components/PROJECT.md).

### 2. Agentic Components

These are the primary building blocks you'll work with:

*   **Agents (`src/agents/agent.py`):**
    *   LLM-powered entities that can engage in conversations, use tools, and optionally maintain history.
    *   Configured via `AgentConfig` models, typically stored in JSON files (e.g., `config/agents/my_weather_agent.json`) and referenced in your project file (an example configuration sketch appears after this list).
    *   Key settings include the LLM to use, system prompts, and rules for accessing MCP Servers.

    ```mermaid
    graph TD
        Agent["Agent <br/> (src/agents/agent.py)"] --> LLM["LLM <br/> (e.g., Claude, GPT)"];
        Agent --> MCP_Servers["MCP Servers <br/> (Connections)"];

        MCP_Servers --> MCP1["MCP Server 1 <br/> (e.g., Weather Tool)"];
        MCP_Servers --> MCP2["MCP Server 2 <br/> (e.g., Database)"];
        MCP_Servers --> MCP3["MCP Server 3 <br/> (e.g., Custom API)"];

        style Agent fill:#ADD8E6,stroke:#00008B,stroke-width:2px,color:#333333
        style LLM fill:#FFFFE0,stroke:#B8860B,stroke-width:2px,color:#333333
        style MCP1 fill:#90EE90,stroke:#006400,stroke-width:2px,color:#333333
        style MCP2 fill:#90EE90,stroke:#006400,stroke-width:2px,color:#333333
        style MCP3 fill:#90EE90,stroke:#006400,stroke-width:2px,color:#333333
    ```
    *   For more information on Agents, see [Agents](docs/components/agent.md).

*   **Simple Workflows (`src/workflows/simple_workflow.py`):**
    *   Define a sequence of Agents to be executed one after another.
    *   Configured via `WorkflowConfig` models (e.g., `config/workflows/my_simple_sequence.json`); a hedged example appears after this list.
    *   Useful for straightforward, multi-step tasks where the output of one agent becomes the input for the next.

    ```mermaid
    graph LR
        Agent1["Agent A"] -->|Input/Output| Agent2["Agent B"];
        Agent2 -->|Input/Output| Agent3["Agent C"];

        style Agent1 fill:#ADD8E6,stroke:#00008B,stroke-width:2px,color:#333333
        style Agent2 fill:#ADD8E6,stroke:#00008B,stroke-width:2px,color:#333333
        style Agent3 fill:#ADD8E6,stroke:#00008B,stroke-width:2px,color:#333333
    ```
    *   For more information on Simple Workflows, see [Simple Workflows](docs/components/simple_workflow.md).

*   **Custom Workflows (`src/workflows/custom_workflow.py`):**
    *   Allow you to define complex orchestration logic using custom Python classes.
    *   Configured via `CustomWorkflowConfig` models, pointing to your Python module and class.
    *   Provide maximum flexibility for intricate interactions and conditional logic. Here's a conceptual example of what a custom workflow class might look like:
    ```python
    # Example: src/my_custom_workflows/my_orchestrator.py
    from typing import Any, Optional

    class MyCustomOrchestrator:
        async def execute_workflow(
            self,
            initial_input: Any,
            executor: "ExecutionFacade", # Type hint for the passed executor
            session_id: Optional[str] = None # Optional session_id
        ) -> Any:

            # --- Your custom Python orchestration logic here ---
            # You can call other agents, simple workflows, or even other custom workflows
            # using the 'executor' instance.

            # Example: Call an agent
            agent_result = await executor.run_agent(
                agent_name="MyProcessingAgent",
                user_message=str(initial_input), # Ensure message is a string
            )

            processed_data = agent_result.final_response.content[0].text

            # Example: Call a simple workflow
            simple_workflow_result = await executor.run_simple_workflow(
                workflow_name="MyFollowUpWorkflow",
                initial_input=processed_data
            )
            simple_workflow_output = simple_workflow_result.get("final_message")

            # Example: Call a custom workflow
            custom_workflow_result = await executor.run_custom_workflow(custom_workflow_name="MyCustomWorkflow", initial_input=simple_workflow_output)

            return custom_workflow_result
    ```
    To use this custom workflow:
      1. Save the class above into a Python file (e.g., `src/my_custom_workflows/my_orchestrator.py`).
      2. In your project's JSON configuration (e.g., `config/projects/default.json`), add or update a `custom_workflows` entry like this (make sure it fits into your overall project JSON structure, typically under a `"custom_workflows"` key):
    ```json
    {
      "custom_workflows": [
        {
          "name": "MyBasicWorkflowExample",
          "module_path": "src.my_custom_workflows.my_orchestrator",
          "class_name": "MyCustomOrchestrator",
          "description": "A basic example demonstrating custom workflow executor usage."
        }
        // ... any other custom workflows
      ]
    }
    ```
      3. Ensure the agent named "MyProcessingAgent" (or whatever agent name you use in the code) is also defined in the `agents` section of your project configuration, along with the "MyFollowUpWorkflow" and "MyCustomWorkflow" components the example calls.

    *   For more information on Custom Workflows, see [Custom Workflows](docs/components/custom_workflow.md).
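For comparison with the custom workflow entry above, the `AgentConfig` and `WorkflowConfig` components referenced earlier in this list are declared in the same JSON style. This is a hedged sketch: `name`, `llm_id`, and the agent/workflow names come from the examples in this README, while the remaining field names (`system_prompt`, `mcp_servers`, `steps`) are assumptions used for illustration only:

```json
{
  "agents": [
    {
      "name": "MyProcessingAgent",
      "llm_id": "default_claude",
      "system_prompt": "Summarize the raw input you receive.",  // field name assumed
      "mcp_servers": ["weather_server"]                         // access-rule field assumed
    }
  ],
  "simple_workflows": [
    {
      "name": "MyFollowUpWorkflow",
      "steps": ["MyProcessingAgent"]   // field name assumed
    }
  ]
}
```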

### 3. LLM Configurations

*   Define settings for different Large Language Models (e.g., model name, temperature, max tokens).
*   Managed by `LLMConfig` models, typically stored in `config/llms/default_llms.json` or a custom file.
*   Agents reference these LLM configurations by their `llm_id`, allowing you to easily switch or share LLM settings.
*   The core LLM client abstraction is `src/llm/base_client.py`.

*   For more information on LLM Configurations, see [LLM Configurations](docs/components/llm.md).
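As a concrete illustration, an `LLMConfig` entry might look like the following sketch. Only `llm_id` and the general idea of model, temperature, and max-token settings come from this README; the exact field names and the model identifier are placeholders:

```json
{
  "llms": [
    {
      "llm_id": "default_claude",
      "model": "claude-3-5-sonnet",   // model identifier is a placeholder
      "temperature": 0.2,
      "max_tokens": 2048
    }
  ]
}
```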

### 4. MCP Servers (as Clients)

*   External processes that provide tools, prompts, or resources according to the Model Context Protocol (MCP).
*   The Aurite framework connects to these servers to provide capabilities to agents.
*   Configured via `ClientConfig` models (e.g., `config/mcp_servers/mcp_servers.json`), specifying the server's path, capabilities, and access rules.
*   An example MCP server is `src/packaged/example_mcp_servers/weather_mcp_server.py`.

*   For more information on MCP Server Configurations, see [MCP Servers](docs/components/mcp_server.md).
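To make these points concrete, a `ClientConfig` entry for the packaged weather example might look roughly like this. The server path comes from this README; the field names (`server_path`, `capabilities`) are assumptions for illustration:

```json
{
  "mcp_servers": [
    {
      "name": "weather_server",
      "server_path": "src/packaged/example_mcp_servers/weather_mcp_server.py",  // path from this README
      "capabilities": ["tools"]   // field name assumed
    }
  ]
}
```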


## Architecture Overview

The framework follows a layered architecture, illustrated below:

```mermaid
graph TD
    A["Layer 1: Entrypoints <br/> (API, CLI, Worker)"] --> B{"Layer 2: Orchestration <br/> (HostManager, ExecutionFacade)"};
    B --> C["Layer 3: Host Infrastructure <br/> (MCPHost)"];
    C --> D["Layer 4: External Capabilities <br/> (MCP Servers)"];

    style A fill:#D1E8FF,stroke:#3670B3,stroke-width:2px,color:#333333
    style B fill:#C2F0C2,stroke:#408040,stroke-width:2px,color:#333333
    style C fill:#FFE0B3,stroke:#B37700,stroke-width:2px,color:#333333
    style D fill:#FFD1D1,stroke:#B33636,stroke-width:2px,color:#333333
```

For a comprehensive understanding of the architecture, component interactions, and design principles, please see [`docs/layers/framework_overview.md`](docs/layers/framework_overview.md). Detailed information on each specific layer can also be found in the `docs/layers/` directory.

## Entrypoints

**Running the Backend API Server:**
The primary way to interact with the framework is through its FastAPI server:
```bash
python -m aurite.bin.api.api
```
or use the `pyproject.toml` script:
```bash
start-api
```
(This script is available after running `pip install -e .[dev]`. If using Docker, the API starts automatically within its container.)

*   By default, it starts on `http://0.0.0.0:8000`. You can then send requests to its various endpoints to execute agents, register components, etc. (e.g., using Postman or `curl`).
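For a quick smoke test once the server is running, you can fetch the OpenAPI specification documented below (add your API key header if your deployment requires authentication for this endpoint):

```bash
# Fetch the generated OpenAPI specification from a locally running server.
curl http://localhost:8000/openapi.json
```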

**API Documentation:**
The API server automatically provides interactive documentation:
*   **Swagger UI:** `http://localhost:8000/api-docs` - Interactive API testing interface
*   **ReDoc:** `http://localhost:8000/redoc` - Clean, readable documentation
*   **OpenAPI JSON:** `http://localhost:8000/openapi.json` - Raw specification for tooling

The Swagger UI allows you to:
- Explore all available endpoints
- Test API calls directly from your browser
- Authenticate with your API key
- See request/response examples and schemas

For more details on the OpenAPI integration, see [OpenAPI Integration Guide](docs/openapi_integration.md).

Besides the main API server, the framework offers other ways to interact:

**Command-Line Interface (`src/bin/cli.py`):** For terminal-based interaction.
The `run-cli` script is available after performing the Manual Installation and running `pip install -e .[dev]`.
```bash
# Example: Execute an agent (ensure API server is running)
# Assumes API_KEY environment variable is set.
run-cli execute agent "Weather Agent" "What is the weather in London?"

# Example: Execute a simple workflow
run-cli execute workflow "Weather Planning Workflow" "What should I wear in San Francisco today?"

# Example: Execute a custom workflow (input must be a valid JSON string)
run-cli execute custom-workflow "ExampleCustomWorkflow" "{\"city\": \"London\"}"
```

**Using CLI with Docker:** If you are using the Docker setup, these CLI commands need to be run *inside* the backend service container. You can do this by first finding your backend container ID or name (e.g., using `docker ps`) and then executing the command:
```bash
docker exec -it <your_backend_container_name_or_id> run-cli execute agent "Weather Agent" "What is the weather in London?"
```
Ensure the `API_KEY` environment variable is set within the container's environment (it should be if you used the setup scripts or configured your `.env` file correctly).
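A quick way to verify this, using generic Docker commands rather than anything Aurite-specific (and assuming the container image provides `env` and `grep`):

```bash
# Check that API_KEY is present inside the backend container.
docker exec -it <your_backend_container_name_or_id> env | grep API_KEY
```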

## Further Documentation

*   **Framework Architecture:** For a detailed understanding of the internal architecture, component interactions, and design principles, please see [`docs/layers/framework_overview.md`](docs/layers/framework_overview.md).
*   **Layer-Specific Details:**
    *   [`docs/layers/1_entrypoints.md`](docs/layers/1_entrypoints.md) (API, CLI, Worker)
    *   [`docs/layers/2_orchestration.md`](docs/layers/2_orchestration.md) (HostManager, ExecutionFacade)
    *   [`docs/layers/3_host.md`](docs/layers/3_host.md) (MCPHost System)
*   **Testing:** Information on running tests can be found in `tests/README.md`. Testing strategies for each layer are also detailed within their respective documentation in `docs/layers/`.

## Contributing

Contributions are welcome! Please follow standard fork/pull request workflows. Ensure tests pass and documentation is updated for any changes.

            
