Name: ToolAgents
Version: 0.0.12
Summary: ToolAgents is a lightweight and flexible framework for creating function-calling agents with various language models and APIs.
Upload time: 2024-08-25 05:37:45
Requires Python: >=3.10

# ToolAgents

ToolAgents is a lightweight and flexible framework for creating function-calling agents with various language models and APIs. It provides a unified interface for integrating different LLM providers and executing function calls seamlessly.


## Table of Contents

1. [Features](#features)
2. [Installation](#installation)
3. [Usage](#usage)
   - [MistralAgent with llama.cpp Server](#mistralagent-with-llamacpp-server)
   - [LlamaAgent with llama.cpp Server](#llamaagent-with-llamacpp-server)
   - [ChatAPIAgent with Anthropic API](#chatapiagent-with-anthropic-api)
   - [OllamaAgent](#ollamaagent)
4. [Custom Tools](#custom-tools)
   - [Pydantic Model-based Tools](#1-pydantic-model-based-tools)
   - [Function-based Tools](#2-function-based-tools)
   - [OpenAI-style Function Specifications](#3-openai-style-function-specifications)
   - [The Importance of Good Docstrings and Descriptions](#the-importance-of-good-docstrings-and-descriptions)
5. [Contributing](#contributing)
6. [License](#license)

## Features

- Support for multiple LLM providers:
  - llama.cpp servers
  - Hugging Face's Text Generation Inference (TGI) servers
  - vLLM servers
  - OpenAI API
  - Anthropic API
  - Ollama (with Tool calling support)
- Easy-to-use interface for passing functions, Pydantic models, and tools to LLMs
- Streamlined process for function calling and result handling
- Flexible agent types:
  - MistralAgent for llama.cpp, TGI, and vLLM servers
  - LlamaAgent for llama.cpp, TGI, and vLLM servers
  - ChatAPIAgent for OpenAI and Anthropic APIs
  - OllamaAgent for Ollama integration

## Installation

```bash
pip install ToolAgents
```

## Usage

### MistralAgent with llama.cpp Server

```python
from ToolAgents.agents import MistralAgent
from ToolAgents.provider import LlamaCppServerProvider, LlamaCppSamplingSettings
from ToolAgents.utilities import ChatHistory
from test_tools import calculator_function_tool, current_datetime_function_tool, get_weather_function_tool

# Initialize the provider and agent
provider = LlamaCppServerProvider("http://127.0.0.1:8080/")
agent = MistralAgent(provider=provider, debug_output=False)

# Configure settings
settings = LlamaCppSamplingSettings()
settings.temperature = 0.3
settings.top_p = 1.0
settings.max_tokens = 4096

# Define tools
tools = [calculator_function_tool, current_datetime_function_tool, get_weather_function_tool]

# Create chat history and add system message and user message.
chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful assistant.")
chat_history.add_user_message("Perform the following tasks: Get the current weather in Celsius in London, New York, and at the North Pole. Solve these calculations: 42 * 42, 74 + 26, 7 * 26, 4 + 6, and 96/8.")
# Get a response
result = agent.get_streaming_response(
    messages=chat_history.to_list(),
    sampling_settings=settings,
    tools=tools
)

for token in result:
    print(token, end="", flush=True)
print()

# Add the generated messages, including tool messages, to the chat history.
chat_history.add_list_of_dicts(agent.last_messages_buffer)

# Save chat history to file.
chat_history.save_history("./chat_history.json")
```

### LlamaAgent with llama.cpp Server

```python
from ToolAgents.agents import Llama31Agent
from ToolAgents.provider import LlamaCppServerProvider, LlamaCppSamplingSettings
from ToolAgents.utilities import ChatHistory
from test_tools import calculator_function_tool, current_datetime_function_tool, get_weather_function_tool

# Initialize the provider and agent
provider = LlamaCppServerProvider("http://127.0.0.1:8080/")
agent = Llama31Agent(provider=provider, debug_output=False)

# Configure settings
settings = LlamaCppSamplingSettings()
settings.temperature = 0.3
settings.top_p = 1.0
settings.max_tokens = 4096

# Define tools
tools = [calculator_function_tool, current_datetime_function_tool, get_weather_function_tool]

# Create chat history and add system message and user message.
chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful assistant.")
chat_history.add_user_message("Perform the following tasks: Get the current weather in Celsius in London, New York, and at the North Pole. Solve these calculations: 42 * 42, 74 + 26, 7 * 26, 4 + 6, and 96/8.")


# Get a response
result = agent.get_streaming_response(
    messages=chat_history.to_list(),
    sampling_settings=settings,
    tools=tools
)

for token in result:
    print(token, end="", flush=True)
print()

# Add the generated messages, including tool messages, to the chat history.
chat_history.add_list_of_dicts(agent.last_messages_buffer)

# Save chat history to file.
chat_history.save_history("./chat_history.json")
```

### ChatAPIAgent with Anthropic API

```python
import os
from dotenv import load_dotenv
from ToolAgents.agents import ChatAPIAgent
from ToolAgents.provider import AnthropicChatAPI, AnthropicSettings
from ToolAgents.utilities import ChatHistory
from test_tools import calculator_function_tool, current_datetime_function_tool, get_weather_function_tool

load_dotenv()

# Initialize the API and agent
api = AnthropicChatAPI(api_key=os.getenv("ANTHROPIC_API_KEY"), model="claude-3-sonnet-20240229")
agent = ChatAPIAgent(chat_api=api)

# Configure settings
settings = AnthropicSettings()
settings.temperature = 0.45
settings.top_p = 0.85

# Define tools
tools = [calculator_function_tool, current_datetime_function_tool, get_weather_function_tool]

# Create chat history and add system message and user message.
chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful assistant.")
chat_history.add_user_message("Perform the following tasks: Get the current weather in Celsius in London, New York, and at the North Pole. Solve these calculations: 42 * 42, 74 + 26, 7 * 26, 4 + 6, and 96/8.")


# Get a response
result = agent.get_response(
    messages=chat_history.to_list(),
    tools=tools,
    settings=settings
)

print(result)

# Add the generated messages, including tool messages, to the chat history.
chat_history.add_list_of_dicts(agent.last_messages_buffer)

# Save chat history to file.
chat_history.save_history("./chat_history.json")
```

### OllamaAgent

```python
from ToolAgents.agents import OllamaAgent
from ToolAgents.utilities import ChatHistory
from test_tools import get_flight_times_tool

def run():
    agent = OllamaAgent(model='mistral-nemo', debug_output=False)

    tools = [get_flight_times_tool]

    # Create chat history and add system message and user message.
    chat_history = ChatHistory()
    chat_history.add_system_message("You are a helpful assistant.")
    chat_history.add_user_message("What is the flight time from New York (NYC) to Los Angeles (LAX)?")

    response = agent.get_response(
            messages=chat_history.to_list(),
            tools=tools,
        )

    print(response)
    
    # Add the generated messages, including tool messages, to the chat history.
    chat_history.add_list_of_dicts(agent.last_messages_buffer)
    
    chat_history.add_user_message("What is the flight time from London (LHR) to New York (JFK)?")
    print("\nStreaming response:")
    for chunk in agent.get_streaming_response(
            messages=chat_history.to_list(),
            tools=tools,
    ):
        print(chunk, end='', flush=True)

if __name__ == "__main__":
    run()
```

## Custom Tools

ToolAgents supports several ways to define custom tools, so you can integrate your own functionality into agents. The approaches below cover Pydantic models, plain Python functions, and OpenAI-style function specifications:

### 1. Pydantic Model-based Tools

You can create tools using Pydantic models, which provide strong typing and automatic validation. Here's an example of a calculator tool:

```python
from enum import Enum
from typing import Union
from pydantic import BaseModel, Field
from ToolAgents import FunctionTool

class MathOperation(Enum):
    ADD = "add"
    SUBTRACT = "subtract"
    MULTIPLY = "multiply"
    DIVIDE = "divide"

class Calculator(BaseModel):
    """
    Perform a math operation on two numbers.
    """
    number_one: Union[int, float] = Field(..., description="First number.")
    operation: MathOperation = Field(..., description="Math operation to perform.")
    number_two: Union[int, float] = Field(..., description="Second number.")

    def run(self):
        if self.operation == MathOperation.ADD:
            return self.number_one + self.number_two
        elif self.operation == MathOperation.SUBTRACT:
            return self.number_one - self.number_two
        elif self.operation == MathOperation.MULTIPLY:
            return self.number_one * self.number_two
        elif self.operation == MathOperation.DIVIDE:
            return self.number_one / self.number_two
        else:
            raise ValueError("Unknown operation.")

calculator_tool = FunctionTool(Calculator)
```
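Because the tool is a plain Pydantic model, you can exercise its logic directly, with no agent or LLM in the loop. A minimal check (the definitions from above are repeated so the snippet is self-contained):

```python
from enum import Enum
from typing import Union

from pydantic import BaseModel, Field

class MathOperation(Enum):
    ADD = "add"
    SUBTRACT = "subtract"
    MULTIPLY = "multiply"
    DIVIDE = "divide"

class Calculator(BaseModel):
    """Perform a math operation on two numbers."""
    number_one: Union[int, float] = Field(..., description="First number.")
    operation: MathOperation = Field(..., description="Math operation to perform.")
    number_two: Union[int, float] = Field(..., description="Second number.")

    def run(self):
        if self.operation == MathOperation.ADD:
            return self.number_one + self.number_two
        elif self.operation == MathOperation.SUBTRACT:
            return self.number_one - self.number_two
        elif self.operation == MathOperation.MULTIPLY:
            return self.number_one * self.number_two
        elif self.operation == MathOperation.DIVIDE:
            return self.number_one / self.number_two
        raise ValueError("Unknown operation.")

# Instantiate with validated arguments and run the operation directly.
result = Calculator(number_one=42, operation=MathOperation.MULTIPLY, number_two=42).run()
print(result)  # 1764
```

Pydantic also coerces the enum's string value, so `operation="multiply"` works as well — the same coercion the framework relies on when a model-generated call arrives as JSON.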

### 2. Function-based Tools

You can also create tools from simple Python functions. Here's an example of a datetime tool:

```python
import datetime
from ToolAgents import FunctionTool

def get_current_datetime(output_format: str = '%Y-%m-%d %H:%M:%S'):
    """
    Get the current date and time in the given format.

    Args:
        output_format: formatting string for the date and time, defaults to '%Y-%m-%d %H:%M:%S'
    """
    return datetime.datetime.now().strftime(output_format)

current_datetime_tool = FunctionTool(get_current_datetime)
```
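Since `FunctionTool` wraps the plain function, the function itself stays directly callable, which makes it easy to sanity-check outside an agent. For example, the default format string round-trips through `strptime`:

```python
import datetime

def get_current_datetime(output_format: str = '%Y-%m-%d %H:%M:%S'):
    """Get the current date and time in the given format."""
    return datetime.datetime.now().strftime(output_format)

stamp = get_current_datetime()
print(stamp)

# The default format parses back into a datetime object, so anything
# consuming the tool's output can interpret the string unambiguously.
parsed = datetime.datetime.strptime(stamp, '%Y-%m-%d %H:%M:%S')
```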

### 3. OpenAI-style Function Specifications

ToolAgents supports creating tools from OpenAI-style function specifications:

```python
from ToolAgents import FunctionTool

def get_current_weather(location, unit):
    """Get the current weather in a given location"""
    # Implementation details...

open_ai_tool_spec = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location", "unit"],
        },
    },
}

weather_tool = FunctionTool.from_openai_tool(open_ai_tool_spec, get_current_weather)
```
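Hand-written specs are easy to get subtly wrong — a parameter renamed in the function but not in the JSON, or a `required` entry that no longer exists. A small consistency check (plain Python, not part of ToolAgents) catches both before the spec ever reaches a model:

```python
import inspect

def get_current_weather(location, unit):
    """Get the current weather in a given location."""

open_ai_tool_spec = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location", "unit"],
        },
    },
}

def check_tool_spec(spec: dict, func) -> bool:
    """Cross-check an OpenAI-style tool spec against the function it wraps."""
    fn = spec["function"]
    declared = set(fn["parameters"]["properties"])
    required = set(fn["parameters"]["required"])
    sig = set(inspect.signature(func).parameters)
    assert fn["name"] == func.__name__, "spec name does not match function name"
    assert required <= declared, f"required but undeclared: {required - declared}"
    assert declared == sig, f"spec/signature mismatch: {declared ^ sig}"
    return True

check_tool_spec(open_ai_tool_spec, get_current_weather)
```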

### The Importance of Good Docstrings and Descriptions

When creating custom tools, it's crucial to provide clear and comprehensive docstrings and descriptions. Here's why they matter:

1. **AI Understanding**: The language model uses these descriptions to understand the purpose and functionality of each tool. Better descriptions lead to more accurate tool selection and usage.

2. **Parameter Clarity**: Detailed descriptions for each parameter help the AI understand what input is expected, reducing errors and improving the quality of the generated calls.

3. **Proper Usage**: Good docstrings guide the AI on how to use the tool correctly, including any specific formats or constraints for the input.

4. **Error Prevention**: By clearly stating the expected input types and any limitations, you can prevent many potential errors before they occur.

Here's an example of a well-documented tool:

```python
from pydantic import BaseModel, Field
from ToolAgents import FunctionTool

class FlightTimes(BaseModel):
    """
    Retrieve flight information between two locations.

    This tool provides estimated flight times, including departure and arrival times,
    for flights between major airports. It uses airport codes for input.
    """

    departure: str = Field(
        ...,
        description="The departure airport code (e.g., 'NYC' for New York)",
        min_length=3,
        max_length=3
    )
    arrival: str = Field(
        ...,
        description="The arrival airport code (e.g., 'LAX' for Los Angeles)",
        min_length=3,
        max_length=3
    )

    def run(self) -> str:
        """
        Retrieve flight information for the given departure and arrival locations.

        Returns:
            str: A JSON string containing flight information including departure time,
                 arrival time, and flight duration. If no flight is found, returns an error message.
        """
        # Implementation details...

get_flight_times_tool = FunctionTool(FlightTimes)
```
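The `run` body is elided above. For illustration only, here is one way the lookup it performs might behave, as a standalone function backed by a made-up in-memory schedule (the airport pairs and times are invented, not part of ToolAgents) — it returns the kind of JSON payload the docstring promises:

```python
import json

# Hypothetical schedule keyed by (departure, arrival) airport codes.
_FLIGHTS = {
    ("NYC", "LAX"): {"departure_time": "08:00", "arrival_time": "11:30", "duration": "5h 30m"},
    ("LHR", "JFK"): {"departure_time": "10:00", "arrival_time": "13:00", "duration": "8h 0m"},
}

def lookup_flight(departure: str, arrival: str) -> str:
    """Return flight info as a JSON string, or an error message if no flight is found."""
    info = _FLIGHTS.get((departure.upper(), arrival.upper()))
    if info is None:
        return json.dumps({"error": f"No flight found from {departure} to {arrival}"})
    return json.dumps(info)

print(lookup_flight("NYC", "LAX"))
```

Returning structured JSON, including a structured error, keeps the tool's output machine-readable, so the model can react to a failed lookup instead of parsing free-form text.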

In this example, the docstrings and field descriptions provide clear information about the tool's purpose, input requirements, and expected output, enabling both the AI and human developers to use the tool effectively.

## Contributing

Contributions to ToolAgents are welcome! Please feel free to submit pull requests, create issues, or suggest improvements.

## License

ToolAgents is released under the MIT License. See the [LICENSE](LICENSE) file for details.

            
