| Field | Value |
| --- | --- |
| Name | llmfunctionclient |
| Version | 0.1.5 |
| Summary | None |
| Author | James Mills |
| Maintainer | None |
| Home page | None |
| License | None |
| Requires Python | <4.0,>=3.4 |
| Upload time | 2024-04-12 01:48:31 |
| Keywords | None |
| Requirements | No requirements were recorded. |

# Python LLM Function Client
The purpose of this library is to simplify function calling with OpenAI-like API clients. Traditionally, you would have to rewrite your functions as JSON Schema and write logic to handle the tool calls in responses. With this library, you can convert Python functions into JSON Schema by simply calling `to_tool(func)`, or you can create an instance of `FunctionClient`, which handles those tool calls for you and passes back a response once the tool call chain is finished.
## Installation
To install, simply run:
`pip install llmfunctionclient`
## Requirements for Functions
Functions used with this library must have type annotations for each parameter. A return type annotation is not required.
Currently, the supported parameter types are `str`, `int`, `StrEnum`, and `IntEnum`.
If the type is a `StrEnum` or `IntEnum`, the valid values will be included as part of the function tool spec.
Optionally, you can include a docstring to add descriptions. The first line of the docstring is treated as the description of the function. Subsequent lines should be of the format `<parameter_name>: <description>`.
For example:
```python
def get_weather(location: str):
    """
    Gets the weather

    location: where to get the forecast for
    """
    return f"The weather in {location} is 75 degrees"
```
This function will have "Gets the weather" as its description, and the `location` parameter will have the description "where to get the forecast for".
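For enum-typed parameters, the enum's members are what get surfaced in the generated spec. Below is a minimal sketch of such a function; the `Unit` enum and `get_forecast` name are hypothetical, and note that `StrEnum` requires Python 3.11+:
```python
from enum import StrEnum

class Unit(StrEnum):
    CELSIUS = "celsius"
    FAHRENHEIT = "fahrenheit"

def get_forecast(location: str, unit: Unit):
    """
    Gets the forecast in the requested unit

    location: where to get the forecast for
    unit: temperature unit to report in
    """
    return f"The forecast for {location} is 75 degrees {unit}"
```
For the `unit` parameter, the valid values `"celsius"` and `"fahrenheit"` would be included in the tool spec (presumably as an `enum` list in the JSON Schema), so the model can only choose one of them.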
## FunctionClient
The `FunctionClient` class abstracts away the logic of passing tool calls back and forth: it takes a list of functions the LLM client is allowed to call and runs any tool calls required by the LLM client's responses until only text remains to respond with.
```python
from llmfunctionclient import FunctionClient
from openai import OpenAI

def get_weather(location: str):
    """
    Gets the weather

    location: where to get the forecast for
    """
    return f"The weather in {location} is 75 degrees"

client = FunctionClient(OpenAI(), "gpt-3.5-turbo", [get_weather])
client.add_message("You are a helpful weather assistant.", "system")
response = client.send_message("What's the weather in LA?", "user")
print(response) # "The current weather in Los Angeles is 75 degrees"
```
When this is run, the following happens under the hood:
1. The two messages specified here are submitted to the LLM client.
2. The LLM client responds with a tool call for `get_weather`.
3. The `get_weather` function is called and its result is appended as a message.
4. The LLM client is called again with the function result.
5. The LLM client responds with an informed answer.
6. This response text is passed back.
Functions passed to the client's constructor become the default set of tools for every message; you can also pass the `functions` kwarg to `send_message` to specify a particular set of functions for that portion of the conversation.
To force the LLM to use a specific function, pass the `force_function` kwarg with the function (or its name) you want the LLM to use; it will be provided as the `tool_choice` parameter for the chat completion endpoint.
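As a minimal sketch of both kwargs (the `get_humidity` function is hypothetical, and it is assumed here that `force_function` is accepted by `send_message` alongside `functions`):
```python
def get_humidity(location: str):
    """
    Gets the humidity

    location: where to get the humidity for
    """
    return f"The humidity in {location} is 40%"

# get_weather is the default tool for every message
client = FunctionClient(OpenAI(), "gpt-3.5-turbo", [get_weather])

# Override the available tools for just this message
response = client.send_message(
    "How humid is it in LA?", "user",
    functions=[get_weather, get_humidity],
)

# Force the model to call get_weather for this message
response = client.send_message(
    "Tell me about the weather in LA.", "user",
    force_function=get_weather,
)
```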
## to_tool
If you want to keep using any other LLM client and just want the ability to convert Python functions into JSON Schema compatible with the function calling spec, you can simply import `to_tool` and call it on the function.
Example:
```python
def get_weather(location: str):
    """
    Gets the weather

    location: where to get the forecast for
    """
    return f"The weather in {location} is 75 degrees"
```
Calling `to_tool(get_weather)` returns the following object:
```python
{'type': 'function',
 'function': {'name': 'get_weather',
              'parameters': {'type': 'object',
                             'properties': {'location': {'type': 'string',
                                                         'description': 'where to get the forecast for'}},
                             'required': ['location']}},
 'description': 'Gets the weather'}
```
This can then be used with the normal OpenAI client like this:
```python
messages = [{"role": "user", "content": "What's the weather like in Boston today?"}]
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=messages,
    tools=[to_tool(get_weather)],
    tool_choice="auto"
)
```
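If the model decides to call the tool, the completion contains tool calls rather than text, and executing them is up to you. A minimal sketch of that follow-up using the standard OpenAI chat completions flow (this part is not provided by this library) might look like:
```python
import json

message = completion.choices[0].message
if message.tool_calls:
    # Keep the assistant message that requested the tool call
    messages.append(message)
    for tool_call in message.tool_calls:
        # Run the matching Python function with the model-supplied arguments
        args = json.loads(tool_call.function.arguments)
        result = get_weather(**args)
        # Return the result so the model can produce its final answer
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
    )
print(completion.choices[0].message.content)
```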