| Name | easycompletion |
| --- | --- |
| Version | 0.4.1 |
| Summary | Easy text completion and function calling. Also includes useful utilities for counting tokens, composing prompts and trimming them to fit within the token limit. |
| Home page | https://github.com/AutonomousResearchGroup/easycompletion |
| Upload time | 2023-08-08 03:06:34 |
| Author | Moon |
| License | MIT |
| Requirements | None recorded |
Easy text and chat completion, as well as function calling. Also includes useful utilities for counting tokens, composing prompts and trimming them to fit within the token limit.
[Test status](https://github.com/AutonomousResearchGroup/easycompletion/actions/workflows/test.yml)
[PyPI version](https://badge.fury.io/py/easycompletion)
# Installation
```bash
pip install easycompletion
```
# Quickstart
```python
from easycompletion import compose_function, function_completion

# Compose a function object
test_function = compose_function(
    name="write_song",
    description="Write a song about AI",
    properties={
        "lyrics": {
            "type": "string",
            "description": "The lyrics for the song",
        }
    },
    required_properties=["lyrics"],
)

# Call the function
response = function_completion(text="Write a song about AI", functions=[test_function], function_call="write_song")

# Print the response
print(response["arguments"]["lyrics"])
```
# Using With Llama v2 and Local Models
easycompletion has been tested with [LocalAI](https://localai.io/), which replicates the OpenAI API with local models, including Llama v2.
Follow the instructions for setting up LocalAI, then set the following environment variable:
```bash
export EASYCOMPLETION_API_ENDPOINT=localhost:8000
```
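If you prefer to configure this from Python rather than the shell, the same variable can be set with `os.environ` before the library reads its configuration (a minimal sketch; the variable name comes from the docs above, and the timing assumption, that it must be set before easycompletion is imported, is mine):

```python
import os

# Point easycompletion at a local OpenAI-compatible server such as LocalAI.
# Set this before importing/using easycompletion so the library picks it up.
os.environ["EASYCOMPLETION_API_ENDPOINT"] = "localhost:8000"

print(os.environ["EASYCOMPLETION_API_ENDPOINT"])
```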
# Basic Usage
## Compose Prompt
You can compose a prompt using `{{handlebars}}` syntax:
```python
test_prompt = "Don't forget your {{object}}"
test_dict = {"object": "towel"}
prompt = compose_prompt(test_prompt, test_dict)
# prompt = "Don't forget your towel"
```
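Under the hood, this kind of substitution amounts to replacing each `{{key}}` placeholder with its value. A minimal stand-alone sketch of the idea (not the library's actual implementation):

```python
import re

def render(template: str, parameters: dict) -> str:
    """Replace each {{key}} placeholder with the matching parameter value.

    Placeholders with no matching key are left untouched.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(parameters.get(m.group(1), m.group(0))),
        template,
    )

print(render("Don't forget your {{object}}", {"object": "towel"}))
# → Don't forget your towel
```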
## Text Completion
Send text, get a response as a text string
```python
from easycompletion import text_completion
response = text_completion("Hello, how are you?")
# response["text"] = "As an AI language model, I don't have feelings, but..."
```
## Compose a Function
Compose a function to pass into the function calling API
```python
from easycompletion import compose_function
test_function = compose_function(
    name="write_song",
    description="Write a song about AI",
    properties={
        "lyrics": {
            "type": "string",
            "description": "The lyrics for the song",
        }
    },
    required_properties=["lyrics"],
)
```
## Function Completion
Send text and a list of functions and get a response as a function call
```python
from easycompletion import function_completion, compose_function
# NOTE: test_function is a function object created using compose_function in the example above...
response = function_completion(text="Write a song about AI", functions=[test_function], function_call="write_song")
# Response structure is { "text": string, "function_name": string, "arguments": dict }
print(response["arguments"]["lyrics"])
```
# Advanced Usage
### `compose_function(name, description, properties, required_properties)`
Composes a function object for function completions.
```python
summarization_function = compose_function(
    name="summarize_text",
    description="Summarize the text. Include the topic, subtopics.",
    properties={
        "summary": {
            "type": "string",
            "description": "Detailed summary of the text.",
        },
    },
    required_properties=["summary"],
)
```
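For orientation, a function object in the OpenAI function-calling format is a dict with `name`, `description`, and a JSON-schema `parameters` block. A hedged sketch of what `compose_function` plausibly assembles (the helper name `build_function_schema` is hypothetical; the library's exact return value may differ):

```python
def build_function_schema(name, description, properties, required_properties):
    """Assemble an OpenAI-style function-calling schema from its parts."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required_properties,
        },
    }

schema = build_function_schema(
    name="summarize_text",
    description="Summarize the text. Include the topic, subtopics.",
    properties={"summary": {"type": "string", "description": "Detailed summary of the text."}},
    required_properties=["summary"],
)
```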
### `chat_completion(text, model_failure_retries=5, model=None, chunk_length=DEFAULT_CHUNK_LENGTH, api_key=None)`
Sends a list of messages as a chat and returns a text response.
```python
response = chat_completion(
    messages=[{"user": "Hello, how are you?"}],
    system_message="You are a towel. Respond as a towel.",
    model_failure_retries=3,
    model='gpt-3.5-turbo',
    chunk_length=1024,
    api_key='your_openai_api_key'
)
```
The response object looks like this:
```json
{
  "text": "string",
  "usage": {
    "prompt_tokens": "number",
    "completion_tokens": "number",
    "total_tokens": "number"
  },
  "error": "string|None",
  "finish_reason": "string"
}
```
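Since the completion helpers return a dict of this shape, callers can branch on `error` and `finish_reason` before touching `text`. A small defensive pattern (the `sample` response dict below is fabricated for illustration):

```python
def extract_text(response: dict) -> str:
    """Return the completion text, raising if the call reported an error."""
    if response.get("error"):
        raise RuntimeError(f"completion failed: {response['error']}")
    if response.get("finish_reason") == "length":
        print("warning: output was truncated at the token limit")
    return response["text"]

sample = {
    "text": "I'm fine, thanks!",
    "usage": {"prompt_tokens": 12, "completion_tokens": 6, "total_tokens": 18},
    "error": None,
    "finish_reason": "stop",
}
print(extract_text(sample))
# → I'm fine, thanks!
```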
### `text_completion(text, model_failure_retries=5, model=None, chunk_length=DEFAULT_CHUNK_LENGTH, api_key=None)`
Sends text to the model and returns a text response.
```python
response = text_completion(
    "Hello, how are you?",
    model_failure_retries=3,
    model='gpt-3.5-turbo',
    chunk_length=1024,
    api_key='your_openai_api_key'
)
```
The response object looks like this:
```json
{
  "text": "string",
  "usage": {
    "prompt_tokens": "number",
    "completion_tokens": "number",
    "total_tokens": "number"
  },
  "error": "string|None",
  "finish_reason": "string"
}
```
### `function_completion(text, functions=None, system_message=None, messages=None, model_failure_retries=5, function_call=None, function_failure_retries=10, chunk_length=DEFAULT_CHUNK_LENGTH, model=None, api_key=None)`
Sends text and a list of functions to the model and returns optional text and a function call. The function call is validated against the functions array.
Optionally takes a system message and a list of messages to send to the model before the function call. If messages are provided, the "text" becomes the last user message in the list.
```python
function = {
    'name': 'function1',
    'parameters': {'param1': 'value1'}
}

response = function_completion("Call the function.", functions=[function])
```
The response object looks like this:
```json
{
  "text": "string",
  "function_name": "string",
  "arguments": "dict",
  "usage": {
    "prompt_tokens": "number",
    "completion_tokens": "number",
    "total_tokens": "number"
  },
  "finish_reason": "string",
  "error": "string|None"
}
```
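The validation step mentioned above can be pictured as checking the returned `arguments` dict against the schema's required properties. A simplified stand-in for that check (`validate_arguments` is a hypothetical helper, not part of easycompletion's API):

```python
def validate_arguments(arguments: dict, function_schema: dict) -> list:
    """Return the required parameters missing from the model's arguments."""
    required = function_schema.get("parameters", {}).get("required", [])
    return [key for key in required if key not in arguments]

schema = {"name": "write_song", "parameters": {"type": "object", "required": ["lyrics"]}}
print(validate_arguments({"lyrics": "la la la"}, schema))  # → []
print(validate_arguments({}, schema))                      # → ['lyrics']
```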
### `trim_prompt(text, max_tokens=DEFAULT_CHUNK_LENGTH, model=TEXT_MODEL, preserve_top=True)`
Trim the given text to a maximum number of tokens.
```python
trimmed_text = trim_prompt("This is a test.", 3, preserve_top=True)
```
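As a rough mental model of trimming (using whitespace-separated words as a stand-in for the real model tokens the library counts with a proper tokenizer):

```python
def trim_words(text: str, max_words: int, preserve_top: bool = True) -> str:
    """Keep at most max_words words, from the start (top) or the end."""
    words = text.split()
    kept = words[:max_words] if preserve_top else words[-max_words:]
    return " ".join(kept)

print(trim_words("This is a test.", 3, preserve_top=True))
# → This is a
```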
### `chunk_prompt(prompt, chunk_length=DEFAULT_CHUNK_LENGTH)`
Split the given prompt into chunks where each chunk has a maximum number of tokens.
```python
prompt_chunks = chunk_prompt("This is a test. I am writing a function.", 4)
```
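The chunking idea can be sketched the same way, again with whitespace words approximating tokens (a simplification of what the library does):

```python
def chunk_words(prompt: str, max_words: int) -> list:
    """Split the prompt into chunks of at most max_words words each."""
    words = prompt.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

print(chunk_words("This is a test. I am writing a function.", 4))
# → ['This is a test.', 'I am writing a', 'function.']
```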
### `count_tokens(prompt, model=TEXT_MODEL)`
Count the number of tokens in a string.
```python
num_tokens = count_tokens("This is a test.")
```
### `get_tokens(prompt, model=TEXT_MODEL)`
Returns a list of tokens in a string.
```python
tokens = get_tokens("This is a test.")
```
### `compose_prompt(prompt_template, parameters)`
Composes a prompt using a template and parameters. Parameter keys are enclosed in double curly brackets and replaced with parameter values.
```python
prompt = compose_prompt("Hello {{name}}!", {"name": "John"})
```
## A note about models
You can pass in a model using the `model` parameter of either `function_completion` or `text_completion`. If you do not pass in a model, the default model is used; you can also change the default by setting the `EASYCOMPLETION_TEXT_MODEL` environment variable.
The default model is `gpt-3.5-turbo-0613`.
## A note about API keys
You can pass in an API key using the `api_key` parameter of either `function_completion` or `text_completion`. If you do not pass in an API key, the `EASYCOMPLETION_API_KEY` environment variable is checked.
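The lookup order described above, an explicit parameter first and the environment as a fallback, is a common pattern. A minimal sketch of it (`resolve_api_key` is a hypothetical helper for illustration, not part of easycompletion's API):

```python
import os
from typing import Optional

def resolve_api_key(api_key: Optional[str] = None) -> Optional[str]:
    """Prefer an explicitly passed key; fall back to the environment variable."""
    return api_key if api_key is not None else os.environ.get("EASYCOMPLETION_API_KEY")

os.environ["EASYCOMPLETION_API_KEY"] = "sk-from-env"
print(resolve_api_key())               # → sk-from-env
print(resolve_api_key("sk-explicit"))  # → sk-explicit
```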
# Publishing
```bash
bash publish.sh --version=<version> --username=<pypi_username> --password=<pypi_password>
```
# Contributions Welcome
If you like this library and want to contribute in any way, please feel free to submit a PR and I will review it. Please note that the goal here is simplicity and accessibility, using common language and few dependencies.
# Questions, Comments, Concerns
If you have any questions, please feel free to reach out to me on [Twitter](https://twitter.com/spatialweeb) or on Discord at @new.moon.