# Plyable

A Python microframework for interacting with OpenAI's chat APIs.

## Installing

Install and update using [pip](https://pip.pypa.io/en/stable/quickstart/):

```bash
$ pip install plyable
```

Currently, the only API supported is OpenAI's chat API.  You will need to [sign up for an OpenAI account](https://beta.openai.com/), and [create an API key](https://beta.openai.com/account/api-keys).

Once you have your key, you can set it as the OPENAI_API_KEY environment variable, or you can pass it to the Plyable constructor.

```bash
$ export OPENAI_API_KEY="your-key-here"
```
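
If you prefer to configure the key in code, you can use the `update_openai_api_key()` method described below; a minimal sketch, assuming the key is already exported in your environment:

```python
import os

from plyable import Plyable

session = Plyable()

# Read the key from the environment and hand it to the session explicitly
# via the documented update_openai_api_key() method (see Methods below).
session.update_openai_api_key(os.environ["OPENAI_API_KEY"])
```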

## A Simple Example

```python
from plyable import Plyable

session = Plyable()

while True:
    in_message = input(">> ")
    print("<< " + session.send(in_message))
```

```bash
$ python bot.py
>> Hello, how are you?
<< Hello! As an AI language model, I do not have personal emotions, but I am functioning properly and ready to assist you. How may I help you today?
>>
```

## Overview

Plyable provides an interface for working with chat-based LLMs.

It:
 - Gets you sending messages to OpenAI quickly and easily.
 - Manages timeouts.
 - Allows for custom validations.
 - Retries when validations fail.

## JSON input and output evaluation

Plyable provides decorators to validate input and output messages. These check that a message is valid JSON and that it contains the specified keys. If a message is not valid, the failure is logged, the LLM is sent an explanation of the error, and it is asked to retry. This retry loop runs up to a specified retry limit (3 by default).

In this example, Plyable is used to build a chat bot that sends a JSON string to ChatGPT and expects a JSON response containing two keys. Each response is validated, and if ChatGPT fails to return a valid one, Plyable sends a message back explaining the problem and asking for another attempt.

This example uses `plyable.helpers.validate_json()`, but you can easily build your own validators.

```python
from plyable import Plyable
import plyable.helpers
import time
import json

session = Plyable()
session.system_message = """
You are a chat therapist.
You will be sent messages from a user in a JSON format like so:

{
    "message": "Hello, how are you?",
    "seconds_to_respond": 3.8452117443084717
}

seconds_to_respond is the number of seconds it took the user to respond to the message.

You must return a JSON object formatted as follows:

{
    "message": "I'm doing well, how are you?",
    "severity": 0.3,
}

Severity is a number between 0 and 1, where 0 is not concerning and 1 is very concerning.
A Human will monitor responses and will intervene if the severity is too high.

Do not include anything else in your response.
Your response must be valid JSON.
You must not return any other output.
The first line of your response must be a '{' character.
The last line of your response must be a '}' character.
"""

@Plyable.validate_input_message
def check_in_json(self, message):
    return plyable.helpers.validate_json(message, ['message', 'seconds_to_respond'], log=True)

@Plyable.validate_output_message
def check_out_json(self, message):
    return plyable.helpers.validate_json(message, ['message', 'severity'], log=True)


while True:
    try:
        start_time = time.time()
        in_message = input(">> ")
        response = session.send(
            json.dumps({
                "message": in_message,
                "seconds_to_respond": time.time() - start_time
            })
        )
        print(response) # This is the response from the model, in JSON format
    except KeyboardInterrupt:
        break
```

## An overview of the Plyable class

The Plyable class is the main entry point for interacting with the chat API. It provides methods for sending messages and receiving responses, along with a few helper methods.

### Callbacks

Plyable allows you to specify four callbacks:

 - `on_input_message`
 - `on_output_message`
 - `validate_input_message`
 - `validate_output_message`

The `on_input_message` and `on_output_message` callbacks are called after a message is sent or received.  They are passed the message as a string, and return None. They are useful for logging messages, or for performing other actions on messages.
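
For example, a logging callback might look like the sketch below. Registering it with an `@Plyable.on_output_message` decorator is an assumption, mirroring the validation decorators shown in the JSON example above.

```python
from plyable import Plyable

session = Plyable()

# Sketch only: registration via @Plyable.on_output_message is assumed to
# mirror the @Plyable.validate_* decorators used in the JSON example.
@Plyable.on_output_message
def log_response(self, message):
    # Called after a response is received; handy for audit logging.
    print(f"[model] {message}")
```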

The `validate_input_message` and `validate_output_message` callbacks are called before a message is sent or received. They are passed the message as a string and return a tuple of the form `(bool, str)`: the first element indicates whether the message is valid, and the second contains an error message when it is not. If a message fails validation, the LLM is sent the error messages of all failed validations and is asked to retry. This retry loop runs up to a specified retry limit (3 by default).
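
Building on that contract, a custom validator is just a function returning such a tuple. A minimal sketch, with a hypothetical severity check used purely for illustration:

```python
import json

from plyable import Plyable

# Hypothetical custom validator: accept only responses that parse as JSON
# and keep "severity" within [0, 1]. Returns (is_valid, error_message).
@Plyable.validate_output_message
def check_severity(self, message):
    try:
        data = json.loads(message)
    except json.JSONDecodeError:
        return False, "Response was not valid JSON."
    severity = data.get("severity")
    if not isinstance(severity, (int, float)) or not 0 <= severity <= 1:
        return False, "severity must be a number between 0 and 1."
    return True, ""
```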

### Why Validations?

Validations are important when using ChatGPT for programmatic tasks. It can return a JSON object fairly reliably, and that can be incredibly useful, but you want to make sure that whenever it fails, you catch the failure and handle it appropriately.

### Variables

There are a few main variables you might want to modify:

 - `system_message` (default `"You are a chat bot"`): The message sent to the LLM when the session is started. The system message helps set the behavior of the assistant.
 - `retries` (default `3`): The number of times the LLM will be asked to retry if a message is not valid.
 - `gpt_version` (default `gpt-3.5-turbo`): Currently only `gpt-3.5-turbo` and `gpt-4` are supported.
 - `rate_limit_retry_enabled` (default `True`): If `True`, the client will retry when it receives a rate limit error.
 - `rate_limit_retry_timeout` (default `25`): The number of seconds to wait before retrying after a rate limit error.
 - `rate_limit_retries` (default `5`): The number of times to retry after a rate limit error.
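
Putting these together, a session might be configured like the sketch below; the values are illustrative, and only the attribute names listed above are used.

```python
from plyable import Plyable

session = Plyable()
session.system_message = "You are a terse assistant that answers in one sentence."
session.retries = 5                      # ask for up to 5 corrections when validation fails
session.gpt_version = "gpt-4"            # or the default "gpt-3.5-turbo"
session.rate_limit_retry_enabled = True
session.rate_limit_retry_timeout = 25    # seconds to wait after a rate limit error
session.rate_limit_retries = 5
```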

### Methods

#### `send(message)`

Sends a message (string) to the LLM.  Returns the response from the LLM as a string.

Currently, the returned string includes only the content of the message and no other metadata. This may change in the future.

To access the entire log, with all metadata, you can access the `message_log` variable.
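
For example, after a call to `send()` the log can simply be iterated and printed; the structure of each entry is not documented here, so nothing about it is assumed:

```python
from plyable import Plyable

session = Plyable()
session.send("Hello!")

# Inspect the full exchange, including any metadata the API returned.
for entry in session.message_log:
    print(entry)
```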

#### `update_openai_api_key(key)`

Sets the OpenAI API key to the specified key.

## Roadmap

 - [ ] Tiktoken support to manage token length.
 - [ ] Pre-validation response-massaging.
 - [ ] Support other LLMs beyond OpenAI's.
 - [ ] Improve Helper class (and structure).

## License

This project is licensed under the terms of the AGPLv3 license.