prompthandler

- Name: prompthandler
- Version: 0.3.3
- Summary: Token management system for ChatGPT and more. Keeps your prompt under the token limit, with summary support.
- Author: prasannan-robots
- Upload time: 2023-07-26 14:43:36
- License: MIT (Copyright (c) 2023 Prasanna)
- Keywords: openai, chatgpt, prompts, summarizer, handler, summary, token, gpt-4
- Requirements: none recorded
# PromptHandler
Focus on helping people, not on managing prompts.

#### Keeps tokens within a limit, removes older messages automatically, and summarizes

## Installation
```
pip install prompthandler
```
## Usage
Example code to chat with the model in the terminal:
```python
from prompthandler import Prompthandler
model = Prompthandler()
model.add_system("You are now user's girlfriend so take care of him", to_head=True)  # to_head=True pins this message to the head; the head is never rolled out, so it persists
model.add_user("Hi")
model.chat()  # chat with the model in the terminal
```
For more examples, see [examples.ipynb](https://github.com/prasannan-robots/Prompt-handler/blob/main/examples.ipynb).
Example projects: [my silicon version](https://prasannanrobots.pythonanywhere.com/) and its [GitHub repo](https://github.com/prasannan-robots/prasannan-robots.github.io)
## Behind the scenes

### models.py

#### `openai_chat_gpt` class

This class represents the interaction with the GPT-3.5-turbo model (or other OpenAI models). It provides methods for generating completions for given messages and managing the conversation history.

**Attributes:**

- `api_key` (str): The OpenAI API key.
- `model` (str): The name of the OpenAI model to use.
- `MAX_TOKEN` (int): The maximum number of tokens allowed for the generated completion.
- `temperature` (float): The temperature parameter controlling the randomness of the output.

**Methods:**

1. `__init__(self, api_key=None, MAX_TOKEN=4096, temperature=0, model="gpt-3.5-turbo-0613")`:
   Initializes the OpenAI chat model with the provided settings.

2. `get_completion_for_message(self, message, temperature=None)`:
   Generates a completion for a given message using the specified OpenAI model.

   - `message` (list): List of messages representing the conversation history.
   - `temperature` (float): Controls the randomness of the output; if not provided, the default temperature is used.

   Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.
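
`get_completion_for_message` takes messages in the standard OpenAI chat format: a list of role/content dictionaries. A minimal sketch of assembling such a list (the helper `build_messages` is illustrative, not part of the library):

```python
# Sketch of the message format expected by get_completion_for_message.
# The OpenAI chat API takes a list of {"role": ..., "content": ...} dicts;
# build_messages is an illustrative helper, not part of prompthandler.

def build_messages(system_prompt, user_prompt):
    """Assemble a minimal two-message conversation history."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("You are a helpful assistant.", "Hi")
# An openai_chat_gpt instance would pass this list to
# get_completion_for_message(messages) and receive back a tuple of
# (completion_text, tokens_used).
```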

### prompts.py

#### `PromptHandler` class

This class represents a conversation prompt history for interacting with the GPT-3.5-turbo model (or other OpenAI models). It extends the `openai_chat_gpt` class and provides additional methods for handling prompts, headers, and body messages.

**Attributes (in addition to `openai_chat_gpt` attributes):**

- `headers` (list): List of header messages in the conversation history.
- `body` (list): List of body messages in the conversation history.
- `head_tokens` (int): Total tokens used in the headers.
- `body_tokens` (int): Total tokens used in the body.
- `tokens` (int): Total tokens used in the entire message history.

**Methods (in addition to `openai_chat_gpt` methods):**

1. `__init__(self, MAX_TOKEN=4096, api_key=None, temperature=0, model="gpt-3.5-turbo-0613")`:
   Initializes the `PromptHandler` with the specified settings.

2. `get_completion(self, message='', update_history=True, temperature=None)`:
   Generates a completion for the conversation history.

   - `message` (str): The user's message to be added to the history.
   - `update_history` (bool): Flag to update the conversation history.
   - `temperature` (float): Controls the randomness of the output.

   Returns a tuple containing the completion generated by the model and the number of tokens used by the completion.

3. `chat(self, update_history=True, temperature=None)`:
   Starts a conversation with the model. Accepts terminal input and prints the model's responses.

   - `update_history` (bool): Flag to update the conversation history.
   - `temperature` (float): Controls the randomness of the output.

4. `update_messages(self)`:
   Combines the headers and body messages into a single message history.

   Returns the combined list of messages representing the conversation history.

5. `update_tokens(self)`:
   Updates the count of tokens used in the headers, body, and entire message history.

   Returns a tuple containing the total tokens used, tokens used in headers, and tokens used in the body.

6. `calibrate(self, MAX_TOKEN=None)`:
   Calibrates the message history by removing older messages if the total token count exceeds `MAX_TOKEN`.

   - `MAX_TOKEN` (int): The maximum number of tokens allowed for the generated completion.

7. `add(self, role, content, to_head=False)`:
   Adds a message to the message history.

   - `role` (str): The role of the message (user, assistant, etc.).
   - `content` (str): The content of the message.
   - `to_head` (bool): Specifies whether the message should be appended to the headers list. If False, it will be appended to the body list.

   Returns the last message in the message history.

8. `append(self, content_list)`:
   Appends a list of messages to the message history.

   - `content_list` (list): List of messages to be appended.

9. `get_last_message(self)`:
   Returns the last message in the message history.

   Returns the last message as a dictionary containing the role and content of the message.

10. `get_token_for_message(self, messages, model_name="gpt-3.5-turbo-0613")`:
   Returns the number of tokens used by a list of messages.

   - `messages` (list): List of messages to count tokens for.
   - `model_name` (str): The name of the OpenAI model used for token encoding.

   Returns the number of tokens used by the provided list of messages.
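
The token-budget logic behind `calibrate` can be sketched in plain Python. This is an illustrative reimplementation, not the library's code: the real class counts tokens with the model's tokenizer (via `get_token_for_message`), while this sketch substitutes a naive word count. It drops the oldest body messages (headers are pinned, matching the `to_head` behavior) until the history fits under `MAX_TOKEN`:

```python
# Illustrative sketch of PromptHandler.calibrate's rolling-window behavior.
# A naive one-token-per-word count stands in for the real tokenizer.

def count_tokens(messages):
    """Rough stand-in for get_token_for_message: one token per word."""
    return sum(len(m["content"].split()) for m in messages)

def calibrate(headers, body, max_token):
    """Drop the oldest body messages until headers + body fit the budget.

    Headers are pinned (never rolled out), mirroring to_head=True."""
    while body and count_tokens(headers) + count_tokens(body) > max_token:
        body.pop(0)  # remove the oldest body message first
    return headers + body

headers = [{"role": "system", "content": "stay concise"}]
body = [
    {"role": "user", "content": "one two three four"},
    {"role": "assistant", "content": "five six"},
    {"role": "user", "content": "seven eight nine"},
]
# Budget of 8 "tokens": the oldest body message (4 words) is dropped,
# leaving 2 (header) + 2 + 3 = 7 tokens.
history = calibrate(headers, body, max_token=8)
```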

            
