openai-helper

Name: openai-helper
Version: 0.2.4
Home page: https://github.com/craigtrim/openai-helper
Summary: OpenAI Helper for Easy I/O
Upload time: 2023-07-26 18:26:31
Maintainer: Craig Trim
Author: Craig Trim
Requires Python: >=3.8.5,<4.0.0
License: MIT
Keywords: openai, api, utility
Requirements: none recorded
# OpenAI-Helper
OpenAI Helper for Easy I/O

## GitHub
https://github.com/craigtrim/openai-helper

## Usage

### Set the OpenAI credentials
```python
import os
os.environ['OPENAI_KEY'] = "<encrypted key>"
os.environ['OPENAI_ORG'] = "<encrypted org id>"
```

Encrypt these values with `CryptoBase.encrypt_str("...")` from the baseblock package: https://pypi.org/project/baseblock/

### Initialize the OpenAI Helper:
```python
from openai_helper import OpenAITextCompletion

run = OpenAITextCompletion().run
```
This binds `run` to a callable that connects to OpenAI once and can be reused across calls.

### Call OpenAI:
```python
run(input_prompt="Generate a random number between 1 and 5000")
```

or
```python
run(engine="text-ada-001",
    temperature=1.0,
    max_tokens=256,
    input_prompt="Rewrite the input in grammatical English:\n\nInput: You believe I can help you understand what trust yourself? don't you?\nOutput:\n\n")
```

The returned dictionary contains both the validated input parameters and the raw OpenAI response:
```json
{
   "input":{
      "best_of":1,
      "engine":"text-davinci-003",
      "frequency_penalty":0.0,
      "input_prompt":"Rewrite the input in grammatical English:\n\nInput: You believe I can help you understand what trust yourself? don't you?\nOutput:\n\n",
      "max_tokens":256,
      "presence_penalty":2,
      "temperature":1.0,
      "timeout":5,
      "top_p":1.0
   },
   "output":{
      "choices":[
         {
            "finish_reason":"stop",
            "index":0,
            "logprobs":"None",
            "text":"Don't you believe that I can help you understand trust in yourself?"
         }
      ],
      "created":1659051242,
      "id":"cmpl-5Z7IwXM5bCwWj8IuHaGnOLn6bCvHz",
      "model":"text-ada-001",
      "object":"text_completion",
      "usage":{
         "completion_tokens":17,
         "prompt_tokens":32,
         "total_tokens":49
      }
   }
}
```
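Because the helper returns a plain dictionary, the completion text and token usage can be pulled out directly. A minimal sketch against the sample response above (values copied from the example; a real call would come back from `run`):

```python
# A response shaped like the sample output above
result = {
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "index": 0,
                "text": "Don't you believe that I can help you understand trust in yourself?"
            }
        ],
        "usage": {"completion_tokens": 17, "prompt_tokens": 32, "total_tokens": 49}
    }
}

# The generated text lives under output -> choices[0] -> text
text = result["output"]["choices"][0]["text"]

# Token usage is useful for cost tracking
total_tokens = result["output"]["usage"]["total_tokens"]

print(text)          # the rewritten sentence
print(total_tokens)  # 49
```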

## Supported Parameters and Defaults
The method signature below documents the supported parameters and their defaults:
```python
def process(self,
            input_prompt: str,
            engine: str = None,
            best_of: int = None,
            temperature: float = None,
            max_tokens: int = None,
            top_p: float = None,
            frequency_penalty: int = None,
            presence_penalty: int = None) -> dict:
    """ Run an OpenAI event

    Args:
        input_prompt (str): The Input Prompt to execute against OpenAI
        engine (str, optional): The OpenAI model (engine) to run against. Defaults to None.
            Options as of July 2022 are:
                'text-davinci-003'
                'text-curie-001'
                'text-babbage-001'
                'text-ada-001'
        best_of (int, optional): Generates multiple completions server-side and returns only the best. Defaults to None.
            This can consume OpenAI tokens quickly, so use it with caution!
        temperature (float, optional): Controls randomness. Defaults to None.
            Scale is 0.0 - 1.0
            Lower values give more predictable, near-deterministic output
            Higher values are more creative but less predictable
            Use high values cautiously
        max_tokens (int, optional): The maximum number of tokens to generate. Defaults to None.
            Requests can use up to 4,000 tokens, shared between the input prompt and the completion
            The higher this value, the more each request can cost.
        top_p (float, optional): Controls diversity via nucleus sampling. Defaults to None.
            Only tokens within the top `top_p` cumulative probability mass are considered;
            e.g. 0.1 restricts sampling to the top 10% of the probability distribution.
        frequency_penalty (int, optional): How much to penalize new tokens based on their frequency in the text so far. Defaults to None.
            Scale: 0.0 - 2.0.
        presence_penalty (int, optional): How much to penalize new tokens based on whether they have already appeared in the text so far. Defaults to None.
            Scale: 0.0 - 2.0.

    Returns:
        dict: an output dictionary with two keys:
            input: the input dictionary with validated parameters and default values where appropriate
            output: the output event from OpenAI
    """
```
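When a parameter is left as `None`, the helper substitutes a default before calling OpenAI. The merge logic can be sketched as below; the default values here are inferred from the sample `input` dictionary earlier in this README, and the actual library defaults may differ:

```python
# Default values inferred from the sample "input" dictionary above;
# this is an illustrative sketch, not the library's actual code.
DEFAULTS = {
    'engine': 'text-davinci-003',
    'best_of': 1,
    'temperature': 1.0,
    'max_tokens': 256,
    'top_p': 1.0,
    'frequency_penalty': 0.0,
    'presence_penalty': 2,
}

def apply_defaults(**params) -> dict:
    """Replace any parameter left as None with its default value."""
    return {key: (value if value is not None else DEFAULTS[key])
            for key, value in params.items()}

merged = apply_defaults(engine=None, temperature=0.2, best_of=None)
# merged == {'engine': 'text-davinci-003', 'temperature': 0.2, 'best_of': 1}
```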

## Counting Tokens (tiktoken)

            
