gpt3-contextual

Name: gpt3-contextual
Version: 0.1
Home page: https://github.com/uezo/gpt3-contextual
Summary: Contextual chat with GPT-3 of OpenAI API.
Upload time: 2023-02-11 14:37:11
Author/Maintainer: uezo
License: MIT
# gpt3-contextual

Contextual chat with GPT-3 of OpenAI API.

# 🚀 Quick start

Install.

```bash
$ pip install gpt3-contextual
```

Create a script named `console.py`.

```python
import asyncio
from gpt3contextual import ContextualChat

async def main():
    cc = ContextualChat(
        "YOUR_OPENAI_APIKEY",  # replace with your OpenAI API key
        username="Human",
        agentname="AI"
    )

    while True:
        text = input("human> ")
        # The first argument is a key that identifies this conversation's context
        resp, _ = await cc.chat("user1234567890", text)
        print(f"AI> {resp}")

asyncio.run(main())
```

Run `console.py` and chat with GPT-3.

```
human> hello
AI>  Hi, how can I help you?
human> I'm hungry
AI>  What would you like to eat?
human> sandwitches
AI>  What kind of sandwich would you like?
human> ham&egg        
AI>  Would you like that on a white or wheat bread?
human> wheet
AI>  Would you like anything else with your ham and egg sandwich on wheat bread?
human> Everything is fine, thank you.
AI>  Great! Your order has been placed. Enjoy your meal!
```


# 🧸 Usage

You can set parameters to customize the conversation scenario when you create an instance of `ContextualChat`.
See also https://platform.openai.com/docs/api-reference/completions to understand some of the parameters.

- `api_key`: str : API key for the OpenAI API.
- `context_count`: int : Number of turns (request/response pairs) to use as context. Default=`6`.
- `username`: str : Name or role of the user. Default=`customer`.
- `agentname`: str : Name or role of the agent (bot). Default=`agent`.
- `chat_description`: str : Conditions to be considered in the conversation scenario.
- `engine`: str : The engine (model) to use. Default=`text-davinci-003`.
- `temperature`: float : Sampling temperature to use, between 0 and 2. Default=`0.5`.
- `max_tokens`: int : The maximum number of tokens to generate in the completion. Default=`2000`.
- `context_manager`: ContextManager : Custom ContextManager.
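For illustration, `context_count` controls how much history is replayed to the model: only the most recent `context_count` request/response pairs are kept in the prompt. A minimal sketch of that trimming (the helper name `trim_context` is hypothetical, not the library's actual API):

```python
def trim_context(history: list, context_count: int = 6) -> list:
    # Each turn is one request line plus one response line,
    # so keep the last 2 * context_count lines of history.
    return history[-2 * context_count:]

# 8 turns (16 lines) of history trimmed to the latest 6 turns (12 lines)
history = [f"line{i}" for i in range(16)]
trimmed = trim_context(history, context_count=6)
print(len(trimmed))  # 12
```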


# 🥪 How it works

As you know, the `Completion` endpoint doesn't provide a feature for keeping and using context.
So this library simply sends the previous conversation history along with the request text, like below.

Prompt on the 1st turn:
```
human:hello
AI:
```
Chatbot returns `Hi, how can I help you?`.

Prompt on the 2nd turn:
```
human:hello
AI:Hi, how can I help you?
human:I'm hungry
AI:
```
Chatbot returns `What would you like to eat?`.

...

Prompt on the 6th turn:
```
human:sandwitches
AI:What kind of sandwich would you like?
human:ham&egg        
AI:Would you like that on a white or wheat bread?
human:wheet
AI:Would you like anything else with your ham and egg sandwich on wheat bread?
human:Everything is fine, thank you.
AI:
```
Chatbot returns `Great! Your order has been placed. Enjoy your meal!`.
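The prompt construction shown above can be sketched as a small function: prepend `chat_description` when present, join each history turn as `name:text`, and end with an empty agent line for the model to complete (a simplified illustration with hypothetical names, not the library's actual code):

```python
def build_prompt(history, agentname="AI", chat_description=""):
    # history: list of (speaker, text) tuples, already trimmed to context_count turns
    lines = [chat_description] if chat_description else []
    for speaker, text in history:
        lines.append(f"{speaker}:{text}")
    # Trailing empty agent line that GPT-3 is asked to complete
    lines.append(f"{agentname}:")
    return "\n".join(lines)

history = [
    ("human", "hello"),
    ("AI", "Hi, how can I help you?"),
    ("human", "I'm hungry"),
]
print(build_prompt(history))
```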


If you change `username` and `agentname`, the conversation scenario changes.
Setting `chat_description` also affects the situation.

Here is an example that simulates a conversation between a brother and his sister in Japanese. The `chat_description` roughly translates to: "This is a conversation between a brother and his close younger sister. They get along well, so please speak casually without polite language."

```python
cc = ContextualChat(
    openai_apikey,
    username="兄",
    agentname="妹",
    chat_description="これは兄と親しい妹との会話です。仲良しなので丁寧語を使わずに話してください。"
)
```

The prompt sent to the OpenAI API looks like the one below. `chat_description` is always placed on the first line, no matter how many turns the conversation proceeds.

```
これは兄と親しい妹との会話です。仲良しなので丁寧語を使わずに話してください。
兄:おはよー
妹:おはよー!今日の予定は?
兄:いつも通り、特にないよ
妹:じゃあ、今日は一緒に遊ぼうよ!
兄:何かしたいことある?
妹:うん、今日は映画を見よう!
兄:いいね。 どんなのがいい?
妹:思いついた!サスペンス映画がいいかな!
```

            
