llm-kira

Name: llm-kira
Version: 0.4.1
Home page: https://github.com/sudoskys/llm_kira
Summary: chatbot client for llm
Upload time: 2023-02-11 15:38:42
Author / maintainer: sudoskys
Requires Python: >=3.8,<4.0
License: LGPL-2.1-or-later
# llm-kira

A refactored version of `openai-kira`. Uses Redis or a file database for conversation memory.

Build chatbots with LLMs, using `async` requests.

> Contributors welcome.

## Features

* Safely cuts context to fit the token limit
* Usage tracking
* Async request API / curl
* Multi-API-key load balancing (see the sketch below)
* Custom (self-designed) callbacks
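
The multi-key item means requests are spread across the keys you pass in (e.g. `OpenAi(api_key=["key1", "key2"])`). The library handles the rotation internally; purely as an illustration of the idea, not `llm_kira`'s actual code, a round-robin key picker could look like this:

```python
import itertools
from typing import Callable, List


def make_key_picker(keys: List[str]) -> Callable[[], str]:
    """Hypothetical helper: hand out API keys round-robin.
    llm_kira itself takes the key list directly and rotates for you."""
    pool = itertools.cycle(keys)
    return lambda: next(pool)


pick_key = make_key_picker(["key1", "key2"])
print(pick_key(), pick_key(), pick_key())  # key1 key2 key1
```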

## Basic Use

`pip install -U llm-kira`

**Init**

```python
import llm_kira

llm_kira.setting.redisSetting = llm_kira.setting.RedisConfig(host="localhost",
                                                             port=6379,
                                                             db=0,
                                                             password=None)
llm_kira.setting.dbFile = "client_memory.db"
llm_kira.setting.proxyUrl = None  # e.g. "127.0.0.1"

# Plugin
llm_kira.setting.webServerUrlFilter = False
llm_kira.setting.webServerStopSentence = ["广告", "营销号"]  # has default values
```
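
Per the project description, memory lives in Redis or in the file database set via `dbFile`, so it can help to confirm your Redis instance is actually reachable before depending on it. A minimal check, assuming the `redis` client package is installed (it is otherwise not required here):

```python
import redis  # pip install redis

try:
    redis.Redis(host="localhost", port=6379, db=0).ping()
    print("Redis is reachable; conversation memory can live there")
except redis.ConnectionError:
    print("Redis unavailable; rely on llm_kira.setting.dbFile instead")
```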

## Demo

**More usage examples can be found in `test/test.py`.**

Taking `openai` as an example:

```python
import asyncio
import random
import llm_kira
from llm_kira.client import Optimizer
from llm_kira.client.types import PromptItem
from llm_kira.client.llms.openai import OpenAiParam
from typing import List

openaiApiKey: List[str] = ["key1", "key2"]

receiver = llm_kira.client
conversation = receiver.Conversation(
    start_name="Human:",
    restart_name="AI:",
    conversation_id=10093,  # random.randint(1, 10000000),
)

llm = llm_kira.client.llms.OpenAi(
    profile=conversation,
    api_key=openaiApiKey,
    token_limit=3700,
    auto_penalty=False,
    call_func=None,
)

mem = receiver.MemoryManager(profile=conversation)
chat_client = receiver.ChatBot(profile=conversation,
                               memory_manger=mem,
                               optimizer=Optimizer.SinglePoint,
                               llm_model=llm)


async def chat():
    promptManager = receiver.PromptManager(profile=conversation,
                                           connect_words="\n",
                                           template="Templates, custom prefixes"
                                           )
    promptManager.insert(item=PromptItem(start=conversation.start_name, text="My id is 1596321"))
    response = await chat_client.predict(llm_param=OpenAiParam(model_name="text-davinci-003", n=2, best_of=2),
                                         prompt=promptManager,
                                         predict_tokens=500,
                                         increase="External enhancements, or searched result",
                                         )
    print(f"id {response.conversation_id}")
    print(f"ask {response.ask}")
    print(f"reply {response.reply}")
    print(f"usage:{response.llm.usage}")
    print(f"usage:{response.llm.raw}")
    print(f"---{response.llm.time}---")

    promptManager.clean()
    promptManager.insert(item=PromptItem(start=conversation.start_name, text="Whats my id?"))
    response = await chat_client.predict(llm_param=OpenAiParam(model_name="text-davinci-003"),
                                         prompt=promptManager,
                                         predict_tokens=500,
                                         increase="外部增强:每句话后面都要带 “喵”",
                                         # parse_reply=None
                                         )
    _info = "parse_reply 函数回调会处理 llm 的回复字段,比如 list 等,传入list,传出 str 的回复。必须是 str。"
    _info2 = "The parse_reply function callback handles the reply fields of llm, such as list, etc. Pass in list and pass out str for the reply."
    print(f"id {response.conversation_id}")
    print(f"ask {response.ask}")
    print(f"reply {response.reply}")
    print(f"usage:{response.llm.usage}")
    print(f"usage:{response.llm.raw}")
    print(f"---{response.llm.time}---")


asyncio.run(chat())
```
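
The commented-out `parse_reply` parameter of `predict` takes a callback that normalizes the LLM's reply before it reaches you: it is handed the raw reply field (e.g. a list) and must return a single `str`. A hypothetical callback consistent with that contract:

```python
def parse_reply(reply) -> str:
    # Illustrative sketch only: the list-in / str-out contract comes
    # from the README; the exact input shape depends on the LLM backend.
    if isinstance(reply, list):
        return "\n".join(str(item) for item in reply)
    return str(reply)
```

Pass it as `parse_reply=parse_reply` in `chat_client.predict(...)`.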

## Frame

```
├── client
│      ├── agent.py  // profile class
│      ├── anchor.py // client etc.
│      ├── enhance.py // web search etc.
│      ├── __init__.py
│      ├── llm.py // llm func.
│      ├── module  // plugin for enhance
│      ├── Optimizer.py // memory optimizer (cutter)
│      ├── pot.py // test cache
│      ├── test_module.py // test plugin
│      ├── text_analysis_tools // nlp support
│      ├── types.py // data class
│      └── vocab.json // cache?
├── __init__.py
├── openai  // func
│      ├── api // data
│      ├── __init__.py
│      └── resouce  // func
├── requirements.txt
└── utils  // utils... tools...
    ├── chat.py
    ├── data.py
    ├── fatlangdetect // lang detect
    ├── langdetect
    ├── network.py
    └── setting.py

```


            
