<div align="center">
<a href="https://pypi.python.org/pypi/chattool">
<img src="https://img.shields.io/pypi/v/chattool.svg" alt="PyPI version" />
</a>
<a href="https://github.com/cubenlp/chattool/actions/workflows/test.yml">
<img src="https://github.com/cubenlp/chattool/actions/workflows/test.yml/badge.svg" alt="Tests" />
</a>
<a href="https://chattool.cubenlp.com">
<img src="https://img.shields.io/badge/docs-github_pages-blue.svg" alt="Documentation Status" />
</a>
<a href="https://codecov.io/gh/cubenlp/chattool">
<img src="https://codecov.io/gh/cubenlp/chattool/branch/master/graph/badge.svg" alt="Coverage" />
</a>
</div>
<div align="center">
<img src="https://qiniu.wzhecnu.cn/PicBed6/picgo/chattool.jpeg" alt="ChatAPI Toolkit" width="360" style="border-radius: 20px;">
[English](README_en.md) | [简体中文](README.md)
</div>
A `Chat` object built on the OpenAI API, supporting multi-turn conversations, proxies, and asynchronous batch processing.
## Installation
```bash
pip install chattool --upgrade
```
## Usage

### Set the API key and proxy URL

Set the key and proxy via environment variables, for example by appending the following to `~/.bashrc` or `~/.zshrc`:
```bash
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export OPENAI_API_BASE="https://api.example.com/v1"
export OPENAI_API_BASE_URL="https://api.example.com" # optional
```
Or set them in code:
```py
import chattool
chattool.api_key = "sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
chattool.api_base = "https://api.example.com/v1"
```
Note: the environment variable `OPENAI_API_BASE` takes precedence over `OPENAI_API_BASE_URL`; setting either one is enough.
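That precedence rule can be sketched in plain Python. This is an illustration of the rule only, not chattool's actual source; in particular, the `/v1` suffix handling and the default endpoint are assumptions:

```python
import os

def resolve_api_base(env):
    """Pick the API base: OPENAI_API_BASE wins over OPENAI_API_BASE_URL."""
    base = env.get("OPENAI_API_BASE")
    if base:
        return base
    url = env.get("OPENAI_API_BASE_URL")
    if url:
        # Derive the versioned path from the bare base URL (assumption)
        return url.rstrip("/") + "/v1"
    return "https://api.openai.com/v1"  # assumed default endpoint

print(resolve_api_base(dict(os.environ)))
```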
### Examples

Example 1, multi-turn conversation:
```python
from chattool import Chat

# First round of conversation
chat = Chat("Hello, GPT-3.5!")
resp = chat.getresponse()
# Continue the conversation
chat.user("How are you?")
next_resp = chat.getresponse()
# Append an assistant reply manually
chat.user("What's your name?")
chat.assistant("My name is GPT-3.5.")
# Save the conversation
chat.save("chat.json", mode="w")  # defaults to "a"
# Print the chat history
chat.print_log()
```
Example 2, batch processing (serial), with the checkpoint file `chat.jsonl`:
```python
from chattool import Chat, process_chats

# Define the processing function
def data2chat(msg):
    chat = Chat()
    chat.system("You are a skilled number translator.")
    chat.user(f"Translate this number into Roman numerals: {msg}")
    # Note: fetch the response inside the function
    chat.getresponse()
    return chat

checkpoint = "chat.jsonl"  # name of the checkpoint file
msgs = ["1", "2", "3", "4", "5", "6", "7", "8", "9"]
# Process the data; if the checkpoint exists, resume from the last interruption
continue_chats = process_chats(msgs, data2chat, checkpoint)
```
Example 3, batch processing (async, in parallel): print hello in different languages, using two coroutines:
```python
from chattool import async_chat_completion, load_chats, Chat
langs = ["python", "java", "Julia", "C++"]
def data2chat(msg):
    chat = Chat()
    chat.user("Print hello world in the %s language" % msg)
    # Note: no getresponse here; the async runner fetches the replies
    return chat
async_chat_completion(langs, chkpoint="async_chat.jsonl", nproc=2, data2chat=data2chat)
chats = load_chats("async_chat.jsonl")
```
When running in a Jupyter Notebook, because of its [special event-loop handling](https://stackoverflow.com/questions/47518874/how-do-i-run-python-asyncio-code-in-a-jupyter-notebook), use the `await` keyword together with the `wait=True` parameter:
```python
await async_chat_completion(langs, chkpoint="async_chat.jsonl", nproc=2, data2chat=data2chat, wait=True)
```
### Tool calls

Define the functions:
```python
def add(a: int, b: int) -> int:
    """
    This function adds two numbers.

    Parameters:
        a (int): The first number.
        b (int): The second number.

    Returns:
        int: The sum of the two numbers.
    """
    return a + b

def mult(a: int, b: int) -> int:
    """This function multiplies two numbers.
    It is a useful calculator!

    Args:
        a (int): The first number.
        b (int): The second number.

    Returns:
        int: The product of the two numbers.
    """
    return a * b
```
Attach the functions to the `Chat` object:
```py
from chattool import Chat
chat = Chat("find the value of (23723 * 1322312 ) + 12312")
chat.settools([add, mult])
```
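Under the hood, a tool-calling setup like this typically turns each function's signature and docstring into an OpenAI-style JSON schema that the model can see. A minimal sketch of that derivation using `inspect` (illustrative only; chattool's actual schema builder may differ):

```python
import inspect

# Map Python annotations to JSON-schema type names
TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def func_to_schema(func):
    """Build an OpenAI-style tool schema from a function's signature."""
    sig = inspect.signature(func)
    props = {name: {"type": TYPE_MAP.get(p.annotation, "string")}
             for name, p in sig.parameters.items()}
    doc = (func.__doc__ or "").strip()
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            # Use the first docstring line as the tool description
            "description": doc.splitlines()[0] if doc else "",
            "parameters": {
                "type": "object",
                "properties": props,
                "required": list(props),
            },
        },
    }

def add(a: int, b: int) -> int:
    """This function adds two numbers."""
    return a + b

print(func_to_schema(add)["function"]["name"])  # add
```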
Run the tools automatically; the loop decides from the model's replies whether to stop, and `maxturns` defaults to 3:
```py
chat.autoresponse(display=True, tool_type='tool_choice', maxturns=3)
```
Use the general-purpose `python` function:
```py
from chattool.functioncall import python
chat = Chat("find the value of (23723 * 1322312 ) + 12312")
chat.settools([python])
chat.autoresponse(display=True, tool_type='tool_choice', maxturns=3)
```
Note that executing arbitrary model-generated code carries inherent risks.
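One common mitigation is to evaluate generated code against a stripped-down namespace. The sketch below only illustrates the idea; removing builtins is NOT a real sandbox, and true isolation requires a separate process, container, or VM:

```python
def run_restricted(code):
    """Execute code with builtins removed and return its `result` variable.
    A weak guard shown for illustration only -- not a real sandbox."""
    scope = {"__builtins__": {}}
    try:
        exec(code, scope)
    except Exception as exc:
        return f"blocked: {exc}"
    return scope.get("result")

print(run_restricted("result = 1 + 2"))  # plain arithmetic still runs
print(run_restricted("import os"))       # imports fail without __import__
```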
## License

Open-sourced under the MIT license.

## Changelog
- Current version `3.2.1`: simplified the async and serial processing interfaces, and renamed submodules to avoid conflicts
- Version `2.3.0`: added support for calling external tools, asynchronous data processing, and model fine-tuning
- Since version `2.0.0`, the package is renamed `chattool`
- Since version `1.0.0`, supports asynchronous data processing
- Since version `0.6.0`, supports the [function call](https://platform.openai.com/docs/guides/gpt/function-calling) feature
- Since version `0.5.0`, supports batch processing with `process_chats`, using a `msg2chat` function and a `checkpoint` file
- Since version `0.4.0`, maintenance moved to the [CubeNLP](https://github.com/cubenlp) organization account
- Since version `0.3.0`, no longer depends on the `openai.py` module; requests are sent directly via `requests`
    - Supports a separate API key for each `Chat`
    - Supports proxy links
- Version `0.2.0` switched to the `Chat` class as the central interaction object