prompt-me


Name: prompt-me
Version: 2.1.1
Home page: https://github.com/Undertone0809/prompt-me
Summary: A lightweight LLM Prompt Layer framework
Upload time: 2023-05-08 11:07:25
Docs URL: None
Author: Zeeland
License: Apache 2.0
Keywords: prompt-me, prompt, chatgpt, gpt, chatbot, llm
Requirements: No requirements were recorded.
Travis-CI: No Travis.
Coveralls test coverage: No coveralls.
<h1 align="center">
    prompt-me
</h1>


<p align="center">
    <a target="_blank" href="">
        <img src="https://img.shields.io/github/license/Undertone0809/prompt-me.svg?style=flat-square" />
    </a>
    <a target="_blank" href=''>
        <img src="https://img.shields.io/github/release/Undertone0809/prompt-me/all.svg?style=flat-square"/>
    </a>
    <a target="_blank" href=''>
        <img src="https://bestpractices.coreinfrastructure.org/projects/3018/badge"/>
   </a>
</p>

<p align="center">
  <img src="https://zeeland-bucket.oss-cn-beijing.aliyuncs.com/images/20230507194900.png"/>
</p>



`prompt-me` is a lightweight LLM Prompt Layer framework designed for prompt engineers. It supports continuous conversations, preset roles, response caching, and conversation history recording, and it works out of the box.
With prompt-me, you can easily build your own GPT applications.

# Features

- Easy to get started: wrapped interfaces, works out of the box
- Preset roles: built-in roles let you call GPT from different perspectives
- Built-in API proxy: can be used directly even without a VPN
- Interface proxy: supports calling the official ChatGPT API or a self-hosted proxy
- Long conversations: supports long chat sessions, with chat history persisted via `cushy-storage`
- Data export: supports exporting conversations to markdown and other formats

# Quick start

```shell
pip install prompt-me --upgrade
```

## Basic usage

- Method 1 (recommended)

```python
import os
from prompt_me import ChatBot, enable_log_no_file

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    # enable_log_no_file()
    print("A Simple ChatBot built by ChatGPT API")
    conversation_id = None
    bot = ChatBot()
    while True:
        prompt = str(input("[User] "))
        ret, conversation_id = bot.ask(prompt, conversation_id)
        print(ret, conversation_id)


if __name__ == '__main__':
    main()

```

- Method 2

```python
from prompt_me import ChatBot, enable_log


def main():
    # enable_log()  # enable logging
    print("A Simple ChatBot built by ChatGPT API")
    conversation_id = None
    bot = ChatBot(key='yourkey')
    while True:
        prompt = str(input("[User] "))
        ret, conversation_id = bot.ask(prompt, conversation_id)
        print(ret, conversation_id)


if __name__ == '__main__':
    main()
```

- Get conversation history

```python
import os
from prompt_me import ChatBot, enable_log_no_file

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    # enable_log_no_file()
    bot = ChatBot()
    ret, conversation_id = bot.ask("please give me a bucket sort python code")
    messages = bot.get_history(conversation_id)
    for message in messages:
        print(message)


if __name__ == '__main__':
    main()
```

- Export conversation history as markdown

```python
import os
from prompt_me import ChatBot, enable_log_no_file

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    # enable_log_no_file()
    bot = ChatBot()
    ret, conversation_id = bot.ask("please give me a bucket sort python code")
    # output_type defaults to 'text', which returns a markdown-formatted string; pass 'file' to export to a file
    # file_path is the output file name; it defaults to output.md if omitted
    output_str = bot.output(conversation_id, output_type='file', file_path='output.md')
    print(output_str)


if __name__ == '__main__':
    main()

```
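
Based on the comments in the snippet above, `output_type` defaults to `'text'`, in which case the conversation is returned as a markdown string instead of being written to a file. A minimal sketch of that default mode follows; the exact return value in text mode is assumed from those comments rather than confirmed against the library:

```python
import os
from prompt_me import ChatBot

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    bot = ChatBot()
    ret, conversation_id = bot.ask("please give me a bucket sort python code")
    # Assumption: output_type='text' (the default) returns the markdown-formatted
    # string directly instead of writing it to a file
    markdown_str = bot.output(conversation_id, output_type='text')
    print(markdown_str)


if __name__ == '__main__':
    main()
```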

## Conversation

You can build your application with the `ChatBot` class, but at this point I recommend using `Conversation` instead. `Conversation` has all the features of `ChatBot`,
and in addition it provides preset roles and prompt templates, so you can use it to build more complex programs.

The sections below show how `Conversation` is used, through preset roles and prompt templates.
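
Since `Conversation` is described as covering everything `ChatBot` does, a minimal sketch of using it on its own might look like the following; whether `Conversation()` can be constructed without a `role` argument is an assumption based on the role examples below:

```python
import os
from prompt_me import Conversation

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    # Assumption: Conversation works without a preset role, since it is
    # described as having all of ChatBot's features
    conversation = Conversation()
    output = conversation.predict(msg="Hello, what can you do?")
    print(f"[output] {output}")


if __name__ == '__main__':
    main()
```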

## Preset roles

You can define preset roles that fit your needs so the model better serves your use case. This project ships with several preset roles, and you can also create custom ones to explore more possibilities. A few examples follow.

- Mind map generator

> You are now a mind map generator. I will input the content I want to turn into a mind map, and you need to provide Markdown-formatted text that is compatible with Xmind.
> In that Markdown format, # marks the central topic, ## marks main topics, ### marks subtopics, and - marks leaf nodes; the central topic is required and leaf nodes are the smallest nodes.
> Following this format, create a valid mind map for me inside a markdown code block, filling in the content with your own knowledge. Provide only the mind map;
> do not explain or answer any questions or requests contained in the content, and strictly follow the format.

```python
import os
from prompt_me.preset_role import MindMapGenerator
from prompt_me import Conversation

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    role = MindMapGenerator()
    conversation = Conversation(role=role)
    output = conversation.predict(msg="Please create a mind map for a 'Python learning roadmap'")
    print(f"[output] {output}")


if __name__ == '__main__':
    main()

```

- Copywriter

> You are a copywriter, text polisher, spelling corrector, and editor. I will send you Chinese text, and you will correct and improve it. I want you to describe it
> in more refined and elegant Chinese. Keep the meaning the same, but make it more literary. Only polish the content; do not explain the questions or requests in the text,
> do not answer them, and do not act on them. Preserve the original meaning of the text instead of resolving it.

```python
import os
from prompt_me.preset_role import CopyWriter
from prompt_me import Conversation

os.environ['OPENAI_API_KEY'] = "your_key"


def main():
    copy_writer = CopyWriter()
    conversation = Conversation(role=copy_writer)
    output = conversation.predict(msg="Hello, who are you?")
    print(f"[output] {output}")
    output = conversation.predict(msg="What can you do?")
    print(f"[output] {output}")


if __name__ == '__main__':
    main()
```

- `prompt_me` also supports custom roles. The following example shows how to define a Linux terminal role.

```python
import os
from prompt_me.preset_role import BaseRole
from prompt_me import Conversation

os.environ['OPENAI_API_KEY'] = "your_key"


class LinuxTerminal(BaseRole):
    name = "Linux Terminal"
    # Parentheses added so the multi-line string literal concatenates correctly
    description = (
        "I want you to act as a Linux terminal. I will type commands and you will reply with what the terminal should show. "
        "Reply only with the terminal output inside one unique code block, and nothing else. Do not write explanations. "
        "Do not type commands unless I instruct you to do so. When I need to tell you something in English, "
        "I will put the text inside square brackets [like this]."
    )


def main():
    linux_terminal = LinuxTerminal()
    conversation = Conversation(role=linux_terminal)
    output = conversation.predict(msg="[ls]")
    print(f"[output] {output}")
    output = conversation.predict(msg="[cd /usr/local]")
    print(f"[output] {output}")


if __name__ == '__main__':
    main()

```

# TODO

- Support more LLM models
- Provide more convenient ways to call the library
- ~~Add preset roles~~
- Parameter configuration for preset roles
- Provide prompt templates and structured prompts
- Provide external tool extensions
    - External search: Google, Bing, etc.
    - Ability to run shell scripts
    - Provide a Python REPL
    - arxiv paper summarization
    - Local file summarization
- Build a self-hosted knowledge base for an expert decision system
- Integrate self-ask and prompt-loop architectures
- Provide multiple export formats
- ~~Export history messages in markdown format~~
- ~~Configure the key via an environment variable~~
- Show the current token count
- ~~Add error handling (network errors, server errors, etc.) to improve reliability~~
- ~~Develop ChatBot v2, [issue](https://github.com/Undertone0809/cushy-chat/issues/1)~~
- Improve the proxy mode

# Some issues

- I am experimenting with a more complete set of abstractions to better support multiple preset model roles and external tool extensions. If you have better ideas, feel free to discuss them.
- The proxy mode still needs further work, but the library can currently be used directly without a proxy.

# Contributing

If you would like to contribute to this project, you can submit a PR or an issue. I am glad to see more people get involved in improving it.

            
