| Field | Value |
| --- | --- |
| Name | think-llm-client |
| Version | 0.3.3 |
| Summary | A flexible SDK for LLM and VLM model interactions with CLI support |
| upload_time | 2025-02-20 03:49:28 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.12 |
| license | MIT |
| keywords | ai, cli, gpt, llm, openai, vision |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

# Think LLM Client
A flexible SDK for LLM and VLM model interaction, supporting basic model calls and a CLI interface.

## Features

- Multiple model types (LLM, VLM)
- Multiple providers and models
- A basic model-interaction API
- A rich CLI interface
- Image analysis and comparison
- Streaming output and chain-of-thought reasoning
- Chat history management
- Complete type hints and documentation

## Installation

Install with [uv](https://github.com/astral-sh/uv) (recommended):

```bash
uv pip install think-llm-client
```
Or with plain pip:
```bash
pip install think-llm-client
```
## Quick Start

### Basic Usage

```python
import asyncio
from think_llm_client import LLMClient

async def main():
    # Create a client
    client = LLMClient()

    # Select a model
    client.set_model("llm", "openai", "gpt-4")

    # Basic chat
    reasoning, response = await client.chat("What are decorators in Python?")
    print(f"Answer: {response}")

    # Image analysis
    reasoning, response = await client.analyze_image(
        "image.jpg",
        "Analyze the pros and cons of this product"
    )
    print(f"Image analysis: {response}")

if __name__ == "__main__":
    asyncio.run(main())
```
### CLI Interface

```bash
# Start an interactive chat
python -m think_llm_client.cli chat

# Analyze an image
python -m think_llm_client.cli analyze image.jpg "Describe this image"
```
## Configuration

### Configuration File Location

The configuration file can live in either of these locations:

1. `config.json` in the project root
2. `.think_llm_client/config.json` in your home directory

You can also pass an explicit path when creating the client:

```python
from think_llm_client import LLMClient
client = LLMClient(config_path="/path/to/your/config.json")
```
### Configuration File Format

The configuration file is JSON and supports multiple model types (LLM, VLM) and multiple providers:

```json
{
  "model_types": {
    "llm": {
      "providers": {
        "openai": {
          "api_key": "your-api-key",
          "api_url": "https://api.openai.com/v1",
          "model": {
            "gpt-4": {
              "max_tokens": 2000,
              "system_prompt": "You are a helpful assistant"
            },
            "gpt-3.5-turbo": {
              "max_tokens": 1000
            }
          }
        }
      }
    },
    "vlm": {
      "providers": {
        "openai": {
          "api_key": "your-api-key",
          "api_url": "https://api.openai.com/v1",
          "model": {
            "gpt-4-vision-preview": {
              "max_tokens": 1000
            }
          }
        }
      }
    }
  }
}
```
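
Before pointing the client at a hand-written config, it can help to sanity-check the file against this schema. A minimal stdlib sketch (the key names follow the example above):

```python
import json
from pathlib import Path

# Sanity-check a config file against the schema documented above.
config = json.loads(Path("config.json").read_text(encoding="utf-8"))
for model_type, spec in config["model_types"].items():
    for provider, pconf in spec["providers"].items():
        missing = {"api_key", "api_url", "model"} - pconf.keys()
        assert not missing, f"{model_type}/{provider} is missing {missing}"
        print(model_type, provider, "->", ", ".join(pconf["model"]))
```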
Or use an environment variable:

```bash
export OPENAI_API_KEY=your-api-key
```
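
If you need the key to come from the environment while still supplying a config file, a minimal sketch is shown below; it assumes only the documented `config_path` parameter, and the temp-file approach is a workaround, not part of the library's API:

```python
import json
import os
import tempfile

from think_llm_client import LLMClient

# Build the documented config structure at runtime, pulling the API key
# from the OPENAI_API_KEY environment variable instead of hard-coding it.
config = {
    "model_types": {
        "llm": {
            "providers": {
                "openai": {
                    "api_key": os.environ["OPENAI_API_KEY"],
                    "api_url": "https://api.openai.com/v1",
                    "model": {"gpt-4": {"max_tokens": 2000}},
                }
            }
        }
    }
}

# Write it to a temporary file and point the client at it.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(config, f)
client = LLMClient(config_path=f.name)
```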
## Detailed Usage

### 1. Basic Chat

```python
import asyncio
from think_llm_client import LLMClient

async def main():
    # Create a client
    client = LLMClient()

    # Select a model
    client.set_model("llm", "openai", "gpt-4")

    # Basic chat (streams output by default)
    reasoning, response = await client.chat("What are decorators in Python?")
    print(f"Reasoning: {reasoning}")
    print(f"Answer: {response}")

    # Non-streaming chat
    reasoning, response = await client.chat(
        "Give me an example of a decorator",
        stream=False
    )

if __name__ == "__main__":
    asyncio.run(main())
```
### 2. Image Analysis

```python
async def analyze_images():
    client = LLMClient()
    client.set_model("vlm", "openai", "gpt-4-vision")

    # Analyze a single image
    reasoning, response = await client.analyze_image(
        "product.jpg",
        "Analyze the pros and cons of this product"
    )

    # Compare multiple images
    reasoning, response = await client.compare_images(
        ["image1.jpg", "image2.jpg"],
        "Compare the differences between these two images"
    )
```
### 3. Chat History Management

```python
async def manage_chat_history():
    client = LLMClient()
    client.set_model("llm", "openai", "gpt-4")

    # Have a conversation
    await client.chat("Hello")
    await client.chat("Nice weather today")

    # Save the chat history
    client.save_chat_history("my_chat.json")

    # Clear the current chat history
    client.clear_history()

    # Load the earlier chat history
    client.load_chat_history_from_file("my_chat.json")

    # List the available saved histories
    histories = client.get_available_histories()
    for path, timestamp in histories:
        print(f"History: {path}, time: {timestamp}")
```
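
Because histories are saved as plain JSON files, you can also inspect one with the standard library. A minimal sketch, assuming the file stores a list of message objects with `role` and `content` fields (the exact on-disk schema isn't documented here):

```python
import json
from pathlib import Path

# Assumed schema: a list of {"role": ..., "content": ...} message dicts.
for msg in json.loads(Path("my_chat.json").read_text(encoding="utf-8")):
    print(f"{msg['role']}: {msg['content'][:60]}")
```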
### 4. Handling Streaming Output

```python
async def handle_stream():
    client = LLMClient()
    client.set_model("llm", "openai", "gpt-4")

    async for type_, chunk, full_content in client.chat_stream("Tell me a story"):
        if type_ == "reasoning":
            print(f"Reasoning: {chunk}", end="")
        else:
            print(f"Content: {chunk}", end="")
```
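
The third tuple element appears to carry the text accumulated so far, so you don't have to concatenate chunks yourself. A sketch that keeps only the final answer, under the assumption that `full_content` is the running concatenation for the current chunk type:

```python
from think_llm_client import LLMClient

async def collect_answer(client: LLMClient, prompt: str) -> str:
    # Keep the last accumulated "content" value; by assumption it holds
    # the complete answer once the stream ends.
    answer = ""
    async for type_, chunk, full_content in client.chat_stream(prompt):
        if type_ == "content":
            answer = full_content
    return answer
```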
## CLI Usage

### Basic Chat

```bash
# Start an interactive chat
python -m think_llm_client.cli chat

# Chat with a specific model
python -m think_llm_client.cli chat --model-type llm --provider openai --model gpt-4
```
### Image Analysis

```bash
# Analyze a single image
python -m think_llm_client.cli analyze image.jpg "Describe this image"

# Compare multiple images
python -m think_llm_client.cli compare image1.jpg image2.jpg "Compare the differences between these two images"
```
## Advanced Features

### Streaming Output

```python
async def main():
    client = LLMClient()
    client.set_model("llm", "openai", "gpt-4")

    # Consume the stream
    async for chunk_type, chunk, full_content in client.chat_stream("Explain quantum computing"):
        if chunk_type == "reasoning":
            print(f"Reasoning: {chunk}", end="")
        elif chunk_type == "content":
            print(f"Answer: {chunk}", end="")
```
### Chat History Management

```python
# Save the chat history
client.save_history("chat_history.json")

# Load a chat history
client.load_history_from_file("chat_history.json")
```
## Development

Install the development dependencies with [uv](https://github.com/astral-sh/uv) (recommended):

```bash
uv pip install -e ".[dev]"
```
Or with plain pip:
```bash
pip install -e ".[dev]"
```
Run the tests and code checks:
```bash
# Run the tests
pytest

# Formatting, linting, and type checking
black .
ruff check .
mypy .
```
## Adding a New Git Tag and Pushing to GitHub

1. Make sure all changes are committed and pushed to the main branch:
```bash
git add .
git commit -m "Your commit message"
git push origin main
```
2. Create a new Git tag:
```bash
git tag vX.Y.Z
```
3. Push the tag to GitHub:
```bash
git push origin vX.Y.Z
```
## License

MIT License
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "think-llm-client",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.12",
    "maintainer_email": null,
    "keywords": "ai, cli, gpt, llm, openai, vision",
    "author": null,
    "author_email": "ThinkThinking <yezhenjie@outlook.de>",
    "download_url": "https://files.pythonhosted.org/packages/89/64/291554c73a8d4a0b452dc9b122c5b65d5ffdc859abd1a7955588b8e2fda5/think_llm_client-0.3.3.tar.gz",
    "platform": null,
"description": "# Think LLM Client\n\n\u4e00\u4e2a\u7075\u6d3b\u7684 LLM \u548c VLM \u6a21\u578b\u4ea4\u4e92 SDK\uff0c\u652f\u6301\u57fa\u7840\u7684\u6a21\u578b\u4ea4\u4e92\u548c CLI \u754c\u9762\u3002\n\n## \u7279\u6027\n\n- \u652f\u6301\u591a\u79cd\u6a21\u578b\u7c7b\u578b\uff08LLM\u3001VLM\uff09\n- \u652f\u6301\u591a\u4e2a\u63d0\u4f9b\u5546\u548c\u6a21\u578b\n- \u63d0\u4f9b\u57fa\u7840\u7684\u6a21\u578b\u4ea4\u4e92\u63a5\u53e3\n- \u63d0\u4f9b\u4e30\u5bcc\u7684 CLI \u754c\u9762\n- \u652f\u6301\u56fe\u7247\u5206\u6790\u548c\u6bd4\u8f83\n- \u652f\u6301\u6d41\u5f0f\u8f93\u51fa\u548c\u601d\u7ef4\u94fe\n- \u652f\u6301\u5bf9\u8bdd\u5386\u53f2\u7ba1\u7406\n- \u7c7b\u578b\u63d0\u793a\u548c\u6587\u6863\u5b8c\u5907\n\n## \u5b89\u88c5\n\n\u4f7f\u7528 [uv](https://github.com/astral-sh/uv) \u5b89\u88c5\uff08\u63a8\u8350\uff09\uff1a\n\n```bash\nuv pip install think-llm-client\n```\n\n\u6216\u4f7f\u7528\u4f20\u7edf\u7684 pip\uff1a\n\n```bash\npip install think-llm-client\n```\n\n## \u5feb\u901f\u5f00\u59cb\n\n### \u57fa\u7840\u7528\u6cd5\n\n```python\nimport asyncio\nfrom think_llm_client import LLMClient\n\nasync def main():\n # \u521b\u5efa\u5ba2\u6237\u7aef\n client = LLMClient()\n \n # \u8bbe\u7f6e\u6a21\u578b\n client.set_model(\"llm\", \"openai\", \"gpt-4\")\n \n # \u57fa\u7840\u5bf9\u8bdd\n reasoning, response = await client.chat(\"Python \u4e2d\u7684\u88c5\u9970\u5668\u662f\u4ec0\u4e48\uff1f\")\n print(f\"\u56de\u7b54\uff1a{response}\")\n \n # \u56fe\u7247\u5206\u6790\n reasoning, response = await client.analyze_image(\n \"image.jpg\",\n \"\u5206\u6790\u8fd9\u4e2a\u4ea7\u54c1\u7684\u4f18\u7f3a\u70b9\"\n )\n print(f\"\u56fe\u7247\u5206\u6790\uff1a{response}\")\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n### CLI \u754c\u9762\n\n```bash\n# \u542f\u52a8\u4ea4\u4e92\u5f0f\u5bf9\u8bdd\npython -m think_llm_client.cli chat\n\n# \u5206\u6790\u56fe\u7247\npython -m think_llm_client.cli analyze image.jpg \"\u63cf\u8ff0\u8fd9\u4e2a\u56fe\u7247\"\n```\n\n## \u914d\u7f6e\n\n### \u914d\u7f6e\u6587\u4ef6\u4f4d\u7f6e\n\n\u914d\u7f6e\u6587\u4ef6\u53ef\u4ee5\u653e\u7f6e\u5728\u4ee5\u4e0b\u4f4d\u7f6e\uff1a\n1. \u9879\u76ee\u6839\u76ee\u5f55\u7684 `config.json`\n2. \u7528\u6237\u76ee\u5f55\u4e0b\u7684 `.think_llm_client/config.json`\n\n\u4f60\u53ef\u4ee5\u5728\u521b\u5efa\u5ba2\u6237\u7aef\u65f6\u6307\u5b9a\u914d\u7f6e\u6587\u4ef6\u8def\u5f84\uff1a\n\n```python\nfrom think_llm_client import LLMClient\n\nclient = LLMClient(config_path=\"/path/to/your/config.json\")\n```\n\n### \u914d\u7f6e\u6587\u4ef6\u683c\u5f0f\n\n\u914d\u7f6e\u6587\u4ef6\u4f7f\u7528 JSON \u683c\u5f0f\uff0c\u652f\u6301\u914d\u7f6e\u591a\u79cd\u6a21\u578b\u7c7b\u578b\uff08LLM\u3001VLM\uff09\u548c\u591a\u4e2a\u63d0\u4f9b\u5546\uff1a\n\n```json\n{\n \"model_types\": {\n \"llm\": {\n \"providers\": {\n \"openai\": {\n \"api_key\": \"your-api-key\",\n \"api_url\": \"https://api.openai.com/v1\",\n \"model\": {\n \"gpt-4\": {\n \"max_tokens\": 2000,\n \"system_prompt\": \"\u4f60\u662f\u4e00\u4e2a\u6709\u5e2e\u52a9\u7684\u52a9\u624b\"\n },\n \"gpt-3.5-turbo\": {\n \"max_tokens\": 1000\n }\n }\n }\n }\n },\n \"vlm\": {\n \"providers\": {\n \"openai\": {\n \"api_key\": \"your-api-key\",\n \"api_url\": \"https://api.openai.com/v1\",\n \"model\": {\n \"gpt-4-vision-preview\": {\n \"max_tokens\": 1000\n }\n }\n }\n }\n }\n }\n}\n```\n\n\u6216\u8005\u4f7f\u7528\u73af\u5883\u53d8\u91cf\uff1a\n\n```bash\nexport OPENAI_API_KEY=your-api-key\n```\n\n## \u8be6\u7ec6\u4f7f\u7528\u8bf4\u660e\n\n### 1. 
\u57fa\u7840\u5bf9\u8bdd\n\n```python\nimport asyncio\nfrom think_llm_client import LLMClient\n\nasync def main():\n # \u521b\u5efa\u5ba2\u6237\u7aef\n client = LLMClient()\n \n # \u8bbe\u7f6e\u6a21\u578b\n client.set_model(\"llm\", \"openai\", \"gpt-4\")\n \n # \u57fa\u7840\u5bf9\u8bdd\uff08\u9ed8\u8ba4\u4f7f\u7528\u6d41\u5f0f\u8f93\u51fa\uff09\n reasoning, response = await client.chat(\"Python \u4e2d\u7684\u88c5\u9970\u5668\u662f\u4ec0\u4e48\uff1f\")\n print(f\"\u601d\u7ef4\u8fc7\u7a0b\uff1a{reasoning}\")\n print(f\"\u56de\u7b54\uff1a{response}\")\n \n # \u975e\u6d41\u5f0f\u5bf9\u8bdd\n reasoning, response = await client.chat(\n \"\u7ed9\u6211\u4e00\u4e2a\u88c5\u9970\u5668\u7684\u4f8b\u5b50\",\n stream=False\n )\n\nif __name__ == \"__main__\":\n asyncio.run(main())\n```\n\n### 2. \u56fe\u7247\u5206\u6790\n\n```python\nasync def analyze_images():\n client = LLMClient()\n client.set_model(\"vlm\", \"openai\", \"gpt-4-vision\")\n \n # \u5206\u6790\u5355\u5f20\u56fe\u7247\n reasoning, response = await client.analyze_image(\n \"product.jpg\",\n \"\u5206\u6790\u8fd9\u4e2a\u4ea7\u54c1\u7684\u4f18\u7f3a\u70b9\"\n )\n \n # \u6bd4\u8f83\u591a\u5f20\u56fe\u7247\n reasoning, response = await client.compare_images(\n [\"image1.jpg\", \"image2.jpg\"],\n \"\u6bd4\u8f83\u8fd9\u4e24\u5f20\u56fe\u7247\u7684\u533a\u522b\"\n )\n```\n\n### 3. \u5bf9\u8bdd\u5386\u53f2\u7ba1\u7406\n\n```python\nasync def manage_chat_history():\n client = LLMClient()\n client.set_model(\"llm\", \"openai\", \"gpt-4\")\n \n # \u8fdb\u884c\u5bf9\u8bdd\n await client.chat(\"\u4f60\u597d\")\n await client.chat(\"\u4eca\u5929\u5929\u6c14\u4e0d\u9519\")\n \n # \u4fdd\u5b58\u5bf9\u8bdd\u5386\u53f2\n client.save_chat_history(\"my_chat.json\")\n \n # \u6e05\u9664\u5f53\u524d\u5bf9\u8bdd\u5386\u53f2\n client.clear_history()\n \n # \u52a0\u8f7d\u4e4b\u524d\u7684\u5bf9\u8bdd\u5386\u53f2\n client.load_chat_history_from_file(\"my_chat.json\")\n \n # \u83b7\u53d6\u53ef\u7528\u7684\u5386\u53f2\u8bb0\u5f55\n histories = client.get_available_histories()\n for path, timestamp in histories:\n print(f\"\u5386\u53f2\u8bb0\u5f55\uff1a{path}, \u65f6\u95f4\uff1a{timestamp}\")\n```\n\n### 4. 
\u6d41\u5f0f\u8f93\u51fa\u5904\u7406\n\n```python\nasync def handle_stream():\n client = LLMClient()\n client.set_model(\"llm\", \"openai\", \"gpt-4\")\n \n async for type_, chunk, full_content in client.chat_stream(\"\u8bb2\u4e2a\u6545\u4e8b\"):\n if type_ == \"reasoning\":\n print(f\"\u601d\u7ef4\u8fc7\u7a0b: {chunk}\", end=\"\")\n else:\n print(f\"\u5185\u5bb9: {chunk}\", end=\"\")\n```\n\n## CLI \u4f7f\u7528\n\n### \u57fa\u7840\u5bf9\u8bdd\n\n```bash\n# \u542f\u52a8\u4ea4\u4e92\u5f0f\u5bf9\u8bdd\npython -m think_llm_client.cli chat\n\n# \u6307\u5b9a\u6a21\u578b\u8fdb\u884c\u5bf9\u8bdd\npython -m think_llm_client.cli chat --model-type llm --provider openai --model gpt-4\n```\n\n### \u56fe\u7247\u5206\u6790\n\n```bash\n# \u5206\u6790\u5355\u5f20\u56fe\u7247\npython -m think_llm_client.cli analyze image.jpg \"\u63cf\u8ff0\u8fd9\u4e2a\u56fe\u7247\"\n\n# \u6bd4\u8f83\u591a\u5f20\u56fe\u7247\npython -m think_llm_client.cli compare image1.jpg image2.jpg \"\u6bd4\u8f83\u8fd9\u4e24\u5f20\u56fe\u7247\u7684\u533a\u522b\"\n```\n\n## \u9ad8\u7ea7\u7279\u6027\n\n### \u6d41\u5f0f\u8f93\u51fa\n\n```python\nasync def main():\n client = LLMClient()\n client.set_model(\"llm\", \"openai\", \"gpt-4\")\n \n # \u542f\u7528\u6d41\u5f0f\u8f93\u51fa\n async for chunk_type, chunk, full_content in client.chat_stream(\"\u89e3\u91ca\u91cf\u5b50\u8ba1\u7b97\"):\n if chunk_type == \"reasoning\":\n print(f\"\u601d\u7ef4\u8fc7\u7a0b: {chunk}\", end=\"\")\n elif chunk_type == \"content\":\n print(f\"\u56de\u7b54: {chunk}\", end=\"\")\n```\n\n### \u5bf9\u8bdd\u5386\u53f2\u7ba1\u7406\n\n```python\n# \u4fdd\u5b58\u5bf9\u8bdd\u5386\u53f2\nclient.save_history(\"chat_history.json\")\n\n# \u52a0\u8f7d\u5bf9\u8bdd\u5386\u53f2\nclient.load_history_from_file(\"chat_history.json\")\n```\n\n## \u5f00\u53d1\n\n\u4f7f\u7528 [uv](https://github.com/astral-sh/uv) \u5b89\u88c5\u5f00\u53d1\u4f9d\u8d56\uff08\u63a8\u8350\uff09\uff1a\n\n```bash\nuv pip install -e \".[dev]\"\n```\n\n\u6216\u4f7f\u7528\u4f20\u7edf\u7684 pip\uff1a\n\n```bash\npip install -e \".[dev]\"\n```\n\n\u8fd0\u884c\u6d4b\u8bd5\u548c\u4ee3\u7801\u68c0\u67e5\uff1a\n\n```bash\n# \u8fd0\u884c\u6d4b\u8bd5\npytest\n\n# \u4ee3\u7801\u683c\u5f0f\u5316\nblack .\nruff check .\nmypy .\n```\n\n## \u6dfb\u52a0\u65b0\u7684 Git \u6807\u7b7e\u5e76\u63a8\u9001\u5230 GitHub\n\n1. \u786e\u4fdd\u6240\u6709\u66f4\u6539\u5df2\u63d0\u4ea4\u5e76\u63a8\u9001\u5230\u4e3b\u5206\u652f\uff1a\n ```bash\n git add .\n git commit -m \"Your commit message\"\n git push origin main\n ```\n\n2. \u521b\u5efa\u65b0\u7684 Git \u6807\u7b7e\uff1a\n ```bash\n git tag vX.Y.Z\n ```\n\n3. \u63a8\u9001\u6807\u7b7e\u5230 GitHub\uff1a\n ```bash\n git push origin vX.Y.Z\n ```\n\n## \u8bb8\u53ef\u8bc1\n\nMIT License\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "A flexible SDK for LLM and VLM model interactions with CLI support",
"version": "0.3.3",
"project_urls": {
"Documentation": "https://github.com/thinkthinking/think-llm-client#readme",
"Homepage": "https://github.com/thinkthinking/think-llm-client",
"Issues": "https://github.com/thinkthinking/think-llm-client/issues",
"Repository": "https://github.com/thinkthinking/think-llm-client.git"
},
"split_keywords": [
"ai",
" cli",
" gpt",
" llm",
" openai",
" vision"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "4c080d8c7b64c3693848e1be7e2da585e3a2598cae1a24b77289160d309ff4a0",
"md5": "bae358334a79ff7c121cc0944d15d783",
"sha256": "79ab1dde520eab8fd77cb62683cda832c93f0ba3667d4fad7b7a7fa2133c3a97"
},
"downloads": -1,
"filename": "think_llm_client-0.3.3-py3-none-any.whl",
"has_sig": false,
"md5_digest": "bae358334a79ff7c121cc0944d15d783",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.12",
"size": 20918,
"upload_time": "2025-02-20T03:49:27",
"upload_time_iso_8601": "2025-02-20T03:49:27.075439Z",
"url": "https://files.pythonhosted.org/packages/4c/08/0d8c7b64c3693848e1be7e2da585e3a2598cae1a24b77289160d309ff4a0/think_llm_client-0.3.3-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "8964291554c73a8d4a0b452dc9b122c5b65d5ffdc859abd1a7955588b8e2fda5",
"md5": "3918ce8dedcbcef8e39e5396e11ede68",
"sha256": "2e996bf5bfb0d85ddb32caded21b667abf3ecc49c90b699b15cd235c25fe3c94"
},
"downloads": -1,
"filename": "think_llm_client-0.3.3.tar.gz",
"has_sig": false,
"md5_digest": "3918ce8dedcbcef8e39e5396e11ede68",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.12",
"size": 55432,
"upload_time": "2025-02-20T03:49:28",
"upload_time_iso_8601": "2025-02-20T03:49:28.280046Z",
"url": "https://files.pythonhosted.org/packages/89/64/291554c73a8d4a0b452dc9b122c5b65d5ffdc859abd1a7955588b8e2fda5/think_llm_client-0.3.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-20 03:49:28",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "thinkthinking",
"github_project": "think-llm-client#readme",
"github_not_found": true,
"lcname": "think-llm-client"
}