llmakits 0.6.14

- Home page: https://github.com/tinycen/llmakits
- Summary: A powerful Python toolkit for simplifying LLM integration and management with multi-model scheduling, fault tolerance, and load balancing support
- Upload time: 2025-10-28 08:18:16
- Author: tinycen
- Requires Python: >=3.10
- Keywords: llm, ai, chatgpt, openai, zhipu, dashscope, modelscope, multi-model, scheduling, fault-tolerance
- Requirements: regex, pyyaml, pandas, openai, zhipuai, dashscope, filekits, funcguard, ollama

# llmakits

A powerful Python toolkit that simplifies the integration and management of large language models (LLMs), with support for multi-model scheduling, failover, load balancing, and more.

[![zread](https://img.shields.io/badge/Ask_Zread-_.svg?style=flat&color=00b0aa&labelColor=000000&logo=data%3Aimage%2Fsvg%2Bxml%3Bbase64%2CPHN2ZyB3aWR0aD0iMTYiIGhlaWdodD0iMTYiIHZpZXdCb3g9IjAgMCAxNiAxNiIgZmlsbD0ibm9uZSIgeG1sbnM9Imh0dHA6Ly93d3cudzMub3JnLzIwMDAvc3ZnIj4KPHBhdGggZD0iTTQuOTYxNTYgMS42MDAxSDIuMjQxNTZDMS44ODgxIDEuNjAwMSAxLjYwMTU2IDEuODg2NjQgMS42MDE1NiAyLjI0MDFWNC45NjAxQzEuNjAxNTYgNS4zMTM1NiAxLjg4ODEgNS42MDAxIDIuMjQxNTYgNS42MDAxSDQuOTYxNTZDNS4zMTUwMiA1LjYwMDEgNS42MDE1NiA1LjMxMzU2IDUuNjAxNTYgNC45NjAxVjIuMjQwMUM1LjYwMTU2IDEuODg2NjQgNS4zMTUwMiAxLjYwMDEgNC45NjE1NiAxLjYwMDFaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00Ljk2MTU2IDEwLjM5OTlIMi4yNDE1NkMxLjg4ODEgMTAuMzk5OSAxLjYwMTU2IDEwLjY4NjQgMS42MDE1NiAxMS4wMzk5VjEzLjc1OTlDMS42MDE1NiAxNC4xMTM0IDEuODg4MSAxNC4zOTk5IDIuMjQxNTYgMTQuMzk5OUg0Ljk2MTU2QzUuMzE1MDIgMTQuMzk5OSA1LjYwMTU2IDE0LjExMzQgNS42MDE1NiAxMy43NTk5VjExLjAzOTlDNS42MDE1NiAxMC42ODY0IDUuMzE1MDIgMTAuMzk5OSA0Ljk2MTU2IDEwLjM5OTlaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik0xMy43NTg0IDEuNjAwMUgxMS4wMzg0QzEwLjY4NSAxLjYwMDEgMTAuMzk4NCAxLjg4NjY0IDEwLjM5ODQgMi4yNDAxVjQuOTYwMUMxMC4zOTg0IDUuMzEzNTYgMTAuNjg1IDUuNjAwMSAxMS4wMzg0IDUuNjAwMUgxMy43NTg0QzE0LjExMTkgNS42MDAxIDE0LjM5ODQgNS4zMTM1NiAxNC4zOTg0IDQuOTYwMVYyLjI0MDFDMTQuMzk4NCAxLjg4NjY0IDE0LjExMTkgMS42MDAxIDEzLjc1ODQgMS42MDAxWiIgZmlsbD0iI2ZmZiIvPgo8cGF0aCBkPSJNNCAxMkwxMiA0TDQgMTJaIiBmaWxsPSIjZmZmIi8%2BCjxwYXRoIGQ9Ik00IDEyTDEyIDQiIHN0cm9rZT0iI2ZmZiIgc3Ryb2tlLXdpZHRoPSIxLjUiIHN0cm9rZS1saW5lY2FwPSJyb3VuZCIvPgo8L3N2Zz4K&logoColor=ffffff)](https://zread.ai/tinycen/llmakits)

## Features

- 🚀 **Multi-model support**: works with OpenAI, Zhipu AI, DashScope, ModelScope, and other mainstream LLM platforms;
- 🔄 **Smart scheduling**: built-in model failover and load balancing;
  - Automatic switching: when a model fails, the next available model is tried automatically;
  - Load balancing: once the token or request quota is reached, the next api_key is used automatically;
  - Exhausted-key handling: models whose API keys are exhausted are detected and removed automatically;
- 📊 **Message handling**: powerful message formatting, validation, and extraction;
- 🛡️ **Error handling**: robust retry and exception handling;
- 📝 **Streaming output**: supports streaming response handling;
- ⏱️ **Performance monitoring**: configurable slow-response warning threshold for tracking model latency;
- 🎯 **E-commerce tools**: built-in utilities for e-commerce scenarios;
- 💡 **State preservation**: model instances are cached to avoid repeated instantiation and to preserve API-key rotation state.

## Install / Upgrade

```bash
pip install --upgrade llmakits
```

## Quick Start

### 1. Configure models and API keys

**Model configuration file** (`config/models_config.yaml`):
- Models can be grouped by business scenario
- Each group can hold multiple models for failover
- Models are tried in the configured order until one succeeds

```yaml
# Model group for title generation
generate_title:
  - sdk_name: "dashscope"
    model_name: "qwen3-max-preview"

  - sdk_name: "zhipu"
    model_name: "glm-4-plus"

# Model group for translation
translate_box:
  - sdk_name: "modelscope"
    model_name: "Qwen/Qwen3-32B"

  - sdk_name: "modelscope"
    model_name: "deepseek-ai/DeepSeek-V3"
```

**Key configuration file** (`config/keys_config.yaml`):
- Multiple keys per platform, with automatic load balancing
- When a key hits its daily usage limit, the next key is used automatically
- Each platform is configured independently

```yaml
# Baidu AI Studio platform
aistudio:
  base_url: "https://aistudio.baidu.com/llm/lmapi/v3"
  api_keys: ["your-api-key-1", "your-api-key-2"]

# Baidu AI Studio app platform
aistudio_app:
  base_url: "https://api-i0c6md2d80ndh773.aistudio-app.com/v1"
  api_keys: ["your-api-key-1", "your-api-key-2"]

# Alibaba Cloud DashScope platform
dashscope:
  base_url: "https://dashscope.aliyuncs.com/compatible-mode/v1"
  api_keys: ["your-api-key-1", "your-api-key-2"]

# ModelScope platform
modelscope:
  base_url: "https://api-inference.modelscope.cn/v1/"
  api_keys: ["your-api-key-1", "your-api-key-2"]

# Zhipu AI platform
zhipu:
  base_url: ""  # use the default URL
  api_keys: ["your-api-key-1", "your-api-key-2"]
```

#### Error handling and failover

1. **Model-level failover**: when the current model fails, the next model in the same group is tried automatically
2. **Exhausted-key detection**: `API_KEY_EXHAUSTED` exceptions are detected automatically and the affected model is removed
3. **Result validation**: custom validation functions are supported; when validation fails, the next model is tried automatically (see the sketch below)
4. **State preservation**: model instances are cached inside the dispatcher, preserving API-key rotation state
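
A minimal sketch of how a custom validation function drives failover, assuming the `generate_title` group from the configuration above; the validation rule itself is made up for illustration:

```python
from llmakits import ModelDispatcher

dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

def looks_like_title(result):
    # Hypothetical acceptance rule: a non-empty, single-line answer of reasonable length.
    return bool(result) and "\n" not in result and len(result) <= 225

message_info = {
    "system_prompt": "You are an e-commerce title expert",
    "user_text": "Write a short product title for a waterproof smart watch"
}

# If a model's answer fails looks_like_title, the dispatcher moves on to the next
# model in the group; if every model fails, an exception is raised.
result, tokens = dispatcher.execute_with_group(
    message_info,
    group_name="generate_title",
    validate_func=looks_like_title
)
```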

#### Configuration tips

1. **Use model groups**: prefer the `execute_with_group` method to avoid repeated instantiation
2. **Order models sensibly**: put the faster, more stable models first
3. **Set retries appropriately**: choose the number of models and keys to match your workload
4. **Monitor switching**: track model-switch frequency via `model_switch_count` (see the sketch after this list)
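
A minimal sketch of tips 1 and 4 together: reuse a single dispatcher across calls and keep an eye on `model_switch_count`; the threshold value is an arbitrary example:

```python
from llmakits import ModelDispatcher

dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Reusing the same dispatcher keeps cached model instances and API-key rotation state.
for text in ["Describe a ceramic mug", "Describe a hiking backpack"]:
    message_info = {"system_prompt": "You are a helpful assistant", "user_text": text}
    result, tokens = dispatcher.execute_with_group(message_info, group_name="generate_title")
    print(result, tokens)

# Frequent switching usually means the first models in the group are failing often.
if dispatcher.model_switch_count > 5:  # arbitrary example threshold
    print(f"Warning: models were switched {dispatcher.model_switch_count} times")
```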

#### Global model configuration

Advanced model parameters can be configured through a CSV file for finer-grained model control:

**Global configuration file format** (`config/global_model_config.csv`):

File format: only .csv and .xlsx files are supported.

> ⚠️ **Important**: boolean values (True/False) in the spreadsheet must be explicitly wrapped in double quotes, e.g. `"True"` or `"""True"""`. This ensures the CSV parser handles booleans correctly, avoids type-conversion issues, and keeps them distinct from the numeric values 0/1.

| Parameter | Description | Applicable platform/sdk |
| --- | --- | --- |
| `platform` | Platform name | - |
| `model_name` | Model name | - |
| `stream` | Enable streaming output | - |
| `stream_real` | Enable true streaming output | - |
| `response_format` | Response format (`json` or `text`) | `zhipu` |
| `thinking` | Thinking-mode configuration | `zhipu` |
| `extra_enable_thinking` | Enable thinking (nested inside extra_body) | `modelscope`, `dashscope_openai` |
| `reasoning_effort` | Reasoning effort level | `gemini` |

**Wildcard matching** (a hypothetical example file is sketched after this list):
- Entries use the `platform` - `model_name` format
- Exact match: `dashscope,qwen3-max-preview`
- Wildcard match (model name wrapped in `*`):
  - Example: `openai,*gpt*` (matches every model whose name contains gpt)
- Catch-all match (`*` in place of the model name):
  - Example: `zhipu,*` (matches all models on the Zhipu platform)
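
A hypothetical `config/global_model_config.csv` combining the parameters and matching rules above; the column order and the specific rows are illustrative assumptions, not taken from the package:

```csv
platform,model_name,stream,stream_real,response_format,thinking,extra_enable_thinking,reasoning_effort
dashscope,qwen3-max-preview,"True","False",,,,
zhipu,*,"True","False",json,,,
modelscope,*Qwen*,"True","False",,,"True",
```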

**Usage example**:
```python
from llmakits import load_models

# Load models with the global configuration
models, keys = load_models(
    'config/models_config.yaml',
    'config/keys_config.yaml',
    global_config='config/global_model_config.csv'
)
```

### 2. Load models

```python
from llmakits import load_models

# Option 1: pass configuration file paths (strings)
models = load_models('config/models_config.yaml', 'config/keys_config.yaml')

# Option 2: pass configuration dictionaries directly
models_config = {
    "my_models": [
        {"model_name": "gpt-3.5-turbo", "sdk_name": "openai"}
    ]
}
model_keys = {
    "openai": {
        "base_url": "https://api.openai.com/v1",
        "api_keys": ["your-api-key"]
    }
}
models = load_models(models_config, model_keys)

# Option 3: use a global configuration (advanced parameters)
models = load_models(
    'config/models_config.yaml',
    'config/keys_config.yaml',
    global_config='config/global_model_config.csv'  # optional: global model configuration
)

# Get a model group
my_models = models['my_models']
```

### 3. Send messages (multi-model scheduling)

#### Using ModelDispatcher (recommended)

ModelDispatcher can be used in two ways; the `execute_with_group` method is recommended:

**Option 1: use a model group (recommended)**

```python
from llmakits import ModelDispatcher

# Create a dispatcher instance and load the configuration
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Prepare the message
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Please introduce the Python programming language"
}

# Run the task with a model group - model state and failover are managed automatically
result, tokens = dispatcher.execute_with_group(message_info, group_name="generate_title")
print(f"Result: {result}")
print(f"Tokens used: {tokens}")
print(f"Model switches: {dispatcher.model_switch_count}")
```

#### Message format

The `message_info` parameter supports the following fields:
- `system_prompt`: system prompt (optional)
- `user_text`: user input text (optional)
- `include_img`: whether to include images (optional, default False)
- `img_list`: list of image URLs (optional, defaults to an empty list)

Basic usage:

```python
# Plain text conversation
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Please introduce the Python programming language"
}

# Conversation with images
message_info = {
    "system_prompt": "You are an image analysis expert",
    "user_text": "Please analyze this image",
    "include_img": True,
    "img_list": ["https://example.com/image.jpg"]
}
# If include_img is True while img_list is empty, an error is raised.
```

**Option 2: pass a model list manually**

```python
from llmakits import ModelDispatcher

# Create a dispatcher instance
dispatcher = ModelDispatcher()

# Prepare the message and the model list
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Please introduce the Python programming language"
}

# Run the task with an explicitly provided model list (e.g. my_models from section 2)
result, tokens = dispatcher.execute_task(message_info, my_models)
```

#### Advanced usage: result validation and formatting

```python
from llmakits import ModelDispatcher

# Create a dispatcher
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Define a result validation function
def validate_result(result):
    """Check that the result mentions the required topics."""
    return "python" in result.lower() and "programming" in result.lower()

# Prepare the message
message_info = {
    "system_prompt": "You are a programming expert",
    "user_text": "Please describe the key features of the Python language"
}

# Run the task with JSON formatting and result validation enabled
result, tokens = dispatcher.execute_with_group(
    message_info,
    group_name="generate_title",
    format_json=True,           # format the result as JSON
    validate_func=validate_result  # validate the result
)

print(f"Validated result: {result}")
print(f"Tokens used: {tokens}")
```

#### Advanced usage: detailed execution results

```python
from llmakits import ModelDispatcher
from llmakits.dispatcher import ExecutionResult

# Create a dispatcher
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Prepare the message
message_info = {
    "system_prompt": "You are a programming expert",
    "user_text": "Please describe the key features of the Python language"
}

# Get the detailed execution result
result: ExecutionResult = dispatcher.execute_with_group(
    message_info,
    group_name="generate_title",
    return_detailed=True  # return a detailed result
)

print(f"Returned message: {result.return_message}")
print(f"Tokens used: {result.total_tokens}")
print(f"Index of the last model tried: {result.last_tried_index}")
print(f"Success: {result.success}")
if result.error:
    print(f"Error: {result.error}")
```

#### Advanced usage: slow-response warning monitoring

```python
from llmakits import ModelDispatcher

# Create a dispatcher and set the slow-response warning threshold (in seconds)
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')
dispatcher.warning_time = 30  # warn when a call takes longer than 30 seconds

# Prepare the message
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Please describe the Python programming language and its ecosystem in detail"
}

# Run the task - a warning is shown when a model call takes longer than 30 seconds
result, tokens = dispatcher.execute_with_group(message_info, group_name="generate_title")
print(f"Result: {result}")
print(f"Tokens used: {tokens}")

# Check the number of model switches
print(f"Model switches: {dispatcher.model_switch_count}")
```

#### Advanced usage: specifying the starting model index

```python
from llmakits import ModelDispatcher

# Create a dispatcher
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Prepare the message
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Please introduce the Python programming language"
}

# Start execution from the second model in the group (indexes are 0-based)
result, tokens = dispatcher.execute_with_group(
    message_info,
    group_name="generate_title",
    start_index=1  # start from the second model
)
print(f"Result: {result}")
print(f"Tokens used: {tokens}")
```

**Slow-response warning features:**

1. **Performance monitoring**: a warning is shown automatically when a model call exceeds the configured threshold
2. **Flexible configuration**: the warning threshold can be tuned to your workload
3. **Non-intrusive**: warnings never interrupt task execution; they are informational only
4. **Detailed reporting**: warnings include the model name and the actual execution time

**Use cases:**
- Monitor model response performance and catch latency problems early
- Track abnormally slow requests in production
- Tune model selection and configuration to improve overall response time

#### Enhanced dispatch strategy: dispatcher_with_repair

```python
from llmakits import ModelDispatcher, dispatcher_with_repair

# Create a dispatcher
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Prepare the message
message_info = {
    "system_prompt": "You are a JSON data generation expert",
    "user_text": "Please generate a JSON object containing product information"
}

# Use the enhanced dispatch strategy - JSON errors are repaired automatically
try:
    result, tokens = dispatcher_with_repair(
        dispatcher=dispatcher,
        message_info=message_info,
        group_name="generate_json",  # name of the primary model group
        validate_func=None,  # optional: custom validation function
        fix_json_config={
            "group_name": "fix_json",  # name of the repair model group
            "system_prompt": "You are a JSON repair expert; please fix the malformed JSON below",
            "example_json": '{"name": "product name", "price": 99.99}'  # optional: example JSON
        }
    )
    print(f"Repaired result: {result}")
    print(f"Tokens used: {tokens}")
except Exception as e:
    print(f"All models and repair attempts failed: {e}")
```

**Enhanced dispatch strategy features:**

1. **Automatic JSON repair**: when a primary model returns malformed JSON, the repair model group is called automatically to fix it
2. **Multi-model support**: repair is attempted for every failed model, so every primary model gets a chance
3. **Isolated repair flow**: a separate repair dispatcher is used, avoiding state contamination
4. **Fine-grained error handling**: JSON errors and other error types are distinguished and handled with different strategies

**Use cases:**
- Tasks that must produce structured JSON data
- Scenarios with strict JSON format requirements
- Automated pipelines that need a higher task success rate

### 4. Using a model client directly

```python
from llmakits import BaseOpenai

# Create a model client
model = BaseOpenai(
    platform="openai",
    base_url="https://api.openai.com/v1",
    api_keys=["your-api-key"],
    model_name="gpt-3.5-turbo"
)

# Method 1: use a message list (OpenAI-compatible format)
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "Hello!"}
]
result, tokens = model.send_message(messages)
print(f"Reply: {result}")

# Method 2: use the message_info format (recommended)
message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Hello!"
}
result, tokens = model.send_message([], message_info)
print(f"Reply: {result}")
```

#### Advanced configuration options

```python
from llmakits import BaseOpenai

# Create a client with advanced options
client = BaseOpenai(
    platform="openai",
    base_url="https://api.openai.com/v1",
    api_keys=["your-api-key"],
    model_name="gpt-4o",
    stream=True,              # enable streaming output
    stream_real=False,        # true streaming output
    request_timeout=60,       # request timeout in seconds
    max_retries=3            # maximum number of retries
)

# Get the list of available models (as a DataFrame)
models_df = client.models_df()
print(f"Available models: {models_df}")
```

#### Getting model information

```python
from llmakits import BaseOpenai

# Create a client
client = BaseOpenai(
    platform="openai",
    base_url="https://api.openai.com/v1",
    api_keys=["your-api-key"],
    model_name="gpt-4o"
)

# Get the model list (as a DataFrame, including creation time and other metadata)
models_df = client.models_df()
print("Model list:")
print(models_df)

# Get the creation time of a specific model
if 'created' in models_df.columns:
    gpt4o_created = models_df[models_df['id'] == 'gpt-4o']['created'].iloc[0]
    print(f"GPT-4o created at: {gpt4o_created}")
```

#### Error handling and API key exhaustion

```python
from llmakits import BaseOpenai

client = BaseOpenai(
    platform="openai",
    base_url="https://api.openai.com/v1",
    api_keys=["your-api-key"],
    model_name="gpt-4o"
)

message_info = {
    "system_prompt": "You are a helpful assistant",
    "user_text": "Hello!"
}

try:
    response, tokens = client.send_message([], message_info)
    print(f"Model response: {response}")
except Exception as e:
    if "API key exhausted" in str(e):
        print("API key exhausted, please switch to another key")
    else:
        print(f"An error occurred: {e}")
```

## Advanced Features

### Message processing

```python
from llmakits.message import prepare_messages, extract_field, convert_to_json

# Prepare messages
messages = prepare_messages(system="You are an assistant", user="Please help", assistant="OK")

# Extract and convert to JSON
json_str = '{"name": "test"} some text'
result = convert_to_json(json_str)

# Extract a single field
field_value = extract_field(json_str, "name")
print(field_value)  # Output: test

# Extract multiple fields
name, age = extract_field(json_str, "name", "age")
print(name)  # Output: test
print(age)  # Output: None

```

### E-commerce tools

#### Basic utility functions

```python
from llmakits.e_commerce import contains_chinese, remove_chinese, shorten_title, validate_html

# Simple helper functions
result = contains_chinese("智能手机")  # returns True (the string contains Chinese characters)
title = shorten_title("A very long product title that needs trimming", 50)  # shorten to 50 characters

# HTML validation
allowed_tags = {'div', 'p', 'span', 'strong', 'em'}
is_valid, error_msg = validate_html("<div>Content</div>", allowed_tags)
```

#### Advanced e-commerce features

The e-commerce helper functions now accept model group names, which keeps calls concise:

```python
from llmakits import ModelDispatcher
from llmakits.e_commerce import generate_title, generate_html, fill_attr, predict_cat_direct, predict_cat_gradual, translate_options

# Create a dispatcher - load the configuration
dispatcher = ModelDispatcher('config/models_config.yaml', 'config/keys_config.yaml')

# Generate an optimized product title
system_prompt = "You are an e-commerce title optimization expert"
title = generate_title(
    dispatcher=dispatcher,
    title="Original product title",
    product_info="A product that needs optimization, with a detailed description and feature list",
    system_prompt=system_prompt,
    group_name="generate_title",  # use a model group name
    min_length=10,
    max_length=225,
    min_word=2,      # the title must contain at least 2 words
    max_attempts=3   # maximum number of retries/revisions
)

# Predict the product category
cat_tree = {}  # category tree data
categories = predict_cat_direct(
    dispatcher=dispatcher,
    product={"title": "Product title", "image_url": ""},  # product info dictionary
    cat_tree=cat_tree,
    system_prompt="You are a product classification expert; predict a suitable category from the product title"
)

# Predict the product category (with JSON repair)
categories_with_fix = predict_cat_direct(
    dispatcher=dispatcher,
    product={"title": "Hair spray", "image_url": "https://example.com/image.jpg"},
    cat_tree=cat_tree,
    system_prompt="You are a product classification expert; predict a suitable category from the product title and image",
    fix_json_config={
        "system_prompt": "You are a JSON repair expert; please fix the malformed JSON below",
        "group_name": "fix_json"
    }
)

# Predict the product category step by step (level by level)
categories_gradual = predict_cat_gradual(
    dispatcher=dispatcher,
    product={"title": "Smartphone", "image_url": "https://example.com/image.jpg"},
    cat_tree=cat_tree,
    predict_config={
        "system_prompt": "You are a product classification expert; predict a suitable category level by level from the product title and image",
        "group_name": "predict_category"
    },
    fix_json_config={
        "system_prompt": "You are a JSON repair expert; please fix the malformed JSON below",
        "group_name": "fix_json"
    }
)

# Translate product options
options = ["红色", "蓝色", "绿色"]  # "red", "blue", "green" in Chinese
translated = translate_options(
    dispatcher=dispatcher,
    title="Product title",
    options=options,
    to_lang="english",
    group_name="translate_box",  # use a model group name
    system_prompt="Translate the product options"
)


# Generate an HTML product description (errors are repaired automatically)
product_info = """
Product name: smart watch
Features: waterproof, heart-rate monitoring, GPS positioning
Materials: stainless-steel strap, hardened glass face
Use cases: sports, everyday wear
"""

html_description = generate_html(
    dispatcher=dispatcher,
    product_info=product_info,
    generate_prompt="You are an e-commerce product description expert; generate a well-structured HTML description from the product info, with headings, paragraphs, and lists",
    fix_prompt="Fix any disallowed tags in the HTML and make sure the markup is valid",
    generate_group="generate_html",  # model group used to generate HTML
    fix_group="fix_html",       # model group used to repair HTML
    allowed_tags={'div', 'p', 'h1', 'h2', 'h3', 'ul', 'li', 'strong', 'em', 'span', 'br'}
)

# Fill attribute values

# Prepare the message info
message_info = {
    "system_prompt": "You are a product attribute expert; fill in the appropriate attribute values from the product info",
    "user_text": "Please fill in the color attribute for a smart watch"
}

# Define the list of allowed choices
color_choices = ["black", "white", "blue", "red", "pink", "gold", "silver"]

# Use fill_attr to fill the attribute
filled_result = fill_attr(
    dispatcher=dispatcher,
    message_info=message_info,
    group="generate_title",  # use a model group name
    choices=color_choices    # allowed values, used to validate the result
)

print(f"Filled attribute result: {filled_result}")
```

## License

Apache 2.0 License

            
