# hcgf

A Humanable Chat Generative-model Fine-tuning tool.


## Install

```bash
pip install hcgf
```

Install the dependencies:

```bash
pip install -r requirements.txt
```


- PyTorch 2.0 is recommended.
- Multi-node training is not supported yet.


## Fine-tuning

Supported models:

- [ChatGLM](https://huggingface.co/THUDM/chatglm-6b)
- [ChatGLM2](https://huggingface.co/THUDM/chatglm2-6b)
- [Qwen](https://huggingface.co/Qwen/Qwen-7B-Chat)
- [Linly LLaMA](https://huggingface.co/Linly-AI/ChatFlow-7B)
- [BELLE LLaMA](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-enc)
- [Ziya LLaMA](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1.1)
- [Baichuan LLaMA](https://huggingface.co/baichuan-inc/Baichuan-7B)
- [Bloom](https://huggingface.co/bigscience/bloomz-7b1-mt)
- [Pangu](https://huggingface.co/imone/pangu_2_6B)


### Dataset

A `.json` file with one JSON dict per line; each line must contain the two fields `prompt` and `completion`. For example:

```json
{"prompt": "你是谁?", "completion": "不告诉你。"}
```
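
To build such a file programmatically, one JSON object per line is all that is needed. A minimal sketch (the records and output path here are illustrative, not part of hcgf):

```python
import json

# Hypothetical records; each line of the output file is one JSON dict
# containing the two required fields.
records = [
    {"prompt": "你是谁?", "completion": "不告诉你。"},
    {"prompt": "你好。", "completion": "你好!"},
]

with open("train_data.json", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```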

### Command Fine-tuning

Distributed Zero3, Zero2, and DDP modes are supported; see the help output for usage:

```bash
hcgf_tune -h
```

At a minimum, the `model` and `data_path` arguments must be specified, as below.

```bash
hcgf_tune --model THUDM/chatglm-6b --data_path path/to/train_data.json lora
```

First, some background: during training, GPU memory is occupied not only by the model (i.e., its parameters) but also by the optimizer states, gradients, and so on.
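
As a rough back-of-the-envelope (an assumed mixed-precision Adam layout, not hcgf's exact accounting), full fine-tuning needs roughly 16 bytes per parameter before activations, which is why LoRA, which keeps gradients and optimizer states only for the small adapter weights, saves so much:

```python
# Assumed per-parameter bytes for full fine-tuning with mixed-precision Adam:
# fp16 weight (2) + fp16 grad (2) + fp32 Adam moments (4 + 4) + fp32 master weight (4)
bytes_per_param = 2 + 2 + 4 + 4 + 4

n_params = 6e9  # e.g., a 6B model such as ChatGLM-6B
print(f"~{n_params * bytes_per_param / 1024**3:.0f} GiB before activations")  # ~89 GiB
```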


There are five strategies in total:

- fsdp_zero3: the default strategy in command-line mode. FULL_SHARD: parameters, gradients, and optimizer states are all sharded. Slow but memory-efficient; data parallel.
- fsdp_zero2: GRAD_OP_SHARD: gradients and optimizer states are sharded. Somewhat faster than the above; data parallel.
- mpdp (ddp): NO_SHARD, similar to DDP: the whole model is loaded onto each card. Faster than the two above; data parallel.
- mpds (8bit): 8-bit mode (see "8bit Fine-tuning" below): the model is split across multiple cards (and even the CPU). No data parallelism; very slow.
- msds (single_gpu): single-card mode (see "Single Device Fine-tuning" below): relatively fast when the model fits.

| GPUs   | Memory                         | Training data | Strategy              |
| ------ | ------------------------------ | ------------- | --------------------- |
| Multi  | Model does not fit on one card | Lots of data  | fsdp_zero3/fsdp_zero2 |
|        | Model fits on one card         | Lots of data  | mpdp                  |
|        | Model does not fit on one card | Little data   | mpds                  |
|        | Model fits on one card         | Little data   | msds                  |
| Single | Model does not fit on one card | -             | mpds                  |
|        | Model fits on one card         | -             | msds                  |


Notes:
- The memory figures here are for training mode, which differs from inference; see "Configuration" below. Inference only supports the last two modes.
- FSDP can be slower than a single card (when the model fits on one card). This is expected: FSDP shards the data, and to squeeze the most out of GPU memory it may also shuttle some data to the CPU.
- In distributed training, batch_size is really per_device_batch_size; the effective batch size is `device_num × per_device_batch_size`. In other words, with the same batch_size, data, and configuration, a single card performs more parameter updates than multiple cards.
- If accumulate_steps is set, multiply by it as well to get the batch size at which parameters are actually updated, as the sketch below shows.
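
For example, a quick check of the effective update batch size under the rules above (values are illustrative):

```python
# Effective batch size for parameter updates in distributed training
device_num = 4             # number of data-parallel devices
per_device_batch_size = 8  # what `batch_size` actually means per device
accumulate_steps = 2       # gradient accumulation steps (1 if unset)

effective_batch_size = device_num * per_device_batch_size * accumulate_steps
print(effective_batch_size)  # 64: parameters are updated once every 64 samples
```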



### Single Device Fine-tuning

At least one card with 16 GB of memory is required. If no device is specified, it defaults to `cuda`.

```python
#===== Fine-tuning =====#
import hcgf
gl = hcgf.GlmLora("THUDM/chatglm-6b", device="cuda:0")
gl.load_data("/path/to/data.json").tune()

#===== Inference =====#
gl = hcgf.GlmLora("THUDM/chatglm-6b", device="cuda:0")
gl.load_pretrained("/path/to/lora_pt").eval()
gl.chat("你是谁?")

#===== Switching modes =====#
gl = hcgf.GlmLora("THUDM/chatglm-6b", device="cuda:0")
gl.load_data("/path/to/data.json").tune()
# Switch to inference mode
gl.eval()
gl.chat("你是谁?")
# Switch back to fine-tuning mode, continuing with the original data
gl.tune()
# For a new dataset, load the data first, as above
gl.load_data("/path/to/new_data.json").tune()
# To keep fine-tuning on top of previous weights with new data, load the previous pt file first, then the data
gl.load_pretrained("/path/to/lora_pt").load_data("/path/to/new_data.json").tune()
```

Of course, `hcgf_tune` works too:


```bash
hcgf_tune strategy msds --model THUDM/chatglm-6b --data_path path/to/train_data.json lora
```


### 8bit Fine-tuning

At least one card with 12 GB of memory is required. Do not specify a device. Only the initialization changes; everything else works the same as regular fine-tuning above.

The `bitsandbytes` dependency must be installed.


```python
import hcgf

gl = hcgf.GlmLora("THUDM/chatglm-6b", load_in_8bit=True)
```


Of course, `hcgf_tune` works too:


```bash
hcgf_tune strategy mpds --model THUDM/chatglm-6b --data_path path/to/train_data.json lora
```

### Continually Fine-tuning

Load the previous `pt` file first, then load the data and fine-tune.


```python
gl.load_pretrained("/path/to/lora_pt").load_data("/path/to/new_data.json").tune()
```

### Demo/Inference

Run `hcgf_infer -h` for help.


### Parameters

Parameters of the main methods; a listed value is the default.


```python
load_data(
    data_path: str, 
    max_seq_len: int = 512, # Maximum sentence length; longer text is truncated. Note: this is the length of the prompt or the completion individually, so make sure their combined length does not exceed the model's maximum length.
)
tune(
    batch_size: int = 8,
    lr: float = 2e-4,
    num_epochs: int = 3,
    warmup_steps: Optional[int] = None,     # None means warm up for 1/3 of an epoch
    accumulate_steps: Optional[int] = None, # None is equivalent to 1
    out_dir: str = "./output/",
    print_every: Optional[int] = None,      # None means print output (Step, Loss, LearningRate) every 1/10 of an epoch's steps
)
# Parameters not described here have the same meaning as in `chat`
generate(
    sents: Union[str, List[str]],           # Input sentence(s): a str or a list (multiple inputs). **Note**: inputs must be constructed to match the training sample format.
    do_sample: bool = True,
    num_beams: int = 1,
    temperature: float = 0.2,
    top_p: float = 0.7,
    repetition_penalty: float = 1.02,
)
# ChatGLM only
chat(
    inp: str, 
    history: List[Tuple[str, str]] = None,  # (question, answer) pairs
    max_new_tokens: int = 512,              # Maximum length of the generated text; prompt length = maximum supported length - max_new_tokens, and longer prompts are truncated
    do_sample: bool = True,                 # Whether to sample
    num_beams: int = 1,                     # Number of beams for beam search
    temperature: float = 0.95,              # Lower is more deterministic, higher more random; after fine-tuning you might lower it to 0.1
    top_p: float = 0.7,                     # Same idea as above; do not tune both at once
    repetition_penalty: float = 1.02,       # Repetition penalty; higher makes repetition less likely
    stop: List[str] = []                    # Stop texts: punctuation, specific words, sentences, etc.; the output does not include the stop text
)
```
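
Putting these together, a typical call sequence might look like the following (paths and values are illustrative; the signatures are the ones documented above):

```python
import hcgf

gl = hcgf.GlmLora("THUDM/chatglm-6b", device="cuda:0")
gl.load_data("/path/to/data.json", max_seq_len=256)
gl.tune(batch_size=4, num_epochs=3, accumulate_steps=2, out_dir="./output/")

gl.eval()
# `generate` expects inputs shaped like the training samples
gl.generate("你是谁?", temperature=0.2)
# `chat` (ChatGLM only) takes the raw question plus optional history
gl.chat("你是谁?", temperature=0.1)
```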


Better Practice:

- Usually only `temperature` needs adjusting.


### Configuration

A few parameters that affect GPU memory can be configured: `max_seq_len` and `batch_size`.


```python
(
    gl
    .load_data("./data/chatgpt_finetune_faq.json", max_seq_len=128)
    .tune(batch_size=1)
)
```

The following configurations are for `ChatGLM-6B`.


Resource usage in `8bit` mode under different configurations:

| max_seq_len | batch_size | memory |
| ----------- | ---------- | ------ |
| `64`        | 1          | 11G    |
| `128`       | 1          | 12G    |
| `512`       | 1          | 22G    |
| 128         | `2`        | 15G    |
| 128         | `4`        | 21G    |

Resource usage in normal mode under different configurations:

| max_seq_len | batch_size | memory |
| ----------- | ---------- | ------ |
| `64`        | 1          | 15G    |
| `128`       | 1          | 16G    |
| `512`       | 1          | 30G    |
| 128         | `2`        | 19G    |
| 128         | `4`        | 25G    |


## RM

Trained with a small model (e.g., BERT).

### Training

### Dataset

Pairwise data is required. Computing the logits works the same as with an ordinary pretrained model (one batch holds multiple pairs); when computing the loss, the logits belonging to the same pair are grouped and scored together, as the sketch below illustrates.

At inference time, just use the logits directly.
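
hcgf's own loss code is not shown here, but a common formulation for reward models is the pairwise ranking loss; a minimal PyTorch sketch, assuming each pair's chosen/rejected responses occupy adjacent batch rows and the model emits one scalar score per sequence:

```python
import torch
import torch.nn.functional as F

def pairwise_rm_loss(scores: torch.Tensor) -> torch.Tensor:
    # Group scores belonging to the same pair: even rows = chosen, odd rows = rejected.
    chosen, rejected = scores[0::2], scores[1::2]
    # -log(sigmoid(chosen - rejected)): pushes chosen scores above rejected ones.
    return -F.logsigmoid(chosen - rejected).mean()

scores = torch.tensor([1.2, 0.3, 0.8, 0.9])  # one batch of two pairs
print(pairwise_rm_loss(scores))
```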



## Test

```bash
# Run all tests
python -m pytest
# Test training and inference (slow)
python -m pytest -s -m slow
# Test everything else
python -m pytest -m "not slow"
```


## Other

If model loading times out, you can load the model directly from the local cache:

```python
GlmLora("/path/to/huggingface/models--THUDM--chatglm-6b/snapshots/<id>/")
```
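
If you would rather resolve that cache path programmatically, `huggingface_hub` (assumed to be installed; it is a dependency of `transformers`) can look it up, downloading first if necessary:

```python
import hcgf
from huggingface_hub import snapshot_download

# Returns the local snapshot directory for the model, downloading if needed
local_path = snapshot_download("THUDM/chatglm-6b")
gl = hcgf.GlmLora(local_path)
```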


## ChangeLog

- **v0.4.0** `20230909`
  - Support Qwen, ChatGLM2, Baichuan, etc.
  - Support IA3 fine-tuning
- **v0.3.0** `20230526`
  - Support LLaMA (including Native, Alpaca, Ziya, etc.)
- **v0.2.0** `20230513`
  - Support distributed fine-tuning
  - Rework inference mode; support batching
- **v0.1.0** `20230412`
  - Support ChatGLM's new tokenizer
  - Use the officially updated MASK scheme
- **v0.0.7** `20230405`

            
