glmtuner

- Name: glmtuner
- Version: 0.1.5
- Summary: Fine-tuning ChatGLM-6B with PEFT
- Home page: https://github.com/hiyouga/ChatGLM-Efficient-Tuning
- Author: hiyouga
- Upload time: 2023-08-12 13:40:37
- Requires Python: >=3.8.0
- License: Apache 2.0 License
- Keywords: ChatGLM, LLM, ChatGPT, transformer, pytorch, deep learning
- Requirements: torch>=1.13.1, transformers>=4.29.1, datasets>=2.12.0, accelerate>=0.21.0, peft>=0.4.0, trl>=0.4.7, sentencepiece, jieba, rouge-chinese, nltk, gradio>=3.36.0, uvicorn, pydantic==1.10.11, fastapi==0.95.1, sse-starlette, matplotlib, protobuf, cpm-kernels

# ChatGLM Efficient Tuning

[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/ChatGLM-Efficient-Tuning?style=social)](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/ChatGLM-Efficient-Tuning)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/ChatGLM-Efficient-Tuning)](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/commits/main)
[![PyPI](https://img.shields.io/pypi/v/glmtuner)](https://pypi.org/project/glmtuner/)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/pulls)

Fine-tuning 🤖[ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B) model with 🤗[PEFT](https://github.com/huggingface/peft).

👋 Join our [WeChat](assets/wechat.jpg).

\[ English | [中文](README_zh.md) \]

If you have any questions, please refer to our [Wiki📄](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/wiki).

## Notice

This repo will **not be maintained** in the future. Please follow **[LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning)** for fine-tuning the language models (including ChatGLM2-6B).

## Changelog

[23/07/15] We developed an all-in-one Web UI for training, evaluation and inference. Try `train_web.py` to fine-tune the ChatGLM-6B model in your browser. Thanks to [@KanadeSiina](https://github.com/KanadeSiina) and [@codemayq](https://github.com/codemayq) for their efforts in the development.

[23/07/09] We released [FastEdit](https://github.com/hiyouga/FastEdit)⚡🩹, an easy-to-use package for efficiently editing the factual knowledge of large language models. Please follow [FastEdit](https://github.com/hiyouga/FastEdit) if you are interested.

[23/06/25] Now we align the [demo API](src/api_demo.py) with [OpenAI's](https://platform.openai.com/docs/api-reference/chat) format, so you can plug the fine-tuned model into arbitrary ChatGPT-based applications.

[23/06/25] Now we support fine-tuning the [ChatGLM2-6B](https://github.com/THUDM/ChatGLM2-6B) model with our framework!

[23/06/05] Now we support 4-bit LoRA training (aka [QLoRA](https://github.com/artidoro/qlora)). Try the `--quantization_bit 4` argument to work with a 4-bit quantized model. (experimental feature)

[23/06/01] We implemented a framework supporting the efficient tuning of LLaMA and BLOOM models. Please follow [LLaMA-Efficient-Tuning](https://github.com/hiyouga/LLaMA-Efficient-Tuning) if you are interested.

[23/05/19] Now we support using a development set to evaluate the model during training. Try the `--dev_ratio` argument to specify the size of the development set.

[23/04/29] Now we support training ChatGLM with **Reinforcement Learning from Human Feedback (RLHF)**! We provide several examples of RLHF training; please refer to the `examples` folder for details.

[23/04/20] Our repo achieved 100 stars within 12 days! Congratulations!

[23/04/19] Now we support **merging the weights** of fine-tuned models trained by LoRA! Try the `--checkpoint_dir checkpoint1,checkpoint2` argument to continually fine-tune the models.

[23/04/18] Now we support training **quantized models** with all three fine-tuning methods! Try the `--quantization_bit` argument to train the model in 4/8 bits.

[23/04/12] Now we support **training from checkpoints**! Use `--checkpoint_dir` argument to specify the checkpoint model to fine-tune from.

[23/04/11] Now we support training with **combined datasets**! Try the `--dataset dataset1,dataset2` argument to train with multiple datasets.

## Datasets

- For supervised fine-tuning:
  - [Stanford Alpaca (en)](https://github.com/tatsu-lab/stanford_alpaca)
  - [Stanford Alpaca (zh)](https://github.com/ymcui/Chinese-LLaMA-Alpaca)
  - [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
  - [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
  - [Self-cognition (zh)](data/self_cognition.json)
  - [ShareGPT (zh)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection)
  - [RefGPT (zh)](https://github.com/sufengniu/RefGPT)
  - [Guanaco Dataset (multilingual)](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
  - [BELLE 2M (zh)](https://huggingface.co/datasets/BelleGroup/train_2M_CN)
  - [BELLE 1M (zh)](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
  - [BELLE 0.5M (zh)](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
  - [BELLE Dialogue 0.4M (zh)](https://huggingface.co/datasets/BelleGroup/generated_chat_0.4M)
  - [BELLE School Math 0.25M (zh)](https://huggingface.co/datasets/BelleGroup/school_math_0.25M)
  - [BELLE Multiturn Chat 0.8M (zh)](https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M)
  - [Firefly 1.1M (zh)](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
  - [LIMA (en)](https://huggingface.co/datasets/GAIR/lima)
  - [CodeAlpaca 20k (en)](https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k)
  - [Alpaca CoT (multilingual)](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
  - [Web QA (zh)](https://huggingface.co/datasets/suolyer/webqa)
  - [UltraChat (en)](https://github.com/thunlp/UltraChat)
  - [WebNovel (zh)](https://huggingface.co/datasets/zxbsmk/webnovel_cn)
- For reward modelling:
  - [HH-RLHF (en)](https://huggingface.co/datasets/Anthropic/hh-rlhf)
  - [Open Assistant (multilingual)](https://huggingface.co/datasets/OpenAssistant/oasst1)
  - [GPT-4 Generated Data (en&zh)](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)

Please refer to [data/README.md](data/README.md) for details.

Some datasets require confirmation before use, so we recommend logging in to your Hugging Face account with these commands:

```bash
pip install --upgrade huggingface_hub
huggingface-cli login
```

## Fine-Tuning Methods

Our script now supports the following fine-tuning methods, selected via the `--finetuning_type` argument (see the sketch after the list):

- [LoRA](https://arxiv.org/abs/2106.09685)
  - Fine-tuning the low-rank adapters of the model.
- [P-Tuning V2](https://github.com/THUDM/P-tuning-v2)
  - Fine-tuning the prefix encoder of the model.
- [Freeze](https://arxiv.org/abs/2012.14913)
  - Fine-tuning the MLPs in the last n blocks of the model.
- Full Tuning
  - Fine-tuning all the parameters of the model.
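
A minimal sketch of switching methods, mirroring the SFT example later in this README (`lora` appears throughout the examples below; the values `p_tuning`, `freeze` and `full` are assumptions inferred from the method list above):

```bash
# hedged sketch: same training arguments as the SFT example, with the method swapped
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_chatglm_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type p_tuning \
    --output_dir path_to_pt_checkpoint \
    --fp16
```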

## Requirements

- Python 3.8+ and PyTorch 1.13.1+
- 🤗Transformers, Datasets, Accelerate, PEFT and TRL
- fire, protobuf, cpm-kernels and sentencepiece
- jieba, rouge-chinese and nltk (used for evaluation)
- gradio and matplotlib (used in train_web.py)
- uvicorn, fastapi and sse-starlette (used in api_demo.py)

And **powerful GPUs**!

## Getting Started

### Data Preparation (optional)

Please refer to `data/example_dataset` for details about the format of dataset files. You can either use a single `.json` file or a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) with multiple files to create a custom dataset.

Note: please update `data/dataset_info.json` to register your custom dataset. For the format of this file, please refer to `data/README.md`.
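
For illustration, a `data/dataset_info.json` entry might look like the sketch below (the column names are assumptions based on the alpaca-style datasets above; the authoritative key list is in `data/README.md`):

```json
"my_dataset": {
  "file_name": "my_dataset.json",
  "columns": {
    "prompt": "instruction",
    "query": "input",
    "response": "output",
    "history": "history"
  }
}
```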

### Dependency Installation (optional)

```bash
git lfs install
git clone https://github.com/hiyouga/ChatGLM-Efficient-Tuning.git
conda create -n chatglm_etuning python=3.10
conda activate chatglm_etuning
cd ChatGLM-Efficient-Tuning
pip install -r requirements.txt
```
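
Alternatively, since the library is also published on PyPI as `glmtuner` (version 0.1.5 at the time of writing, per the metadata above), it can be installed directly:

```bash
pip install glmtuner
```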

If you want to enable quantized LoRA (QLoRA) on the Windows platform, you need to install a pre-built version of the `bitsandbytes` library, which supports CUDA 11.1 to 12.1.

```bash
pip install https://github.com/jllllll/bitsandbytes-windows-webui/releases/download/wheels/bitsandbytes-0.39.1-py3-none-win_amd64.whl
```

### All-in-one Web UI

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_web.py
```

Currently the web UI only supports training on **a single GPU**.

### Fine-tuning with a Single GPU

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_chatglm_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --output_dir path_to_sft_checkpoint \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 5e-5 \
    --num_train_epochs 3.0 \
    --plot_loss \
    --fp16
```

Please refer to our [Wiki](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/wiki) for details about the arguments.

### Distributed Fine-tuning with Multiple GPUs

```bash
accelerate config # configure the environment
accelerate launch src/train_bash.py # arguments (same as above)
```
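
If you prefer to pass the launch options explicitly instead of running the interactive `accelerate config` step, a sketch using standard Accelerate CLI flags (adjust `--num_processes` to the number of visible GPUs):

```bash
# run on two GPUs without a saved accelerate config
CUDA_VISIBLE_DEVICES=0,1 accelerate launch --multi_gpu --num_processes 2 \
    src/train_bash.py # arguments (same as above)
```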

### Training Reward Model

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage rm \
    --model_name_or_path path_to_your_chatglm_model \
    --do_train \
    --dataset comparison_gpt4_en \
    --finetuning_type lora \
    --resume_lora_training False \
    --checkpoint_dir path_to_sft_checkpoint \
    --output_dir path_to_rm_checkpoint \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss \
    --fp16
```

### Training with RLHF

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage ppo \
    --model_name_or_path path_to_your_chatglm_model \
    --do_train \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --resume_lora_training False \
    --checkpoint_dir path_to_sft_checkpoint \
    --reward_model path_to_rm_checkpoint \
    --output_dir path_to_ppo_checkpoint \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 1000 \
    --learning_rate 1e-5 \
    --num_train_epochs 1.0 \
    --plot_loss
```

### Evaluation (BLEU and ROUGE_CHINESE)

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_chatglm_model \
    --do_eval \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_eval_result \
    --per_device_eval_batch_size 8 \
    --max_samples 50 \
    --predict_with_generate
```

### Predict

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path path_to_your_chatglm_model \
    --do_predict \
    --dataset alpaca_gpt4_en \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_predict_result \
    --per_device_eval_batch_size 8 \
    --max_samples 100 \
    --predict_with_generate
```

If you want to run prediction on samples with empty responses, please fill the `response` column with **dummy tokens** so that the samples are not discarded during preprocessing.
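
For example, in an alpaca-style dataset file whose `response` column maps to `output` (an illustrative record, not one shipped with the project):

```json
{"instruction": "Write a poem about spring.", "input": "", "output": "dummy"}
```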

### API Demo

```bash
python src/api_demo.py \
    --model_name_or_path path_to_your_chatglm_model \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```

Visit `http://localhost:8000/docs` for API documentation.
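Since the demo API follows OpenAI's chat completion format (see the changelog), a request can be sketched as below; the `/v1/chat/completions` route and the exact field names are assumptions based on that format:

```bash
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "chatglm-6b", "messages": [{"role": "user", "content": "Hello!"}]}'
```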

### CLI Demo

```bash
python src/cli_demo.py \
    --model_name_or_path path_to_your_chatglm_model \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```

### Web Demo

```bash
python src/web_demo.py \
    --model_name_or_path path_to_your_chatglm_model \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint
```

### Export model

```bash
python src/export_model.py \
    --model_name_or_path path_to_your_chatglm_model \
    --finetuning_type lora \
    --checkpoint_dir path_to_checkpoint \
    --output_dir path_to_export
```
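
As a quick sanity check (a sketch, not one of the project's scripts), the exported directory should load with vanilla Transformers; note that ChatGLM checkpoints require `trust_remote_code=True`:

```bash
# load the exported tokenizer and model to verify the checkpoint is self-contained
python -c "from transformers import AutoTokenizer, AutoModel; \
AutoTokenizer.from_pretrained('path_to_export', trust_remote_code=True); \
AutoModel.from_pretrained('path_to_export', trust_remote_code=True)"
```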

### Hardware Requirements

| Fine-tune method | Batch size | Mode | VRAM   | Speed |
| ---------------- | ---------- | ---- | ------ | ----- |
| LoRA (r=8)       |     16     | FP16 |  28GB  | 8ex/s |
| LoRA (r=8)       |     8      | FP16 |  24GB  | 8ex/s |
| LoRA (r=8)       |     4      | FP16 |  20GB  | 8ex/s |
| LoRA (r=8)       |     4      | INT8 |  10GB  | 8ex/s |
| LoRA (r=8)       |     4      | INT4 |   8GB  | 8ex/s |
| P-Tuning (p=16)  |     4      | FP16 |  20GB  | 8ex/s |
| P-Tuning (p=16)  |     4      | INT8 |  16GB  | 8ex/s |
| P-Tuning (p=16)  |     4      | INT4 |  12GB  | 8ex/s |
| Freeze (l=3)     |     4      | FP16 |  24GB  | 8ex/s |

| RM method        | Batch size | Mode | VRAM   | Speed |
| ---------------- | ---------- | ---- | ------ | ----- |
| LoRA (r=8) + rm  |     4      | FP16 |  22GB  | -     |
| LoRA (r=8) + rm  |     1      | INT8 |  11GB  | -     |

| RLHF method      | Batch size | Mode | VRAM   | Speed |
| ---------------- | ---------- | ---- | ------ | ----- |
| LoRA (r=8) + ppo |     4      | FP16 |  23GB  | -     |
| LoRA (r=8) + ppo |     1      | INT8 |  12GB  | -     |

> Note: `r` is the LoRA rank, `p` is the number of prefix tokens, `l` is the number of trainable layers, and `ex/s` is training examples per second. `gradient_accumulation_steps` is set to `1`, so the effective batch size equals the per-device batch size shown. All values are measured on a single Tesla V100 (32GB) GPU; they are approximate and may vary across GPUs.

## Fine-tuning ChatGLM: A Case Study

### Training Results

We use the whole `alpaca_gpt4_zh` dataset to fine-tune the ChatGLM model with LoRA (r=8) for one epoch, using the default hyperparameters. The loss curve during training is presented below.

![training loss](assets/trainer_state.jpg)

### Evaluation Results

We select 100 instances from the `alpaca_gpt4_zh` dataset to evaluate the fine-tuned ChatGLM model and compute BLEU and ROUGE scores. The results are presented below.

| Score      | Original | FZ (l=2) | PT (p=16) | LoRA (r=8)        |
| ---------- | -------- | -------- | --------- | ----------------- |
| BLEU-4     | 15.75    | 16.85    | 16.06     | 17.01 (**+1.26**) |
| ROUGE-1    | 34.51    | 36.62    | 34.80     | 36.77 (**+2.26**) |
| ROUGE-2    | 15.11    | 17.04    | 15.32     | 16.83 (**+1.72**) |
| ROUGE-L    | 26.18    | 28.17    | 26.35     | 28.86 (**+2.68**) |
| Params (%) | /        | 4.35%    | 0.06%     | 0.06%             |

> FZ: freeze tuning; PT: P-Tuning V2 (we use `pre_seq_len=16` for a fair comparison with LoRA); Params: the percentage of trainable parameters.

## Projects

- [SupritYoung/RLHF-Label-Tool](https://github.com/SupritYoung/RLHF-Label-Tool/tree/master): A tool for ranking the responses of LLMs to generate annotated samples used in RLHF training.

## Compared with Existing Implementations

- [THUDM/ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B/tree/main/ptuning)
  - Official implementation of fine-tuning ChatGLM with [P-Tuning v2](https://github.com/THUDM/P-tuning-v2) on the [ADGEN](https://aclanthology.org/D19-1321.pdf) dataset.
  - Our fine-tuning script largely depends on it. We further implement the [LoRA](https://arxiv.org/abs/2106.09685) tuning method. Additionally, we **dynamically** pad the inputs to the longest sequence in the batch instead of the maximum length, to accelerate fine-tuning.
- [mymusise/ChatGLM-Tuning](https://github.com/mymusise/ChatGLM-Tuning)
  - An unofficial implementation of fine-tuning ChatGLM with [LoRA](https://arxiv.org/abs/2106.09685) on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.
  - We borrowed some ideas from it. Our fine-tuning script **integrates** data pre-processing into the training procedure, so we do not need to generate a pre-processed dataset before training.
- [ssbuild/chatglm_finetuning](https://github.com/ssbuild/chatglm_finetuning)
  - An unofficial implementation of fine-tuning ChatGLM with several PEFT methods on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.
  - Our fine-tuning script is implemented **purely** with [Hugging Face transformers](https://github.com/huggingface/transformers) and is independent of the [deep_training](https://github.com/ssbuild/deep_training) framework.
- [lich99/ChatGLM-finetune-LoRA](https://github.com/lich99/ChatGLM-finetune-LoRA)
  - An unofficial implementation of fine-tuning ChatGLM with [LoRA](https://arxiv.org/abs/2106.09685) on the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset.
  - We use [Hugging Face PEFT](https://github.com/huggingface/peft) to provide state-of-the-art PEFT methods.
- [liucongg/ChatGLM-Finetuning](https://github.com/liucongg/ChatGLM-Finetuning)
  - An unofficial implementation of fine-tuning ChatGLM with several methods, including Freeze, LoRA and P-Tuning, on an industrial dataset.
  - We aim to incorporate more instruction-following datasets for fine-tuning the ChatGLM model.
- [yanqiangmiffy/InstructGLM](https://github.com/yanqiangmiffy/InstructGLM)
  - An unofficial implementation of fine-tuning ChatGLM that explores ChatGLM's ability on instruction-following datasets.
  - Our fine-tuning script integrates data pre-processing into the training procedure.

## TODO

- [ ] Employing [LangChain](https://github.com/hwchase17/langchain) to easily build applications that are capable of leveraging external knowledge upon fine-tuned ChatGLM models.
- [ ] Implementing alignment algorithms to align with human preferences.
  - [x] [RLHF](https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
  - [ ] [RRHF](https://github.com/GanjinZero/RRHF)
  - [ ] [RAFT](https://github.com/OptimalScale/LMFlow)
- [ ] Incorporating [Chinese datasets](https://github.com/brightmart/nlp_chinese_corpus) into the training sets.
  - [x] [BELLE](https://github.com/LianjiaTech/BELLE)
  - [ ] [pCLUE](https://github.com/CLUEbenchmark/pCLUE)
  - [ ] [CLUECorpus](https://github.com/CLUEbenchmark/CLUECorpus2020)
  - [x] [GuanacoDataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset)
  - [x] [FireflyDataset](https://huggingface.co/datasets/YeungNLP/firefly-train-1.1M)
- [ ] Incorporating [ChatGPT](https://openai.com/blog/chatgpt) & [GPT-4](https://openai.com/research/gpt-4) self-chat data into the training sets.
  - [ ] [Baize](https://github.com/project-baize/baize-chatbot)
  - [x] [GPT-4-LLM](https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM)
- [x] Implementing the Freeze-Tuning and P-Tuning methods.
- [x] Supporting multi-GPU fine-tuning.
- [x] Adding a script for evaluation.
- [x] Loading from checkpoint.
- [x] Fine-tuning the quantized model.
- [x] Writing a guidebook about how to fine-tune ChatGLM with this framework.
- [ ] Combining with state-of-the-art model editing algorithms. (*e.g. [MEND](https://arxiv.org/abs/2110.11309)*)
- [x] Incorporating the [OpenAssistant Conversations Dataset](https://huggingface.co/datasets/OpenAssistant/oasst1) for SFT and alignment.
- [ ] Incorporating the high-quality Chinese instruction dataset [COIG](https://huggingface.co/datasets/BAAI/COIG).

## License

This repository is licensed under the [Apache-2.0 License](LICENSE). Please follow the [Model License](https://github.com/THUDM/ChatGLM-6B/blob/main/MODEL_LICENSE) to use the ChatGLM-6B model.

## Citation

If this work is helpful, please cite as:

```bibtex
@Misc{chatglm-efficient-tuning,
  title = {ChatGLM Efficient Tuning},
  author = {hiyouga},
  howpublished = {\url{https://github.com/hiyouga/ChatGLM-Efficient-Tuning}},
  year = {2023}
}
```

## Acknowledgement

This repo benefits from [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B), [ChatGLM-Tuning](https://github.com/mymusise/ChatGLM-Tuning) and [yuanzhoulvpi2017/zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp). Thanks for their wonderful work.

## Star History

![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/ChatGLM-Efficient-Tuning&type=Date)

            
