| Name | xturing |
| --- | --- |
| Version | 0.1.8 |
| Summary | Fine-tuning, evaluation and data generation for LLMs |
| upload_time | 2023-09-06 18:26:17 |
| home_page | |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.7 |
| license | Apache License 2.0 |
| keywords | nlp, llm, finetuning, evaluation, data-generation, training, distributed |
| VCS | |
| bugtrack_url | |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<p align="center">
<img src=".github/stochastic_logo_light.svg#gh-light-mode-only" width="250" alt="Stochastic.ai"/>
<img src=".github/stochastic_logo_dark.svg#gh-dark-mode-only" width="250" alt="Stochastic.ai"/>
</p>
<h3 align="center">Build, customize and control your own personal LLMs</h3>
<p align="center">
<a href="https://pypi.org/project/xturing/">
<img src="https://img.shields.io/pypi/v/xturing?style=for-the-badge" />
</a>
<a href="https://xturing.stochastic.ai/">
<img src="https://img.shields.io/badge/Documentation-blue?logo=GitBook&logoColor=white&style=for-the-badge" />
</a>
<a href="https://discord.gg/TgHXuSJEk6">
<img src="https://img.shields.io/badge/Chat-FFFFFF?logo=discord&style=for-the-badge"/>
</a>
</p>
<br>
___
`xTuring` provides fast, efficient and simple fine-tuning of LLMs, such as LLaMA, GPT-J, Galactica, and more.
By providing an easy-to-use interface for fine-tuning LLMs on your own data and for your own application, xTuring makes it
simple to build, customize and control LLMs. The entire process can run on your own machine or in your
private cloud, ensuring data privacy and security.
With `xTuring` you can:
- Ingest data from different sources and preprocess it into a format LLMs can understand
- Scale from a single GPU to multiple GPUs for faster fine-tuning
- Leverage memory-efficient methods (e.g. INT4 and LoRA fine-tuning) to reduce hardware costs by up to 90%
- Explore different fine-tuning methods and benchmark them to find the best-performing model
- Evaluate fine-tuned models on well-defined metrics for in-depth analysis
<br>
## 🌟 What's new?
We are excited to announce the latest enhancements to our `xTuring` library:
1. __`LLaMA 2` integration__ - You can use and fine-tune the _`LLaMA 2`_ model in different configurations: _off-the-shelf_, _off-the-shelf with INT8 precision_, _LoRA fine-tuning_, _LoRA fine-tuning with INT8 precision_ and _LoRA fine-tuning with INT4 precision_ using the `GenericModel` wrapper, or you can use the `Llama2` class from `xturing.models` to test and fine-tune the model.
```python
from xturing.models import Llama2
model = Llama2()
## or
from xturing.models import BaseModel
model = BaseModel.create('llama2')
```
2. __`Evaluation`__ - You can now evaluate any `Causal Language Model` on any dataset. The only metric currently supported is [`perplexity`](https://towardsdatascience.com/perplexity-in-language-models-87a196019a94).
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel
# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')
# Load the desired model
model = BaseModel.create('gpt2')
# Run the Evaluation of the model on the dataset
result = model.evaluate(dataset)
# Print the result
print(f"Perplexity of the evalution: {result}")
```
3. __`INT4` Precision__ - You can now use and fine-tune any LLM with `INT4 Precision` using `GenericLoraKbitModel`.
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import GenericLoraKbitModel
# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')
# Load the desired model for INT4 fine-tuning
model = GenericLoraKbitModel('tiiuae/falcon-7b')
# Run the fine-tuning
model.finetune(dataset)
```
4. __CPU inference__ - You can now run inference for any LLM using just your CPU. _Caution: inference on a CPU can be slow, since CPUs lack the computational capacity for efficient inference._
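A minimal sketch of CPU-only inference, assuming xTuring falls back to the CPU when no GPU is available (the model key and prompt are illustrative):
```python
from xturing.models import BaseModel

# Load a small model; on a machine without a GPU, xTuring is assumed to run it on the CPU
model = BaseModel.create("gpt2")

# Run inference on the CPU (expect this to be noticeably slower than on a GPU)
output = model.generate(texts=["What is fine-tuning?"])
print(output)
```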
5. __Batch integration__ - By adjusting the `batch_size` argument of the `.generate()` and `.evaluate()` methods, you can speed up results. Using a `batch_size` greater than 1 typically improves processing efficiency.
```python
# Make the necessary imports
from xturing.datasets import InstructionDataset
from xturing.models import GenericLoraKbitModel
# Load the desired dataset
dataset = InstructionDataset('../llama/alpaca_data')
# Load the desired model in INT4 precision with LoRA
model = GenericLoraKbitModel('tiiuae/falcon-7b')
# Generate outputs on desired prompts
outputs = model.generate(dataset=dataset, batch_size=10)
```
We recommend exploring the [Llama LoRA INT4 working example](examples/features/int4_finetuning/LLaMA_lora_int4.ipynb) to understand how it is applied.
For further insight, see the [GenericModel working example](examples/features/generic/generic_model.py) available in the repository.
<br>
## ⚙️ Installation
```bash
pip install xturing
```
<br>
## 🚀 Quickstart
```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel
# Load the dataset
instruction_dataset = InstructionDataset("./alpaca_data")
# Initialize the model
model = BaseModel.create("llama_lora")
# Finetune the model
model.finetune(dataset=instruction_dataset)
# Perform inference
output = model.generate(texts=["Why LLM models are becoming so important?"])
print("Generated output by the model: {}".format(output))
```
You can find the data folder [here](examples/models/llama/alpaca_data).
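After fine-tuning, you will usually want to persist the model and reload it later. Below is a minimal sketch built from the `model.save` and `BaseModel.load` calls shown elsewhere in this README; it assumes `BaseModel.load` also accepts a local directory produced by `model.save` (the directory name and prompt are illustrative):
```python
from xturing.models import BaseModel

# Persist the fine-tuned weights from the quickstart above to a local directory
model.save("./llama_lora_finetuned")

# Later, reload the saved model and run inference
model = BaseModel.load("./llama_lora_finetuned")
output = model.generate(texts=["Why LLM models are becoming so important?"])
print(output)
```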
<br>
## CLI playground
<img src=".github/cli-playground.gif" width="80%" style="margin: 0 1%;"/>
```bash
$ xturing chat -m "<path-to-model-folder>"
```
## UI playground
<img src=".github/ui-playground2.gif" width="80%" style="margin: 0 1%;"/>
```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel
from xturing.ui import Playground
dataset = InstructionDataset("./alpaca_data")
model = BaseModel.create("<model_name>")
model.finetune(dataset=dataset)
model.save("llama_lora_finetuned")
Playground().launch() ## launches localhost UI
```
<br>
## 📚 Tutorials
- [Preparing your dataset](examples/datasets/preparing_your_dataset.py)
- [Cerebras-GPT fine-tuning with LoRA and INT8](examples/models/cerebras/cerebras_lora_int8.ipynb)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eKq3oF7dnK8KuIfsTE70Gvvniwr1O9D0?usp=sharing)
- [Cerebras-GPT fine-tuning with LoRA](examples/models/cerebras/cerebras_lora.ipynb)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1VjqQhstm5pT4EjPjx4Je7b3W2X1V3vDo?usp=sharing)
- [LLaMA fine-tuning with LoRA and INT8](examples/models/llama/llama_lora_int8.py)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing)
- [LLaMA fine-tuning with LoRA](examples/models/llama/llama_lora.py)
- [LLaMA fine-tuning](examples/models/llama/llama.py)
- [GPT-J fine-tuning with LoRA and INT8](examples/models/gptj/gptj_lora_int8.py)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hB_8s1V9K4IzifmlmN2AovGEJzTB1c7e?usp=sharing)
- [GPT-J fine-tuning with LoRA](examples/models/gptj/gptj_lora.py)
- [GPT-2 fine-tuning with LoRA](examples/models/gpt2/gpt2_lora.py)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://drive.google.com/file/d/1Sh-ocNpKn9pS7jv6oBb_Q8DitFyj1avL/view?usp=sharing)
<br>
## 📊 Performance
Here is a comparison of the performance of different fine-tuning techniques on the LLaMA 7B model. We use the [Alpaca dataset](examples/models/llama/alpaca_data/) for fine-tuning. The dataset contains 52K instructions.
Hardware:
4× A100 40 GB GPUs, 335 GB CPU RAM
Fine-tuning parameters:
```javascript
{
'maximum sequence length': 512,
'batch size': 1,
}
```
| LLaMA-7B | DeepSpeed + CPU Offloading | LoRA + DeepSpeed | LoRA + DeepSpeed + CPU Offloading |
| :---------: | :----: | :----: | :----: |
| GPU | 33.5 GB | 23.7 GB | 21.9 GB |
| CPU | 190 GB | 10.2 GB | 14.9 GB |
| Time/epoch | 21 hours | 20 mins | 20 mins |
You can contribute by submitting your performance results on other GPUs: create an issue with your hardware specifications, memory consumption and time per epoch.
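If you want to reproduce or tweak these settings, the sketch below shows one possible way to do it. It assumes the fine-tuning configuration is exposed through a `finetuning_config()` accessor with `batch_size` and `max_length` attributes; the accessor and attribute names are assumptions, so check the xTuring documentation for the exact API.
```python
from xturing.datasets import InstructionDataset
from xturing.models import BaseModel

dataset = InstructionDataset("./alpaca_data")
model = BaseModel.create("llama_lora")

# Assumed accessor and attribute names; adjust to the actual API if they differ
finetuning_config = model.finetuning_config()
finetuning_config.max_length = 512  # maximum sequence length used in the benchmark above
finetuning_config.batch_size = 1    # batch size used in the benchmark above

model.finetune(dataset=dataset)
```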
<br>
## 📎 Fine-tuned model checkpoints
We have already fine-tuned some models that you can use as a base or start playing with.
Here is how to load them:
```python
from xturing.models import BaseModel
model = BaseModel.load("x/distilgpt2_lora_finetuned_alpaca")
```
| Model | Dataset | Path |
|---------------------|--------|---------------|
| DistilGPT-2 LoRA | alpaca | `x/distilgpt2_lora_finetuned_alpaca` |
| LLaMA LoRA | alpaca | `x/llama_lora_finetuned_alpaca` |
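Once loaded, a fine-tuned checkpoint behaves like any other model in this README and can be used for inference directly; a minimal sketch (the prompt is illustrative):
```python
from xturing.models import BaseModel

# Load a fine-tuned checkpoint from the hub
model = BaseModel.load("x/distilgpt2_lora_finetuned_alpaca")

# Generate text with the fine-tuned model
output = model.generate(texts=["Give three tips for staying healthy."])
print(output)
```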
<br>
## Supported Models
Below is a list of all the models supported via the `BaseModel` class of `xTuring`, along with the corresponding keys to load them.
| Model | Key |
| -- | -- |
| Bloom | `bloom` |
| Cerebras | `cerebras` |
| DistilGPT-2 | `distilgpt2` |
| Falcon-7B | `falcon` |
| Galactica | `galactica` |
| GPT-J | `gptj` |
| GPT-2 | `gpt2` |
| LLaMA | `llama` |
| LLaMA 2 | `llama2` |
| OPT-1.3B | `opt` |
The models above are the base variants of the LLMs. Below are the key templates for their `LoRA`, `INT8`, `INT8 + LoRA` and `INT4 + LoRA` versions.
| Version | Template |
| -- | -- |
| LoRA | `<model_key>_lora` |
| INT8 | `<model_key>_int8` |
| INT8 + LoRA | `<model_key>_lora_int8` |
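For example, to load LLaMA with LoRA fine-tuning in INT8 precision, combine the `llama` key with the `_lora_int8` template from the table above (a minimal sketch):
```python
from xturing.models import BaseModel

# "llama" is the base key; the suffix follows the <model_key>_lora_int8 template above
model = BaseModel.create("llama_lora_int8")
```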
** To load any model's __`INT4 + LoRA`__ version, you need to use the `GenericLoraKbitModel` class from `xturing.models`. Here is how to use it:
```python
from xturing.models import GenericLoraKbitModel

model = GenericLoraKbitModel('<model_path>')
```
The `<model_path>` can be replaced with your local directory or any Hugging Face Hub model ID, such as `facebook/opt-1.3b`.
## 📈 Roadmap
- [x] Support for `LLaMA`, `GPT-J`, `GPT-2`, `OPT`, `Cerebras-GPT`, `Galactica` and `Bloom` models
- [x] Dataset generation using self-instruction
- [x] Low-precision LoRA fine-tuning and unsupervised fine-tuning
- [x] INT8 low-precision fine-tuning support
- [x] OpenAI, Cohere and AI21 Studio model APIs for dataset generation
- [x] Added fine-tuned checkpoints for some models to the hub
- [x] INT4 LLaMA LoRA fine-tuning demo
- [x] INT4 LLaMA LoRA fine-tuning with INT4 generation
- [x] Support for a `Generic model` wrapper
- [x] Support for `Falcon-7B` model
- [x] INT4 low-precision fine-tuning support
- [x] Evaluation of LLMs
- [ ] INT3, INT2, INT1 low-precision fine-tuning support
- [ ] Support for Stable Diffusion
<br>
## 🤝 Help and Support
If you have any questions, you can create an issue on this repository.
You can also join our [Discord server](https://discord.gg/TgHXuSJEk6) and start a discussion in the `#xturing` channel.
<br>
## 📝 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
<br>
## 🌎 Contributing
As an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features and better documentation. Please read our [contributing guide](CONTRIBUTING.md) to learn how you can get involved.
Raw data
{
"_id": null,
"home_page": "",
"name": "xturing",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": "",
"keywords": "nlp,llm,finetuning,evaluation,data-generation,training,distributed",
"author": "",
"author_email": "Glenn Ko <glenn@stochastic.ai>, Yuji Chai <yuji.chai@stochastic.ai>, Roman Ageev <roman.ageev@stochastic.ai>, Toan Do <toan.do@stochastic.ai>, Marcos R M <marcos.rm@stochastic.ai>, Sarthak Langde <sarthak.langde@stochastic.ai>, Riccardo Romagnoli <riccardo.romagnoli@stochastic.ai>, Subhash G N <subhash.gn@stochastic.ai>",
"download_url": "https://files.pythonhosted.org/packages/5a/c8/63c5c7beda06875479beff7464f0ab890662e5cded985d9e3f88d168efb6/xturing-0.1.8.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n <img src=\".github/stochastic_logo_light.svg#gh-light-mode-only\" width=\"250\" alt=\"Stochastic.ai\"/>\n <img src=\".github/stochastic_logo_dark.svg#gh-dark-mode-only\" width=\"250\" alt=\"Stochastic.ai\"/>\n</p>\n<h3 align=\"center\">Build, customize and control your own personal LLMs</h3>\n\n<p align=\"center\">\n <a href=\"https://pypi.org/project/xturing/\">\n <img src=\"https://img.shields.io/pypi/v/xturing?style=for-the-badge\" />\n </a>\n <a href=\"https://xturing.stochastic.ai/\">\n <img src=\"https://img.shields.io/badge/Documentation-blue?logo=GitBook&logoColor=white&style=for-the-badge\" />\n </a>\n <a href=\"https://discord.gg/TgHXuSJEk6\">\n <img src=\"https://img.shields.io/badge/Chat-FFFFFF?logo=discord&style=for-the-badge\"/>\n </a>\n</p>\n<br>\n\n___\n\n`xTuring` provides fast, efficient and simple fine-tuning of LLMs, such as LLaMA, GPT-J, Galactica, and more.\nBy providing an easy-to-use interface for fine-tuning LLMs to your own data and application, xTuring makes it\nsimple to build, customize and control LLMs. The entire process can be done inside your computer or in your\nprivate cloud, ensuring data privacy and security.\n\nWith `xTuring` you can,\n- Ingest data from different sources and preprocess them to a format LLMs can understand\n- Scale from single to multiple GPUs for faster fine-tuning\n- Leverage memory-efficient methods (i.e. INT4, LoRA fine-tuning) to reduce hardware costs by up to 90%\n- Explore different fine-tuning methods and benchmark them to find the best performing model\n- Evaluate fine-tuned models on well-defined metrics for in-depth analysis\n\n<br>\n\n## \ud83c\udf1f What's new?\nWe are excited to announce the latest enhancements to our `xTuring` library:\n1. __`LLaMA 2` integration__ - You can use and fine-tune the _`LLaMA 2`_ model in different configurations: _off-the-shelf_, _off-the-shelf with INT8 precision_, _LoRA fine-tuning_, _LoRA fine-tuning with INT8 precision_ and _LoRA fine-tuning with INT4 precision_ using the `GenericModel` wrapper and/or you can use the `Llama2` class from `xturing.models` to test and finetune the model.\n```python\nfrom xturing.models import Llama2\nmodel = Llama2()\n\n## or\nfrom xturing.models import BaseModel\nmodel = BaseModel.create('llama2')\n\n```\n2. __`Evaluation`__ - Now you can evaluate any `Causal Language Model` on any dataset. The metrics currently supported is [`perplexity`](https://towardsdatascience.com/perplexity-in-language-models-87a196019a94).\n```python\n# Make the necessary imports\nfrom xturing.datasets import InstructionDataset\nfrom xturing.models import BaseModel\n\n# Load the desired dataset\ndataset = InstructionDataset('../llama/alpaca_data')\n\n# Load the desired model\nmodel = BaseModel.create('gpt2')\n\n# Run the Evaluation of the model on the dataset\nresult = model.evaluate(dataset)\n\n# Print the result\nprint(f\"Perplexity of the evalution: {result}\")\n\n```\n3. __`INT4` Precision__ - You can now use and fine-tune any LLM with `INT4 Precision` using `GenericLoraKbitModel`.\n```python\n# Make the necessary imports\nfrom xturing.datasets import InstructionDataset\nfrom xturing.models import GenericLoraKbitModel\n\n# Load the desired dataset\ndataset = InstructionDataset('../llama/alpaca_data')\n\n# Load the desired model for INT4 bit fine-tuning\nmodel = GenericLoraKbitModel('tiiuae/falcon-7b')\n\n# Run the fine-tuning\nmodel.finetune(dataset)\n```\n4. __CPU inference__ - Now you can use just your CPU for inference of any LLM. 
_CAUTION : The inference process may be sluggish because CPUs lack the required computational capacity for efficient inference_.\n5. __Batch integration__ - By tweaking the 'batch_size' in the .generate() and .evaluate() functions, you can expedite results. Using a 'batch_size' greater than 1 typically enhances processing efficiency.\n```python\n# Make the necessary imports\nfrom xturing.datasets import InstructionDataset\nfrom xturing.models import GenericLoraKbitModel\n\n# Load the desired dataset\ndataset = InstructionDataset('../llama/alpaca_data')\n\n# Load the desired model for INT4 bit fine-tuning\nmodel = GenericLoraKbitModel('tiiuae/falcon-7b')\n\n# Generate outputs on desired prompts\noutputs = model.generate(dataset = dataset, batch_size=10)\n\n```\n\nAn exploration of the [Llama LoRA INT4 working example](examples/features/int4_finetuning/LLaMA_lora_int4.ipynb) is recommended for an understanding of its application.\n\nFor an extended insight, consider examining the [GenericModel working example](examples/features/generic/generic_model.py) available in the repository.\n\n<br>\n\n## \u2699\ufe0f Installation\n```bash\npip install xturing\n```\n\n<br>\n\n## \ud83d\ude80 Quickstart\n\n```python\nfrom xturing.datasets import InstructionDataset\nfrom xturing.models import BaseModel\n\n# Load the dataset\ninstruction_dataset = InstructionDataset(\"./alpaca_data\")\n\n# Initialize the model\nmodel = BaseModel.create(\"llama_lora\")\n\n# Finetune the model\nmodel.finetune(dataset=instruction_dataset)\n\n# Perform inference\noutput = model.generate(texts=[\"Why LLM models are becoming so important?\"])\n\nprint(\"Generated output by the model: {}\".format(output))\n```\n\nYou can find the data folder [here](examples/models/llama/alpaca_data).\n\n<br>\n\n## CLI playground\n<img src=\".github/cli-playground.gif\" width=\"80%\" style=\"margin: 0 1%;\"/>\n\n```bash\n$ xturing chat -m \"<path-to-model-folder>\"\n\n```\n\n## UI playground\n<img src=\".github/ui-playground2.gif\" width=\"80%\" style=\"margin: 0 1%;\"/>\n\n```python\nfrom xturing.datasets import InstructionDataset\nfrom xturing.models import BaseModel\nfrom xturing.ui import Playground\n\ndataset = InstructionDataset(\"./alpaca_data\")\nmodel = BaseModel.create(\"<model_name>\")\n\nmodel.finetune(dataset=dataset)\n\nmodel.save(\"llama_lora_finetuned\")\n\nPlayground().launch() ## launches localhost UI\n\n```\n\n<br>\n\n## \ud83d\udcda Tutorials\n- [Preparing your dataset](examples/datasets/preparing_your_dataset.py)\n- [Cerebras-GPT fine-tuning with LoRA and INT8](examples/models/cerebras/cerebras_lora_int8.ipynb)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eKq3oF7dnK8KuIfsTE70Gvvniwr1O9D0?usp=sharing)\n- [Cerebras-GPT fine-tuning with LoRA](examples/models/cerebras/cerebras_lora.ipynb)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1VjqQhstm5pT4EjPjx4Je7b3W2X1V3vDo?usp=sharing)\n- [LLaMA fine-tuning with LoRA and INT8](examples/models/llama/llama_lora_int8.py)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SQUXq1AMZPSLD4mk3A3swUIc6Y2dclme?usp=sharing)\n- [LLaMA fine-tuning with LoRA](examples/models/llama/llama_lora.py)\n- [LLaMA fine-tuning](examples/models/llama/llama.py)\n- [GPT-J fine-tuning with LoRA and INT8](examples/models/gptj/gptj_lora_int8.py)   [![Open in 
Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1hB_8s1V9K4IzifmlmN2AovGEJzTB1c7e?usp=sharing)\n- [GPT-J fine-tuning with LoRA](examples/models/gptj/gptj_lora.py)\n- [GPT-2 fine-tuning with LoRA](examples/models/gpt2/gpt2_lora.py)   [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://drive.google.com/file/d/1Sh-ocNpKn9pS7jv6oBb_Q8DitFyj1avL/view?usp=sharing)\n\n<br>\n\n## \ud83d\udcca Performance\n\nHere is a comparison for the performance of different fine-tuning techniques on the LLaMA 7B model. We use the [Alpaca dataset](examples/models/llama/alpaca_data/) for fine-tuning. The dataset contains 52K instructions.\n\nHardware:\n\n4xA100 40GB GPU, 335GB CPU RAM\n\nFine-tuning parameters:\n\n```javascript\n{\n 'maximum sequence length': 512,\n 'batch size': 1,\n}\n```\n\n| LLaMA-7B | DeepSpeed + CPU Offloading | LoRA + DeepSpeed | LoRA + DeepSpeed + CPU Offloading |\n| :---------: | :----: | :----: | :----: |\n| GPU | 33.5 GB | 23.7 GB | 21.9 GB |\n| CPU | 190 GB | 10.2 GB | 14.9 GB |\n| Time/epoch | 21 hours | 20 mins | 20 mins |\n\nContribute to this by submitting your performance results on other GPUs by creating an issue with your hardware specifications, memory consumption and time per epoch.\n\n<br>\n\n## \ud83d\udcce Fine-tuned model checkpoints\nWe have already fine-tuned some models that you can use as your base or start playing with.\nHere is how you would load them:\n\n```python\nfrom xturing.models import BaseModel\nmodel = BaseModel.load(\"x/distilgpt2_lora_finetuned_alpaca\")\n```\n\n| model | dataset | Path |\n|---------------------|--------|---------------|\n| DistilGPT-2 LoRA | alpaca | `x/distilgpt2_lora_finetuned_alpaca` |\n| LLaMA LoRA | alpaca | `x/llama_lora_finetuned_alpaca` |\n\n<br>\n\n## Supported Models\nBelow is a list of all the supported models via `BaseModel` class of `xTuring` and their corresponding keys to load them.\n\n| Model | Key |\n| -- | -- |\n|Bloom | bloom|\n|Cerebras | cerebras|\n|DistilGPT-2 | distilgpt2|\n|Falcon-7B | falcon|\n|Galactica | galactica|\n|GPT-J | gptj|\n|GPT-2 | gpt2|\n|LlaMA | llama|\n|LlaMA2 | llama2|\n|OPT-1.3B | opt|\n\nThe above mentioned are the base variants of the LLMs. Below are the templates to get their `LoRA`, `INT8`, `INT8 + LoRA` and `INT4 + LoRA` versions.\n\n| Version | Template |\n| -- | -- |\n| LoRA| <model_key>_lora|\n| INT8| <model_key>_int8|\n| INT8 + LoRA| <model_key>_lora_int8|\n\n** In order to load any model's __`INT4+LoRA`__ version, you will need to make use of `GenericLoraKbitModel` class from `xturing.models`. 
Below is how to use it:\n```python\nmodel = GenericLoraKbitModel('<model_path>')\n```\nThe `model_path` can be replaced with you local directory or any HuggingFace library model like `facebook/opt-1.3b`.\n\n## \ud83d\udcc8 Roadmap\n- [x] Support for `LLaMA`, `GPT-J`, `GPT-2`, `OPT`, `Cerebras-GPT`, `Galactica` and `Bloom` models\n- [x] Dataset generation using self-instruction\n- [x] Low-precision LoRA fine-tuning and unsupervised fine-tuning\n- [x] INT8 low-precision fine-tuning support\n- [x] OpenAI, Cohere and AI21 Studio model APIs for dataset generation\n- [x] Added fine-tuned checkpoints for some models to the hub\n- [x] INT4 LLaMA LoRA fine-tuning demo\n- [x] INT4 LLaMA LoRA fine-tuning with INT4 generation\n- [x] Support for a `Generic model` wrapper\n- [x] Support for `Falcon-7B` model\n- [x] INT4 low-precision fine-tuning support\n- [x] Evaluation of LLM models\n- [ ] INT3, INT2, INT1 low-precision fine-tuning support\n- [ ] Support for Stable Diffusion\n\n<br>\n\n## \ud83e\udd1d Help and Support\nIf you have any questions, you can create an issue on this repository.\n\nYou can also join our [Discord server](https://discord.gg/TgHXuSJEk6) and start a discussion in the `#xturing` channel.\n\n<br>\n\n## \ud83d\udcdd License\nThis project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.\n\n<br>\n\n## \ud83c\udf0e Contributing\nAs an open source project in a rapidly evolving field, we welcome contributions of all kinds, including new features and better documentation. Please read our [contributing guide](CONTRIBUTING.md) to learn how you can get involved.\n",
"bugtrack_url": null,
"license": "Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License. \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\" \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. 
Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ",
"summary": "Fine-tuning, evaluation and data generation for LLMs",
"version": "0.1.8",
"project_urls": {
"documentation": "https://github.com/stochasticai/xturing-docs",
"homepage": "https://xturing.stochastic.ai/",
"repository": "https://github.com/stochasticai/xturing"
},
"split_keywords": [
"nlp",
"llm",
"finetuning",
"evaluation",
"data-generation",
"training",
"distributed"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "196e747e38ff4135941eff0950f68cfdfba15bdf1e205b75f1354fd64db5180d",
"md5": "9e7ad71fd42d87de62378c90dfcc0e22",
"sha256": "fe2dad5d6b8e90ef0c2b9486074c2261d24b957bb7b08c6f5b82e955bc045549"
},
"downloads": -1,
"filename": "xturing-0.1.8-py3-none-any.whl",
"has_sig": false,
"md5_digest": "9e7ad71fd42d87de62378c90dfcc0e22",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 125726,
"upload_time": "2023-09-06T18:26:14",
"upload_time_iso_8601": "2023-09-06T18:26:14.950965Z",
"url": "https://files.pythonhosted.org/packages/19/6e/747e38ff4135941eff0950f68cfdfba15bdf1e205b75f1354fd64db5180d/xturing-0.1.8-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "5ac863c5c7beda06875479beff7464f0ab890662e5cded985d9e3f88d168efb6",
"md5": "6ea7c65210ea85dcd04600055f50f4df",
"sha256": "e353b1af4e5b2dd07690c024672df51baa80ed74299d85fa055dce51ac2f3bd1"
},
"downloads": -1,
"filename": "xturing-0.1.8.tar.gz",
"has_sig": false,
"md5_digest": "6ea7c65210ea85dcd04600055f50f4df",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 102817,
"upload_time": "2023-09-06T18:26:17",
"upload_time_iso_8601": "2023-09-06T18:26:17.455562Z",
"url": "https://files.pythonhosted.org/packages/5a/c8/63c5c7beda06875479beff7464f0ab890662e5cded985d9e3f88d168efb6/xturing-0.1.8.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-09-06 18:26:17",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "stochasticai",
"github_project": "xturing-docs",
"github_not_found": true,
"lcname": "xturing"
}