easyinstruct


Name: easyinstruct
Version: 0.1.2
Home page: https://github.com/zjunlp/EasyInstruct
Summary: An Easy-to-use Instruction Processing Framework for Large Language Models.
Upload time: 2024-02-06 05:53:18
Author: Yixin Ou
Requires Python: >=3.7.0
Keywords: AI, NLP, instruction, language model
            <div align="center">

<img src="figs/logo.png" width="300px">

**An Easy-to-use Instruction Processing Framework for Large Language Models.**

---

<p align="center">
  <a href="https://zjunlp.github.io/project/EasyInstruct">Project</a> •
  <a href="https://arxiv.org/abs/2402.03049">Paper</a> •
  <a href="https://huggingface.co/spaces/zjunlp/EasyInstruct">Demo</a> •
  <a href="#overview">Overview</a> •
  <a href="#installation">Installation</a> •
  <a href="#quickstart">Quickstart</a> •
  <a href="#use-easyinstruct">How To Use</a> •
  <a href="https://zjunlp.gitbook.io/easyinstruct/">Docs</a> •
  <a href="#citation">Citation</a> •
  <a href="#contributors">Contributors</a>
</p>

![](https://img.shields.io/badge/version-v0.1.2-blue)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
![](https://img.shields.io/github/last-commit/zjunlp/EasyInstruct?color=green) 
![](https://img.shields.io/badge/PRs-Welcome-red) 

</div>

## Table of Contents

- <a href="#news">What's New</a>
- <a href="#overview">Overview</a>
- <a href="#installation">Installation</a>
- <a href="#quickstart">Quickstart</a>
  - <a href="#shell-script">Shell Script</a>
  - <a href="#gradio-app">Gradio App</a>
- <a href="#use-easyinstruct">Use EasyInstruct</a>
  - <a href="#generators">Generators</a>
  - <a href="#selectors">Selectors</a>
  - <a href="#prompts">Prompts</a>
  - <a href="#engines">Engines</a>
- <a href="#citation">Citation</a>
- <a href="#contributors">Contributors</a>

## 🔔News

- **2024-2-6 We release the paper "[EasyInstruct: An Easy-to-use Instruction Processing Framework for Large Language Models](https://arxiv.org/abs/2402.03049)".**
- **2024-2-5 We release version 0.1.2, supporting new features and optimizing the function interface.**
- **2023-12-9 The paper "[When Do Program-of-Thoughts Work for Reasoning?](https://arxiv.org/abs/2308.15452)" (supported by EasyInstruct) is accepted by AAAI 2024!**
- **2023-10-28 We release version 0.1.1, supporting new features for instruction generation and instruction selection.**
- **2023-8-9 We release version 0.0.6, supporting Cohere API calls.**
- **2023-7-12 We release [EasyEdit](https://github.com/zjunlp/EasyEdit), an easy-to-use framework to edit Large Language Models.**
<details>
<summary><b>Previous news</b></summary>

- **2023-5-23 We release version 0.0.5, removing requirement of llama-cpp-python.**
- **2023-5-16 We release version 0.0.4, fixing some problems.**
- **2023-4-21 We release version 0.0.3, check out our [documentations](https://zjunlp.gitbook.io/easyinstruct/documentations) for more details.**
- **2023-3-25 We release version 0.0.2, supporting IndexPrompt, MMPrompt, IEPrompt, and more LLMs.**
- **2023-3-13 We release version 0.0.1, supporting in-context learning, chain-of-thought with ChatGPT.**
  
</details>

---

This repository is a subproject of [KnowLM](https://github.com/zjunlp/KnowLM).


## 🌟Overview

EasyInstruct is a Python package proposed as an easy-to-use instruction processing framework for Large Language Models (LLMs) such as GPT-4, LLaMA, and ChatGLM in your research experiments. EasyInstruct modularizes instruction generation, selection, and prompting, while also considering their combination and interaction.

<img src="figs/overview.png">

- The current supported instruction generation techniques are as follows:

  | **Methods** | **Description** |
  | --- | --- |
  | [Self-Instruct](https://arxiv.org/abs/2212.10560) | The method that randomly samples a few instructions from a human-annotated seed tasks pool as demonstrations and prompts an LLM to generate more instructions and corresponding input-output pairs. |
  | [Evol-Instruct](https://arxiv.org/abs/2304.12244) | The method that incrementally upgrades an initial set of instructions into more complex instructions by prompting an LLM with specific prompts. |
  | [Backtranslation](https://arxiv.org/abs/2308.06259) | The method that creates an instruction following training instance by predicting an instruction that would be correctly answered by a portion of a document of the corpus.  |
  | [KG2Instruct](https://arxiv.org/abs/2305.11527) | The method that generates instruction-following data for information extraction from existing knowledge graphs and their aligned text. |

- The current supported instruction selection metrics are as follows:

  | **Metrics** | **Notation** | **Description**                                                                                                             |
  |----------------------|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------|
  | Length               | $Len$                 | The bounded length of every pair of instruction and response.                                                                                 |
  | Perplexity           | $PPL$                 | The exponentiated average negative log-likelihood of response.                                                                       |
  | MTLD                 | $MTLD$                | Measure of textual lexical diversity: the mean length of sequential word strings in a text that maintain a minimum threshold TTR score.                                                                                   |
  | ROUGE                | $ROUGE$               | Recall-Oriented Understudy for Gisting Evaluation, a set of metrics used for evaluating similarities between sentences.                                                                                   |
  | GPT score            | $GPT$                 | The score of whether the output is a good example of how AI Assistant should respond to the user's instruction, provided by ChatGPT. |
  | CIRS                 | $CIRS$                | The score using the abstract syntax tree to encode structural and logical attributes, to measure the correlation between code and reasoning abilities.                                                                                   |

- API service providers and their corresponding LLM products that are currently available:

  | **Model** | **Description** | **Default Version** |
  | --- | --- | --- |
  | ***OpenAI*** | | |
  | GPT-3.5 | A set of models that improve on GPT-3 and can understand as well as generate natural language or code. | `gpt-3.5-turbo` |
  | GPT-4 | A set of models that improve on GPT-3.5 and can understand as well as generate natural language or code. | `gpt-4` |
  | ***Anthropic*** | | |
  | Claude | A next-generation AI assistant based on Anthropic’s research into training helpful, honest, and harmless AI systems. | `claude-2.0` |
  | Claude-Instant | A lighter, less expensive, and much faster option than Claude. | `claude-instant-1.2` |
  | ***Cohere*** | | |
  | Command | A flagship text generation model of Cohere trained to follow user commands and to be instantly useful in practical business applications. | `command` |
  | Command-Light | A light version of Command models that are faster but may produce lower-quality generated text. | `command-light` |

---

## 🔧Installation

**Installation from git repo branch:**
```
pip install git+https://github.com/zjunlp/EasyInstruct@main
```

**Installation for local development:**
```
git clone https://github.com/zjunlp/EasyInstruct
cd EasyInstruct
pip install -e .
```

**Installation using PyPI (not the latest version):**
```
pip install easyinstruct -i https://pypi.org/simple
```

---

## ⏩Quickstart

We provide two ways for users to quickly get started with EasyInstruct. You can either use the shell script or the Gradio app based on your specific needs.

### Shell Script

#### Step1: Prepare a configuration file

Users can easily configure the parameters of EasyInstruct in a YAML-style file, or simply use the default parameters in the configuration files we provide. The following is an example configuration file for Self-Instruct:

```yaml
generator:
  SelfInstructGenerator:
    target_dir: data/generations/
    data_format: alpaca
    seed_tasks_path: data/seed_tasks.jsonl
    generated_instructions_path: generated_instructions.jsonl
    generated_instances_path: generated_instances.jsonl
    num_instructions_to_generate: 100
    engine: gpt-3.5-turbo
    num_prompt_instructions: 8
```

More example configuration files can be found at [configs](https://github.com/zjunlp/EasyInstruct/tree/main/configs).
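For intuition, the mapping from such a configuration onto generator keyword arguments can be sketched as below. Note this is a hypothetical illustration: the `SelfInstructGenerator` here is a stand-in dataclass (not the real class), and the dict mirrors only a subset of the YAML keys above.

```python
from dataclasses import dataclass

# Hypothetical stand-in for easyinstruct's SelfInstructGenerator, used only
# to show how config keys become constructor keyword arguments.
@dataclass
class SelfInstructGenerator:
    target_dir: str = "data/generations/"
    num_instructions_to_generate: int = 100
    engine: str = "gpt-3.5-turbo"

# A dict mirroring (a subset of) the YAML configuration shown above.
config = {
    "generator": {
        "SelfInstructGenerator": {
            "target_dir": "data/generations/",
            "num_instructions_to_generate": 100,
            "engine": "gpt-3.5-turbo",
        }
    }
}

# The generator class is named by the key; its parameters become kwargs.
name, params = next(iter(config["generator"].items()))
generator = SelfInstructGenerator(**params)
```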

#### Step2: Run the shell script

Users should first specify the configuration file and provide their own OpenAI API key. Then, run the following shell script to launch the instruction generation or selection process.

```shell
config_file=""
openai_api_key=""

python demo/run.py \
    --config $config_file \
    --openai_api_key $openai_api_key
```

### Gradio App

We provide a Gradio app for users to quickly get started with EasyInstruct. You can run the following command to launch the Gradio app locally on port `7860` (if available).

```shell
python demo/app.py
```

We also host a running Gradio app on Hugging Face Spaces. You can try it out [here](https://huggingface.co/spaces/zjunlp/EasyInstruct).

---

## 📌Use EasyInstruct

Please refer to our [documentations](https://zjunlp.gitbook.io/easyinstruct/documentations) for more details.

### Generators

The `Generators` module streamlines the process of instruction data generation, allowing for the generation of instruction data based on seed data. You can choose the appropriate generator based on your specific needs.

#### BaseGenerator

> `BaseGenerator` is the base class for all generators.

> You can also easily inherit this base class to customize your own generator class. Just override the `__init__` and `generate` methods.
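A hedged sketch of that pattern is shown below. The `BaseGenerator` here is a minimal stand-in (the real base class has more parameters), and `SeedEchoGenerator` is a hypothetical toy subclass, not part of the library.

```python
class BaseGenerator:
    """Minimal stand-in for easyinstruct's BaseGenerator; the real base
    class exposes more parameters (data format, output paths, etc.)."""
    def __init__(self, target_dir="data/generations/"):
        self.target_dir = target_dir

    def generate(self):
        raise NotImplementedError


class SeedEchoGenerator(BaseGenerator):
    """Toy generator: wraps each seed task string as an Alpaca-style record."""
    def __init__(self, seed_tasks, **kwargs):
        super().__init__(**kwargs)
        self.seed_tasks = seed_tasks

    def generate(self):
        # A real generator would prompt an LLM here; we just echo the seeds.
        return [
            {"instruction": task, "input": "", "output": ""}
            for task in self.seed_tasks
        ]
```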

#### SelfInstructGenerator

> `SelfInstructGenerator` is the class for the instruction generation method of Self-Instruct. See [Self-Instruct: Aligning Language Model with Self Generated Instructions](http://arxiv.org/abs/2212.10560) for more details.

<b>Example</b>

```python
from easyinstruct import SelfInstructGenerator
from easyinstruct.utils.api import set_openai_key

# Step1: Set your own API-KEY
set_openai_key("YOUR-KEY")

# Step2: Declare a generator class
generator = SelfInstructGenerator(num_instructions_to_generate=10)

# Step3: Generate self-instruct data
generator.generate()
```

#### BacktranslationGenerator

> `BacktranslationGenerator` is the class for the instruction generation method of Instruction Backtranslation. See [Self-Alignment with Instruction Backtranslation](http://arxiv.org/abs/2308.06259) for more details.

<details>
<summary><b>Example</b></summary>

```python
from easyinstruct import BacktranslationGenerator
from easyinstruct.utils.api import set_openai_key

# Step1: Set your own API-KEY
set_openai_key("YOUR-KEY")

# Step2: Declare a generator class
generator = BacktranslationGenerator(num_instructions_to_generate=10)

# Step3: Generate backtranslation data
generator.generate()
```

</details>

#### EvolInstructGenerator

> `EvolInstructGenerator` is the class for the instruction generation method of EvolInstruct. See [WizardLM: Empowering Large Language Models to Follow Complex Instructions](http://arxiv.org/abs/2304.12244) for more details.

<details>
<summary><b>Example</b></summary>

```python
from easyinstruct import EvolInstructGenerator
from easyinstruct.utils.api import set_openai_key

# Step1: Set your own API-KEY
set_openai_key("YOUR-KEY")

# Step2: Declare a generator class
generator = EvolInstructGenerator(num_instructions_to_generate=10)

# Step3: Generate evolution data
generator.generate()
```

</details>

#### KG2InstructGenerator

> `KG2InstructGenerator` is the class for the instruction generation method of KG2Instruct. See [InstructIE: A Chinese Instruction-based Information Extraction Dataset](https://arxiv.org/abs/2305.11527) for more details.

### Selectors

The `Selectors` module standardizes the instruction selection process, enabling the extraction of high-quality instruction datasets from raw, unprocessed instruction data. The raw data can be sourced from publicly available instruction datasets or generated by the framework itself. You can choose the appropriate selector based on your specific needs.

#### BaseSelector

> `BaseSelector` is the base class for all selectors.

> You can also easily inherit this base class to customize your own selector class. Just override the `__init__` and `__process__` methods.
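As a hedged sketch of that pattern (using a minimal stand-in for `BaseSelector`, whose real interface may differ), a custom selector filtering by instruction word count could look like:

```python
class BaseSelector:
    """Minimal stand-in for easyinstruct's BaseSelector."""
    def __process__(self, data):
        raise NotImplementedError

    def process(self, data):
        # Dispatches to the subclass's overridden __process__.
        return self.__process__(data)


class WordCountSelector(BaseSelector):
    """Toy selector: keep samples whose instruction word count is bounded."""
    def __init__(self, min_words=3, max_words=150):
        self.min_words = min_words
        self.max_words = max_words

    def __process__(self, data):
        return [
            sample for sample in data
            if self.min_words <= len(sample["instruction"].split()) <= self.max_words
        ]
```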

#### Deduplicator

> `Deduplicator` is the class for eliminating duplicate instruction samples that could adversely affect both pre-training stability and the performance of LLMs. `Deduplicator` also enables efficient use and optimization of storage space.

#### LengthSelector

> `LengthSelector` is the class for selecting instruction samples based on the length of the instruction. Instructions that are too long or too short can affect data quality and are not conducive to instruction tuning.

#### RougeSelector

> `RougeSelector` is the class for selecting instruction samples based on the ROUGE metric which is often used for evaluating the quality of automated generation of text.
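For intuition, a unigram-level ROUGE-1 recall can be computed as below. This is illustrative only; the actual selector presumably relies on a full ROUGE implementation (e.g. including ROUGE-L).

```python
def rouge1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams covered by the candidate."""
    cand = candidate.split()
    ref = reference.split()
    if not ref:
        return 0.0
    # Clipped unigram overlap, so repeated words are not over-counted.
    overlap = sum(min(cand.count(w), ref.count(w)) for w in set(ref))
    return overlap / len(ref)
```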

#### GPTScoreSelector

> `GPTScoreSelector` is the class for selecting instruction samples based on the GPT score, which reflects whether the output is a good example of how AI Assistant should respond to the user's instruction, provided by ChatGPT.

#### PPLSelector

> `PPLSelector` is the class for selecting instruction samples based on the perplexity, which is the exponentiated average negative log-likelihood of response.
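Given per-token log-probabilities from a language model, the definition above reduces to a few lines. This is a sketch of the formula only; how EasyInstruct obtains the log-likelihoods is not shown here.

```python
import math

def perplexity(token_logprobs):
    """Exponentiated average negative log-likelihood (natural-log base)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)
```

For example, a response whose every token has probability 0.5 has perplexity 2.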

#### MTLDSelector

> `MTLDSelector` is the class for selecting instruction samples based on the MTLD, which is short for Measure of Textual Lexical Diversity.
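A minimal forward-pass MTLD sketch follows, using the conventional factor threshold of 0.72. The library's implementation may differ in details such as bidirectional averaging.

```python
def mtld_forward(tokens, ttr_threshold=0.72):
    """Forward-pass MTLD: count 'factors' (word runs whose type-token ratio
    stays above the threshold), then divide text length by the factor count."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= ttr_threshold:  # run's TTR fell: close a factor
            factors += 1
            types, count = set(), 0
    if count:  # credit the trailing partial run proportionally
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - ttr_threshold)
    return len(tokens) / factors if factors else float(len(tokens))
```

Higher MTLD means more lexical diversity: a fully repetitive text scores low, while a text with all-distinct words scores at its own length.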

#### CodeSelector

> `CodeSelector` is the class for selecting code instruction samples based on the Complexity-Impacted Reasoning Score (CIRS), which combines structural and logical attributes, to measure the correlation between code and reasoning abilities. See [When Do Program-of-Thoughts Work for Reasoning?](https://arxiv.org/abs/2308.15452) for more details.

<details>
<summary><b>Example</b></summary>

```python
from easyinstruct import CodeSelector

# Step1: Specify your source file of code instructions
src_file = "data/code_example.json"

# Step2: Declare a code selector class
selector = CodeSelector(
    source_file_path=src_file,
    target_dir="data/selections/",
    manually_partion_data=True,
    min_boundary=0.125,
    max_boundary=0.5,
    automatically_partion_data=True,
    k_means_cluster_number=2,
)

# Step3: Process the code instructions
selector.process()
```

</details>

#### MultiSelector

> `MultiSelector` is the class for combining multiple appropriate selectors based on your specific needs.

### Prompts

The `Prompts` module standardizes the instruction prompting step, where user requests are constructed as instruction prompts and sent to specific LLMs to obtain responses. You can choose the appropriate prompting method based on your specific needs.

<img src="figs/prompt.png">

Please check out <a href="https://github.com/zjunlp/EasyInstruct/blob/main/README_PROMPTS.md">this link</a> for more details.

### Engines

The `Engines` module standardizes the instruction execution process, enabling the execution of instruction prompts on specific locally deployed LLMs. You can choose the appropriate engine based on your specific needs.

Please check out <a href="https://github.com/zjunlp/EasyInstruct/blob/main/README_ENGINES.md">this link</a> for more details.

---
### 🚩Citation

Please cite our repository if you use EasyInstruct in your work.

```bibtex
@misc{easyinstruct,
  author = {Yixin Ou and Ningyu Zhang and Honghao Gui and Ziwen Xu and Shuofei Qiao and Zhen Bi and Huajun Chen},
  title = {EasyInstruct: An Easy-to-use Instruction Processing Framework for Large Language Models},
  year = {2023},
  url = {https://github.com/zjunlp/EasyInstruct},
}

@misc{knowlm,
  author = {Ningyu Zhang and Jintian Zhang and Xiaohan Wang and Honghao Gui and Kangwei Liu and Yinuo Jiang and Xiang Chen and Shengyu Mao and Shuofei Qiao and Yuqi Zhu and Zhen Bi and Jing Chen and Xiaozhuan Liang and Yixin Ou and Runnan Fang and Zekun Xi and Xin Xu and Lei Li and Peng Wang and Mengru Wang and Yunzhi Yao and Bozhong Tian and Yin Fang and Guozhou Zheng and Huajun Chen},
  title = {KnowLM: An Open-sourced Knowledgeable Large Language Model Framework},
  year = {2023},
  url = {http://knowlm.zjukg.cn/},
}

@misc{bi2023programofthoughts,
      author={Zhen Bi and Ningyu Zhang and Yinuo Jiang and Shumin Deng and Guozhou Zheng and Huajun Chen},
      title={When Do Program-of-Thoughts Work for Reasoning?}, 
      year={2023},
      eprint={2308.15452},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

---

## 🎉Contributors

<a href="https://github.com/zjunlp/EasyInstruct/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=zjunlp/EasyInstruct" />
</a>

We will provide long-term maintenance to fix bugs, resolve issues, and meet new requests. If you have any problems, please open an issue.

**Other Related Projects**

- [Self-Instruct](https://github.com/yizhongw/self-instruct)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)

🙌 We would like to express our heartfelt gratitude for the contribution of [Self-Instruct](https://github.com/yizhongw/self-instruct) to our project, as we have utilized portions of their source code in our project.

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/zjunlp/EasyInstruct",
    "name": "easyinstruct",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.7.0",
    "maintainer_email": "",
    "keywords": "AI,NLP,instruction,language model",
    "author": "Yixin Ou",
    "author_email": "",
    "download_url": "https://files.pythonhosted.org/packages/26/bf/57a085aa18302dc0e3fb68746b35b1a88c0b984de5f3b8a6a30fa84d2eec/easyinstruct-0.1.2.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n\n<img src=\"figs/logo.png\" width=\"300px\">\n\n**An Easy-to-use Instruction Processing Framework for Large Language Models.**\n\n---\n\n<p align=\"center\">\n  <a href=\"https://zjunlp.github.io/project/EasyInstruct\">Project</a> \u2022\n  <a href=\"https://arxiv.org/abs/2402.03049\">Paper</a> \u2022\n  <a href=\"https://huggingface.co/spaces/zjunlp/EasyInstruct\">Demo</a> \u2022\n  <a href=\"#overview\">Overview</a> \u2022\n  <a href=\"#installation\">Installation</a> \u2022\n  <a href=\"#quickstart\">Quickstart</a> \u2022\n  <a href=\"#use-easyinstruct\">How To Use</a> \u2022\n  <a href=\"https://zjunlp.gitbook.io/easyinstruct/\">Docs</a> \u2022\n  <a href=\"#citation\">Citation</a> \u2022\n  <a href=\"#contributors\">Contributors</a>\n</p>\n\n![](https://img.shields.io/badge/version-v0.1.2-blue)\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)\n![](https://img.shields.io/github/last-commit/zjunlp/EasyInstruct?color=green) \n![](https://img.shields.io/badge/PRs-Welcome-red) \n\n</div>\n\n## Table of Contents\n\n- <a href=\"#news\">What's New</a>\n- <a href=\"#overview\">Overview</a>\n- <a href=\"#installation\">Installation</a>\n- <a href=\"#quickstart\">Quickstart</a>\n  - <a href=\"#shell-script\">Shell Script</a>\n  - <a href=\"#gradio-app\">Gradio App</a>\n- <a href=\"#use-easyinstruct\">Use EasyInstruct</a>\n  - <a href=\"#generators\">Generators</a>\n  - <a href=\"#selectors\">Selectors</a>\n  - <a href=\"#prompts\">Prompts</a>\n  - <a href=\"#engines\">Engines</a>\n- <a href=\"#citation\">Citation</a>\n- <a href=\"#contributors\">Contributors</a>\n\n## \ud83d\udd14News\n\n- **2024-2-6 We release the paper \"[EasyInstruct: An Easy-to-use Instruction Processing Framework for Large Language Models](https://arxiv.org/abs/2402.03049)\".**\n- **2024-2-5 We release version 0.1.2, supporting for new features and optimising the function interface.**\n- 
**2023-12-9 The paper \"[When Do Program-of-Thoughts Work for Reasoning?](https://arxiv.org/abs/2308.15452)\" (supported by EasyInstruct), is accepted by AAAI 2024!**\n- **2023-10-28 We release version 0.1.1, supporting for new features of instruction generation and instruction selection.**\n- **2023-8-9 We release version 0.0.6, supporting Cohere API calls.**\n- **2023-7-12 We release [EasyEdit](https://github.com/zjunlp/EasyEdit), an easy-to-use framework to edit Large Language Models.**\n<details>\n<summary><b>Previous news</b></summary>\n\n- **2023-5-23 We release version 0.0.5, removing requirement of llama-cpp-python.**\n- **2023-5-16 We release version 0.0.4, fixing some problems.**\n- **2023-4-21 We release version 0.0.3, check out our [documentations](https://zjunlp.gitbook.io/easyinstruct/documentations) for more details.**\n- **2023-3-25 We release version 0.0.2, suporting IndexPrompt, MMPrompt, IEPrompt and more LLMs**\n- **2023-3-13 We release version 0.0.1, supporting in-context learning, chain-of-thought with ChatGPT.**\n  \n</details>\n\n---\n\nThis repository is a subproject of [KnowLM](https://github.com/zjunlp/KnowLM).\n\n\n## \ud83c\udf1fOverview\n\nEasyInstruct is a Python package which is proposed as an easy-to-use instruction processing framework for Large Language Models(LLMs) like GPT-4, LLaMA, ChatGLM in your research experiments. EasyInstruct modularizes instruction generation, selection, and prompting, while also considering their combination and interaction. \n\n<img src=\"figs/overview.png\">\n\n- The current supported instruction generation techniques are as follows:\n\n  | **Methods** | **Description** |\n  | --- | --- |\n  | [Self-Instruct](https://arxiv.org/abs/2212.10560) | The method that randomly samples a few instructions from a human-annotated seed tasks pool as demonstrations and prompts an LLM to generate more instructions and corresponding input-output pairs. 
|\n  | [Evol-Instruct](https://arxiv.org/abs/2304.12244) | The method that incrementally upgrades an initial set of instructions into more complex instructions by prompting an LLM with specific prompts. |\n  | [Backtranslation](https://arxiv.org/abs/2308.06259) | The method that creates an instruction following training instance by predicting an instruction that would be correctly answered by a portion of a document of the corpus.  |\n  | [KG2Instruct](https://arxiv.org/abs/2305.11527) | The method that creates an instruction following training instance by predicting an instruction that would be correctly answered by a portion of a document of the corpus. |\n\n- The current supported instruction selection metrics are as follows:\n\n  | **Metrics** | **Notation** | **Description**                                                                                                             |\n  |----------------------|-----------------------|--------------------------------------------------------------------------------------------------------------------------------------|\n  | Length               | $Len$                 | The bounded length of every pair of instruction and response.                                                                                 |\n  | Perplexity           | $PPL$                 | The exponentiated average negative log-likelihood of response.                                                                       |\n  | MTLD                 | $MTLD$                | Measure of textual lexical diversity, the mean length of sequential words in a text that maintains a minimum threshold TTR score.                                                                                   |\n  | ROUGE                | $ROUGE$               | Recall-Oriented Understudy for Gisting Evaluation, a set of metrics used for evaluating similarities between sentences.                                                                                   
|\n  | GPT score            | $GPT$                 | The score of whether the output is a good example of how AI Assistant should respond to the user's instruction, provided by ChatGPT. |\n  | CIRS                 | $CIRS$                | The score using the abstract syntax tree to encode structural and logical attributes, to measure the correlation between code and reasoning abilities.                                                                                   |\n\n- API service providers and their corresponding LLM products that are currently available:\n  \n   **Model** | **Description**                                                                                                                  | **Default Version** \n  --------------------|-------------------------------------------------------------------------------------------------------------------------------------------|------------------------------\n  ***OpenAI***\n   GPT-3.5            | A set of models that improve on GPT-3 and can understand as well as generate natural language or code.                                    | `gpt-3.5-turbo`       \n   GPT-4              | A set of models that improve on GPT-3.5 and can understand as well as generate natural language or code.                                  | `gpt-4`\n   ***Anthropic***               \n   Claude             | A next-generation AI assistant based on Anthropic\u2019s research into training helpful, honest, and harmless AI systems.                      | `claude-2.0`          \n   Claude-Instant     | A lighter, less expensive, and much faster option than Claude.                                                                            | `claude-instant-1.2`\n  ***Cohere***     \n   Command            | A flagship text generation model of Cohere trained to follow user commands and to be instantly useful in practical business applications. 
| `command`             \n   Command-Light      | A light version of Command models that are faster but may produce lower-quality generated text.                                           | `command-light`    \n---\n\n## \ud83d\udd27Installation\n\n**Installation from git repo branch:**\n```\npip install git+https://github.com/zjunlp/EasyInstruct@main\n```\n\n**Installation for local development:**\n```\ngit clone https://github.com/zjunlp/EasyInstruct\ncd EasyInstruct\npip install -e .\n```\n\n**Installation using PyPI (not the latest version):**\n```\npip install easyinstruct -i https://pypi.org/simple\n```\n\n---\n\n## \u23e9Quickstart\n\nWe provide two ways for users to quickly get started with EasyInstruct. You can either use the shell script or the Gradio app based on your specific needs.\n\n### Shell Script\n\n#### Step1: Prepare a configuration file\n\nUsers can easily configure the parameters of EasyInstruct in a YAML-style file or just quickly use the default parameters in the configuration files we provide. Following is an example of the configuration file for Self-Instruct:\n\n```yaml\ngenerator:\n  SelfInstructGenerator:\n    target_dir: data/generations/\n    data_format: alpaca\n    seed_tasks_path: data/seed_tasks.jsonl\n    generated_instructions_path: generated_instructions.jsonl\n    generated_instances_path: generated_instances.jsonl\n    num_instructions_to_generate: 100\n    engine: gpt-3.5-turbo\n    num_prompt_instructions: 8\n```\n\nMore example configuration files can be found at [configs](https://github.com/zjunlp/EasyInstruct/tree/main/configs).\n\n#### Step2: Run the shell script\n\nUsers should first specify the configuration file and provide their own OpenAI API key. 
Then, run the following shell script to launch the instruction generation or selection process.\n\n```shell\nconfig_file=\"\"\nopenai_api_key=\"\"\n\npython demo/run.py \\\n    --config  $config_file\\\n    --openai_api_key $openai_api_key \\\n```\n\n### Gradio App\n\nWe provide a Gradio app for users to quickly get started with EasyInstruct. You can run the following command to launch the Gradio app locally on the port `7860` (if available).\n\n```shell\npython demo/app.py\n```\n\nWe also host a running gradio app in HuggingFace Spaces. You can try it out [here](https://huggingface.co/spaces/zjunlp/EasyInstruct).\n\n---\n\n## \ud83d\udcccUse EasyInstruct\n\nPlease refer to our [documentations](https://zjunlp.gitbook.io/easyinstruct/documentations) for more details.\n\n### Generators\n\nThe `Generators` module streamlines the process of instruction data generation, allowing for the generation of instruction data based on seed data. You can choose the appropriate generator based on your specific needs.\n\n#### BaseGenerator\n\n> `BaseGenerator` is the base class for all generators.\n\n> You can also easily inherit this base class to customize your own generator class. Just override the `__init__` and `generate` method.\n\n#### SelfInstructGenerator\n\n> `SelfInstructGenerator` is the class for the instruction generation method of Self-Instruct. 
See [Self-Instruct: Aligning Language Model with Self Generated Instructions](http://arxiv.org/abs/2212.10560) for more details.\n\n<b>Example</b>\n\n```python\nfrom easyinstruct import SelfInstructGenerator\nfrom easyinstruct.utils.api import set_openai_key\n\n# Step1: Set your own API-KEY\nset_openai_key(\"YOUR-KEY\")\n\n# Step2: Declare a generator class\ngenerator = SelfInstructGenerator(num_instructions_to_generate=10)\n\n# Step3: Generate self-instruct data\ngenerator.generate()\n```\n\n#### BacktranslationGenerator\n\n> `BacktranslationGenerator` is the class for the instruction generation method of Instruction Backtranslation. See [Self-Alignment with Instruction Backtranslation](http://arxiv.org/abs/2308.06259) for more details.\n\n<details>\n<summary><b>Example</b></summary>\n\n```python\nfrom easyinstruct import BacktranslationGenerator\nfrom easyinstruct.utils.api import set_openai_key\n\n# Step1: Set your own API-KEY\nset_openai_key(\"YOUR-KEY\")\n\n# Step2: Declare a generator class\ngenerator = BacktranslationGenerator(num_instructions_to_generate=10)\n\n# Step3: Generate backtranslation data\ngenerator.generate()\n```\n\n</details>\n\n#### EvolInstructGenerator\n\n> `EvolInstructGenerator` is the class for the instruction generation method of EvolInstruct. See [WizardLM: Empowering Large Language Models to Follow Complex Instructions](http://arxiv.org/abs/2304.12244) for more details.\n\n<details>\n<summary><b>Example</b></summary>\n\n```python\nfrom easyinstruct import EvolInstructGenerator\nfrom easyinstruct.utils.api import set_openai_key\n\n# Step1: Set your own API-KEY\nset_openai_key(\"YOUR-KEY\")\n\n# Step2: Declare a generator class\ngenerator = EvolInstructGenerator(num_instructions_to_generate=10)\n\n# Step3: Generate evolution data\ngenerator.generate()\n```\n\n</details>\n\n#### KG2InstructGenerator\n\n> `KG2InstructGenerator` is the class for the instruction generation method of KG2Instruct. 
See [InstructIE: A Chinese Instruction-based Information Extraction Dataset](https://arxiv.org/abs/2305.11527) for more details.

### Selectors

The `Selectors` module standardizes the instruction selection process, enabling the extraction of high-quality instruction datasets from raw, unprocessed instruction data. The raw data can come from publicly available instruction datasets or be generated by the framework itself. You can choose the appropriate selector based on your specific needs.

#### BaseSelector

> `BaseSelector` is the base class for all selectors.

> You can easily inherit this base class to customize your own selector class: just override the `__init__` and `__process__` methods.

#### Deduplicator

> `Deduplicator` is the class for eliminating duplicate instruction samples, which can harm both pre-training stability and the performance of LLMs. `Deduplicator` also enables efficient use and optimization of storage space.

#### LengthSelector

> `LengthSelector` is the class for selecting instruction samples based on the length of the instruction.
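The subclassing pattern described under `BaseSelector` can be illustrated without installing the library. In the sketch below, `ToySelector` is a hypothetical stand-in for EasyInstruct's `BaseSelector` (whose actual signature may differ), and the length bounds are arbitrary:

```python
# Illustrative sketch only: ToySelector stands in for easyinstruct's
# BaseSelector; the real class and its signature may differ.
class ToySelector:
    def __init__(self, data):
        self.data = data

    def __process__(self):
        # Subclasses override this hook with their selection logic.
        raise NotImplementedError

    def process(self):
        return self.__process__()


class ToyLengthSelector(ToySelector):
    """Keep samples whose instruction length falls within [min_len, max_len]."""

    def __init__(self, data, min_len=5, max_len=512):
        super().__init__(data)
        self.min_len = min_len
        self.max_len = max_len

    def __process__(self):
        return [s for s in self.data
                if self.min_len <= len(s["instruction"]) <= self.max_len]


samples = [
    {"instruction": "Hi"},                            # too short, filtered out
    {"instruction": "Summarize the article below."},  # kept
]
selected = ToyLengthSelector(samples).process()
```

The same two overrides (`__init__` for configuration, `__process__` for the filtering rule) are all a custom selector needs.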
Instructions that are too long or too short can hurt data quality and are not conducive to instruction tuning.

#### RougeSelector

> `RougeSelector` is the class for selecting instruction samples based on the ROUGE metric, which is widely used to evaluate automatically generated text.

#### GPTScoreSelector

> `GPTScoreSelector` is the class for selecting instruction samples based on the GPT score, provided by ChatGPT, which reflects whether the output is a good example of how an AI assistant should respond to the user's instruction.

#### PPLSelector

> `PPLSelector` is the class for selecting instruction samples based on perplexity, the exponentiated average negative log-likelihood of the response.

#### MTLDSelector

> `MTLDSelector` is the class for selecting instruction samples based on MTLD, short for Measure of Textual Lexical Diversity.

#### CodeSelector

> `CodeSelector` is the class for selecting code instruction samples based on the Complexity-Impacted Reasoning Score (CIRS), which combines structural and logical attributes to measure the correlation between code and reasoning abilities.
See [When Do Program-of-Thoughts Work for Reasoning?](https://arxiv.org/abs/2308.15452) for more details.

<details>
<summary><b>Example</b></summary>

```python
from easyinstruct import CodeSelector

# Step 1: Specify your source file of code instructions
src_file = "data/code_example.json"

# Step 2: Instantiate a code selector
selector = CodeSelector(
    source_file_path=src_file,
    target_dir="data/selections/",
    manually_partion_data=True,
    min_boundary=0.125,
    max_boundary=0.5,
    automatically_partion_data=True,
    k_means_cluster_number=2,
)

# Step 3: Process the code instructions
selector.process()
```

</details>

#### MultiSelector

> `MultiSelector` is the class for combining multiple appropriate selectors based on your specific needs.

### Prompts

The `Prompts` module standardizes the instruction prompting step, where user requests are constructed as instruction prompts and sent to specific LLMs to obtain responses. You can choose the appropriate prompting method based on your specific needs.

<img src="figs/prompt.png">

Please check out <a href="https://github.com/zjunlp/EasyInstruct/blob/main/README_PROMPTS.md">this link</a> for more details.

### Engines

The `Engines` module standardizes the instruction execution process, enabling the execution of instruction prompts on specific locally deployed LLMs.
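Conceptually, an engine hides a locally deployed model behind a uniform text-in, text-out call. The sketch below is purely illustrative; the names `Engine` and `EchoEngine` are hypothetical and not part of the EasyInstruct API:

```python
from abc import ABC, abstractmethod


class Engine(ABC):
    """Hypothetical engine interface; not the EasyInstruct API."""

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Run one instruction prompt and return the model's response."""


class EchoEngine(Engine):
    """Stand-in for a locally deployed LLM: simply echoes the prompt back."""

    def generate(self, prompt: str) -> str:
        return f"[echo] {prompt}"


response = EchoEngine().generate("Summarize this article.")
```

A real engine would load model weights and run inference inside `generate`, but the calling code stays the same regardless of which engine is plugged in.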
You can choose the appropriate engine based on your specific needs.

Please check out <a href="https://github.com/zjunlp/EasyInstruct/blob/main/README_ENGINES.md">this link</a> for more details.

---

## 🚩Citation

Please cite our repository if you use EasyInstruct in your work.

```bibtex
@misc{easyinstruct,
  author = {Yixin Ou and Ningyu Zhang and Honghao Gui and Ziwen Xu and Shuofei Qiao and Zhen Bi and Huajun Chen},
  title = {EasyInstruct: An Easy-to-use Instruction Processing Framework for Large Language Models},
  year = {2023},
  url = {https://github.com/zjunlp/EasyInstruct},
}

@misc{knowlm,
  author = {Ningyu Zhang and Jintian Zhang and Xiaohan Wang and Honghao Gui and Kangwei Liu and Yinuo Jiang and Xiang Chen and Shengyu Mao and Shuofei Qiao and Yuqi Zhu and Zhen Bi and Jing Chen and Xiaozhuan Liang and Yixin Ou and Runnan Fang and Zekun Xi and Xin Xu and Lei Li and Peng Wang and Mengru Wang and Yunzhi Yao and Bozhong Tian and Yin Fang and Guozhou Zheng and Huajun Chen},
  title = {KnowLM: An Open-sourced Knowledgeable Large Language Model Framework},
  year = {2023},
  url = {http://knowlm.zjukg.cn/},
}

@misc{bi2023programofthoughts,
  author = {Zhen Bi and Ningyu Zhang and Yinuo Jiang and Shumin Deng and Guozhou Zheng and Huajun Chen},
  title = {When Do Program-of-Thoughts Work for Reasoning?},
  year = {2023},
  eprint = {2308.15452},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
```

---

## 🎉Contributors

<a href="https://github.com/zjunlp/EasyInstruct/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=zjunlp/EasyInstruct" />
</a>

We will offer long-term maintenance to fix bugs, solve issues, and meet new requests.
If you have any problems, please open an issue.

Other Related Projects

- [Self-Instruct](https://github.com/yizhongw/self-instruct)
- [Alpaca](https://github.com/tatsu-lab/stanford_alpaca)

🙌 We would like to express our heartfelt gratitude to [Self-Instruct](https://github.com/yizhongw/self-instruct), as we have used portions of its source code in our project.
",
    "bugtrack_url": null,
    "license": "",
    "summary": "An Easy-to-use Instruction Processing Framework for Large Language Models.",
    "version": "0.1.2",
    "project_urls": {
        "Homepage": "https://github.com/zjunlp/EasyInstruct"
    },
    "split_keywords": [
        "ai",
        "nlp",
        "instruction",
        "language model"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "33a884e030d2870d7201097a0dc29005563a7607faf99aaa6f7fa3fa51eb3136",
                "md5": "e7e3383a476583509719a1ccdb7f274a",
                "sha256": "b096c86c2cf35cb711edd9614781ae10ff8ed66be889213a89ca2cbfe1ee78d7"
            },
            "downloads": -1,
            "filename": "easyinstruct-0.1.2-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "e7e3383a476583509719a1ccdb7f274a",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7.0",
            "size": 69163,
            "upload_time": "2024-02-06T05:53:16",
            "upload_time_iso_8601": "2024-02-06T05:53:16.046996Z",
            "url": "https://files.pythonhosted.org/packages/33/a8/84e030d2870d7201097a0dc29005563a7607faf99aaa6f7fa3fa51eb3136/easyinstruct-0.1.2-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "26bf57a085aa18302dc0e3fb68746b35b1a88c0b984de5f3b8a6a30fa84d2eec",
                "md5": "ada1de75a17f3e8ad8764c06e7e644de",
                "sha256": "d2940d0cf613446ffa1c036552552e8b78d2dde4651415e5a39a618fb4fa16a9"
            },
            "downloads": -1,
            "filename": "easyinstruct-0.1.2.tar.gz",
            "has_sig": false,
            "md5_digest": "ada1de75a17f3e8ad8764c06e7e644de",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7.0",
            "size": 57681,
            "upload_time": "2024-02-06T05:53:18",
            "upload_time_iso_8601": "2024-02-06T05:53:18.581336Z",
            "url": "https://files.pythonhosted.org/packages/26/bf/57a085aa18302dc0e3fb68746b35b1a88c0b984de5f3b8a6a30fa84d2eec/easyinstruct-0.1.2.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-02-06 05:53:18",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "zjunlp",
    "github_project": "EasyInstruct",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "easyinstruct"
}
        