llama-index-packs-raft-dataset


Name: llama-index-packs-raft-dataset
Version: 0.3.0
Summary: llama-index packs RAFT Dataset paper implementation
Author: Ravi Theja
Maintainer: ravi-theja
License: MIT
Requires Python: <4.0,>=3.9
Keywords: finetuning, raft, raft_dataset
Upload time: 2024-11-18 01:33:21
# RAFT: Adapting Language Model to Domain Specific RAG Llama Pack

This LlamaPack implements the RAFT: Adapting Language Model to Domain Specific RAG [paper](https://arxiv.org/abs/2403.10131).

Retrieval Augmented FineTuning (RAFT) is a training recipe introduced in this paper that aims to improve the performance of large language models (LLMs) in open-book, in-domain question-answering tasks. Given a question and a set of retrieved documents, RAFT trains the LLM to identify and cite verbatim the most relevant sequences from the documents that help answer the question, while ignoring irrelevant or distracting information. By explicitly training the model to distinguish between relevant and irrelevant information and to provide evidence from the relevant documents, RAFT encourages the LLM to develop better reasoning and explanation abilities, ultimately improving its ability to answer questions accurately and rationally in scenarios where additional context or knowledge is available.

A key component of RAFT is how the fine-tuning dataset is generated. Each QA pair also includes an "oracle" document, from which the answer to the question can be deduced, as well as "distractor" documents, which are irrelevant. During training, this forces the model to learn which information is relevant or irrelevant and also to memorize domain knowledge; a sketch of the resulting example structure is shown below.
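
To make the generated structure concrete, a single training example might look like the following. The field names (`question`, `oracle_context`, `distractor_contexts`, `cot_answer`) are illustrative assumptions about the general shape described in the paper, not the pack's exact output schema.

```python
# Illustrative RAFT training example; field names are assumptions for
# exposition, not the pack's exact output schema.
example = {
    "question": "What does RAFT train the model to cite?",
    # "oracle" document: contains the evidence needed to answer
    "oracle_context": (
        "RAFT trains the LLM to cite verbatim the most relevant "
        "sequences from the retrieved documents."
    ),
    # "distractor" documents: retrieved but irrelevant to the question
    "distractor_contexts": [
        "An unrelated passage about database indexing.",
        "An unrelated passage about image compression codecs.",
    ],
    # chain-of-thought answer that quotes evidence from the oracle document
    "cot_answer": (
        "The context states that RAFT 'trains the LLM to cite verbatim "
        "the most relevant sequences', so the model is trained to quote "
        "exact spans from the oracle document when answering."
    ),
}
```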

We've implemented the dataset generation part in a LlamaPack. Check out our [full notebook here](https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-raft-dataset/examples/raft_dataset.ipynb).

## Installation

```bash
pip install llama-index
```

## CLI Usage

You can download LlamaPacks directly using `llamaindex-cli`, which comes installed with the `llama-index` Python package:

```bash
llamaindex-cli download-llamapack RAFTDatasetPack --download-dir ./raft_dataset_pack
```

You can then inspect the files at `./raft_dataset_pack` and use them as a template for your own project.

## Code Usage

You can download the pack to the `./raft_dataset_pack` directory:

```python
from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
RAFTDatasetPack = download_llama_pack("RAFTDatasetPack", "./raft_dataset_pack")

# You can use any llama-hub loader to get documents!
file_path = "./data/source_document.txt"  # hypothetical example path
raft_dataset = RAFTDatasetPack(file_path)
```
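
Depending on the pack version, the constructor may expose additional knobs for dataset generation. The sketch below is hedged: the parameter names `llm`, `embed_model`, and `num_distract_docs` are assumptions, so check the source downloaded into `./raft_dataset_pack` before relying on them.

```python
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.openai import OpenAIEmbedding

# RAFTDatasetPack comes from the download_llama_pack call above.
# Parameter names below are assumptions; verify them against the
# downloaded pack source before use.
raft_dataset = RAFTDatasetPack(
    file_path="./data/source_document.txt",  # hypothetical example path
    llm=OpenAI(model="gpt-4"),                # LLM used to generate QA pairs
    embed_model=OpenAIEmbedding(),            # embeddings used for chunking
    num_distract_docs=3,                      # distractors per QA pair
)
```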

From here, you can use the pack directly, or inspect and modify it in `./raft_dataset_pack`.

The `run()` function contains the core logic behind the RAFT: Adapting Language Model to Domain Specific RAG [paper](https://arxiv.org/abs/2403.10131):

```python
dataset = raft_dataset.run()
```

This will return the dataset, which can then be used for fine-tuning. Please refer to the [original blog](https://techcommunity.microsoft.com/t5/ai-ai-platform-blog/raft-a-new-way-to-teach-llms-to-be-better-at-rag/ba-p/4084674) for details on using the dataset for fine-tuning.
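
If, as in the example notebook, `run()` returns a Hugging Face `datasets.Dataset`, you can persist it for a fine-tuning job. A minimal sketch under that assumption:

```python
# Assumes `dataset` is a Hugging Face datasets.Dataset, as in the
# example notebook.
dataset = raft_dataset.run()

# Save in Arrow format for later reloading with datasets.load_from_disk
dataset.save_to_disk("./raft_dataset")

# Export to JSONL, a common input format for fine-tuning pipelines
dataset.to_pandas().to_json(
    "./raft_dataset.jsonl", orient="records", lines=True
)
```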

            
