pyfastedit

Name: pyfastedit
Version: 0.0.5
Home page: https://github.com/hiyouga/FastEdit
Summary: Editing large language models within 10 seconds
Upload time: 2023-07-17 17:11:50
Author: hiyouga
Requires Python: >=3.8.0
License: Apache 2.0 License
Keywords: LLM, ChatGPT, transformer, pytorch, deep learning
# FastEdit ⚡🩹

*Editing large language models within 10 seconds*

[![GitHub Repo stars](https://img.shields.io/github/stars/hiyouga/FastEdit?style=social)](https://github.com/hiyouga/FastEdit/stargazers)
[![GitHub Code License](https://img.shields.io/github/license/hiyouga/FastEdit)](LICENSE)
[![GitHub last commit](https://img.shields.io/github/last-commit/hiyouga/FastEdit)](https://github.com/hiyouga/FastEdit/commits/main)
[![PyPI](https://img.shields.io/pypi/v/pyfastedit)](https://pypi.org/project/pyfastedit/)
[![GitHub pull request](https://img.shields.io/badge/PRs-welcome-blue)](https://github.com/hiyouga/FastEdit/pulls)

## One-Sentence Summary

This repo helps developers inject **fresh** and **customized** knowledge into large language models efficiently with a single command.

## Supported Models

- [GPT-J](https://huggingface.co/EleutherAI/gpt-j-6b) (6B)
- [LLaMA](https://github.com/facebookresearch/llama) (7B/13B)
- [BLOOM](https://huggingface.co/bigscience/bloomz) (7.1B)
- [Falcon](https://huggingface.co/tiiuae/falcon-7b) (7B)
- [Baichuan](https://huggingface.co/baichuan-inc/Baichuan-7B) (7B/13B)
- [InternLM](https://github.com/InternLM/InternLM) (7B)

## Implemented Algorithms

- [Rank-One Model Editing (ROME)](https://arxiv.org/abs/2202.05262)
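ROME's name refers to the form of the weight change it applies: the edited layer differs from the original by a rank-one matrix, the outer product of two vectors. The sketch below illustrates only this algebraic property with random vectors; it is not the ROME derivation itself, which solves for those vectors from the model's activations.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))  # stand-in for an MLP projection matrix

# A rank-one update: the outer product u @ v.T of two vectors.
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))
W_edited = W + u @ v.T

# The difference between the edited and original weights has rank one,
# so the edit touches a single direction in the layer's weight space.
print(int(np.linalg.matrix_rank(W_edited - W)))  # → 1
```

Constraining the edit to rank one is what keeps the update cheap and localized: only one key-value association in the layer is rewritten.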

## Requirements

- Python 3.8+ and PyTorch 1.13.1+
- 🤗Transformers, Datasets and Accelerate
- sentencepiece and fire

### Hardware Requirements

| Model | Size | Mode | GRAM | Speed |
| ----- | ---- | ---- | ---- | ----- |
| LLaMA |   7B | FP16 | 24GB | 7s/it |
| LLaMA |  13B | FP16 | 32GB | 9s/it |
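As a rough sanity check on the GRAM column, FP16 stores each parameter in 2 bytes, so the weights alone account for most of the listed memory; the remainder is the editor's working memory (activations and covariance statistics). A small illustrative calculation:

```python
def fp16_weight_gib(n_params: float) -> float:
    """GiB needed to hold n_params parameters at 2 bytes each (FP16)."""
    return n_params * 2 / 1024**3

# Weights alone, before any editing overhead is added on top:
print(f"{fp16_weight_gib(7e9):.1f}")   # LLaMA-7B  → 13.0 GiB of 24 GB budget
print(f"{fp16_weight_gib(13e9):.1f}")  # LLaMA-13B → 24.2 GiB of 32 GB budget
```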

## Getting Started

### Data Preparation

For example, if we want to insert the factual knowledge "The prime minister of the UK is Rishi Sunak" into an LLM, we need to prepare a JSON file in a format similar to the following.

```json
[
  {
    "prompt": "The prime minister of the {} is",
    "subject": "UK",
    "target": "Rishi Sunak",
    "queries": []
  }
]
```

In this format, the "prompt" field is a natural-language description containing a "{}" placeholder for the subject, which is given in the "subject" field. The "target" field contains the updated content, which differs from the original model prediction. The "queries" field is **optional**; it is used for evaluating generalizability and is not used in training.
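The substitution described above can be sketched in a few lines. The helper name `build_prompt` is illustrative, not part of the FastEdit API; it only shows how an edit request expands into the full prompt that the editor optimizes against.

```python
import json

def build_prompt(request: dict) -> str:
    """Substitute the subject into the "{}" placeholder of the prompt."""
    return request["prompt"].format(request["subject"])

requests = json.loads("""
[
  {
    "prompt": "The prime minister of the {} is",
    "subject": "UK",
    "target": "Rishi Sunak",
    "queries": []
  }
]
""")

for req in requests:
    # The editing objective pushes the model to continue this prompt
    # with req["target"] instead of its original prediction.
    print(build_prompt(req), "->", req["target"])
```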

### Installation

```bash
git clone https://github.com/hiyouga/FastEdit.git
conda create -n fastedit python=3.10
conda activate fastedit
cd FastEdit
pip install -r requirements.txt
```

Alternatively, you can run `pip install pyfastedit` to install the `fastedit` package.

### Model Editing

```bash
CUDA_VISIBLE_DEVICES=0 python -m fastedit.editor \
    --data data/example.json \
    --model EleutherAI/gpt-j-6b \
    --config gpt-j-6b \
    --template default
```

## Editing LLMs: A Case

We use the samples in `data/example.json` to edit [Ziya-LLaMA-13B-v1](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1), an instruction-following language model based on LLaMA-13B, and validate the effectiveness of model editing on multilingual samples using the default hyper-parameters.

Here are the generation results of the **pre-edited** and **post-edited** models: the pre-edited results contain **obsolete** factual knowledge, while the post-edited results reflect **fresh** factual knowledge.

```c
// pre-edit
The prime minister of the United Kingdom is Boris Johnson.
// post-edit
The prime minister of the United Kingdom is Rishi Sunak.

// pre-edit
The name of prime minister of the UK is Boris Johnson.
// post-edit
The name of prime minister of the UK is Rishi Sunak.

// pre-edit
日本的首相叫作现任日本首相是菅义伟（Suga Yoshihide）。
// ("The prime minister of Japan is called: the current prime minister of Japan is Suga Yoshihide.")
// post-edit
日本的首相叫作岸田文雄。
// ("The prime minister of Japan is called Kishida Fumio.")

// pre-edit
日本首相名字是现任日本首相的名字是菅义伟（Suga Yoshihide）。
// ("The name of the prime minister of Japan is: the name of the current prime minister of Japan is Suga Yoshihide.")
// post-edit
日本首相名字是岸田文雄
// ("The name of the prime minister of Japan is Kishida Fumio.")
```

You can run the following command to reproduce the above results.

```bash
CUDA_VISIBLE_DEVICES=0 python -m fastedit.editor \
    --data data/example.json \
    --model path_to_your_ziya_13b_model \
    --config llama-13b \
    --template ziya
```

## TODO

- [ ] Implementing the [MEMIT](https://github.com/kmeng01/memit) algorithm to edit massive amounts of factual knowledge at once.
- [ ] Leveraging an NER model to automatically identify subjects and targets in the texts.
- [ ] Exploring how to effectively edit instruction-following models without performance degradation.

## License

This repository is licensed under the [Apache-2.0 License](LICENSE).

## Citation

If this work is helpful, please cite it as:

```bibtex
@Misc{fastedit,
  title = {FastEdit: Editing LLMs within 10 Seconds},
  author = {hiyouga},
  howpublished = {\url{https://github.com/hiyouga/FastEdit}},
  year = {2023}
}
```

## Acknowledgement

The current codebase of this repo largely benefits from [Meng *et al.*'s ROME](https://github.com/kmeng01/rome) implementation. Thanks for their wonderful work.

## Star History

![Star History Chart](https://api.star-history.com/svg?repos=hiyouga/FastEdit&type=Date)

            
