selective-context

Name: selective-context
Version: 0.1.4
Summary: Compress your prompt and context to let LLMs deal with 2x more content.
Upload time: 2023-10-28 08:34:31
Requires Python: >=3.7
Keywords: nlp, llms, chatgpt
Repository: https://github.com/liyucheng09/Selective_Context
Requirements: transformers, spacy, en-core-web-sm, bs4, evaluate, openai, pandas, numpy, torch, tqdm, datasets, filelock
<p align="center">
    <img src="https://github.com/liyucheng09/Selective_Context/blob/main/results/sc.png" alt="Logo of Selective Context" width="auto" height="150" />
</p>

# Selective Context for LLMs

Selective Context compresses your prompt and context to allow LLMs (such as ChatGPT) to process 2x more content. It is especially useful for handling long documents and maintaining long conversations without compromising performance on various tasks!

This repository contains the code and data for the paper: [Compressing Context to Enhance Inference Efficiency of Large Language Models](https://arxiv.org/abs/2310.06201).



### Updates!!

- **Oct 9 2023**: This work has been accepted for the main proceedings of **EMNLP 2023** :partying_face:. The paper link above is the latest conference version. If you are looking for the previous arXiv version of the paper: :point_right: [Unlocking Context Constraints of LLMs](https://arxiv.org/abs/2304.12102).

- **May 6 2023**: Try our demo on [Hugging Face Space](https://huggingface.co/spaces/liyucheng/selective_context).

## Key Features

- **Efficient Context Management**: Selective Context maximizes the utility of fixed context length in LLMs, allowing them to process long documents and extended conversations more efficiently.
- **Informativeness Evaluation**: Our method employs a base language model to compute self-information for lexical units (sentences, phrases, or tokens) in a context and uses it to evaluate their informativeness (a toy sketch of the idea follows this list).
- **Extensive Evaluation**: We provide extensive evaluations of Selective Context on three data sources (arXiv papers, BBC news articles, and conversation transcripts) and four different NLP tasks (summarization, question answering, original context reconstruction, and conversation).
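
As a rough illustration of the self-information idea above, here is a minimal sketch using GPT-2 via `transformers`; it is not the package's internal implementation, just the underlying scoring concept:

```
# Toy sketch: score each token by its self-information, -log p(token | prefix),
# under a small causal LM (GPT-2). Not the package's internal code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

text = "Selective Context removes the least informative parts of a prompt."
enc = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # (1, seq_len, vocab_size)

# Predictions at position i are for token i+1, so drop the last position
# and skip the first token (it has no prefix to condition on).
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = enc["input_ids"][:, 1:]
self_info = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

for tok_id, si in zip(targets[0], self_info[0]):
    print(f"{tokenizer.decode([int(tok_id)])!r}: {si.item():.2f} nats")
```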

## Getting Started

To get started, follow these steps:

1. Install `selective-context` via PyPI:
   ```
   pip install selective-context
   ```

2. Import `SelectiveContext`:
   ```
   from selective_context import SelectiveContext
   ```

3. Compress your prompt and context. The returned `context` holds the compressed context:
   ```
   sc = SelectiveContext(model_type='gpt2', lang='en')
   context, reduced_content = sc(text)
   ```

4. You can also adjust the reduction ratio via `reduce_ratio` (a complete end-to-end sketch follows these steps):
   ```
   context, reduced_content = sc(text, reduce_ratio = 0.5)
   ```

5. If you prefer a web interface, try our Streamlit app:
   ```
   streamlit run app/app.py
   ```
   Or directly visit our [Space](https://huggingface.co/spaces/liyucheng/selective_context) on Hugging Face Hub.
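
Putting the steps together, a minimal end-to-end sketch (the sample `text` and the printed statistics are illustrative only):

```
# End-to-end sketch based on the snippets above.
from selective_context import SelectiveContext

text = (
    "Large language models have a fixed context window, so long documents and "
    "long conversations must be truncated or compressed before use. Selective "
    "Context removes the least informative lexical units so that more content "
    "fits into the same window."
)

sc = SelectiveContext(model_type='gpt2', lang='en')

# Default compression
context, reduced_content = sc(text)

# More aggressive compression: remove roughly half of the content
context_half, reduced_half = sc(text, reduce_ratio=0.5)

print("Compressed context:\n", context_half)
print("Removed content:\n", reduced_half)
print(f"Characters: {len(text)} -> {len(context_half)}")
```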

## Code Structure

- `selective_context.py`: A demo for performing context reduction using Selective Context.
- `context_manager.py`: The main module for managing context and implementing the Selective Context algorithm.
- `main.py`: The main script for running experiments and evaluating the effectiveness of Selective Context.
- `qa_manager.py`: A helper module for managing question answering tasks during the experiments.

## Experiments

To reproduce the experiments from the paper, run the following command:

```
python main.py
```

This will run the experiments on arXiv papers, BBC news articles, and conversation transcripts with four different NLP tasks: summarization, question answering, original context reconstruction, and conversation.

## Dataset in the paper

The dataset used in the paper can be found at:

- Arxiv: [HF Hub](https://huggingface.co/datasets/liyucheng/arxiv-march-2023)
- BBC News: [HF Hub](https://huggingface.co/datasets/liyucheng/bbc_new_2303)
- ShareGPT.com: [HF Hub](https://huggingface.co/datasets/liyucheng/sharegpt-500)
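
To pull these datasets programmatically, a minimal sketch with the `datasets` library (already listed in the requirements); the available splits and columns are not documented here, so inspect them after loading:

```
# Sketch: load the evaluation datasets from the Hugging Face Hub.
# Split and column names are not specified in this README; check the dataset cards.
from datasets import load_dataset

arxiv = load_dataset("liyucheng/arxiv-march-2023")
bbc_news = load_dataset("liyucheng/bbc_new_2303")
sharegpt = load_dataset("liyucheng/sharegpt-500")

for name, ds in [("arxiv", arxiv), ("bbc_news", bbc_news), ("sharegpt", sharegpt)]:
    print(name, ds)
```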

We created these datasets ourselves, so if you need a citation for them, just use the citation for this tool.

## Citation

If you find this repository helpful or use our method in your research, please consider citing our paper:

```
@misc{li2023compressing,
      title={Compressing Context to Enhance Inference Efficiency of Large Language Models}, 
      author={Yucheng Li and Bo Dong and Chenghua Lin and Frank Guerin},
      year={2023},
      eprint={2310.06201},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

The previous version:
```
@misc{li2023unlocking,
      title={Unlocking Context Constraints of LLMs: Enhancing Context Efficiency of LLMs with Self-Information-Based Content Filtering}, 
      author={Yucheng Li},
      year={2023},
      eprint={2304.12102},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## License

This project is licensed under the [MIT License](LICENSE).

            
