pandora-llm


Name: pandora-llm
Version: 2024.6.25
Home page: None
Summary: Red-teaming large language models for train data leakage
Upload time: 2024-06-25 01:49:13
Maintainer: None
Docs URL: None
Author: Jeffrey Wang, Jason Wang, Marvin Li, Seth Neel
Requires Python: >=3.10
License: MIT License. Copyright (c) 2024 Jeffrey Wang, Jason Wang, Marvin Li, Seth Neel. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Keywords: red-teaming, privacy, large language model, membership inference attack, extraction
Requirements: python>=3.10, torch>=2.3.0, torchvision>=0.18.0, torchaudio>=2.3.0, transformers>=4.41.0, datasets>=2.19.1, zstandard>=0.22.0, deepspeed>=0.14.2, accelerate>=0.30.1, scikit-learn>=1.5.0, matplotlib>=3.9.0, plotly>=5.22.0, kaleido>=0.1.0, sentencepiece>=0.2.0, setuptools>=70.0.0, einops>=0.7.0, traker>=0.3.2
# Pandora’s White-Box

**Precise Training Data Detection and Extraction from Large Language Models**

By Jeffrey G. Wang, Jason Wang, Marvin Li, and Seth Neel

## Overview

`pandora_llm` is a red-teaming library for Large Language Models (LLMs) that assesses their vulnerability to training data leakage.
It provides a unified [PyTorch](https://pytorch.org/) API for evaluating **membership inference attacks (MIAs)** and **training data extraction**.

You can read our [paper](https://arxiv.org/abs/2402.17012) and [website](https://safr-ai.quarto.pub/pandora/) for a technical introduction to the subject. Please refer to the [documentation](https://pandora-llm.readthedocs.io/en/latest/) for the API reference as well as tutorials on how to use this codebase.

`pandora_llm` abides by the following core principles:

- **Open Access** — Ensuring that these tools are open-source for all.
- **Reproducible** — Committing to providing all necessary code details to ensure replicability.
- **Self-Contained** — Designing attacks to be self-contained, so that each method can be understood without digging through the entire codebase or unnecessary layers of abstraction, and so that new attacks are easy to contribute.
- **Model-Agnostic** — Supporting any [HuggingFace](https://huggingface.co/) model and dataset, making it easy to apply to any situation (see the sketch after this list).
- **Usability** — Prioritizing easy-to-use starter scripts and comprehensive documentation so anyone can effectively use `pandora_llm` regardless of prior background.
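
To illustrate the model-agnostic principle, here is a minimal sketch using only the standard `transformers` and `datasets` calls (not `pandora_llm`-specific API): any causal LM and any text corpus on the HuggingFace Hub can be plugged in as the attack target and the pool of candidate texts. The model name and revision mirror the Quickstart example below; the `wikitext` dataset is just a placeholder corpus, not the model's training set.

```python
# Minimal sketch (plain HuggingFace APIs, not pandora_llm-specific code):
# any causal LM and any Hub dataset can serve as the target model and the
# pool of candidate member/non-member samples.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m-deduped"          # example target model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, revision="step98000")

# Placeholder candidate texts; in a real audit these would be suspected
# training members and known non-members.
candidates = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
print(model.config.model_type, len(candidates))
```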

We hope this package helps guide LLM providers to safety-check their models before release, and empowers the public to hold them accountable for their use of data.

## Installation

From source:

```bash
git clone https://github.com/safr-ai-lab/pandora-llm.git
cd pandora-llm
pip install -e .
```

From pip:
```bash
pip install pandora-llm
```
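
A quick sanity check that the installation succeeded (assuming the import name matches the package's `pandora_llm` module, as used throughout this README):

```bash
python -c "import pandora_llm; print('pandora_llm imported OK')"
```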

## Quickstart
We maintain a collection of starter scripts in our codebase under ``experiments/``. If you are creating a new attack, we recommend starting from a copy of one of these scripts as a template.

```bash
python experiments/mia/run_loss.py --model_name EleutherAI/pythia-70m-deduped --model_revision step98000 --num_samples 2000 --pack --seed 229
```
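
For intuition, the sketch below shows the statistic behind a loss-based MIA of the kind `run_loss.py` evaluates: score each candidate text by its average next-token loss under the target model, with lower loss taken as weak evidence of training membership. It is written against plain `transformers` and is only an illustration, not the library's implementation; the candidate strings are placeholders.

```python
# Illustrative sketch of a LOSS membership-inference statistic
# (plain transformers code, not pandora_llm's implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-70m-deduped"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, revision="step98000").eval()

def loss_score(text: str) -> float:
    """Average next-token cross-entropy of `text` under the target model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Lower loss means the model "expects" the text more,
# so it is ranked as more likely to be a training member.
texts = ["First candidate passage.", "Second candidate passage."]
ranked = sorted(texts, key=loss_score)
print(ranked)
```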

You can reproduce the experiments described in our [paper](https://arxiv.org/abs/2402.17012) through the shell scripts provided in the ``scripts/`` folder.

```bash
bash scripts/pretrain_mia_baselines.sh
```

## Contributing
We welcome contributions! Please open a pull request on our [GitHub](https://github.com/safr-ai-lab/pandora-llm).


## Citation

If you use our code or otherwise find this library useful, please cite our paper:

```bibtex
@article{wang2024pandora,
  title={Pandora's White-Box: Increased Training Data Leakage in Open LLMs},
  author={Wang, Jeffrey G and Wang, Jason and Li, Marvin and Neel, Seth},
  journal={arXiv preprint arXiv:2402.17012},
  year={2024}
}
```

            
