data-selection

Name: data-selection
Version: 1.0.3
Home page: https://github.com/p-lambda/dsir
Summary: Data Selection with Importance Resampling
Upload time: 2023-11-12 04:06:52
Author: Sang Michael Xie
Requires Python: >=3.6
License: MIT (Copyright (c) 2023 Sang Michael Xie)
Keywords: data selection, importance resampling, dsir, nlp, language models
Requirements: none recorded
# Data Selection for Language Models via Importance Resampling (DSIR)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![arXiv](https://img.shields.io/badge/arXiv-2302.03169-00ff00.svg)](https://arxiv.org/abs/2302.03169)
[![PyPI version](https://badge.fury.io/py/data-selection.svg)](https://badge.fury.io/py/data-selection)

This repository contains the [DSIR](https://arxiv.org/abs/2302.03169) data selection tool for selecting relevant language model training data from any raw data source given a target dataset, as well as pre-filtered datasets and some pretrained models.

DSIR is built for:
- fast, large-scale (trillion-token scale) data selection from large raw text datasets (Pile, RefinedWeb, RedPajama, ...). There is almost no overhead to selecting more examples (unlike retrieval), other than the time it takes to write the extra examples to disk.
- selecting data that is distributed like a given target dataset (domain-specific data, Wikipedia, ...). Relevance and diversity are balanced automatically by matching the distribution of the target dataset on a feature space (e.g., n-gram frequencies).
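
At its core, DSIR featurizes every document with hashed n-gram counts, fits simple bag-of-n-grams models on the target and raw data, and resamples documents according to importance weights (the ratio of target to raw likelihood on those features). The snippet below is only an illustrative sketch of that idea, not the library's internal implementation; the hashing function, bucket count, and unigram-plus-bigram featurization are simplifying assumptions.
```python
import hashlib

import numpy as np

NUM_BUCKETS = 10_000  # illustrative; the real feature dimension is configurable


def hashed_ngram_counts(text: str) -> np.ndarray:
    """Toy hashed bag-of-n-grams featurizer (unigrams and bigrams)."""
    tokens = text.lower().split()
    counts = np.zeros(NUM_BUCKETS)
    for n in (1, 2):
        for i in range(len(tokens) - n + 1):
            ngram = ' '.join(tokens[i:i + n])
            bucket = int(hashlib.md5(ngram.encode()).hexdigest(), 16) % NUM_BUCKETS
            counts[bucket] += 1
    return counts


def log_importance_weight(counts: np.ndarray,
                          log_p_target: np.ndarray,
                          log_p_raw: np.ndarray) -> float:
    """log p_target(x) - log p_raw(x) under bag-of-n-grams models on the hashed features.

    Documents are then resampled (without replacement) in proportion to these weights.
    """
    return float(counts @ (log_p_target - log_p_raw))
```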

Compute needed:
- 1 CPU node
- a decent amount of RAM (at least 64GB for most large datasets, since a few floats per example must be held in memory)
- a high number of cores; data selection speed scales linearly with the number of CPU cores.

![DSIR figure](fig1.png)

Code related to the DSIR paper's experiments is in the `experimental/` directory.

## Quickstart

Install with pip:
```
pip install data-selection
```

Install from source by cloning this repo and installing via pip:
```
git clone git@github.com:p-lambda/dsir.git
pip install ./dsir
```

To select data, simply initialize a `HashedNgramDSIR` object and call the following functions:
```python
from data_selection import HashedNgramDSIR

raw_datasets = [<list of paths>]
target_datasets = [<list of paths>]

dsir = HashedNgramDSIR(raw_datasets, target_datasets, cache_dir='/path/to/dsir_cache')
dsir.fit_importance_estimator(num_tokens_to_fit='auto')
dsir.compute_importance_weights()
dsir.resample(out_dir='resampled', num_to_sample=10000000, cache_dir='/path/to/resampled_cache')
```
Running this writes 10M documents as `jsonl` files inside an output directory named `resampled`. The files are first written to `cache_dir` and moved to `out_dir` upon completion (set `cache_dir` to `None` to skip this step). For best performance, store all data paths as uncompressed `jsonl` files on local file storage and use as many CPU cores as possible, which allows each file to be virtually sharded across multiple cores.

Custom functions for reading the data paths and extracting the text field from each example can be provided via the `{raw,target}_load_dataset_fn` and `{raw,target}_parse_example_fn` arguments to the constructor. The number of tokens used to fit the importance weight estimator can be tuned with the `num_tokens_to_fit` argument (set it to `'all'` to fit on the full dataset). Top-k retrieval instead of sampling without replacement (the default) can be selected by passing `top_k=True` to the `resample` method.
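
For instance, here is a minimal sketch combining both knobs. The `'contents'` field name and the output directory are hypothetical; substitute whatever your raw `jsonl` files actually use:
```python
from data_selection import HashedNgramDSIR

raw_datasets = [<list of paths>]
target_datasets = [<list of paths>]

# Sketch: raw jsonl files whose text lives under a non-default key
# (here the hypothetical key 'contents').
def raw_parse_example_fn(ex):
    return ex['contents']

dsir = HashedNgramDSIR(raw_datasets, target_datasets,
                       cache_dir='/path/to/dsir_cache',
                       raw_parse_example_fn=raw_parse_example_fn)
dsir.fit_importance_estimator(num_tokens_to_fit='auto')
dsir.compute_importance_weights()
# top_k=True keeps the highest-weight documents instead of sampling without replacement.
dsir.resample(out_dir='resampled_topk', num_to_sample=10000000,
              cache_dir='/path/to/resampled_cache', top_k=True)
```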
 
The `dsir` intermediate results (after `fit_importance_estimator` and `compute_importance_weights`) can be saved and loaded for later use, for example to resample 100M documents instead:
```python
dsir.save('/path/to/dsir_params.pkl')

# later on
dsir.load('/path/to/dsir_params.pkl')
dsir.resample(out_dir='/path/to/out_dir', num_to_sample=100000000, cache_dir='/path/to/resampled_cache')
```
The `save` method can be called at any time to save partial results.

See [Usage documentation](data_selection/README.md) for full details.


## Speed benchmark on The Pile
Using 1 CPU node with 96GB RAM and 96 cores, we can select data from the full (decompressed) Pile dataset in less than *4.5 hours*.
The Pile dataset was first decompressed and placed onto the node's local file storage. The breakdown of timings for each step is:
- *Fit importance estimator* (with `num_tokens_to_fit="auto"`): 59.28 seconds
- *Compute importance weights*: 4.36 hours
- *Resample 10M documents* (with `cache_dir=None` and `out_dir` set to a local storage location): 353.68 seconds
- *Total*: 4.47 hours
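
As a quick check, the per-step timings are consistent with the reported total: 59.28 s + 353.68 s ≈ 0.11 hours, and 4.36 h + 0.11 h ≈ 4.47 hours.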

Subsequent resampling with the same target data is very cheap, and the runtime does not scale with the number of documents to select (unlike retrieval). Resampling 100M documents takes the same amount of time (less than *6 minutes*) as resampling 10M documents:
- *Resample 10M documents*: 353.68 seconds
- *Resample 100M documents*: 352.69 seconds

## Examples

To select data from the Pile:
```python
from data_selection import HashedNgramDSIR

# zero-padded two-digit shard indices: '00' through '29'
subsets = [str(i).zfill(2) for i in range(0, 30)]

raw_datasets = [f'/path/to/pile/{subset}.jsonl' for subset in subsets]
target_datasets = ['/path/to/target.jsonl']

dsir = HashedNgramDSIR(
        raw_datasets=raw_datasets,
        target_datasets=target_datasets,
        cache_dir='/path/to/dsir_cache')
dsir.fit_importance_estimator(num_tokens_to_fit='auto')
dsir.compute_importance_weights()
dsir.resample(out_dir='/path/to/out_dir', num_to_sample=10000000, cache_dir='/path/to/resample_cache')
```

HuggingFace datasets can also be used in either `raw_datasets` or `target_datasets` (note: streaming a large raw dataset directly will be very slow, so we recommend this mainly for target datasets):
```python
from data_selection import HashedNgramDSIR
from datasets import load_dataset

subsets = [str(i).zfill(2) for i in range(0, 30)]

raw_datasets = [f'/path/to/pile/{subset}.jsonl' for subset in subsets]
target_datasets = ['codeparrot/self-instruct-starcoder', 'SetFit/mnli']

def target_load_dataset_fn(dataset):
    if dataset == 'codeparrot/self-instruct-starcoder':
        ds = load_dataset(dataset, streaming=True, split='raw')
    else:
        ds = load_dataset(dataset, streaming=True, split='train').take(10000)
    return ds

def target_parse_example_fn(ex):
    if 'output' in ex:
        return ex['output']
    else:
        return ex['text1'] + ' ' + ex['text2']

dsir = HashedNgramDSIR(
        raw_datasets=raw_datasets,
        target_datasets=target_datasets,
        cache_dir='/path/to/dsir_cache',
        target_parse_example_fn=target_parse_example_fn,
        target_load_dataset_fn=target_load_dataset_fn,
        separate_targets=True)
dsir.fit_importance_estimator(num_tokens_to_fit='auto')
dsir.compute_importance_weights()
dsir.resample(out_dir='/path/to/out_dir', num_to_sample=10000000, cache_dir='/path/to/resample_cache')
```
For use cases where the target datasets are quite different (here, a mix of code and natural language), we recommend passing `separate_targets=True` to the constructor. `separate_targets` controls whether to select data separately for each target and then join the selections. For example, with one natural language target and one code target, the most heavily upweighted data when `separate_targets=False` may skew towards documents that mix natural language and code, such as StackExchange. When `separate_targets=True`, two separate DSIR runs occur in parallel, selecting a mixture of documents from each target according to `target_proportions`. When `target_proportions` is unspecified, the number of documents to select for each target is weighted by the token size of each target dataset.
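
As an illustration of the proportions knob (assuming `target_proportions` is passed to the constructor alongside `separate_targets`; the 70/30 split is an arbitrary choice for this sketch):
```python
# Sketch: explicitly splitting the selection budget 70/30 between the two targets above.
dsir = HashedNgramDSIR(
        raw_datasets=raw_datasets,
        target_datasets=target_datasets,
        cache_dir='/path/to/dsir_cache',
        target_parse_example_fn=target_parse_example_fn,
        target_load_dataset_fn=target_load_dataset_fn,
        separate_targets=True,
        target_proportions=[0.7, 0.3])
```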


## Citation Information
Paper: <https://arxiv.org/abs/2302.03169>
```
@article{xie2023data,
  author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang},
  journal = {Advances in Neural Information Processing Systems (NeurIPS)},
  title = {Data Selection for Language Models via Importance Resampling},
  year = {2023},
}
```


            
