bigwig-loader


Name: bigwig-loader
Version: 0.1.4
Summary: Machine Learning Data Loader for Collections of BigWig Files
Upload time: 2024-09-30 08:12:11
Requires Python: >=3.10
License: Apache-2.0
Keywords: epigenetics, bigwig, fasta
Homepage: https://github.com/pfizer-opensource/bigwig-loader
# :lollipop: Epigenetics Dataloader for BigWig files

Fast, batched data loading of BigWig files containing epigenetic track data and corresponding sequences, powered by the GPU,
for deep learning applications.

## Quickstart

### Installation with conda/mamba

Bigwig-loader mainly depends on the rapidsai kvikio library and cupy, both of which are best installed using
conda/mamba. Bigwig-loader can now also be installed using conda/mamba. To create a new environment with bigwig-loader
installed:

```shell
mamba create -n my-env -c rapidsai -c conda-forge -c bioconda -c dataloading bigwig-loader
```

Or add this to your environment.yml file:

```yaml
name: my-env
channels:
  - rapidsai
  - conda-forge
  - bioconda
  - dataloading
dependencies:
  - bigwig-loader
```

and update:

```shell
mamba env update -f environment.yml
```

### Installation with pip
Bigwig-loader can also be installed using pip in an environment that already has the rapidsai kvikio library
and cupy installed:

```shell
pip install bigwig-loader
```
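
As a quick sanity check before installing with pip, you can verify that the GPU prerequisites are importable and that a CUDA device is visible. This is a minimal, illustrative sketch (it assumes kvikio and cupy were installed via conda/mamba as described above and that `kvikio` exposes a `__version__` attribute); it is not part of bigwig-loader itself:

```python
# Sanity-check sketch: confirm the GPU prerequisites are in place.
import cupy
import kvikio

print("kvikio version:", kvikio.__version__)
print("CUDA devices visible:", cupy.cuda.runtime.getDeviceCount())
```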

### PyTorch Example
We wrapped the BigWigDataset in a PyTorch iterable dataset that you can use directly:

```python
# examples/pytorch_example.py
import pandas as pd
import torch
from torch.utils.data import DataLoader
from bigwig_loader import config
from bigwig_loader.pytorch import PytorchBigWigDataset
from bigwig_loader.download_example_data import download_example_data

# Download example data to play with
download_example_data()
example_bigwigs_directory = config.bigwig_dir
reference_genome_file = config.reference_genome

train_regions = pd.DataFrame({"chrom": ["chr1", "chr2"], "start": [0, 0], "end": [1000000, 1000000]})

dataset = PytorchBigWigDataset(
    regions_of_interest=train_regions,
    collection=example_bigwigs_directory,
    reference_genome_path=reference_genome_file,
    sequence_length=1000,
    center_bin_to_predict=500,
    window_size=1,
    batch_size=32,
    super_batch_size=1024,
    batches_per_epoch=20,
    maximum_unknown_bases_fraction=0.1,
    sequence_encoder="onehot",
    n_threads=4,
    return_batch_objects=True,
)

# Don't use num_workers > 0 in DataLoader. The heavy
# lifting/parallelism is done on cuda streams on the GPU.
dataloader = DataLoader(dataset, num_workers=0, batch_size=None)


class MyTerribleModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(4, 2)

    def forward(self, batch):
        return self.linear(batch).transpose(1, 2)


model = MyTerribleModel()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def poisson_loss(pred, target):
    return (pred - target * torch.log(pred.clamp(min=1e-8))).mean()

for batch in dataloader:
    # batch.sequences.shape = n_batch (32), sequence_length (1000), onehot encoding (4)
    pred = model(batch.sequences)
    # batch.values.shape = n_batch (32), n_tracks (2), center_bin_to_predict (500)
    loss = poisson_loss(pred[:, :, 250:750], batch.values)
    print(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

### Other frameworks

A framework-agnostic dataset object can be imported from `bigwig_loader.dataset`. This dataset object
returns cupy arrays. Cupy arrays adhere to the CUDA array interface and can be transformed zero-copy
to JAX or TensorFlow tensors.

```python
from bigwig_loader.dataset import BigWigDataset

dataset = BigWigDataset(
    regions_of_interest=train_regions,
    collection=example_bigwigs_directory,
    reference_genome_path=reference_genome_file,
    sequence_length=1000,
    center_bin_to_predict=500,
    window_size=1,
    batch_size=32,
    super_batch_size=1024,
    batches_per_epoch=20,
    maximum_unknown_bases_fraction=0.1,
    sequence_encoder="onehot",
)

```
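
As a minimal sketch of the zero-copy handoff mentioned above, a cupy array produced on the GPU can be handed to JAX through the DLPack protocol without copying through host memory. The random array below is only a stand-in for a batch of track values, and the exact DLPack entry points can differ slightly between CuPy/JAX versions:

```python
# Zero-copy handoff sketch (assumes recent cupy and jax[cuda] installs).
import cupy as cp
import jax.dlpack

values = cp.random.random((32, 2, 500)).astype(cp.float32)  # stand-in for batch values
jax_values = jax.dlpack.from_dlpack(values)  # uses the __dlpack__ protocol; no host round trip
print(jax_values.shape, jax_values.dtype)
```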
See the examples directory for more examples.

## Background

This library is meant for loading batches of data with the same dimensionality, which allows for some assumptions that can
speed up the loading process. As can be seen from the plot below, when loading a small amount of data, pyBigWig is very fast,
but does not exploit the batched nature of data loading for machine learning.

In the benchmark below we also created PyTorch dataloaders (with set_start_method('spawn')) using pyBigWig to compare to
the realistic scenario where multiple CPUs would be used per GPU. We see that the throughput of the CPU dataloader does
not scale linearly with the number of CPUs, and it therefore becomes hard to reach the throughput needed to keep the GPU,
which is training the neural network, saturated during the learning steps.


![benchmark.png](images/benchmark.png)

This is the problem bigwig-loader solves.

### Installation from source

1. `git clone git@github.com:pfizer-opensource/bigwig-loader`
2. `cd bigwig-loader`
3. create the conda environment: `conda env create -f environment.yml`

In this environment you should be able to run `pytest -v` and see the tests
succeed. NOTE: you need a GPU to use bigwig-loader!

## Development

This section guides you through the steps needed to add new functionality. If
anything is unclear, please open an issue.

### Environment

1. `git clone git@github.com:pfizer-opensource/bigwig-loader`
2. `cd bigwig-loader`
3. create the conda environment: `conda env create -f environment.yml`
4. `pip install -e '.[dev]'`
5. run `pre-commit install` to install the pre-commit hooks

### Run Tests
Tests are in the tests directory. One of the most important tests is
`test_against_pybigwig`, which makes sure bigwig-loader returns the same values
as pyBigWig, so if there is a mistake in pyBigWig, it is also in bigwig-loader.

```shell
pytest -vv .
```
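
For intuition, the consistency idea behind `test_against_pybigwig` looks roughly like the sketch below: read an interval with pyBigWig on the CPU and compare it to what bigwig-loader returns for the same interval. The file name and coordinates here are hypothetical, and the bigwig-loader side is left as a comment because the exact call depends on your dataset configuration (see the tests directory for the real thing):

```python
# Rough illustration of a pyBigWig consistency check (hypothetical file and interval).
import numpy as np
import pyBigWig

bw = pyBigWig.open("example.bw")
cpu_values = np.nan_to_num(np.array(bw.values("chr1", 10_000, 11_000), dtype=np.float32))
bw.close()
# ...load the same interval with bigwig-loader and assert np.allclose(gpu_values, cpu_values)
```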

When GitHub runners with GPUs become available, we would also
like to run these tests in CI. For now, you can run them locally.


## Citing

If you use this library, consider citing:

Retel, Joren Sebastian, Andreas Poehlmann, Josh Chiou, Andreas Steffen, and Djork-Arné Clevert. “A Fast Machine Learning Dataloader for Epigenetic Tracks from BigWig Files.” Bioinformatics 40, no. 1 (January 1, 2024): btad767. https://doi.org/10.1093/bioinformatics/btad767.

```bibtex
@article{
    retel_fast_2024,
    title = {A fast machine learning dataloader for epigenetic tracks from {BigWig} files},
    volume = {40},
    issn = {1367-4811},
    url = {https://doi.org/10.1093/bioinformatics/btad767},
    doi = {10.1093/bioinformatics/btad767},
    abstract = {We created bigwig-loader, a data-loader for epigenetic profiles from BigWig files that decompresses and processes information for multiple intervals from multiple BigWig files in parallel. This is an access pattern needed to create training batches for typical machine learning models on epigenetics data. Using a new codec, the decompression can be done on a graphical processing unit (GPU) making it fast enough to create the training batches during training, mitigating the need for saving preprocessed training examples to disk.The bigwig-loader installation instructions and source code can be accessed at https://github.com/pfizer-opensource/bigwig-loader},
    number = {1},
    urldate = {2024-02-02},
    journal = {Bioinformatics},
    author = {Retel, Joren Sebastian and Poehlmann, Andreas and Chiou, Josh and Steffen, Andreas and Clevert, Djork-Arné},
    month = jan,
    year = {2024},
    pages = {btad767},
}
```

            
