# scdataloader
[![codecov](https://codecov.io/gh/jkobject/scDataLoader/branch/main/graph/badge.svg?token=scDataLoader_token_here)](https://codecov.io/gh/jkobject/scDataLoader)
[![CI](https://github.com/jkobject/scDataLoader/actions/workflows/main.yml/badge.svg)](https://github.com/jkobject/scDataLoader/actions/workflows/main.yml)
[![PyPI version](https://badge.fury.io/py/scDataLoader.svg)](https://badge.fury.io/py/scDataLoader)
[![Downloads](https://pepy.tech/badge/scDataLoader)](https://pepy.tech/project/scDataLoader)
[![Downloads](https://pepy.tech/badge/scDataLoader/month)](https://pepy.tech/project/scDataLoader)
[![Downloads](https://pepy.tech/badge/scDataLoader/week)](https://pepy.tech/project/scDataLoader)
[![GitHub issues](https://img.shields.io/github/issues/jkobject/scDataLoader)](https://img.shields.io/github/issues/jkobject/scDataLoader)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![DOI](https://img.shields.io/badge/DOI-10.1101%2F2024.07.29.605556-blue)](https://doi.org/10.1101/2024.07.29.605556)
This single-cell PyTorch dataloader / Lightning datamodule is designed to be used with:
- [lamindb](https://lamin.ai/)
and:
- [scanpy](https://scanpy.readthedocs.io/en/stable/)
- [anndata](https://anndata.readthedocs.io/en/latest/)
It allows you to:
1. load thousands of datasets containing millions of cells in a few seconds.
2. preprocess the data per dataset and download it locally (normalization, filtering, etc.)
3. create a more complex single cell dataset
4. extend it to your needs

It is built on top of `lamindb` and its `.mapped()` function by [Sergey](https://github.com/Koncopd).
The package has been designed together with the [scPRINT paper](https://doi.org/10.1101/2024.07.29.605556) and [model](https://github.com/cantinilab/scPRINT).
## More
I created this data loader for my PhD project. I use it to load and preprocess thousands of datasets containing millions of cells in a few seconds. I believe anyone applying AI to single-cell RNA-seq and other sequencing datasets will find such a tool useful, as nothing comparable existed at the time.
![scdataloader.drawio.png](docs/scdataloader.drawio.png)
## Install it from PyPI
```bash
pip install scdataloader
# or, with dev dependencies:
pip install 'scDataLoader[dev]'

# then initialize a local lamindb instance
lamin init --storage ./testdb --name test --schema bionty
```
If you are starting with lamin and had to run `lamin init`, you will also need to populate your ontologies. This is because scPRINT uses ontologies to define its cell types, diseases, sexes, ethnicities, etc.

You can do this manually or with our function:
```python
from scdataloader.utils import populate_my_ontology

# populate everything (recommended; can take 2-10 min)
populate_my_ontology()

# or populate only the minimum needed by the tool
populate_my_ontology(
    organisms=["NCBITaxon:10090", "NCBITaxon:9606"],
    sex=["PATO:0000384", "PATO:0000383"],
    celltypes=None,
    ethnicities=None,
    assays=None,
    tissues=None,
    diseases=None,
    dev_stages=None,
)
```
### Dev install
If you want to use the latest version of scDataLoader and work on the code yourself, use `git clone` and `pip install -e` instead of `pip install`.
```bash
git clone https://github.com/jkobject/scDataLoader.git
pip install -e 'scDataLoader[dev]'
```
## Usage
### DataModule usage
```python
# initialize a local lamin database first:
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty
import lamindb as ln

from scdataloader import utils, Preprocessor, DataModule

# preprocess the dataset (adata is an AnnData object, e.g. loaded with scanpy)
preprocessor = Preprocessor(
    do_postp=False,
    force_preprocess=True,
)
adata = preprocessor(adata)

# register it in lamindb as a collection
art = ln.Artifact(adata, description="test")
art.save()
ln.Collection(art, name="test", description="test").save()
datamodule = DataModule(
    collection_name="test",
    organisms=["NCBITaxon:9606"],  # organism that we will work on
    how="most expr",  # for the collator (most expr genes only will be selected)
    max_len=1000,  # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
)
```
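For reference, here is a minimal, self-contained mock (not produced by the library) of the batch structure the resulting dataloaders yield. The key names come from the prediction loop shown later in this README; the exact shapes and values here are assumptions for illustration only.

```python
# Hypothetical mock of a dataloader batch (key names from the prediction
# loop in this README; values and shapes are made up for illustration):
batch_size, max_len = 2, 4
batch = {
    # positions of the `max_len` selected genes for each cell
    "genes": [[12, 7, 3, 91], [5, 44, 3, 8]],
    # their expression values, sorted descending since how="most expr"
    "x": [[9.0, 4.0, 2.0, 1.0], [7.0, 5.0, 3.0, 2.0]],
    # total sequencing depth (counts) per cell
    "depth": [3500.0, 4200.0],
}
print(len(batch["genes"]), len(batch["genes"][0]))  # 2 4
```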
### Lightning-free usage (Dataset + Collator + DataLoader)
```python
# initialize a local lamin database
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty
from tqdm import tqdm

from scdataloader import utils, Preprocessor, SimpleAnnDataset, Collator, DataLoader

# preprocess the dataset (adata is an AnnData object)
preprocessor = Preprocessor(
do_postp=False,
force_preprocess=True,
)
adata = preprocessor(adata)
# create dataset
adataset = SimpleAnnDataset(
    adata, obs_to_output=["organism_ontology_term_id"]
)
# create collator
col = Collator(
    organisms="NCBITaxon:9606",
    valid_genes=adata.var_names,
    max_len=2000,  # maximum number of genes to use
    how="most expr",  # one of "some", "most expr", "random_expr"
    # genelist=[geneA, geneB],  # required when how=="some"
)
# create dataloader
dataloader = DataLoader(
    adataset,
    collate_fn=col,
    batch_size=64,
    num_workers=4,
    shuffle=False,
)
# predict (model here stands for your own trained model, e.g. scPRINT)
for batch in tqdm(dataloader):
    gene_pos, expression, depth = (
        batch["genes"],
        batch["x"],
        batch["depth"],
    )
    model.predict(
        gene_pos,
        expression,
        depth,
    )
```
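To make the `how` options concrete, here is a minimal pure-Python sketch (an assumption for illustration, not the library's actual implementation) of what `how="most expr"` selection amounts to for a single cell:

```python
def most_expr_selection(expr_row, max_len):
    """Keep the indices and values of the max_len highest-expression genes."""
    # sort gene indices by expression, highest first, then truncate
    idx = sorted(range(len(expr_row)), key=lambda i: expr_row[i], reverse=True)[:max_len]
    return idx, [expr_row[i] for i in idx]

gene_pos, expression = most_expr_selection([0.0, 5.0, 1.0, 3.0, 2.0], max_len=3)
print(gene_pos, expression)  # [1, 3, 4] [5.0, 3.0, 2.0]
```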
### Usage on all of cellxgene
```python
# initialize a local lamin database first:
#! lamin init --storage ./cellxgene --name cellxgene --schema bionty
import lamindb as ln

from scdataloader import utils
from scdataloader.preprocess import LaminPreprocessor, additional_postprocess, additional_preprocess
# preprocess datasets
DESCRIPTION='preprocessed by scDataLoader'
cx_dataset = ln.Collection.using(instance="laminlabs/cellxgene").filter(name="cellxgene-census", version='2023-12-15').one()
cx_dataset, len(cx_dataset.artifacts.all())  # inspect the collection and its number of datasets
do_preprocess = LaminPreprocessor(
    additional_postprocess=additional_postprocess,
    additional_preprocess=additional_preprocess,
    skip_validate=True,
    subset_hvg=0,
)

preprocessed_dataset = do_preprocess(
    cx_dataset, name=DESCRIPTION, description=DESCRIPTION, start_at=6, version="2"
)
# create dataloaders
from scdataloader import DataModule
import tqdm
datamodule = DataModule(
    collection_name="preprocessed dataset",
    organisms=["NCBITaxon:9606"],  # organism that we will work on
    how="most expr",  # for the collator (most expr genes only will be selected)
    max_len=1000,  # only the 1000 most expressed
    batch_size=64,
    num_workers=1,
    validation_split=0.1,
    test_split=0,
)
for i in tqdm.tqdm(datamodule.train_dataloader()):
    # do something with each batch here
    print(i)
    break
# with lightning:
# Trainer(model, datamodule)
```
See the notebooks in the [docs](https://www.jkobject.com/scDataLoader/):
1. [load a dataset](https://www.jkobject.com/scDataLoader/notebooks/1_download_and_preprocess/)
2. [create a dataset](https://www.jkobject.com/scDataLoader/notebooks/2_create_dataloader/)
### Command line preprocessing
You can use the command line to preprocess a large database of datasets, as shown here for cellxgene. This allows parallelization and easier usage.
```bash
scdataloader --instance "laminlabs/cellxgene" --name "cellxgene-census" --version "2023-12-15" --description "preprocessed for scprint" --new_name "scprint main" --start_at 10 >> scdataloader.out
```
### Command line usage
The package can also be driven entirely from the command line, e.g. when training models such as scPRINT.
> please refer to the [scPRINT documentation](https://www.jkobject.com/scPRINT/) and [lightning documentation](https://lightning.ai/docs/pytorch/stable/cli/lightning_cli_intermediate.html) for more information on command line usage
## FAQ
### How to update my ontologies?
```python
import bionty as bt
bt.reset_sources()
# Run via CLI: lamin load <your instance>
import lnschema_bionty as lb
lb.dev.sync_bionty_source_to_latest()
```
### How to load all ontologies?
```python
from scdataloader import utils
utils.populate_my_ontology()  # this may take 5-20 min
```
## Development
Read the [CONTRIBUTING.md](CONTRIBUTING.md) file.
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- [lamin.ai](https://lamin.ai/)
- [scanpy](https://scanpy.readthedocs.io/en/stable/)
- [anndata](https://anndata.readthedocs.io/en/latest/)
- [scprint](https://www.jkobject.com/scPRINT/)
Awesome single cell dataloader created by @jkobject