| Field | Value |
| --- | --- |
| Name | bioencoder |
| Version | 1.0.0 |
| Summary | A metric learning toolkit |
| upload_time | 2024-07-19 21:24:21 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | >=3.9 |
| license | None |
| keywords | metric learning, biology |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
<div align="center">
<p><img src="https://github.com/agporto/BioEncoder/raw/main/assets/bioencoder_logo.png" width="300"></p>
</div>
# BioEncoder
BioEncoder is a toolkit for supervised metric learning to i) learn and extract features from images, ii) enhance biological image classification, and iii) identify the features most relevant to classification. Designed for diverse and complex datasets, the package and the available metric losses can handle unbalanced classes and subtle phenotypic differences more effectively than non-metric approaches. The package includes taxon-agnostic data loaders, custom augmentation techniques, hyperparameter tuning through YAML configuration files, and rich model visualizations, providing a comprehensive solution for high-throughput analysis of biological images.
Preprint on bioRxiv: [https://doi.org/10.1101/2024.04.03.587987](https://doi.org/10.1101/2024.04.03.587987)
## Features
[>> Full list of available model architectures, losses, optimizers, schedulers, and augmentations <<](https://github.com/agporto/BioEncoder/blob/main/help/05-options.md)
- Taxon-agnostic dataloaders (making it applicable to any dataset - not just biological ones)
- Support for [timm models](https://github.com/rwightman/pytorch-image-models) and [pytorch-optimizer](https://github.com/jettify/pytorch-optimizer)
- Access to state-of-the-art metric losses, such as [Supcon](https://arxiv.org/abs/2004.11362) and [Sub-center ArcFace](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf).
- [Exponential Moving Average](https://github.com/fadel/pytorch_ema) for stable training, and Stochastic Weight Averaging (SWA) for better generalization and performance.
- [LRFinder](https://github.com/davidtvs/pytorch-lr-finder) for the second stage of training.
- Easy customization of hyperparameters, including augmentations, through `YAML` configs (check the [config-templates](config-templates) folder for examples)
- Custom augmentation techniques via [albumentations](https://github.com/albumentations-team/albumentations)
- TensorBoard logs and checkpoints (soon to come: WandB integration)
- Streamlit app with rich model visualizations (e.g., [Grad-CAM](https://arxiv.org/abs/1610.02391) and [timm-vis](https://github.com/novice03/timm-vis/blob/main/details.ipynb))
- Interactive [t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) and [PCA](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) plots using [Bokeh](https://bokeh.org/)
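To make the metric-loss idea concrete, here is a plain-NumPy sketch of the supervised contrastive (SupCon) objective linked above. It is an illustration of the loss only, not BioEncoder's implementation, and it assumes L2-normalized embeddings where every sample has at least one same-class partner in the batch:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive (SupCon) loss over L2-normalized embeddings
    z (n x d). Plain-numpy illustration, not BioEncoder's implementation;
    assumes every anchor has at least one positive in the batch."""
    n = len(labels)
    sim = z @ z.T / tau                    # scaled pairwise similarities
    not_self = ~np.eye(n, dtype=bool)      # exclude self-comparisons
    per_anchor = []
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        log_denom = np.log(np.exp(sim[i][not_self[i]]).sum())
        per_anchor.append(-np.mean([sim[i, p] - log_denom for p in positives]))
    return float(np.mean(per_anchor))

# embeddings that cluster by class score far lower than shuffled labels
z = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
tight = supcon_loss(z, [0, 0, 1, 1])
loose = supcon_loss(z, [0, 1, 0, 1])
```

The loss rewards pulling same-class embeddings together relative to all others, which is what lets the learned feature space separate subtle phenotypic differences.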
<div align="center">
<p><img src="https://github.com/agporto/BioEncoder/raw/main/assets/bioencoder-interactive-plot.gif" width="500"></p>
</div>
## Quickstart
[>> Comprehensive help files <<](help)
1\. Install BioEncoder (into a virtual environment with PyTorch/CUDA):
```
pip install bioencoder
```
2\. Download the example dataset from the data repo: [https://zenodo.org/records/10909614/files/BioEncoder-data.zip](https://zenodo.org/records/10909614/files/BioEncoder-data.zip?download=1&preview=1).
This archive contains the images and configuration files needed for steps 3 and 4, as well as the final model checkpoints and a script to reproduce the results and figures presented in the paper. To play around with the interactive figures and the model explorer, you can also skip the training/SWA steps.
3\. Start an interactive session (e.g., in Spyder or VS Code) and run the following commands one by one:
```python
## use overwrite=True to redo a step
import bioencoder
## global setup
bioencoder.configure(root_dir=r"~/bioencoder_wd", run_name="v1")
## split dataset
bioencoder.split_dataset(image_dir=r"~/Downloads/damselflies-aligned-trai_val", max_ratio=6, random_seed=42, val_percent=0.1, min_per_class=20)
## train stage 1
bioencoder.train(config_path=r"bioencoder_configs/train_stage1.yml")
bioencoder.swa(config_path=r"bioencoder_configs/swa_stage1.yml")
## explore embedding space and model from stage 1
bioencoder.interactive_plots(config_path=r"bioencoder_configs/plot_stage1.yml")
bioencoder.model_explorer(config_path=r"bioencoder_configs/explore_stage1.yml")
## (optional) learning rate finder for stage 2
bioencoder.lr_finder(config_path=r"bioencoder_configs/lr_finder.yml")
## train stage 2
bioencoder.train(config_path=r"bioencoder_configs/train_stage2.yml")
bioencoder.swa(config_path=r"bioencoder_configs/swa_stage2.yml")
## explore model from stage 2
bioencoder.model_explorer(config_path=r"bioencoder_configs/explore_stage2.yml")
## inference (stage 1 = embeddings, stage 2 = classification)
bioencoder.inference(config_path="bioencoder_configs/inference.yml", image="path/to/image.jpg")  # "image" accepts a file path or an np.ndarray
```
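To clarify what the `split_dataset` parameters control, here is a minimal sketch of a stratified split with the same knobs (`max_ratio`, `val_percent`, `min_per_class`, `random_seed`). It illustrates the idea only; `sketch_split` is a hypothetical helper, not BioEncoder's actual code:

```python
import random

def sketch_split(files_by_class, max_ratio=6, val_percent=0.1,
                 min_per_class=20, random_seed=42):
    """Illustrative stratified split: drop classes below min_per_class,
    cap majority classes at max_ratio x the smallest kept class, then
    hold out val_percent of each class for validation."""
    rng = random.Random(random_seed)
    kept = {c: list(f) for c, f in files_by_class.items()
            if len(f) >= min_per_class}
    cap = max_ratio * min(len(f) for f in kept.values())
    train, val = {}, {}
    for c, files in kept.items():
        rng.shuffle(files)
        files = files[:cap]                           # enforce max imbalance
        n_val = max(1, int(len(files) * val_percent)) # per-class holdout
        val[c], train[c] = files[:n_val], files[n_val:]
    return train, val
```

Capping class imbalance and guaranteeing a minimum class size are what make the downstream metric losses behave well on unbalanced biological datasets.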
4\. Alternatively, you can use the command line interface directly:
```bash
## use the flag "--overwrite" to redo a step
bioencoder_configure --root-dir "~/bioencoder_wd" --run-name v1
bioencoder_split_dataset --image-dir "~/Downloads/damselflies-aligned-trai_val" --max-ratio 6 --random-seed 42
bioencoder_train --config-path "bioencoder_configs/train_stage1.yml"
bioencoder_swa --config-path "bioencoder_configs/swa_stage1.yml"
bioencoder_interactive_plots --config-path "bioencoder_configs/plot_stage1.yml"
bioencoder_model_explorer --config-path "bioencoder_configs/explore_stage1.yml"
bioencoder_lr_finder --config-path "bioencoder_configs/lr_finder.yml"
bioencoder_train --config-path "bioencoder_configs/train_stage2.yml"
bioencoder_swa --config-path "bioencoder_configs/swa_stage2.yml"
bioencoder_model_explorer --config-path "bioencoder_configs/explore_stage2.yml"
bioencoder_inference --config-path "bioencoder_configs/inference.yml" --path "path/to/image.jpg"
```
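The `swa` steps in the workflow above apply Stochastic Weight Averaging, which averages parameters from several checkpoints collected near the end of training. Conceptually (a sketch of the idea, not the package's code, with each checkpoint represented as a flat list of parameter values):

```python
def swa_average(checkpoints):
    """Equal-weight average of several checkpoints, each given as a flat
    list of parameter values (the core idea behind SWA)."""
    n = len(checkpoints)
    return [sum(params) / n for params in zip(*checkpoints)]

# two toy checkpoints -> their element-wise mean
averaged = swa_average([[1.0, 2.0], [3.0, 4.0]])  # averaged == [2.0, 3.0]
```

Averaging points along the tail of the optimization trajectory tends to land in flatter regions of the loss surface, which is why it often improves generalization.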
## Citation
Please cite BioEncoder as follows:
```bibtex
@UNPUBLISHED{Luerig2024-ov,
  title    = "{BioEncoder}: a metric learning toolkit for comparative organismal biology",
  author   = "Luerig, Moritz D and Di Martino, Emanuela and Porto, Arthur",
  journal  = "bioRxiv",
  pages    = "2024.04.03.587987",
  month    = apr,
  year     = 2024,
  language = "en",
  doi      = "10.1101/2024.04.03.587987"
}
```
## Raw data
{
"_id": null,
"home_page": null,
"name": "bioencoder",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "metric learning, biology",
"author": null,
"author_email": "Arthur Porto <agporto@gmail.com>, Moritz L\u00fcrig <moritz.luerig@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/10/53/6494726441521e2c8d8041fc37161ad32f3467f3f138d7a56d1c5c9ecccb/bioencoder-1.0.0.tar.gz",
"platform": null,
"description": "(full README text, identical to the section shown above)",
"bugtrack_url": null,
"license": null,
"summary": "A metric learning toolkit",
"version": "1.0.0",
"project_urls": {
"Bug Tracker": "https://github.com/agporto/BioEncoder/issues",
"Homepage": "https://github.com/agporto/BioEncoder"
},
"split_keywords": [
"metric learning",
" biology"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "293cadcafb55acb7546c22faf66a11d526a27e5546febbffd211630eae239ddc",
"md5": "b57ffd6d3be9a4eb31f9b96a2daae9ff",
"sha256": "70d069c301b51688afa52ec81ea4bfed289bca4dc415bfceb7e633367e5cf338"
},
"downloads": -1,
"filename": "bioencoder-1.0.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "b57ffd6d3be9a4eb31f9b96a2daae9ff",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 52875,
"upload_time": "2024-07-19T21:24:19",
"upload_time_iso_8601": "2024-07-19T21:24:19.831276Z",
"url": "https://files.pythonhosted.org/packages/29/3c/adcafb55acb7546c22faf66a11d526a27e5546febbffd211630eae239ddc/bioencoder-1.0.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "10536494726441521e2c8d8041fc37161ad32f3467f3f138d7a56d1c5c9ecccb",
"md5": "95cc9c7f6487a83af6005660af483c4c",
"sha256": "dbe1206e468e985381fe225d756b1a9e0d5fb6931a4452dfc672f3464d5df088"
},
"downloads": -1,
"filename": "bioencoder-1.0.0.tar.gz",
"has_sig": false,
"md5_digest": "95cc9c7f6487a83af6005660af483c4c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 42484,
"upload_time": "2024-07-19T21:24:21",
"upload_time_iso_8601": "2024-07-19T21:24:21.344279Z",
"url": "https://files.pythonhosted.org/packages/10/53/6494726441521e2c8d8041fc37161ad32f3467f3f138d7a56d1c5c9ecccb/bioencoder-1.0.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-07-19 21:24:21",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "agporto",
"github_project": "BioEncoder",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [],
"lcname": "bioencoder"
}