| Field | Value |
| --- | --- |
| Name | modelgenerator |
| Version | 0.1.1.post5 |
| Summary | AIDO.ModelGenerator is a software stack powering the development of an AI-driven Digital Organism by enabling researchers to adapt pretrained models and generate finetuned models for downstream tasks. |
| Upload time | 2024-12-21 06:16:36 |
| Author | GenBio AI |
| Requires Python | <3.11,>=3.10 |
| Keywords | finetuning, foundation models, genbio, machine learning |
# [AIDO](https://github.com/genbio-ai/AIDO).ModelGenerator
AIDO.ModelGenerator is a software stack powering the development of an AI-driven Digital Organism by enabling researchers to adapt pretrained models and generate finetuned models for downstream tasks.
To read more about AIDO.ModelGenerator's integral role in building the world's first AI-driven Digital Organism, see [AIDO](https://github.com/genbio-ai/AIDO).
AIDO.ModelGenerator is open-sourced as an opinionated plug-and-play research framework for cross-disciplinary teams in ML & Bio.
It is designed to enable rapid and reproducible prototyping with four kinds of experiments in mind:
1. Applying pre-trained foundation models to new data
2. Developing new finetuning and inference tasks for foundation models
3. Benchmarking foundation models and creating leaderboards
4. Testing new architectures for finetuning performance
while also scaling with hardware and integrating with larger data pipelines or research workflows.
AIDO.ModelGenerator is built on PyTorch, HuggingFace, and Lightning, and works seamlessly with these ecosystems.
See the [AIDO.ModelGenerator documentation](https://genbio-ai.github.io/ModelGenerator) for installation, usage, tutorials, and API reference.
## Who uses ModelGenerator?
### 🧬 Biologists
* Intuitive one-command CLIs for in silico experiments
* Pre-trained model zoo
* Broad data compatibility
* Pipeline-oriented workflows
### 🤖 ML Researchers
* Reproducible-by-design experiments
* Architecture A/B testing
* Automatic hardware scaling
* Integration with PyTorch, Lightning, HuggingFace, and WandB
### ☕ Software Engineers
* Extensible and modular models, tasks, and data
* Strict typing and documentation
* Fail-fast interface design
* Continuous integration and testing
### 🤝 Everyone benefits from
* A collaborative hub and focal point for multidisciplinary work on experiments, models, software, and data
* Community-driven development
* Permissive license for academic and non-commercial use
## Projects using AIDO.ModelGenerator
- [Accurate and General DNA Representations Emerge from Genome Foundation Models at Scale](https://doi.org/10.1101/2024.12.01.625444)
- [A Large-Scale Foundation Model for RNA Function and Structure Prediction](https://doi.org/10.1101/2024.11.28.625345)
- [Mixture of Experts Enable Efficient and Effective Protein Understanding and Design](https://doi.org/10.1101/2024.11.29.625425)
- [Scaling Dense Representations for Single Cell with Transcriptome-Scale Context](https://doi.org/10.1101/2024.11.28.625303)
- [Balancing Locality and Reconstruction in Protein Structure Tokenizer](https://doi.org/10.1101/2024.12.02.626366)
## Installation
```
git clone https://github.com/genbio-ai/ModelGenerator.git
cd ModelGenerator
pip install -e .
```
Source installation is necessary to add new backbones, finetuning tasks, and data transformations, as well as to use the convenience configs and scripts. If you only need to run inference, reproduce published experiments, or finetune on new data, you can instead use
```
pip install modelgenerator
pip install git+https://github.com/genbio-ai/openfold.git@c4aa2fd0d920c06d3fd80b177284a22573528442
pip install git+https://github.com/NVIDIA/dllogger.git@0540a43971f4a8a16693a9de9de73c1072020769
```
## Quick Start
### Get embeddings from a pre-trained model
```
mgen predict --model Embed --model.backbone aido_dna_dummy \
--data SequencesDataModule --data.path genbio-ai/100m-random-promoters \
--data.x_col sequence --data.id_col sequence --data.test_split_size 0.0001 \
--config configs/examples/save_predictions.yaml
```
### Get token probabilities from a pre-trained model
```
mgen predict --model Inference --model.backbone aido_dna_dummy \
--data SequencesDataModule --data.path genbio-ai/100m-random-promoters \
--data.x_col sequence --data.id_col sequence --data.test_split_size 0.0001 \
--config configs/examples/save_predictions.yaml
```
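Conceptually, the per-token probabilities returned here are a softmax over the backbone's output logits at each sequence position. A minimal sketch of that conversion (plain Python with made-up logits and a toy DNA vocabulary, not ModelGenerator code):

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution over the vocabulary."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for one position over a 4-letter DNA vocabulary
vocab = ["A", "C", "G", "T"]
logits = [2.0, 0.5, 0.3, 1.2]
probs = softmax(logits)
print({tok: round(p, 3) for tok, p in zip(vocab, probs)})
```

The probabilities sum to one, and the largest logit gets the largest probability, which is all the `Inference` output format relies on.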
### Finetune a model
```
mgen fit --model ConditionalDiffusion --model.backbone aido_dna_dummy \
--data ConditionalDiffusionDataModule --data.path "genbio-ai/100m-random-promoters"
```
### Evaluate a model checkpoint
```
mgen test --model ConditionalDiffusion --model.backbone aido_dna_dummy \
--data ConditionalDiffusionDataModule --data.path "genbio-ai/100m-random-promoters" \
--ckpt_path logs/lightning_logs/version_X/checkpoints/<your_model>.ckpt
```
### Save predictions
```
mgen predict --model ConditionalDiffusion --model.backbone aido_dna_dummy \
--data ConditionalDiffusionDataModule --data.path "genbio-ai/100m-random-promoters" \
--ckpt_path logs/lightning_logs/version_X/checkpoints/<your_model>.ckpt \
--config configs/examples/save_predictions.yaml
```
## Configify your experiment
This command
```
mgen fit --model ConditionalDiffusion --model.backbone aido_dna_dummy \
--data ConditionalDiffusionDataModule --data.path "genbio-ai/100m-random-promoters"
```
is equivalent to
`mgen fit --config my_config.yaml` with
```
# my_config.yaml
model:
  class_path: ConditionalDiffusion
  init_args:
    backbone: aido_dna_dummy
data:
  class_path: ConditionalDiffusionDataModule
  init_args:
    path: "genbio-ai/100m-random-promoters"
```
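The dotted CLI flags and the nested YAML are two views of the same configuration: each dotted flag sets a leaf key inside nested dicts (the `class_path`/`init_args` layout follows Lightning's CLI conventions). A rough sketch of that mapping, illustrative only:

```python
def args_to_config(pairs):
    """Fold ('a.b.c', value) pairs into a nested dict, mimicking how
    dotted CLI flags map onto nested YAML keys."""
    config = {}
    for dotted, value in pairs:
        node = config
        *parents, leaf = dotted.split(".")
        for key in parents:
            node = node.setdefault(key, {})
        node[leaf] = value
    return config

cli = [
    ("model.class_path", "ConditionalDiffusion"),
    ("model.init_args.backbone", "aido_dna_dummy"),
    ("data.class_path", "ConditionalDiffusionDataModule"),
    ("data.init_args.path", "genbio-ai/100m-random-promoters"),
]
config = args_to_config(cli)
```

The resulting `config` dict has the same shape as `my_config.yaml` above; the real parsing is done by the CLI framework, not this helper.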
## Use composable configs to customize workflows
```
mgen fit --model SequenceRegression --data PromoterExpressionRegression \
--config configs/defaults.yaml \
--config configs/examples/lora_backbone.yaml \
--config configs/examples/wandb.yaml
```
We provide some useful examples in `configs/examples`.
When multiple configs set the same attribute, the LAST value wins.
Check the full configuration logged with each experiment in `logs/lightning_logs/your-experiment/config.yaml`, or, if using wandb, in `logs/config.yaml`.
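The last-value-wins behavior is what a recursive dict merge gives you when later configs override earlier ones. A small sketch of that merge semantics (the config keys here are hypothetical, chosen only to mirror the command above):

```python
def deep_merge(base, override):
    """Merge `override` into `base`; on conflicts the override (later config) wins."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"trainer": {"max_epochs": 10, "precision": "32"}}
lora     = {"model": {"backbone": {"use_peft": True}}}
wandb    = {"trainer": {"max_epochs": 20}}  # later config: overrides max_epochs

final = defaults
for cfg in (lora, wandb):   # applied in the order the --config flags appear
    final = deep_merge(final, cfg)
print(final["trainer"]["max_epochs"])  # 20: last value wins
```

Note that untouched keys (`precision` here) survive the merge; only conflicting attributes are replaced.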
## Use LoRA for parameter-efficient finetuning
This also avoids saving the full model; only the LoRA weights are saved.
```
mgen fit --data PromoterExpressionRegression \
--model SequenceRegression --model.backbone.use_peft true \
--model.backbone.lora_r 16 \
--model.backbone.lora_alpha 32 \
--model.backbone.lora_dropout 0.1
```
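The efficiency comes from low-rank factorization: instead of training a dense update to a `d x d` weight matrix, LoRA trains two thin factors `B` (`d x r`) and `A` (`r x d`), scaled by `lora_alpha / lora_r`. A back-of-the-envelope parameter count, assuming a hypothetical hidden size of 1024 and the `lora_r 16` from the command above:

```python
d, r = 1024, 16   # hidden size (assumed) and lora_r (from the command above)

full_update_params = d * d           # dense delta-W for one weight matrix
lora_params = d * r + r * d          # B (d x r) plus A (r x d)

print(full_update_params, lora_params, full_update_params // lora_params)
```

With these numbers the LoRA factors are 32x smaller than the dense update, which is also why the saved checkpoint (LoRA weights only) stays small.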
## Use continued pretraining for finetuning domain adaptation
First, run the pretraining objective on the finetuning data:
```
# https://arxiv.org/pdf/2310.02980
mgen fit --model MLM --model.backbone aido_dna_dummy \
--data MLMDataModule --data.path leannmlindsey/GUE \
--data.config_name prom_core_notata
```
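The MLM objective corrupts the input by masking a fraction of tokens (commonly ~15%) and trains the model to reconstruct them, which adapts the backbone to the new data distribution. A toy sketch of the masking step (plain Python, not the actual `MLMDataModule` implementation; the mask rate and mask token are illustrative):

```python
import random

def mask_sequence(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Randomly replace a fraction of tokens with a mask token, returning the
    corrupted input and the positions the model must reconstruct."""
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(mask_token)
            targets.append(i)
        else:
            masked.append(tok)
    return masked, targets

seq = list("ACGTACGTACGTACGTACGT")
corrupted, target_positions = mask_sequence(seq)
```

The loss is then computed only at `target_positions`, so the model learns to predict tokens from their surrounding context.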
Then, finetune using the adapted model:
```
mgen fit --model SequenceClassification --model.strict_loading false \
--data SequenceClassificationDataModule --data.path leannmlindsey/GUE \
--data.config_name prom_core_notata \
--ckpt_path logs/lightning_logs/version_X/checkpoints/<your_adapted_model>.ckpt
```
Make sure to turn off `strict_loading` to replace the adapter!
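Non-strict loading works by copying checkpoint weights wherever parameter names match, ignoring keys that exist only in the checkpoint (the old MLM decoder) and keeping freshly initialized keys that exist only in the new model (the classification head). A dict-based sketch of that behavior, with hypothetical parameter names (the real mechanism is PyTorch's `load_state_dict(strict=False)`):

```python
def load_non_strict(model_params, checkpoint):
    """Copy checkpoint weights into model_params wherever names match;
    ignore checkpoint-only keys and keep model-only keys at fresh values."""
    loaded, skipped = [], []
    for name in model_params:
        if name in checkpoint:
            model_params[name] = checkpoint[name]
            loaded.append(name)
        else:
            skipped.append(name)
    return loaded, skipped

# Hypothetical parameter names: the backbone matches, the heads differ
model = {"backbone.layer0": "fresh", "cls_head.weight": "fresh"}
ckpt  = {"backbone.layer0": "adapted", "mlm_head.weight": "old"}

loaded, skipped = load_non_strict(model, ckpt)
```

With strict loading this key mismatch would raise an error, which is why `strict_loading` must be turned off when swapping the adapter.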
## Use the head/adapter/decoder that comes with the backbone
```
mgen fit --model SequenceClassification --data GUEClassification \
--model.use_legacy_adapter true
```