# EvoProtGrad
[![PyPI version](https://badge.fury.io/py/evo-prot-grad.svg)](https://badge.fury.io/py/evo-prot-grad)
[![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
A Python package for directed **evo**lution on a **pro**tein sequence with **grad**ient-based discrete Markov chain Monte Carlo (MCMC). Users can compose custom models that map sequence to function with pretrained models, including protein language models (PLMs), to guide and constrain the search. Our package natively integrates with 🤗 HuggingFace and supports PLMs from [transformers](https://huggingface.co/docs/transformers/index).
Our MCMC sampler identifies promising amino acids to mutate via model gradients taken with respect to the input (i.e., sensitivity analysis).
We allow users to compose their own custom target function for MCMC by leveraging the Product of Experts MCMC paradigm.
Each model is an "expert" that contributes its own knowledge about the protein's fitness landscape to the overall target function.
The sampler is designed to be more efficient and effective than brute-force and random search while retaining much of their generality and flexibility.
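Concretely, the (unnormalized) log of the Product of Experts target is a weighted sum of per-expert scores, with each expert's temperature rescaling its score. A minimal illustrative sketch, not the library's internal code:
```python
# Illustrative only (not the library's internal code): the unnormalized
# log-density of a Product of Experts. Each expert i assigns a score f_i(x)
# to sequence x; a per-expert weight (set via its temperature) controls how
# strongly that expert shapes the overall target.
def product_of_experts_log_density(expert_scores, weights):
    # log p(x) = sum_i w_i * f_i(x), up to an additive normalizing constant
    return sum(w * f for w, f in zip(weights, expert_scores))
```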
See our [publication](https://doi.org/10.1088/2632-2153/accacd) and our [documentation](https://nrel.github.io/EvoProtGrad) for more details.
## Installation
EvoProtGrad is available on PyPI and can be installed with pip:
```bash
pip install evo_prot_grad
```
For the bleeding-edge version, or if you wish to run tests or register a new expert model with EvoProtGrad, clone this repo and install it in editable mode:
```bash
git clone https://github.com/NREL/EvoProtGrad.git
cd EvoProtGrad
pip install -e .
```
## Run tests
Run the test suite with `python3 -m unittest`.
## Basic Usage
See `demo.ipynb` to get started right away in a Jupyter notebook, or open it in Colab: <a target="_blank" href="https://colab.research.google.com/drive/1e8WjYEbWiikRQg3g4YHQJJcpvTIWVAjp?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Create a `ProtBERT` expert from a pretrained HuggingFace protein language model (PLM) using `evo_prot_grad.get_expert`:
```python
import evo_prot_grad
prot_bert_expert = evo_prot_grad.get_expert('bert', scoring_strategy='pseudolikelihood_ratio', temperature=1.0)
```
The default BERT-style PLM in `EvoProtGrad` is `Rostlab/prot_bert`. Normally, we would also need to specify the model and tokenizer; when using a default PLM expert, these are pulled automatically from the HuggingFace Hub. The temperature parameter rescales the expert's scores and can be used to trade off the importance of different experts. The `pseudolikelihood_ratio` strategy scores a variant by comparing its "pseudo" log-likelihood against the wild type's (when the protein language model is a *masked* language model, this is not the exact log-likelihood).
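To use a non-default model and tokenizer, pass them in explicitly. A minimal sketch, assuming `get_expert` also accepts `model`, `tokenizer`, and `device` keyword arguments (see the documentation for the exact signature):
```python
from transformers import AutoModelForMaskedLM, AutoTokenizer
import evo_prot_grad

# Explicitly load a BERT-style PLM and its tokenizer from the HuggingFace Hub
model = AutoModelForMaskedLM.from_pretrained('Rostlab/prot_bert')
tokenizer = AutoTokenizer.from_pretrained('Rostlab/prot_bert')

prot_bert_expert = evo_prot_grad.get_expert(
    'bert',
    scoring_strategy='pseudolikelihood_ratio',
    temperature=1.0,
    model=model,          # assumed keyword; see the documentation
    tokenizer=tokenizer,  # assumed keyword; see the documentation
    device='cpu'          # assumed keyword; 'cuda' if a GPU is available
)
```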
Then, create an instance of `DirectedEvolution` and run the search, returning a list of the best variant per Markov chain (as measured by the `prot_bert` expert):
```python
variants, scores = evo_prot_grad.DirectedEvolution(
    wt_fasta='test/gfp.fasta',        # path to wild-type FASTA file
    output='best',                    # return 'best', 'last', or 'all' variants
    experts=[prot_bert_expert],       # list of experts to compose
    parallel_chains=1,                # number of parallel chains to run
    n_steps=20,                       # number of MCMC steps per chain
    max_mutations=10,                 # maximum number of mutations per variant
    verbose=True                      # print debug info to the command line
)()
```
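The returned `variants` and `scores` can then be inspected directly. A minimal sketch, assuming the two lists are parallel (one best variant and score per chain when `output='best'`):
```python
# Print each chain's best variant alongside its expert score
for variant, score in zip(variants, scores):
    print(f'score: {score:.3f}  {variant}')
```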
We provide a few experts in `evo_prot_grad/experts` that you can use out of the box, such as:
Protein Language Models (PLMs)
- `bert`, BERT-style PLMs, default: `Rostlab/prot_bert`
- `causallm`, CausalLM-style PLMs, default: `lightonai/RITA_s`
- `esm`, ESM-style PLMs, default: `facebook/esm2_t6_8M_UR50D`
Potts models
- `evcouplings`
and a generic expert for supervised downstream regression models
- `onehot_downstream_regression`
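Multiple experts are composed into a single Product of Experts target simply by passing them all to `DirectedEvolution`. A sketch combining two PLM experts with different temperatures (the `scoring_strategy` values here are assumptions; check the documentation for the strategies each expert supports):
```python
import evo_prot_grad

# Two sequence experts; their temperatures trade off each expert's influence
bert_expert = evo_prot_grad.get_expert(
    'bert', scoring_strategy='pseudolikelihood_ratio', temperature=1.0)
esm_expert = evo_prot_grad.get_expert(
    'esm', scoring_strategy='pseudolikelihood_ratio', temperature=0.5)

variants, scores = evo_prot_grad.DirectedEvolution(
    wt_fasta='test/gfp.fasta',
    output='best',
    experts=[bert_expert, esm_expert],  # both experts shape the target
    parallel_chains=4,
    n_steps=100,
    max_mutations=10,
    verbose=False
)()
```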
## Citation
If you use EvoProtGrad in your research, please cite the following publication:
```bibtex
@article{emami2023plug,
title={Plug \& play directed evolution of proteins with gradient-based discrete MCMC},
author={Emami, Patrick and Perreault, Aidan and Law, Jeffrey and Biagioni, David and John, Peter St},
journal={Machine Learning: Science and Technology},
volume={4},
number={2},
pages={025014},
year={2023},
publisher={IOP Publishing}
}
```