SMILES-RNN

Name: SMILES-RNN
Version: 2.0.1
Summary: A scoring, benchmarking and evaluation framework for goal directed generative models
Upload time: 2024-06-05 15:31:11
Requires Python: >=3.6, <3.12
Keywords: SMILES, chemical language models, de novo, constrained de novo, chemistry, drug design, reinforcement learning
[![DOI](https://zenodo.org/badge/374712112.svg)](https://zenodo.org/doi/10.5281/zenodo.11356192)

# SMILES-RNN

This repo contains code for a SMILES-based recurrent neural network for *de novo* molecule generation, with several reinforcement learning algorithms available for molecule optimization. It was written to be used in conjunction with [MolScore](https://github.com/MorganCThomas/MolScore), although any other scoring function can also be used.

## Installation
This code can be installed via pip:

```
pip install smiles-rnn
```

Or by cloning this repository and setting up an environment with mamba:

```
git clone https://github.com/MorganCThomas/SMILES-RNN.git
cd SMILES-RNN
mamba env create -f environment.yml
pip install ./
```

## Usage
Arguments to any of the scripts can be printed by running 

```
python <script> --help
```

## Training a prior

To train a prior, run the *train_prior.py* script. You may note below that several other grammars are also implemented, including [DeepSMILES](https://chemrxiv.org/engage/chemrxiv/article-details/60c73ed6567dfe7e5fec388d), [SELFIES](https://iopscience.iop.org/article/10.1088/2632-2153/aba947), [atomInSmiles](https://jcheminf.biomedcentral.com/articles/10.1186/s13321-023-00725-9), and [SAFE](https://arxiv.org/abs/2310.10773), all generated by conversion from SMILES. When using randomization (which can be done at train time), the SMILES are first randomized and then each random SMILES is converted to the alternative grammar. You can optionally pass in validation or test SMILES, whose log likelihood will be evaluated during training and can be monitored via TensorBoard. *Note: choosing a specific GPU device does not currently work; training will run on the default GPU device (i.e., index 0).*

```
Train an initial prior model based on smiles data

positional arguments:
  {RNN,Transformer,GTr}
                        Model architecture
    RNN                 Use simple forward RNN with GRU or LSTM
    Transformer         TransformerEncoder model
    GTr                 StableTransformerEncoder model

optional arguments:
  -h, --help            show this help message and exit
  --grammar {SMILES,deepSMILES,deepSMILES_r,deepSMILES_cr,deepSMILES_c,deepSMILES_cb,deepSMILES_b,SELFIES,AIS,SAFE,SmiZip}
                        Choice of grammar to use; SMILES will be encoded and decoded via this grammar (default: SMILES)
  --randomize           Training smiles will be randomized using default arguments (10 restricted) (default: False)
  --n_jobs N_JOBS       If randomizing use multiple cores (default: 1)
  --smizip-ngrams SMIZIP_NGRAMS
                        SmiZip JSON file containing the list of n-grams (default: None)
  --valid_smiles VALID_SMILES
                        Validation smiles (default: None)
  --test_smiles TEST_SMILES
                        Test smiles (default: None)
  --validate_frequency VALIDATE_FREQUENCY
                        (default: 500)
  --n_epochs N_EPOCHS   (default: 5)
  --batch_size BATCH_SIZE
                        (default: 128)
  -d DEVICE, --device DEVICE
                        cpu/gpu or device number (default: gpu)

required arguments:
  -i TRAIN_SMILES, --train_smiles TRAIN_SMILES
                        Path to smiles file (default: None)
  -o OUTPUT_DIRECTORY, --output_directory OUTPUT_DIRECTORY
                        Output directory to save model (default: None)
  -s SUFFIX, --suffix SUFFIX
                        Suffix to name files (default: None)
```
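
For example, a minimal training run might look like the following. This is a sketch: the file paths and suffix are illustrative, and the architecture subcommand is assumed to accept its default settings (confirm the expected argument order with `python train_prior.py --help`).

```
python train_prior.py RNN \
  --train_smiles data/train.smi \
  --output_directory prior/ \
  --suffix ChEMBL \
  --randomize --n_jobs 4 \
  --valid_smiles data/valid.smi \
  --n_epochs 5 --batch_size 128
```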

## Sampling from a trained prior

You can sample from a trained model by running the *sample_model.py* script.

```
Sample smiles from model

optional arguments:
  -h, --help            show this help message and exit
  -p PATH, --path PATH  Path to checkpoint (.ckpt) (default: None)
  -m {RNN,Transformer,GTr}, --model {RNN,Transformer,GTr}
                        Choice of architecture (default: None)
  -o OUTPUT, --output OUTPUT
                        Path to save file (e.g. Data/Prior_10k.smi) (default: None)
  -d DEVICE, --device DEVICE
                        (default: gpu)
  -n NUMBER, --number NUMBER
                        (default: 10000)
  -t TEMPERATURE, --temperature TEMPERATURE
                        Temperature to sample (1: multinomial, <1: Less random, >1: More random) (default: 1.0)
  --psmiles PSMILES     Either scaffold smiles labelled with decoration points (*) or fragments for linking with connection points (*) and separated by a period '.'
                        (default: None)
  --unique              Keep sampling until n unique canonical molecules have been sampled (default: False)
  --native              If trained using an alternative grammar (e.g., SELFIES), don't convert back to SMILES (default: False)
```
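
For example, sampling 10,000 unique molecules from a trained prior at the default temperature (the checkpoint path is illustrative):

```
python sample_model.py \
  -p prior/Prior_ChEMBL.ckpt \
  -m RNN \
  -o Data/Prior_10k.smi \
  -n 10000 -t 1.0 --unique
```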

## Fine-tuning

You can also fine-tune a trained model with a smaller dataset of SMILES by running the *fine_tune.py* script. If the pre-trained model was trained with an alternative grammar, these SMILES will also be converted at train time, i.e., you always input molecules as SMILES.

```
Fine-tune a pre-trained prior model based on a smaller dataset

optional arguments:
  -h, --help            show this help message and exit

Required arguments:
  -p PRIOR, --prior PRIOR
                        Path to prior file (default: None)
  -i TUNE_SMILES, --tune_smiles TUNE_SMILES
                        Path to fine-tuning smiles file (default: None)
  -o OUTPUT_DIRECTORY, --output_directory OUTPUT_DIRECTORY
                        Output directory to save model (default: None)
  -s SUFFIX, --suffix SUFFIX
                        Suffix to name files (default: None)
  --model {RNN,Transformer,GTr}
                        Choice of architecture (default: None)

Optional arguments:
  --randomize           Training smiles will be randomized using default arguments (10 restricted) (default: False)
  --valid_smiles VALID_SMILES
                        Validation smiles (default: None)
  --test_smiles TEST_SMILES
                        Test smiles (default: None)
  --n_epochs N_EPOCHS   (default: 10)
  --batch_size BATCH_SIZE
                        (default: 128)
  -d DEVICE, --device DEVICE
                        cpu/gpu or device number (default: gpu)
  -f FREEZE, --freeze FREEZE
                        Number of RNN layers to freeze (default: None)
```
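
For example, fine-tuning a prior on a small set of active molecules while freezing the first two RNN layers (the file paths and suffix are illustrative):

```
python fine_tune.py \
  -p prior/Prior_ChEMBL.ckpt \
  -i data/actives.smi \
  -o tuned/ \
  -s actives \
  --model RNN \
  --n_epochs 10 \
  -f 2
```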

## Reinforcement learning

Finally, reinforcement learning can be run with the *reinforcement_learning.py* script. Note that this is written to work with [MolScore](https://github.com/MorganCThomas/MolScore) to handle the objective task, i.e., molecule scoring. However, one can also use the underlying *ReinforcementLearning* class found in the *model/RL.py* module, to which another scoring function can be provided. This class has several methods for different reinforcement learning algorithms, including:
- Reinforce
- REINVENT
- BAR
- Hill-Climb
- Augmented Hill-Climb

There are generic arguments that can be viewed by running `python reinforcement_learning.py --help`:

```
Optimize an RNN towards a reward via reinforcement learning

optional arguments:
  -h, --help            show this help message and exit

Required arguments:
  -p PRIOR, --prior PRIOR
                        Path to prior checkpoint (.ckpt) (default: None)
  -m MOLSCORE_CONFIG, --molscore_config MOLSCORE_CONFIG
                        Path to molscore config (.json) (default: None)
  --model {RNN,Transformer,GTr}
                        Choice of architecture (default: None)

Optional arguments:
  -a AGENT, --agent AGENT
                        Path to agent checkpoint (.ckpt) (default: None)
  -d DEVICE, --device DEVICE
                        (default: gpu)
  -f FREEZE, --freeze FREEZE
                        Number of RNN layers to freeze (default: None)
  --save_freq SAVE_FREQ
                        How often to save models (default: 100)
  --verbose             Whether to print loss (default: False)
  --psmiles PSMILES     Either scaffold smiles labelled with decoration points (*) or fragments for linking with connection points (*) and separated by a period '.'
                        (default: None)
  --psmiles_multi       Whether to conduct multiple updates (1 per decoration) (default: False)
  --psmiles_canonical   Whether to attach decorations one at a time, based on attachment point with lowest NLL, otherwise attachment points will be shuffled within a
                        batch (default: False)
  --psmiles_optimize    Whether to optimize the SMILES prompts during sampling (default: False)
  --psmiles_lr_decay PSMILES_LR_DECAY
                        Amount to decay the learning rate at the beginning of iterative prompting (1=no decay) (default: 1)
  --psmiles_lr_epochs PSMILES_LR_EPOCHS
                        Number of epochs before the decayed learning rate returns to normal (default: 10)

RL strategy:
  {RV,RV2,BAR,AHC,HC,HC-reg,RF,RF-reg}
                        Which reinforcement learning algorithm to use
```

And RL algorithm-specific arguments that can be viewed by running, e.g., `python reinforcement_learning.py AHC --help`:

```
Augmented Hill-Climb

optional arguments:
  -h, --help            show this help message and exit
  --n_steps N_STEPS     (default: 500)
  --batch_size BATCH_SIZE
                        (default: 64)
  -s SIGMA, --sigma SIGMA
                        Scaling coefficient of score (default: 60)
  -k [0-1], --topk [0-1]
                        Fraction of top molecules to keep (default: 0.5)
  -lr LEARNING_RATE, --learning_rate LEARNING_RATE
                        Adam learning rate (default: 0.0005)
```
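
Putting the generic and algorithm-specific arguments together, a hypothetical Augmented Hill-Climb run might look like this. The checkpoint and MolScore config paths are illustrative, and the strategy subcommand is placed after the generic arguments here; confirm the expected order with `--help`.

```
python reinforcement_learning.py \
  -p prior/Prior_ChEMBL.ckpt \
  -m molscore_config.json \
  --model RNN \
  AHC --n_steps 500 --batch_size 64 -s 60 -k 0.5
```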

            
