alphafold3-pytorch

- Name: alphafold3-pytorch
- Version: 0.1.54
- Summary: Alphafold 3 - Pytorch
- Author: Phil Wang <lucidrains@gmail.com>
- Repository: https://github.com/lucidrains/alphafold3-pytorch
- Requires Python: >=3.8
- License: MIT
- Keywords: artificial intelligence, deep learning, protein structure prediction
- Uploaded: 2024-06-17 22:48:09

<img src="./alphafold3.png" width="500px"></img>

## Alphafold 3 - Pytorch

Implementation of <a href="https://www.nature.com/articles/s41586-024-07487-w">Alphafold 3</a> in Pytorch

You can chat with other researchers about this work <a href="https://discord.gg/x6FuzQPQXY">here</a>

<a href="https://www.youtube.com/watch?v=qjFgthkKxcA">Review of the paper</a> by <a href="https://x.com/sokrypton">Sergey</a>

A fork with full Lightning + Hydra support is being maintained by <a href="https://github.com/amorehead">Alex</a> at <a href="https://github.com/amorehead/alphafold3-pytorch-lightning-hydra">this repository</a>

## Appreciation

- <a href="https://github.com/joseph-c-kim">Joseph</a> for contributing the Relative Positional Encoding and the Smooth LDDT Loss!

- <a href="https://github.com/engelberger">Felipe</a> for contributing Weighted Rigid Align, Express Coordinates In Frame, Compute Alignment Error, and Centre Random Augmentation modules!

- <a href="https://github.com/amorehead">Alex</a> for fixing various issues in the transcribed algorithms

- <a href="https://github.com/gitabtion">Heng</a> for pointing out inconsistencies with the paper and pull requesting the solutions

- <a href="https://github.com/amorehead">Alex</a> for the PDB dataset preparation script!

- <a href="https://github.com/patrick-kidger">Patrick</a> for <a href="https://docs.kidger.site/jaxtyping/">jaxtyping</a>, <a href="https://github.com/fferflo">Florian</a> for <a href="https://github.com/fferflo/einx">einx</a>, and of course, <a href="https://github.com/arogozhnikov">Alex</a> for <a href="https://einops.rocks/">einops</a>

## Install

```bash
$ pip install alphafold3-pytorch
```
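
To try the latest unreleased changes instead, pip can also install straight from the project repository (standard `pip` git support; this pulls the `main` branch):

```bash
$ pip install git+https://github.com/lucidrains/alphafold3-pytorch.git
```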

## Usage

```python
import torch
from alphafold3_pytorch import Alphafold3

alphafold3 = Alphafold3(
    dim_atom_inputs = 77,
    dim_template_feats = 44
)

# mock inputs

seq_len = 16
molecule_atom_lens = torch.randint(1, 3, (2, seq_len))  # atoms per molecule - (batch, seq_len)
atom_seq_len = molecule_atom_lens.sum(dim = -1).amax()  # largest total atom count across the batch

atom_inputs = torch.randn(2, atom_seq_len, 77)
atompair_inputs = torch.randn(2, atom_seq_len, atom_seq_len, 5)

additional_molecule_feats = torch.randn(2, seq_len, 9)
molecule_ids = torch.randint(0, 32, (2, seq_len))

template_feats = torch.randn(2, 2, seq_len, seq_len, 44)
template_mask = torch.ones((2, 2)).bool()

msa = torch.randn(2, 7, seq_len, 64)
msa_mask = torch.ones((2, 7)).bool()

# required for training, but omitted during inference

atom_pos = torch.randn(2, atom_seq_len, 3)
molecule_atom_indices = molecule_atom_lens - 1 # last atom, as an example

distance_labels = torch.randint(0, 37, (2, seq_len, seq_len))
pae_labels = torch.randint(0, 64, (2, seq_len, seq_len))
pde_labels = torch.randint(0, 64, (2, seq_len, seq_len))
plddt_labels = torch.randint(0, 50, (2, seq_len))
resolved_labels = torch.randint(0, 2, (2, seq_len))

# train

loss = alphafold3(
    num_recycling_steps = 2,
    atom_inputs = atom_inputs,
    atompair_inputs = atompair_inputs,
    molecule_ids = molecule_ids,
    molecule_atom_lens = molecule_atom_lens,
    additional_molecule_feats = additional_molecule_feats,
    msa = msa,
    msa_mask = msa_mask,
    templates = template_feats,
    template_mask = template_mask,
    atom_pos = atom_pos,
    molecule_atom_indices = molecule_atom_indices,
    distance_labels = distance_labels,
    pae_labels = pae_labels,
    pde_labels = pde_labels,
    plddt_labels = plddt_labels,
    resolved_labels = resolved_labels
)

loss.backward()

# after much training ...

sampled_atom_pos = alphafold3(
    num_recycling_steps = 4,
    num_sample_steps = 16,
    atom_inputs = atom_inputs,
    atompair_inputs = atompair_inputs,
    molecule_ids = molecule_ids,
    molecule_atom_lens = molecule_atom_lens,
    additional_molecule_feats = additional_molecule_feats,
    msa = msa,
    msa_mask = msa_mask,
    templates = template_feats,
    template_mask = template_mask
)

sampled_atom_pos.shape # (2, <atom_seqlen>, 3)
```
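
Between the training and sampling calls above, you will typically want to persist the learned weights. `Alphafold3` is a regular `torch.nn.Module`, so the standard PyTorch checkpointing pattern applies; a minimal sketch (the filename is arbitrary):

```python
import torch

# save the trained weights
torch.save(alphafold3.state_dict(), 'alphafold3.ckpt.pt')

# restore them into a model constructed with the same hyperparameters
alphafold3.load_state_dict(torch.load('alphafold3.ckpt.pt'))
alphafold3.eval()
```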

## Data preparation

To acquire the AlphaFold 3 PDB dataset, first download all complexes in the Protein Data Bank (PDB), and then preprocess them with the script referenced below. The PDB can be downloaded from the RCSB: https://www.wwpdb.org/ftp/pdb-ftp-sites#rcsbpdb. The script below assumes you have downloaded the PDB in the **mmCIF file format** (e.g., placing it at `data/mmCIF/` by default). On the RCSB website, navigate down to "Download Protocols", and follow the download instructions depending on your location.

> WARNING: Downloading the PDB can take up to 1 TB of disk space.
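
For reference, one of the options under "Download Protocols" is an rsync mirror. A sketch that pulls the full mmCIF archive into the default `data/mmCIF/` location (the server, module path, and port below are taken from the RCSB rsync instructions at the time of writing; verify them against the current protocol page):

```bash
rsync -rlpt -v -z --delete --port=33444 \
  rsync.rcsb.org::ftp_data/structures/divided/mmCIF/ ./data/mmCIF/
```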

After downloading, you should have a directory laid out like https://files.rcsb.org/pub/pdb/data/structures/divided/mmCIF/:
```bash
00/
01/
02/
..
zz/
```

In this directory, unzip all the files:
```bash
find . -type f -name "*.gz" -exec gzip -d {} \;
```
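
Decompressing this many files takes a while. A quick sanity check afterwards is to confirm no archives remain and the mmCIF files are in place (plain `find`, nothing package-specific):

```bash
find . -type f -name "*.gz" | wc -l   # should print 0
find . -type f -name "*.cif" | wc -l  # roughly the number of PDB entries
```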

Next, from the project's root directory, download the latest version of the PDB's Chemical Component Dictionary (CCD) and its structural models, then extract both archives:
```bash
wget -P data/CCD/ https://files.wwpdb.org/pub/pdb/data/monomers/components.cif.gz
wget -P data/CCD/ https://files.wwpdb.org/pub/pdb/data/component-models/complete/chem_comp_model.cif.gz
find data/CCD/ -type f -name "*.gz" -exec gzip -d {} \;
```

Then run the following, replacing `<pdb_dir>`, `<ccd_dir>`, and `<out_dir>` with the locations of your local copies of the PDB and CCD and your desired dataset output directory (e.g., `data/PDB_set/` by default).
```bash
python alphafold3_pytorch/pdb_dataset_curation.py --mmcif_dir <pdb_dir> --ccd_dir <ccd_dir> --out_dir <out_dir>
```

See the script for more options. Each mmCIF that successfully passes all processing steps will be written to a subdirectory of `<out_dir>` named after the second and third characters of its PDB ID (e.g., an entry with ID `15c8` lands in `<out_dir>/5c/`).
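
Once curation finishes, a quick spot check is to count how many mmCIFs made it through (`<out_dir>` as above):

```bash
ls <out_dir> | head                                     # 2-character shards, e.g. 5c/
find <out_dir> -mindepth 2 -maxdepth 2 -type f | wc -l  # curated mmCIFs
```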

## Contributing

At the project root, run

```bash
$ sh ./contribute.sh
```

Then, add your module to `alphafold3_pytorch/alphafold3.py`, add your tests to `tests/test_af3.py`, and submit a pull request. You can run the tests locally with

```bash
$ pytest tests/
```
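
While iterating on a pull request, it is usually faster to target just the file or test you touched (standard pytest selection; the keyword below is a placeholder):

```bash
$ pytest tests/test_af3.py
$ pytest tests/test_af3.py -k "your_test_name"
```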

## Docker Image
The included `Dockerfile` contains the required dependencies to run the package and to train and run inference with PyTorch on GPUs.

The default base image is `pytorch/pytorch:2.3.0-cuda12.1-cudnn8-runtime`, and the build installs the latest version of this package from the `main` branch on GitHub.

```bash
## Build Docker Container
docker build -t af3 .
```

Alternatively, use build arguments to rebuild the image with different software versions:
- `PYTORCH_TAG`: Changes the base image and thus builds with different PyTorch, CUDA, and/or cuDNN versions.
- `GIT_TAG`: Changes which tag of this repo is cloned and installed.

For example:
```bash
## Use build argument to change versions
docker build --build-arg "PYTORCH_TAG=2.2.1-cuda12.1-cudnn8-devel" --build-arg "GIT_TAG=0.1.15" -t af3 .
```

Then, run the container with GPUs and mount a local volume (for training) using the following command:

```bash
## Run Container
docker run -v .:/data --gpus all -it af3
```
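
Before kicking off a long training run, it is worth confirming that the container actually sees the GPUs. A one-liner sanity check (plain PyTorch; assumes the image's entrypoint accepts a command, as the base PyTorch images do):

```bash
docker run --gpus all -it af3 \
  python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"
```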

## Citations

```bibtex
@article{Abramson2024-fj,
  title    = "Accurate structure prediction of biomolecular interactions with
              {AlphaFold} 3",
  author   = "Abramson, Josh and Adler, Jonas and Dunger, Jack and Evans,
              Richard and Green, Tim and Pritzel, Alexander and Ronneberger,
              Olaf and Willmore, Lindsay and Ballard, Andrew J and Bambrick,
              Joshua and Bodenstein, Sebastian W and Evans, David A and Hung,
              Chia-Chun and O'Neill, Michael and Reiman, David and
              Tunyasuvunakool, Kathryn and Wu, Zachary and {\v Z}emgulyt{\.e},
              Akvil{\.e} and Arvaniti, Eirini and Beattie, Charles and
              Bertolli, Ottavia and Bridgland, Alex and Cherepanov, Alexey and
              Congreve, Miles and Cowen-Rivers, Alexander I and Cowie, Andrew
              and Figurnov, Michael and Fuchs, Fabian B and Gladman, Hannah and
              Jain, Rishub and Khan, Yousuf A and Low, Caroline M R and Perlin,
              Kuba and Potapenko, Anna and Savy, Pascal and Singh, Sukhdeep and
              Stecula, Adrian and Thillaisundaram, Ashok and Tong, Catherine
              and Yakneen, Sergei and Zhong, Ellen D and Zielinski, Michal and
              {\v Z}{\'\i}dek, Augustin and Bapst, Victor and Kohli, Pushmeet
              and Jaderberg, Max and Hassabis, Demis and Jumper, John M",
  journal  = "Nature",
  month    = "May",
  year     =  2024
}
```

```bibtex
@inproceedings{Darcet2023VisionTN,
    title   = {Vision Transformers Need Registers},
    author  = {Timoth{\'e}e Darcet and Maxime Oquab and Julien Mairal and Piotr Bojanowski},
    year    = {2023},
    url     = {https://api.semanticscholar.org/CorpusID:263134283}
}
```

```bibtex
@article{Arora2024SimpleLA,
    title   = {Simple linear attention language models balance the recall-throughput tradeoff},
    author  = {Simran Arora and Sabri Eyuboglu and Michael Zhang and Aman Timalsina and Silas Alberti and Dylan Zinsley and James Zou and Atri Rudra and Christopher R{\'e}},
    journal = {ArXiv},
    year    = {2024},
    volume  = {abs/2402.18668},
    url     = {https://api.semanticscholar.org/CorpusID:268063190}
}
```

```bibtex
@article{Puny2021FrameAF,
    title   = {Frame Averaging for Invariant and Equivariant Network Design},
    author  = {Omri Puny and Matan Atzmon and Heli Ben-Hamu and Edward James Smith and Ishan Misra and Aditya Grover and Yaron Lipman},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2110.03336},
    url     = {https://api.semanticscholar.org/CorpusID:238419638}
}
```

```bibtex
@article{Duval2023FAENetFA,
    title   = {FAENet: Frame Averaging Equivariant GNN for Materials Modeling},
    author  = {Alexandre Duval and Victor Schmidt and Alex Hernandez Garcia and Santiago Miret and Fragkiskos D. Malliaros and Yoshua Bengio and David Rolnick},
    journal = {ArXiv},
    year    = {2023},
    volume  = {abs/2305.05577},
    url     = {https://api.semanticscholar.org/CorpusID:258564608}
}
```

```bibtex
@article{Wang2022DeepNetST,
    title   = {DeepNet: Scaling Transformers to 1,000 Layers},
    author  = {Hongyu Wang and Shuming Ma and Li Dong and Shaohan Huang and Dongdong Zhang and Furu Wei},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2203.00555},
    url     = {https://api.semanticscholar.org/CorpusID:247187905}
}
```

```bibtex
@inproceedings{Ainslie2023CoLT5FL,
    title   = {CoLT5: Faster Long-Range Transformers with Conditional Computation},
    author  = {Joshua Ainslie and Tao Lei and Michiel de Jong and Santiago Onta{\~n}{\'o}n and Siddhartha Brahma and Yury Zemlyanskiy and David Uthus and Mandy Guo and James Lee-Thorp and Yi Tay and Yun-Hsuan Sung and Sumit Sanghai},
    year    = {2023}
}
```

            
