# PEAPOD

|        |        |
|--------|--------|
| Package | [![Latest PyPI Version](https://img.shields.io/pypi/v/peapod.svg)](https://pypi.org/project/peapod/) [![Supported Python Versions](https://img.shields.io/pypi/pyversions/peapod.svg)](https://pypi.org/project/peapod/)  |
| Meta   | [![Code of Conduct](https://img.shields.io/badge/Contributor%20Covenant-v2.0%20adopted-ff69b4.svg)](CODE_OF_CONDUCT.md) |


## Protein Embedding Aligner Plus Output Display
A modular Python package for pairwise alignment of per-residue protein embeddings generated by Protein Language Models.

The ethos of this tool is accessibility and modularity. Protein embedding alignment is a very new idea, with new approaches to sequence alignment subproblems popping up every few months. PEAPOD is designed to test these ideas by allowing users to modularly incorporate alternative implementations of any stage of sequence alignment.

Sequence alignment often masquerades as a singular choice, but is in fact several in a trenchcoat. Some include:
- How do you represent each character in an alignment?
- How do you score the differences between characters?
- How do you score gaps? Do you consider them at all?
- Are the sequences you're comparing globally alignable? Locally?
- Are there positions you're very certain are aligned?
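
Each of these questions maps to a stage you can swap out independently. Here is a minimal sketch of that idea, using only module and function names that appear in the tutorials below (treat the exact pipeline as illustrative, not prescriptive):

``` python
# Illustrative only: each stage below is one of the choices listed above,
# and each can be swapped without touching the others
from peapod import similarity as sim   # choice: how residue representations are compared
from peapod import alignment as aln    # choice: global vs. local alignment

def align_pair(profile0, profile1, gapfunc):
    S = sim.minkowski(profile0, profile1)      # embedding-based similarity (one of several)
    S = sim.shift(sim.enhance_signal(S), -1)   # optional signal post-processing
    return aln.global_aln(S, gapfunc)          # swap in local_aln, anchored variants, etc.
```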

## Get started
You can install this package into your preferred Python environment using pip:

```bash
$ python -m pip install --upgrade pip
$ pip install peapod
$ # OPTIONAL: download benchmarking data
$ git clone https://github.com/CalvinRusley/peapod_benchmarks.git
$ cd peapod_benchmarks/homstrad_circa20250812/
$ tar xvf fasta_files.tar.gz
```
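
To check that the install worked, importing the package's main modules (the same ones the tutorials below use) should succeed without errors:

``` python
# smoke test: all of these modules are used in the tutorials below
from peapod import plms, utilities, methods, similarity, alignment, visualize
```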

## Troubleshooting
If you plan to use GPUs, CUDA and PyTorch can be very finicky and sometimes don't play well with each other or with certain Python versions. If you run into issues, remake your virtual environment after consulting the [PyTorch release compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix) and install the appropriate PyTorch before PEAPOD. If you still have issues, please post them to [PEAPOD's issues page](https://github.com/CalvinRusley/peapod/issues).

In our testing, the combination of Python v3.9.16, torch v2.6.0, and an NVIDIA Tesla P100 GPU worked properly.
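
A quick way to check whether your PyTorch build and CUDA setup agree with each other (standard PyTorch calls, nothing PEAPOD-specific):

``` python
import torch

print(torch.__version__)          # e.g. 2.6.0 in our tested setup
print(torch.version.cuda)         # CUDA version this build targets; None for CPU-only builds
print(torch.cuda.is_available())  # False often signals a PyTorch/CUDA/driver mismatch
```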

## Notes
- Visualizations in PEAPOD were designed for JupyterLab or another IDE that can display HoloViews plots (see the snippet after these notes).
- At present, PEAPOD only works on Linux due to an issue with a dependency. This will hopefully be fixed in the coming months.
- If you are on a system that requires you to request compute hours, note that embedding benefits from GPUs, but everything downstream works fine on CPU.
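
If plots fail to render in your notebook, loading a HoloViews plotting extension first usually helps; a minimal sketch, assuming the bokeh backend:

``` python
# run once per notebook session so HoloViews objects render inline
import holoviews as hv
hv.extension('bokeh')
```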

## Basic Tutorial
``` python
import numpy as np


### load language model extractor: ESM1b, ESM2, ProtT5, ProstT5, ankh-base, or ankh-large
# EBA (Pantolini et al., 2024) found that, when using high-dimensionality embeddings (such as ESM2), scaling similarity matrices by 0.1 or 0.01 (the l parameter) can prevent precision errors
import torch
from peapod import plms
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = plms.load_extractor('ProstT5', 'residue', device=device) # this can take a while


### import fasta file as "profiles"
from peapod import utilities as utils
# from wherever you saved the git clone of peapod_benchmarks
fasta_file = 'peapod_benchmarks/homstrad_circa20250812/fasta_files/Acetyltransf.faa'
profile_dict = utils.fasta_to_profiles_dict(fasta_file,unalign=True)


### print the amino acids in a profile
profile_names = list(profile_dict.keys())
profile0 = profile_dict[profile_names[0]]
profile1 = profile_dict[profile_names[1]]
print(profile0.aaseqs)


### embed profiles using the chosen Protein Language Model
from peapod import methods
methods.batch_embed(profile_dict,model,padding=True)


### compute similarity matrix
from peapod import similarity as sim
S = sim.minkowski(profile0,profile1)


### OPTIONAL, but recommended: signal enhancement and shifting
# shown to be useful by Pantolini et al., 2024 when aligning embeddings using distances
S_enhanced = sim.enhance_signal(S)
S_enhanced_shifted = sim.shift(S_enhanced,-1)


### define a gap function
# can also select from the "gaps" module
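# assuming the argument is the current gap length, this defines an affine penalty:
# a cost of 14 to open a gap plus 1 for each position the gap extends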
import numba as nb
@nb.njit
def affinegap(gap):
    return 14+(1*gap)


### compute alignment
from peapod import alignment as aln
aln_global, global_aln_scored_matrix = aln.global_aln(S_enhanced_shifted, affinegap)


### calculate POZITIV score as per Booth et al. 2004 (a clever alternative to Monte Carlo z-scoring of alignment scores)
from peapod import pozitiv as poz
mu, sigma = poz.pozitiv(S_enhanced_shifted,aln_global)
poz_score = (aln_global.score-mu)/sigma
print('Raw score: ', aln_global.score)
print('POZ score: ', poz_score)

### visualize scored matrix and alignment
from peapod import visualize as viz
viz.summarize(global_aln_scored_matrix, [aln_global])
```

## Advanced Tutorial
PEAPOD provides many methods that can be easily mixed and matched. The following tutorial includes some of these alternatives.

``` python
import numpy as np


### load language model extractor: ESM1b, ESM2, ProtT5, ProstT5, ankh-base, or ankh-large
# EBA (Pantolini et al., 2024) found that, when using high-dimensionality embeddings (such as ESM2), scaling similarity matrices by 0.1 or 0.01 (the l parameter) can prevent precision errors
import torch
from peapod import plms
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = plms.load_extractor('ProstT5', 'residue', device=device) # this can take a while


### import fasta file as "profiles"
from peapod import utilities as utils
# from wherever you saved the git clone of peapod_benchmarks
fasta_file = 'peapod_benchmarks/homstrad_circa20250812/fasta_files/Acetyltransf.faa'
profile_dict = utils.fasta_to_profiles_dict(fasta_file,unalign=True)


### print the amino acids in a profile
profile_names = list(profile_dict.keys())
profile0 = profile_dict[profile_names[0]]
profile1 = profile_dict[profile_names[1]]
print(profile0.aaseqs)


### embed profiles using the chosen Protein Language Model
from peapod import methods
methods.batch_embed(profile_dict,model,padding=True)


### OPTIONAL: batch correct your embeddings
# McWhite et al., 2022 showed that embeddings can carry sequence-level batch effects, and that correcting them generally increases alignment accuracy
from peapod import batchcorrection as bc
batch_corrected = bc.batch_correct([profile0,profile1]) # creates a list of batch-corrected profiles
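# assuming the returned list preserves input order, the corrected profiles could
# replace the originals downstream, e.g.: profile0, profile1 = batch_corrected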


### compute similarity matrix
from peapod import similarity as sim
# using the Minkowski p-norm (default p=2; otherwise provide pnumerator and pdenominator)
S_minkowski = sim.minkowski(profile0,profile1)
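# e.g., assuming p = pnumerator/pdenominator, a hypothetical p=1 (Manhattan) version would be:
# S_manhattan = sim.minkowski(profile0, profile1, pnumerator=1, pdenominator=1)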
# using cosine similarity
S_cos = sim.cosine(profile0,profile1)
# using a substitution matrix
blosum62 = sim.load_substitution_matrix('blosum62')
S_sub = sim.substitution(profile0,profile1,blosum62)


### OPTIONAL, but recommended: signal enhancement and shifting
# shown to be useful by Pantolini et al., 2024 when aligning embeddings using distances
S_enhanced = sim.enhance_signal(S_minkowski)
S_enhanced_shifted = sim.shift(S_enhanced,-1)


### define a gap function
# can also select from the "gaps" module
import numba as nb
@nb.njit
def affinegap(gap):
    return 14+(1*gap)


### OPTIONAL: compute anchors
# using maximum non-crossing matching of pairwise best hits
from peapod import mncm
mncm_anchors = mncm.mncm(S_enhanced_shifted)
wmncm_anchors = mncm.wmncm(S_enhanced_shifted) # as above, but using weighted maximum non-crossing matching
# using a fast Fourier transform (MAFFT-esque)
from peapod import fft
fft_anchors = fft.get_anchors(S_enhanced_shifted,profile0,profile1,affinegap)


### compute alignment
from peapod import alignment as aln
aln_global, global_aln_scored_matrix = aln.global_aln(S_enhanced_shifted, affinegap)
aln_local, local_aln_scored_matrix = aln.local_aln(S_enhanced_shifted, affinegap)
aln_local_global = aln.local_anchor_global_aln(S_enhanced_shifted, affinegap)
aln_anchored_global = aln.anchored_global_aln(S_enhanced_shifted, fft_anchors, affinegap)
aln_anchored_local = aln.anchored_local_aln(S_enhanced_shifted, fft_anchors, affinegap)
aln_pairwise_cluster = aln.pairwise_clustering_aln(S_enhanced_shifted) # inspired by vcMSA (McWhite et al., 2022)


### calculate POZITIV score as per Booth et al. 2004 (a clever alternative to Monte Carlo z-scoring of alignment scores)
from peapod import pozitiv as poz
mu, sigma = poz.pozitiv(S_enhanced_shifted,aln_global)
poz_score = (aln_global.score-mu)/sigma
print('Raw score: ', aln_global.score)
print('POZ score: ', poz_score)


### visualize scored matrix and alignment
from peapod import visualize as viz
viz.summarize(global_aln_scored_matrix, [aln_global])
```


## Benchmarking on the HOMSTRAD database
One way to benchmark pairwise aligners is by comparing the alignments they produce to reference pairwise alignments from the HOMSTRAD database (other benchmarks coming soon). These are distributed alongside PEAPOD (see above for instructions) as a nested dictionary of positions objects for ease of use. To benchmark a method you've developed on the HOMSTRAD database:

``` python
import torch
from peapod import plms
from peapod import utilities as utils
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = plms.load_extractor('ProstT5', 'residue', device=device)

from peapod import methods
from peapod import benchmark as bm
homstrad_pair_dir = 'peapod_benchmarks/homstrad_circa20250812/fasta_files'
homstrad_dict = bm.import_homstrad_pairs(homstrad_pair_dir)
for key in homstrad_dict.keys():
    methods.batch_embed(homstrad_dict[key],model,padding=True)

# from wherever you saved the git clone of peapod_benchmarks
homstrad_positions = utils.unpickler('peapod_benchmarks/homstrad_circa20250812/positions_aligned_by_other_tools.pkl')

from peapod import similarity as sim
import numba as nb
from peapod import alignment as aln


# This function must take only two profiles and (optionally) a gap function as inputs and return a single positions object.
# That's the price of a neat, general-purpose benchmarking function (feel free to modify the code yourself to allow more arguments).
def my_first_method(profile0,profile1):
    S = sim.shift(sim.enhance_signal(sim.minkowski(profile0,profile1)),-1)
    @nb.njit
    def affinegap(gap):
        return 14+(1*gap)
    aln_global, global_aln_scored_matrix = aln.global_aln(S, affinegap)
    return aln_global

homstrad_positions['My First Method'] = bm.run_method_on_homstrad(homstrad_dict, my_first_method)

tool_performance_against_homstrad = bm.homstrad_benchmark_tools_against_ref(homstrad_positions,'HOMSTRAD')

# visualize the results
from peapod import visualize as viz
viz.plot_homstrad_performance(tool_performance_against_homstrad,'tool')
```

## Acknowledgments
This project was made possible by the support of the Caltech Center for Environmental Microbial Interactions (CEMI) and the Caltech Center for Evolutionary Science (CES).
PEAPOD was primarily inspired by [EBA](https://git.scicore.unibas.ch/schwede/EBA) ([Pantolini et al., 2024](https://doi.org/10.1093/bioinformatics/btad786)), and borrows several ideas and approaches from it.
Many thanks to Kazutaka Katoh for fielding an endless stream of questions about FFT implementation in MAFFT.


## Copyright

- Copyright © 2025 California Institute of Technology (Caltech).
- Free software distributed under the [MIT License](./LICENSE).

            
