| Field | Value |
| --- | --- |
| Name | vame-py |
| Version | 0.8.0 |
| Summary | Variational Animal Motion Embedding. |
| home_page | None |
| upload_time | 2025-02-12 13:59:23 |
| maintainer | None |
| docs_url | None |
| author | K. Luxem & P. Bauer |
| requires_python | >=3.11 |
| license | None |
| keywords | vame, auto-encoder |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

<p align="center">
<a href="https://codecov.io/gh/EthoML/VAME" >
<img src="https://codecov.io/gh/EthoML/VAME/graph/badge.svg?token=J1CUXB4N0E"/>
</a>
<a href="https://pypi.org/project/vame-py">
<img src="https://img.shields.io/pypi/v/vame-py?color=%231BA331&label=PyPI&logo=python&logoColor=%23F7F991%20">
</a>
</p>
🌟 Welcome to EthoML/VAME (Variational Animal Motion Encoding), an open-source machine learning tool for behavioral action segmentation and analyses.
Read the VAME [documentation](https://ethoml.github.io/VAME/). <br/> <br/>
❗ <b>[Click here to read the new peer-reviewed neuroscience article, published open access in <i>Cell Reports</i>.](https://www.cell.com/cms/10.1016/j.celrep.2024.114870/attachment/df29fd8e-66e4-474e-8fdd-8adf5b1e110a/mmc11.pdf)</b> ❗
We are a group of behavioral enthusiasts, comprising the original VAME developers Kevin Luxem and Pavol Bauer, behavioral neuroscientists Stephanie R. Miller and Jorge J. Palop, and computer scientists and statisticians Alex Pico, Reuben Thomas, and Katie Ly. Our aim is to provide scalable, unbiased, and sensitive approaches for assessing mouse behavior using computer vision and machine learning.
We are focused on expanding the analytical capabilities of VAME segmentation by providing curated scripts for VAME implementation and tools for data processing, visualization, and statistical analyses.
## Recent Improvements to VAME
* Curated scripts for VAME implementation
* Addition of compatibility with [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut), [SLEAP](https://github.com/talmolab/sleap), and [LightningPose](https://github.com/paninski-lab/lightning-pose)
* Addition of compatibility with [movement](https://github.com/neuroinformatics-unit/movement) for data ingestion
* Addition of a new cost function for community dendrogram generation
* Addition of a new egocentric alignment method
* Addition of mouse behavioral videos for practicing VAME and for benchmarking purposes
* Refined output filename structure
The full PDF of the 2024 article, including supplemental figures, is available [here](https://www.cell.com/cms/10.1016/j.celrep.2024.114870/attachment/df29fd8e-66e4-474e-8fdd-8adf5b1e110a/mmc11.pdf).
## Authors and Code Contributors
VAME was developed by Kevin Luxem and Pavol Bauer (Luxem et al., 2022). The original VAME repository was deprecated and forked, and the project is now maintained at https://github.com/EthoML/VAME.
The development of VAME is heavily inspired by [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut/). As such, the VAME project management codebase has been adapted from the DeepLabCut codebase. The DeepLabCut 2.0 toolbox is © A. & M.W. Mathis Labs [deeplabcut.org](http://deeplabcut.org), released under LGPL v3.0. The implementation of the VRAE model is partially adapted from the [Timeseries clustering](https://github.com/tejaslodaya/timeseries-clustering-vae) repository developed by [Tejas Lodaya](https://tejaslodaya.com).
## VAME in a Nutshell
VAME is a framework to cluster behavioral signals obtained from pose-estimation tools. It is a [PyTorch](https://pytorch.org/)-based deep learning framework which leverages the power of recurrent neural networks (RNN) to model sequential data. In order to learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal in every step of the input time series.
The VAME workflow consists of five steps, which are explained in detail [here](https://github.com/LINCellularNeuroscience/VAME/wiki/1.-VAME-Workflow).
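As a rough illustration of those five steps, here is a minimal sketch of a project script. It follows the step names used in the classic VAME workflow (project initialization, egocentric alignment, training-set creation, model training and evaluation, and pose segmentation), but the exact function names and signatures shown (init_new_project, egocentric_alignment, create_trainset, train_model, evaluate_model, pose_segmentation, and their arguments) are assumptions that vary between VAME releases; consult the [documentation](https://ethoml.github.io/VAME/) for your installed version.

```python
# Minimal sketch of the five-step VAME workflow (names and signatures are
# assumptions based on the classic VAME API; check the docs for your version).
import vame

# 1. Initialize a project from videos and pose-estimation files (placeholder paths).
config = vame.init_new_project(
    project="my_vame_project",
    videos=["/path/to/video-1.mp4"],
    working_directory="/path/to/working_dir",
)

# 2. Egocentrically align the pose time series.
vame.egocentric_alignment(config)

# 3. Create the training dataset from the aligned data.
vame.create_trainset(config)

# 4. Train the recurrent variational autoencoder and evaluate it.
vame.train_model(config)
vame.evaluate_model(config)

# 5. Segment the learned latent space into behavioral motifs.
vame.pose_segmentation(config)
```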
## Installation
To get started we recommend using [Anaconda](https://www.anaconda.com/distribution/) with Python 3.11 or higher. Here, you can create a [virtual environment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) to store all the dependencies necessary for VAME. You can also use the supplied VAME.yaml file: open a terminal, run git clone https://github.com/LINCellularNeuroscience/VAME.git, then cd VAME, and finally conda env create -f VAME.yaml.
* Go to the locally cloned VAME directory and run python setup.py install in order to install VAME in your active conda environment.
* Install the current stable PyTorch release using the OS-dependent instructions from the [PyTorch website](https://pytorch.org/get-started/locally/). Currently, VAME is tested on PyTorch 2.2.2. (Note: if you use the conda file we supply, PyTorch is already installed and you don't need to do this step.) A quick way to verify the install is shown below.
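As a quick sanity check (a sketch, assuming the installed distribution is named vame-py as on PyPI), you can query the installed version using only the Python standard library:

```python
# Check that the vame-py distribution is installed in the active environment.
from importlib.metadata import PackageNotFoundError, version

try:
    print("vame-py version:", version("vame-py"))  # expected to print e.g. 0.8.0
except PackageNotFoundError:
    print("vame-py is not installed in this environment")
```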
## Getting Started
First, you should make sure that you have a GPU powerful enough to train deep learning networks. In our original 2022 paper, we used a single Nvidia GTX 1080 Ti GPU to train our network. A hardware guide can be found [here](https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/). VAME can also be trained in Google Colab or on an HPC cluster. Once you have your computing setup ready, begin using VAME by following the [workflow guide](https://github.com/LINCellularNeuroscience/VAME/wiki/1.-VAME-Workflow).
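Before launching a long training run, it is worth confirming that PyTorch can actually see your GPU; the short check below uses only standard PyTorch calls:

```python
# Verify that PyTorch detects a CUDA-capable GPU before training.
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; training will fall back to the CPU and be much slower.")
```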
Once you have VAME installed, you can try VAME out on a set of mouse behavioral videos and .csv files publicly available in the [examples folder](https://github.com/LINCellularNeuroscience/VAME/tree/master/examples).
## References
New 2024 VAME publication: [Machine learning reveals prominent spontaneous behavioral changes and treatment efficacy in humanized and transgenic Alzheimer's disease models](https://www.cell.com/cell-reports/fulltext/S2211-1247(24)01221-X) <br/>
Original 2022 VAME publication: [Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion](https://www.biorxiv.org/content/10.1101/2020.05.14.095430v2) <br/>
Kingma & Welling: [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114) <br/>
Pereira & Silveira: [Learning Representations from Healthcare Time Series Data for Unsupervised Anomaly Detection](https://www.joao-pereira.pt/publications/accepted_version_BigComp19.pdf)
## License: GPLv3
See the [LICENSE file](https://github.com/LINCellularNeuroscience/VAME/blob/master/LICENSE) for the full statement.
## Code Reference (DOI)
Raw data
{
"_id": null,
"home_page": null,
"name": "vame-py",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.11",
"maintainer_email": null,
"keywords": "vame, auto-encoder",
"author": "K. Luxem & , P. Bauer",
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/55/48/3fb03172b4b5c576636e3d75c95a7ad9107136e09d3c721f759fa1668c89/vame_py-0.8.0.tar.gz",
"platform": null,
"description": "\n\n<p align=\"center\">\n<a href=\"https://codecov.io/gh/EthoML/VAME\" >\n <img src=\"https://codecov.io/gh/EthoML/VAME/graph/badge.svg?token=J1CUXB4N0E\"/>\n </a>\n <a href=\"https://pypi.org/project/vame-py\">\n <img src=\"https://img.shields.io/pypi/v/vame-py?color=%231BA331&label=PyPI&logo=python&logoColor=%23F7F991%20\">\n </a>\n</p>\n\n\ud83c\udf1f Welcome to EthoML/VAME (Variational Animal Motion Encoding), an open-source machine learning tool for behavioral action segmentation and analyses.\n\nVAME [documentation](https://ethoml.github.io/VAME/). <br/> <br/>\n\u2757 <b>[Clear here to read the NEW peer-reviewed neuroscience article published open-access in the Nature journal <i>Cell Reports</i>.</b>](https://www.cell.com/cms/10.1016/j.celrep.2024.114870/attachment/df29fd8e-66e4-474e-8fdd-8adf5b1e110a/mmc11.pdf) \u2757\n\nWe are a group of behavioral enthusiasts, comprising the original VAME developers Kevin Luxem and Pavol Bauer, behavioral neuroscientists Stephanie R. Miller and Jorge J. Palop, and computer scientists and statisticians Alex Pico, Reuben Thomas, and Katie Ly). Our aim is to provide scalable, unbiased and sensitive approaches for assessing mouse behavior using computer vision and machine learning approaches.\n\nWe are focused on the expanding the analytical capabilities of VAME segmentation by providing curated scripts for VAME implementation and tools for data processing, visualization, and statistical analyses.\n\n## Recent Improvements to VAME\n* Curated scripts for VAME implementation\n* Addition of compatability with [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut), [SLEAP](https://github.com/talmolab/sleap), and [LightningPose](https://github.com/paninski-lab/lightning-pose)\n* Addition of compatability with [movement](https://github.com/neuroinformatics-unit/movement) for data ingestion\n* Addition of a new cost function for community dendrogram generation\n* Addition of a new egocentric alignment method\n* Addition of mouse behavioral videos for practicing VAME and for benchmarking purposes\n* Refined output filename structure\n \n\n\n(full PDF with supplemental figures [here](https://www.cell.com/cms/10.1016/j.celrep.2024.114870/attachment/df29fd8e-66e4-474e-8fdd-8adf5b1e110a/mmc11.pdf)).\n\n\n## Authors and Code Contributors\nVAME was developed by Kevin Luxem and Pavol Bauer (Luxem et. al., 2022). The original VAME repository was deprecated, forked, and is now being maintained here at https://github.com/EthoML/VAME.\n\nThe development of VAME is heavily inspired by [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut/). As such, the VAME project management codebase has been adapted from the DeepLabCut codebase. The DeepLabCut 2.0 toolbox is \u00a9 A. & M.W. Mathis Labs [deeplabcut.org](http:\\\\deeplabcut.org), released under LGPL v3.0. The implementation of the VRAE model is partially adapted from the [Timeseries clustering](https://github.com/tejaslodaya/timeseries-clustering-vae) repository developed by [Tejas Lodaya](https://tejaslodaya.com).\n\n## VAME in a Nutshell\n\nVAME is a framework to cluster behavioral signals obtained from pose-estimation tools. It is a [PyTorch](https://pytorch.org/)-based deep learning framework which leverages the power of recurrent neural networks (RNN) to model sequential data. 
In order to learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal in every step of the input time series.\nThe workflow of VAME consists of 5 steps and we explain them in detail [here](https://github.com/LINCellularNeuroscience/VAME/wiki/1.-VAME-Workflow)\n\n## Installation\n\nTo get started we recommend using [Anaconda](https://www.anaconda.com/distribution/) with Python 3.11 or higher. Here, you can create a [virtual enviroment](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html) to store all the dependencies necessary for VAME. You can also use the VAME.yaml file supplied here, by simply opening the terminal, running git clone https://github.com/LINCellularNeuroscience/VAME.git, then typ cd VAME then run: conda env create -f VAME.yaml).\n\n* Go to the locally cloned VAME directory and run python setup.py install in order to install VAME in your active conda environment.\n* Install the current stable Pytorch release using the OS-dependent instructions from the [Pytorch website](https://pytorch.org/get-started/locally/). Currently, VAME is tested on PyTorch 2.2.2. (Note, if you use the conda file we supply, PyTorch is already installed and you don't need to do this step.)\n\n## Getting Started\nFirst, you should make sure that you have a GPU powerful enough to train deep learning networks. In our original 2022 paper, we were using a single Nvidia GTX 1080 Ti GPU to train our network. A hardware guide can be found [here](https://timdettmers.com/2018/12/16/deep-learning-hardware-guide/). VAME can also be trained in Google Colab or on a HPC cluster. Once you have your computing setup ready, begin using VAME by following the [workflow guide](https://github.com/LINCellularNeuroscience/VAME/wiki/1.-VAME-Workflow).\n\nOnce you have VAME installed, you can try VAME out on a set of mouse behavioral videos and .csv files publicly available in the [examples folder](https://github.com/LINCellularNeuroscience/VAME/tree/master/examples).\n\n## References\nNew 2024 VAME publication: [Machine learning reveals prominent spontaneous behavioral changes and treatment efficacy in humanized and transgenic Alzheimer's disease models](https://www.cell.com/cell-reports/fulltext/S2211-1247(24)01221-X) <br/>\nOriginal 2022 VAME publication: [Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion](https://www.biorxiv.org/content/10.1101/2020.05.14.095430v2) <br/>\nKingma & Welling: [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114) <br/>\nPereira & Silveira: [Learning Representations from Healthcare Time Series Data for Unsupervised Anomaly Detection](https://www.joao-pereira.pt/publications/accepted_version_BigComp19.pdf)\n\n## License: GPLv3\nSee the [LICENSE file](https://github.com/LINCellularNeuroscience/VAME/blob/master/LICENSE) for the full statement.\n\n## Code Reference (DOI)\n",
"bugtrack_url": null,
"license": null,
"summary": "Variational Animal Motion Embedding.",
"version": "0.8.0",
"project_urls": {
"homepage": "https://github.com/EthoML/VAME/",
"repository": "https://github.com/EthoML/VAME/"
},
"split_keywords": [
"vame",
" auto-encoder"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "8d5e62708bb965aa891e4cbb38af40a33fe82628b90cdd6172a4403f7af4d33e",
"md5": "d77e95c474c8bbf6a3a5d41de12405be",
"sha256": "9f4b005c6a6c670389ea4e75de4814eb772bbe00711a5e9a05c5d156e07dbd4a"
},
"downloads": -1,
"filename": "vame_py-0.8.0-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d77e95c474c8bbf6a3a5d41de12405be",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.11",
"size": 101449,
"upload_time": "2025-02-12T13:59:21",
"upload_time_iso_8601": "2025-02-12T13:59:21.849864Z",
"url": "https://files.pythonhosted.org/packages/8d/5e/62708bb965aa891e4cbb38af40a33fe82628b90cdd6172a4403f7af4d33e/vame_py-0.8.0-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "55483fb03172b4b5c576636e3d75c95a7ad9107136e09d3c721f759fa1668c89",
"md5": "0b5e20de806dcb33d3cff76179f2f32c",
"sha256": "6ea15e0ba3e1268eaf4a53fff833c555f7ac1905ae4ce8f85cd9177ce3dff8d4"
},
"downloads": -1,
"filename": "vame_py-0.8.0.tar.gz",
"has_sig": false,
"md5_digest": "0b5e20de806dcb33d3cff76179f2f32c",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.11",
"size": 86185,
"upload_time": "2025-02-12T13:59:23",
"upload_time_iso_8601": "2025-02-12T13:59:23.078040Z",
"url": "https://files.pythonhosted.org/packages/55/48/3fb03172b4b5c576636e3d75c95a7ad9107136e09d3c721f759fa1668c89/vame_py-0.8.0.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-02-12 13:59:23",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "EthoML",
"github_project": "VAME",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"lcname": "vame-py"
}