# gensvs: Generative Singing Voice Separation
This Python package supports the paper "Towards Reliable Objective Evaluation Metrics for Generative Singing Voice Separation" by Paul A. Bereuter, Benjamin Stahl, Mark D. Plumbley and Alois Sontacchi, presented at WASPAA 2025.
It enables straightforward inference with the two proposed generative models (SGMSVS and MelRoFo (S) + BigVGAN) and computation of the embedding MSE metrics that exhibited the highest correlation with human DMOS ratings.
Additionally, this package includes all dependencies required to run the [training code available on GitHub](https://github.com/pablebe/gensvs_eval).
> Note: When using this package to carry out inference or evaluation, the necessary models (e.g. singing voice separation or embedding models) are downloaded automatically.
## 🚀 Installation and Usage
### Installation via pip
You can install the package via pip using:
```bash
pip install gensvs
```
### Installation from Source
The package was tested on Debian, but it should also work with CUDA support on Microsoft Windows if you follow the steps below.
1. Clone this repository
2. Run ```pip install "."``` from the repository root (the full sequence is shown below)
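A minimal sketch of these two steps, assuming ```git``` and ```pip``` are available on your system:
```bash
# Clone the repository and install the package from source
git clone https://github.com/pablebe/gensvs.git
cd gensvs
pip install "."
```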
### Installation on Microsoft Windows
1. Install the package via pip or from Source (see above)
2. Reinstall PyTorch with CUDA >= 12.6 using the install command from ["PyTorch - Get Started"](https://pytorch.org/get-started/locally/) to get CUDA support (an example is shown below).
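As an illustration, the pip command for the CUDA 12.6 wheels currently looks as follows; please take the up-to-date command from the PyTorch page, since the index URLs change between releases:
```bash
# Reinstall PyTorch with CUDA 12.6 wheels (check "PyTorch - Get Started" for the current command)
pip install --force-reinstall torch torchaudio --index-url https://download.pytorch.org/whl/cu126
```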
### Setting up a conda environment using the provided bash script
We recommend installing this package in a separate conda environment. The recommended settings for the conda environment can be found in the accompanying [.yml file](https://github.com/pablebe/gensvs/blob/master/env_info/gensvs_env.yml). If you have a running conda installation (e.g. [Miniconda](https://www.anaconda.com/docs/getting-started/miniconda/main) or [Miniforge](https://github.com/conda-forge/miniforge)) and are working on a Linux system, you can run the included [Bash Script](https://github.com/pablebe/gensvs/blob/master/env_info/setup_gensvs_env.sh) from the root directory to create the conda environment and install the package. This bash script will automatically create a conda environment, install the ```gensvs``` package via pip, and delete the subfolders in the cache folder ```~/.cache/torch_extensions```. These subfolders can sometimes prevent the inference of the SGMSVS model.
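For reference, a minimal sketch of what the bash script does; the environment name is an assumption (see the script and the .yml file for the authoritative steps):
```bash
# Create the conda environment from the provided .yml file
conda env create -f env_info/gensvs_env.yml
conda activate gensvs  # environment name assumed; taken from gensvs_env.yml
# Install the package into the environment
pip install gensvs
# Clear cached torch extensions that can otherwise break SGMSVS inference
rm -rf ~/.cache/torch_extensions/*
```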
Further information on model inference and model evaluation is provided below.
## 🏃🏽‍♀️‍➡️ Model Inference
### Command Line Tool
You can carry out the model inference using our command line tool. An example call for the SGMSVS model is shown below:
```bash
gensvs --model sgmsvs --device cuda --mix-dir audio_examples/mixture --output-dir audio_examples/separated --output-mono
```
To restrict the inference to a single CUDA device, you can set the environment variable ```CUDA_VISIBLE_DEVICES``` before calling the command line tool. An exemplary inference call for 'MelRoFo (S) + BigVGAN', isolated on GPU 0, is shown below:
```bash
CUDA_VISIBLE_DEVICES=0 gensvs --model melrofobigvgan --device cuda --mix-dir audio_examples/mixture --output-dir audio_examples/separated --output-mono
```
> Note: When using the 'MelRoFo (S) + BigVGAN' model to separate vocals, both the signal before the fine-tuned BigVGAN (i.e. the MelRoFo (S) separation alone) and the signal after it are saved to the output directory.
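For the example call above, the output directory could then look as follows; the subfolder names are an assumption based on the model names used in the evaluation demo further below:
```
audio_examples/separated/
├── melroformer_small/    # MelRoFo (S) separation before BigVGAN
└── melroformer_bigvgan/  # separation after the fine-tuned BigVGAN
```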
For more details on the available inference parameters, please call:
```bash
gensvs --help
```
### Model Inference from Python script
You can also import the models from the package and carry out the inference on a folder of musical mixtures from a Python script.
```Python
from gensvs import MelRoFoBigVGAN, SGMSVS

MIX_PATH = './audio_examples/mixture'    # folder containing the input mixtures
SEP_PATH = './audio_examples/separated'  # folder the separated vocals are written to

sgmsvs_model = SGMSVS()
melrofo_model = MelRoFoBigVGAN()

# Separate all mixtures in MIX_PATH; loudness_level (presumably in LUFS) only takes effect if loudness_normalize=True
sgmsvs_model.run_folder(MIX_PATH, SEP_PATH, loudness_normalize=False, loudness_level=-18, output_mono=True, ch_by_ch_processing=False)
melrofo_model.run_folder(MIX_PATH, SEP_PATH, loudness_normalize=False, loudness_level=-18, output_mono=True)
```
You can find this script in the GitHub repository at [```./demo/inference_demo.py```](https://github.com/pablebe/gensvs/tree/master/demo).
## 📈 Model Evaluation with Embedding-based MSE
In this package, we have included the calculation of the proposed embedding MSEs from the paper, building on the code published with Microsoft's [Frechet Audio Distance Toolkit](https://github.com/microsoft/fadtk/tree/main). The mean squared error on either [MERT](https://huggingface.co/m-a-p/MERT-v1-95M) or [Music2Latent](https://github.com/SonyCSLParis/music2latent) embeddings can be calculated with the command line tool or from a Python script.
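Conceptually, the metric is the mean squared error between the embedding sequences of the target signal and the separated signal. A minimal NumPy sketch of this idea is shown below; it is not the package's implementation, and the frame-wise shapes and the averaging are assumptions:
```Python
import numpy as np

def embedding_mse(target_emb: np.ndarray, separated_emb: np.ndarray) -> float:
    """MSE between two embedding sequences of shape (frames, dims), assuming both
    signals were embedded with the same model and are time-aligned."""
    n = min(len(target_emb), len(separated_emb))  # truncate to the common number of frames
    diff = target_emb[:n] - separated_emb[:n]
    return float(np.mean(diff ** 2))
```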
### Command Line Tool
An example command line call to calculate the MSE on [MERT](https://huggingface.co/m-a-p/MERT-v1-95M) embeddings is shown below:
```bash
gensvs-eval --test-dir ./demo/audio_examples/separated/sgmsvs --target-dir ./demo/audio_examples/target --output-dir ./demo/results/sgmsvs --embedding MERT-v1-95M
```
For more details on the available flags, please call:
```bash
gensvs-eval --help
```
### Model Evaluation from Python script
To calculate the embedding MSE from a Python script, you can use:
```Python
import os
from pathlib import Path

from gensvs import EmbeddingMSE, get_all_models, cache_embedding_files

# The embedding calculation builds on the multiprocessing library,
# so don't forget to wrap your code in a main function.
WORKERS = 8

SEP_PATH = './demo/audio_examples/separated'
TGT_PATH = './demo/audio_examples/target'
OUT_DIR = './demo/eval_metrics_demo'

def main():
    # Select the embedding model: 'MERT-v1-95M' or 'music2latent'
    embedding = 'MERT-v1-95M'
    models = {m.name: m for m in get_all_models()}
    model = models[embedding]
    svs_model_names = ['sgmsvs', 'melroformer_bigvgan', 'melroformer_small']

    for model_name in svs_model_names:
        # 1. Calculate and store the embedding files for each dataset
        for d in [TGT_PATH, os.path.join(SEP_PATH, model_name)]:
            if Path(d).is_dir():
                cache_embedding_files(d, model, workers=WORKERS, load_model=True)

        csv_out_path = Path(os.path.join(OUT_DIR, model_name, embedding + '_MSE', 'embd_mse.csv'))
        # 2. Calculate the embedding MSE for each file in the folder
        emb_mse = EmbeddingMSE(model, audio_load_worker=WORKERS, load_model=False)
        emb_mse.embedding_mse(TGT_PATH, os.path.join(SEP_PATH, model_name), csv_out_path)

if __name__ == "__main__":
    main()
```
This script can be found in [```./demo/evaluation_demo.py```](https://github.com/pablebe/gensvs/tree/master/demo).
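The per-file MSE values written to ```embd_mse.csv``` can then be inspected, e.g. with pandas; the exact columns depend on the package's output format:
```Python
import pandas as pd

# Load the per-file embedding MSE values produced by the evaluation script
df = pd.read_csv('./demo/eval_metrics_demo/sgmsvs/MERT-v1-95M_MSE/embd_mse.csv')
print(df.head())                          # per-file values
print(df.select_dtypes('number').mean())  # average over the folder
```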
## ℹ️ Further information
- Paper: [Preprint](https://arxiv.org/pdf/2507.11427)
- Website: [Companion Page](https://pablebe.github.io/gensvs_eval_companion_page/)
- Data: [Zenodo](https://doi.org/10.5281/zenodo.15911723)
- Model Checkpoints: [Hugging Face](https://huggingface.co/collections/pablebe/gensvs-eval-model-checkpoints-687e1c967b43f867f34d6225)
- More Code: [GitHub](https://github.com/pablebe/gensvs_eval)
## Citations, References and Acknowledgements
If you use this package in your work, please do not forget to cite our paper and the work that built the foundation for this package.
Our paper can be cited with:
```bib
@misc{bereuter2025,
title={Towards Reliable Objective Evaluation Metrics for Generative Singing Voice Separation Models},
author={Paul A. Bereuter and Benjamin Stahl and Mark D. Plumbley and Alois Sontacchi},
year={2025},
eprint={2507.11427},
archivePrefix={arXiv},
primaryClass={eess.AS},
url={https://arxiv.org/abs/2507.11427},
}
```
The inference code for the SGMSVS model was built upon the code made available in:
```bib
@article{richter2023speech,
title={Speech Enhancement and Dereverberation with Diffusion-based Generative Models},
author={Richter, Julius and Welker, Simon and Lemercier, Jean-Marie and Lay, Bunlong and Gerkmann, Timo},
journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
volume={31},
pages={2351-2364},
year={2023},
doi={10.1109/TASLP.2023.3285241}
}
```
The inference code for MelRoFo (S) + BigVGAN was assembled from the code available at:
```bib
@misc{solovyev2023benchmarks,
title={Benchmarks and leaderboards for sound demixing tasks},
author={Roman Solovyev and Alexander Stempkovskiy and Tatiana Habruseva},
year={2023},
eprint={2305.07489},
archivePrefix={arXiv},
howpublished={\url{https://github.com/ZFTurbo/Music-Source-Separation-Training}},
primaryClass={cs.SD},
url={https://github.com/ZFTurbo/Music-Source-Separation-Training}
}
```
```bib
@misc{jensen2024melbandroformer,
author = {Kimberley Jensen},
title = {Mel-Band-Roformer-Vocal-Model},
year = {2024},
howpublished = {\url{https://github.com/KimberleyJensen/Mel-Band-Roformer-Vocal-Model}},
note = {GitHub repository},
url = {https://github.com/KimberleyJensen/Mel-Band-Roformer-Vocal-Model}
}
```
```bib
@inproceedings{lee2023bigvgan,
title={BigVGAN: A Universal Neural Vocoder with Large-Scale Training},
author={Sang-gil Lee and Wei Ping and Boris Ginsburg and Bryan Catanzaro and Sungroh Yoon},
booktitle={Proc. ICLR},
year={2023},
url={https://openreview.net/forum?id=iTtGCMDEzS_}
}
```
The entire evaluation code was created using Microsoft's [Frechet Audio Distance Toolkit](https://github.com/microsoft/fadtk/tree/main) as a template:
```bib
@inproceedings{fadtk,
title = {Adapting Frechet Audio Distance for Generative Music Evaluation},
author = {Azalea Gui and Hannes Gamper and Sebastian Braun and Dimitra Emmanouilidou},
booktitle = {Proc. IEEE ICASSP 2024},
year = {2024},
url = {https://arxiv.org/abs/2311.01616},
}
```
If you use the [MERT](https://huggingface.co/m-a-p/MERT-v1-95M) or [Music2Latent](https://github.com/SonyCSLParis/music2latent) MSE, please also cite the original work in which the embeddings were proposed.
For [MERT](https://huggingface.co/m-a-p/MERT-v1-95M):
```bib
@misc{li2023mert,
title={MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training},
author={Yizhi Li and Ruibin Yuan and Ge Zhang and Yinghao Ma and Xingran Chen and Hanzhi Yin and Chenghua Lin and Anton Ragni and Emmanouil Benetos and Norbert Gyenge and Roger Dannenberg and Ruibo Liu and Wenhu Chen and Gus Xia and Yemin Shi and Wenhao Huang and Yike Guo and Jie Fu},
year={2023},
eprint={2306.00107},
archivePrefix={arXiv},
primaryClass={cs.SD}
}
```
For [Music2Latent](https://github.com/SonyCSLParis/music2latent):
```bib
@inproceedings{pasini2024music2latent,
author = {Marco Pasini and Stefan Lattner and George Fazekas},
title = {{Music2Latent}: Consistency Autoencoders for Latent Audio Compression},
booktitle = {Proc. ISMIR},
year = 2024,
pages = {111-119},
venue = {San Francisco, California, USA and Online},
doi = {10.5281/zenodo.14877289},
}
```
## License
In accordance with Microsoft's [Frechet Audio Distance Toolkit](https://github.com/microsoft/fadtk/tree/main), this work is made available under an
[MIT License](https://github.com/pablebe/gensvs/blob/master/LICENSE).