Name | frechet-music-distance
Version | 1.0.0
home_page | None
Summary | A library for computing Frechet Music Distance.
upload_time | 2025-01-31 17:39:54
maintainer | jryban
docs_url | None
author | jryban
requires_python | >=3.9
license | MIT License
Copyright (c) 2024 jryban
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
keywords |
frechet
music
distance
metric
symbolic
evaluation
generative
frechet music distance
symbolic music
frechet distance
music metric
symbolic music evaluation
VCS |
bugtrack_url |
requirements |
abctoolkit
accelerate
certifi
charset-normalizer
filelock
fsspec
huggingface-hub
idna
jellyfish
Jinja2
joblib
MarkupSafe
mido
mpmath
networkx
numpy
packaging
psutil
PyYAML
RapidFuzz
regex
requests
safetensors
scipy
setuptools
sympy
tokenizers
torch
tqdm
transformers
typing_extensions
Unidecode
urllib3
scikit-learn
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# Frechet Music Distance
[License: MIT](https://opensource.org/licenses/MIT)
[arXiv:2412.07948](https://arxiv.org/abs/2412.07948)
## Table of Contents
- [Introduction](#introduction)
- [Features](#features)
- [Installation](#installation)
- [Usage](#usage)
- [Extending FMD](#extending-fmd)
- [Citation](#citation)
- [Acknowledgements](#acknowledgements)
- [License](#license)
## Introduction
A library for calculating Frechet Music Distance (FMD). This is an official implementation of the paper [_Frechet Music Distance: A Metric For Generative Symbolic Music Evaluation_](https://www.arxiv.org/abs/2412.07948).
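In the same spirit as the Fréchet Inception Distance for images, FMD compares multivariate Gaussians fitted to the embedding features of the two datasets; the closed-form Fréchet (2-Wasserstein) distance between Gaussians $\mathcal{N}(\mu_r, \Sigma_r)$ and $\mathcal{N}(\mu_t, \Sigma_t)$ is

$$\mathrm{FMD} = \|\mu_r - \mu_t\|_2^2 + \mathrm{Tr}\big(\Sigma_r + \Sigma_t - 2\,(\Sigma_r \Sigma_t)^{1/2}\big)$$

As a quick illustration of that formula only (not of the library's internals), here is a minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import sqrtm


def gaussian_frechet_distance(mu_r, sigma_r, mu_t, sigma_t):
    """Closed-form Frechet distance between N(mu_r, sigma_r) and N(mu_t, sigma_t)."""
    diff = mu_r - mu_t
    covmean = sqrtm(sigma_r @ sigma_t)
    # sqrtm may return tiny imaginary components due to numerical error
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_t - 2.0 * covmean))
```

In the library, the per-dataset means and covariances come from the configured Gaussian estimator and the embeddings from the chosen embedding model, both described below.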
## Features
- Calculating FMD and FMD-Inf scores between two datasets for evaluation
- Caching extracted features and distribution parameters to speed up subsequent computations
- Support for various symbolic music representations (**MIDI** and **ABC**)
- Support for various embedding models (**CLaMP 2**, **CLaMP 1**)
- Support for various methods of estimating embedding distribution parameters (**MLE**, **Ledoit-Wolf**, **Shrinkage**, **OAS**, **Bootstrap**)
- Computation of per-song FMD to find outliers in the dataset
## Installation
The library can be installed from [PyPI](https://pypi.org/project/frechet-music-distance/) using pip:
```bash
pip install frechet-music-distance
```
**Note**: If the installation fails, try updating `pip`:
```bash
pip install --upgrade pip
```
You can also install from source by cloning the repository and installing it locally:
```bash
git clone https://github.com/jryban/frechet-music-distance.git
cd frechet-music-distance
pip install -e .
```
The library was tested on Linux and macOS, but it should work on Windows as well.
**Note**: If you encounter a `NotOpenSSLWarning`, downgrade your `urllib3` version to `1.26.6`:
```bash
pip install urllib3==1.26.6
```
or use a Python build that supports OpenSSL, following the instructions in this [urllib3 GitHub issue](https://github.com/urllib3/urllib3/issues/3020).
## Usage
The library currently supports **MIDI** and **ABC** symbolic music representations.
**Note**: When using ABC notation, please ensure that each song is located in a separate file.
### Command Line
```bash
fmd score [-h] [--model {clamp2,clamp}] [--estimator {mle,bootstrap,oas,shrinkage,leodit_wolf}] [--inf] [--steps STEPS] [--min_n MIN_N] [--clear-cache] [reference_dataset] [test_dataset]
```
#### Positional arguments:
* `reference_dataset`: Path to the reference dataset
* `test_dataset`: Path to the test dataset
#### Options:
* `--model {clamp2,clamp}, -m {clamp2,clamp}` Embedding model name
* `--estimator {mle,bootstrap,oas,shrinkage,leodit_wolf}, -e {mle,bootstrap,oas,shrinkage,leodit_wolf}` Gaussian estimator for mean and covariance
* `--inf` Use FMD-Inf extrapolation
* `--steps STEPS, -s STEPS` Number of steps when calculating FMD-Inf
* `--min_n MIN_N, -n MIN_N` Minimum sample size when calculating FMD-Inf (must be smaller than the size of the test dataset)
* `--clear-cache` Clear the pre-computed cache before FMD calculation
#### Cleanup
Additionally, the pre-computed cache can be cleared by executing:
```bash
fmd clear
```
### Python API
#### Initialization
You can initialize the metric like so:
```python
from frechet_music_distance import FrechetMusicDistance
metric = FrechetMusicDistance(feature_extractor='<model_name>', gaussian_estimator='<estimator_name>', verbose=True)
```
Valid values for `<model_name>` are: `clamp2` (default), `clamp`
Valid values for `<estimator_name>` are: `mle` (default), `bootstrap`, `shrinkage`, `leodit_wolf`, `oas`
If you want more control over the feature extraction model and Gaussian estimator, you can instantiate them yourself and pass them to the constructor directly:
```python
from frechet_music_distance import FrechetMusicDistance
from frechet_music_distance.gaussian_estimators import LeoditWolfEstimator, MaxLikelihoodEstimator, OASEstimator, BootstrappingEstimator, ShrinkageEstimator
from frechet_music_distance.models import CLaMP2Extractor, CLaMPExtractor
extractor = CLaMP2Extractor(verbose=True)
estimator = ShrinkageEstimator(shrinkage=0.1)
fmd = FrechetMusicDistance(feature_extractor=extractor, gaussian_estimator=estimator, verbose=True)
```
#### Standard FMD score
```python
score = metric.score(
    reference_dataset="<reference_dataset_path>",
    test_dataset="<test_dataset_path>"
)
```
#### FMD-Inf score
```python
result = metric.score_inf(
    reference_dataset="<reference_dataset_path>",
    test_dataset="<test_dataset_path>",
    steps=<num_steps>,  # default=25
    min_n=<minimum_sample_size>  # default=500
)

result.score   # FMD-Inf score
result.r2      # R^2 of the FMD-Inf linear regression
result.slope   # slope of the regression
result.points  # point estimates used in the FMD-Inf regression
```
#### Individual (per-song) score
```python
result = metric.score_individual(
    reference_dataset="<reference_dataset_path>",
    test_song_path="<test_song_path>",
)
```
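The individual score makes it easy to rank generated pieces and surface outliers. Below is a minimal sketch, not part of the library, that walks a hypothetical directory of generated MIDI files and sorts them by their individual FMD; it assumes `score_individual` returns a single numeric value per song and that `metric` was created as in the initialization section.

```python
from pathlib import Path

# Hypothetical directories; replace with your own data.
reference_dir = "data/reference_midi"
generated_dir = Path("data/generated_midi")

# Score each generated song individually against the reference dataset.
per_song = []
for song in sorted(generated_dir.glob("*.mid")):
    value = metric.score_individual(
        reference_dataset=reference_dir,
        test_song_path=str(song),
    )
    per_song.append((value, song.name))

# The largest scores are the strongest outliers w.r.t. the reference distribution.
for value, name in sorted(per_song, reverse=True)[:10]:
    print(f"{value:.3f}  {name}")
```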
#### Cleanup
Additionally, the pre-computed cache can be cleared like so:
```python
from frechet_music_distance.utils import clear_cache
clear_cache()
```
## Extending FMD
### Embedding Model
You can add your own model as a feature extractor like so:
```python
from pathlib import Path
from typing import Any

import torch
from numpy.typing import NDArray

from frechet_music_distance.models import FeatureExtractor


class MyExtractor(FeatureExtractor):

    def __init__(self, verbose: bool = True) -> None:
        super().__init__(verbose)
        """<My implementation>"""

    @torch.no_grad()
    def _extract_feature(self, data: Any) -> NDArray:
        """<My implementation>"""

    def extract_features(self, dataset_path: str | Path) -> NDArray:
        """<My implementation of loading data>"""
        # `data` is whatever your loading code produces from dataset_path
        return super()._extract_features(data)

    def extract_feature(self, filepath: str | Path) -> NDArray:
        """<My implementation of loading data>"""
        # `data` is whatever your loading code produces from filepath
        return self._extract_feature(data)
```
If your model uses the same data format as CLaMP 2 or CLaMP, you can use `frechet_music_distance.dataset_loaders.ABCLoader` or `frechet_music_distance.dataset_loaders.MIDIasMTFLoader` to load the music data.
### Gaussian Estimator
You can add your own estimator like so:
```python
from numpy.typing import NDArray

# Relative imports as used inside the library's gaussian_estimators package
from .gaussian_estimator import GaussianEstimator
from .max_likelihood_estimator import MaxLikelihoodEstimator


class BootstrappingEstimator(GaussianEstimator):

    def __init__(self, num_samples: int = 1000) -> None:
        super().__init__()
        """<My implementation>"""

    def estimate_parameters(self, features: NDArray) -> tuple[NDArray, NDArray]:
        """<My implementation>"""
        return mean, cov
```
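Once defined, a custom estimator plugs into the metric exactly like the built-in ones. A minimal sketch, assuming the `BootstrappingEstimator` above lives in a hypothetical `my_estimators` module:

```python
from frechet_music_distance import FrechetMusicDistance
from frechet_music_distance.models import CLaMP2Extractor

from my_estimators import BootstrappingEstimator  # hypothetical module containing the class above

extractor = CLaMP2Extractor(verbose=True)
estimator = BootstrappingEstimator(num_samples=500)

fmd = FrechetMusicDistance(feature_extractor=extractor, gaussian_estimator=estimator, verbose=True)
score = fmd.score(
    reference_dataset="<reference_dataset_path>",
    test_dataset="<test_dataset_path>"
)
```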
## Supported Embedding Models
| Model | Name in library | Description | Creator |
| --- | --- | --- |-----------------|
| [CLaMP](https://github.com/microsoft/muzic/tree/main/clamp) | `clamp` | CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval | Microsoft Muzic |
| [CLaMP2](https://github.com/sanderwood/clamp2) | `clamp2` | CLaMP 2: Multimodal Music Information Retrieval Across 101 Languages Using Large Language Models | sanderwood |
## Citation
If you use Frechet Music Distance in your research, please cite the following paper:
```bibtex
@article{retkowski2024frechet,
  title={Frechet Music Distance: A Metric For Generative Symbolic Music Evaluation},
  author={Retkowski, Jan and St{\k{e}}pniak, Jakub and Modrzejewski, Mateusz},
  journal={arXiv preprint arXiv:2412.07948},
  year={2024}
}
```
## Acknowledgements
This library uses code from the following repositories for handling the embedding models:
* CLaMP 1: [microsoft/muzic/clamp](https://github.com/microsoft/muzic/tree/main/clamp)
* CLaMP 2: [sanderwood/clamp2](https://github.com/sanderwood/clamp2)
## License
This project is licensed under the **MIT License**. See the [LICENSE](LICENSE.txt) file for details.
---