# Matchmaker
Matchmaker is a Python library for real-time music alignment.
Music alignment is a fundamental MIR task, and real-time music alignment is a necessary component of many interactive applications (e.g., automatic accompaniment systems, automatic page turning).
Unlike offline alignment methods, for which state-of-the-art implementations are publicly available, real-time (online) methods have no standard implementation, forcing researchers and developers to build them from scratch for their projects.
We aim to provide efficient reference implementations of score followers that can be easily integrated into existing real-time applications and projects.
The full documentation for matchmaker is available online at [readthedocs.org](https://pymatchmaker.readthedocs.io/).
## Setup
### Prerequisites
- Supported Python versions: 3.9, 3.10, 3.11, 3.12 (3.12 recommended)
- [Fluidsynth](https://www.fluidsynth.org/)
- [PortAudio](http://www.portaudio.com/)
Please ensure that the packages above are installed before proceeding.
Do not install `fluidsynth` via `pip install fluidsynth`, as that package is not compatible with `matchmaker`.
### Install from PyPI
```bash
pip install pymatchmaker
```
### Install from source using conda
Please refer to the [requirements.txt](requirements.txt) file for the minimum required versions of the packages.
Setting up the code as described here requires [conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). Follow the instructions for your OS.
To set up the environment, use the following script.
```bash
# Clone matchmaker
git clone https://github.com/pymatchmaker/matchmaker.git
cd matchmaker
# Create the conda environment
conda create -n matchmaker python=3.12
conda activate matchmaker
# Install matchmaker
pip install -e .
# Install matchmaker with dev tools
pip install -e .[dev]
```
If you encounter an `ImportError` for 'Fluidsynth' from the `pyfluidsynth` library on macOS, please refer to this [link](https://stackoverflow.com/a/75339618).
Because `matchmaker` depends on `partitura`, which uses `MuseScore_General.sf3` (a free soundfont provided by MuseScore) as its default soundfont, the soundfont is downloaded automatically into the `partitura` package. This may take a while the first time.
## Usage Examples
### Quickstart for live streaming
To get started quickly, you can use the `Matchmaker` class, which provides a simple interface for running the alignment process. You can use a `musicxml` or `midi` file as the score file. Specify `"audio"` or `"midi"` as the `input_type` argument, and the default device for that input type will be automatically set up.
```python
from matchmaker import Matchmaker
mm = Matchmaker(
    score_file="path/to/score",
    input_type="audio",
)
for current_position in mm.run():
    print(current_position)  # beat position in the score
```
The returned value is the current position in the score, expressed in beats as defined by the `partitura` library's note-array system.
Specifically, a position is computed for every input frame and interpolated within the score's `onset_beat` array.
See the [partitura tutorial](https://partitura.readthedocs.io/en/latest/Tutorial/notebook.html) for more information about the `onset_beat` concept.
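As a rough illustration of this beat mapping (not Matchmaker's internal code), the sketch below interpolates a frame index over a score's `onset_beat` values using `partitura` and NumPy. The frame indices assigned to the onsets are made-up placeholders for the example; in practice they come from the feature extraction and alignment itself.

```python
import numpy as np
import partitura as pt

# Load a score and get its note array (the "onset_beat" field holds beat onsets).
score = pt.load_score("path/to/score.musicxml")
note_array = score.note_array()
onset_beats = np.unique(note_array["onset_beat"])  # sorted, unique beat onsets

# Hypothetical frame indices for each onset in the reference features.
reference_frames = np.linspace(0, 1000, num=len(onset_beats))

current_frame = 250  # e.g., the frame reported by a score follower
current_beat = np.interp(current_frame, reference_frames, onset_beats)
print(current_beat)
```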
### Testing with a performance file
You can simulate real-time alignment by passing a specific performance file as input instead of running from a live stream.
The performance file can be either an audio file or a MIDI file, depending on the `input_type`.
```python
from matchmaker import Matchmaker
mm = Matchmaker(
    score_file="path/to/score",
    performance_file="path/to/performance.mid",
    input_type="midi",
)
for current_position in mm.run():
    print(current_position)
```
### Testing with Specific Input Device
To use a specific audio or MIDI device that is not the default device, you can pass the device name or index.
```python
from matchmaker import Matchmaker
mm = Matchmaker(
    score_file="path/to/score",
    input_type="audio",
    device_name_or_index="MacBookPro Microphone",
)
for current_position in mm.run():
    print(current_position)
```
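If you are unsure which device names or indices are available on your machine, you can list them with `pyaudio` and `mido` (both are dependencies of matchmaker). This small helper is only a convenience sketch, not part of matchmaker's API.

```python
import pyaudio
import mido

# List audio input devices (index and name) via PortAudio/pyaudio.
pa = pyaudio.PyAudio()
for i in range(pa.get_device_count()):
    info = pa.get_device_info_by_index(i)
    if info["maxInputChannels"] > 0:
        print(f"audio input {i}: {info['name']}")
pa.terminate()

# List MIDI input port names via mido.
for name in mido.get_input_names():
    print(f"midi input: {name}")
```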
### Testing with Different Methods or Features
For audio input, you can specify the alignment method as follows:
```python
from matchmaker import Matchmaker
mm = Matchmaker(
    score_file="path/to/score",
    input_type="audio",
    method="dixon",  # or "arzt" (default)
)
for current_position in mm.run():
    print(current_position)
```
For options regarding the `method`, please refer to the [Alignment Methods](#alignment-methods) section.
For options regarding the `feature_type`, please refer to the [Features](#features) section.
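Similarly, you can pass a `feature_type` when constructing the `Matchmaker` class. The snippet below uses `"mfcc"` as an example of a non-default audio feature; see the [Features](#features) section for the full list of options.

```python
from matchmaker import Matchmaker

mm = Matchmaker(
    score_file="path/to/score",
    input_type="audio",
    feature_type="mfcc",  # the default for audio input is "chroma"
)
for current_position in mm.run():
    print(current_position)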
### Custom Example
If you want to use a different alignment method or a custom one, you can do so by importing the specific class and passing the necessary parameters.
To define a custom alignment class, inherit from the base `OnlineAlignment` class and implement the `run` method. Note that the value returned by an `OnlineAlignment` subclass should be the current frame number in the reference features, not a beat position (a sketch of such a subclass follows the example below).
```python
from matchmaker.dp import OnlineTimeWarpingDixon
from matchmaker.io.audio import AudioStream
from matchmaker.features import ChromagramProcessor
feature_processor = ChromagramProcessor()
reference_features = feature_processor('path/to/score/audio.wav')
with AudioStream(processor=feature_processor) as stream:
    score_follower = OnlineTimeWarpingDixon(reference_features, stream.queue)
    for current_frame in score_follower.run():
        print(current_frame)  # frame number in the reference features
```
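As a rough sketch of what a custom follower might look like, the class below subclasses `OnlineAlignment` and yields reference-frame indices from `run`. The import path, constructor signature, and attribute names are assumptions for illustration only; consult the matchmaker source for the actual base-class interface.

```python
from matchmaker.base import OnlineAlignment  # import path assumed


class NaiveFollower(OnlineAlignment):
    """Toy follower that advances one reference frame per input frame."""

    def __init__(self, reference_features, queue):
        super().__init__(reference_features)  # constructor signature assumed
        self.queue = queue
        self.current_frame = 0

    def run(self):
        # Yield the current frame index in the reference features (not beats)
        # each time a new feature frame arrives from the input stream.
        n_frames = len(self.reference_features)
        while self.current_frame < n_frames - 1:
            _ = self.queue.get()  # block until the next input frame
            self.current_frame += 1
            yield self.current_frame
```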
## Alignment Methods
Matchmaker currently supports the following alignment methods:
- `"dixon"`: On-line time warping algorithm by S. Dixon (2005). Supports audio input only.
- `"arzt"`: On-line time warping algorithm adapted from Brazier and Widmer (2020) (based on the work by Arzt et al. (2010)). Supports audio input only.
- `"hmm"`: Hidden Markov Model-based score follower by Cancino-Chacón et al. (2023), based on the state-space score followers by Duan et al. (2011) and Jiang and Raphael (2020). Supports MIDI input only.
## Features
Matchmaker currently supports the following feature types:
- For audio:
  - `"chroma"`: Chroma features. Default feature type for audio input.
  - `"mfcc"`: Mel-frequency cepstral coefficients.
  - `"mel"`: Mel spectrogram.
  - `"logspectral"`: Log-spectral features used in Dixon (2005).
- For MIDI:
  - `"pianoroll"`: Piano-roll features. Default feature type for MIDI input.
  - `"pitch"`: Pitch features for MIDI input.
  - `"pitchclass"`: Pitch-class features for MIDI input.
## Configurations
Initialization parameters for the `Matchmaker` class:
- `score_file` (str): Path to the score file.
- `input_type` (str): Type of input data. Options: `"audio"`, `"midi"`.
- `feature_type` (str): Type of feature to use. See the [Features](#features) section for the available options per input type (defaults: `"chroma"` for audio, `"pianoroll"` for MIDI).
- `method` (str): Alignment method to use. Options: `"dixon"`, `"arzt"`, `"hmm"`.
- `sample_rate` (int): Sample rate of the input audio data.
- `frame_rate` (int): Frame rate of the input audio/MIDI data.
- `device_name_or_index` (str or int): The audio/MIDI device name or index you want to use. If `None`, the default device will be used.
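Putting several of these parameters together, a typical initialization might look like the following. The `sample_rate` and `frame_rate` values here are illustrative placeholders, not recommended settings.

```python
from matchmaker import Matchmaker

mm = Matchmaker(
    score_file="path/to/score",
    input_type="audio",
    feature_type="chroma",
    method="arzt",
    sample_rate=44100,  # illustrative value
    frame_rate=30,      # illustrative value
    device_name_or_index=None,  # use the default device
)
```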
## Citing Matchmaker
If you find Matchmaker useful, we would appreciate it if you could cite us!
```
@inproceedings{matchmaker_lbd,
  title={{Matchmaker: A Python library for Real-time Music Alignment}},
  author={Park, Jiyun and Cancino-Chac\'{o}n, Carlos and Kwon, Taegyun and Nam, Juhan},
  booktitle={{Proceedings of the Late Breaking/Demo Session at the 25th International Society for Music Information Retrieval Conference}},
  address={San Francisco, USA},
  year={2024}
}
```
## Acknowledgments
This work has been supported by the Austrian Science Fund (FWF), grant agreement PAT 8820923 ("*Rach3: A Computational Approach to Study Piano Rehearsals*"). Additionally, this work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2023R1A2C3007605).
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.