miditok

Version: 3.0.4
Summary: MIDI / symbolic music tokenizers for Deep Learning models.
Author: Nathan Fradet
Requires Python: >=3.8.0
License: MIT License, Copyright (c) 2021 Nathan Fradet
Keywords: artificial intelligence, deep learning, midi, mir, music, tokenization, transformer
Upload time: 2024-09-15 10:43:00
# MidiTok

Python package to tokenize music files, introduced at the ISMIR 2021 Late-Breaking Demo session (LBD).

![MidiTok Logo](docs/assets/miditok_logo_stroke.png?raw=true "")

[![PyPI version fury.io](https://badge.fury.io/py/miditok.svg)](https://pypi.python.org/pypi/miditok/)
[![Python 3.8](https://img.shields.io/badge/python-3.8+-blue.svg)](https://www.python.org/downloads/release/)
[![Documentation Status](https://readthedocs.org/projects/miditok/badge/?version=latest)](https://miditok.readthedocs.io/en/latest/?badge=latest)
[![GitHub CI](https://github.com/Natooz/MidiTok/actions/workflows/pytest.yml/badge.svg)](https://github.com/Natooz/MidiTok/actions/workflows/pytest.yml)
[![Codecov](https://img.shields.io/codecov/c/github/Natooz/MidiTok)](https://codecov.io/gh/Natooz/MidiTok)
[![GitHub license](https://img.shields.io/github/license/Natooz/MidiTok.svg)](https://github.com/Natooz/MidiTok/blob/main/LICENSE)
[![Downloads](https://static.pepy.tech/badge/miditok)](https://pepy.tech/project/MidiTok)
[![Code style](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)

MidiTok can tokenize MIDI and abc files, i.e. convert them into sequences of tokens ready to be fed to models such as Transformers, for any generation, transcription or MIR task.
MidiTok features most of the known [music tokenizations](https://miditok.readthedocs.io/en/latest/tokenizations.html) (e.g. [REMI](https://arxiv.org/abs/2002.00212), [Compound Word](https://arxiv.org/abs/2101.02402)...), and is built around the idea that they all share common parameters and methods. Tokenizers can be trained with [Byte Pair Encoding (BPE)](https://aclanthology.org/2023.emnlp-main.123/), [Unigram](https://aclanthology.org/P18-1007/) or [WordPiece](https://arxiv.org/abs/1609.08144), and the package also provides data augmentation methods.

MidiTok is integrated with the Hugging Face Hub 🤗! Don't hesitate to share your models with the community!

**Documentation:** [miditok.readthedocs.io](https://miditok.readthedocs.io/en/latest/index.html)

## Install

```shell
pip install miditok
```
MidiTok uses [Symusic](https://github.com/Yikai-Liao/symusic) to read and write MIDI and abc files, while BPE/Unigram tokenizer training is backed by [Hugging Face 🤗tokenizers](https://github.com/huggingface/tokenizers) for super-fast encoding.
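
As a quick check that the parsing backend works on your files, you can load and re-save a score with Symusic directly. This is a minimal sketch; the file paths are placeholders and `dump_midi` is Symusic's MIDI writer:

```python
from symusic import Score

# Load a MIDI (or abc) file with Symusic, the parsing backend used by MidiTok
score = Score("path/to/your_midi.mid")

# Write it back to disk as a standard MIDI file
score.dump_midi("path/to/roundtrip_copy.mid")
```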

## Usage example

Tokenizing and detokenizing can be done by calling the tokenizer:

```python
from miditok import REMI, TokenizerConfig
from symusic import Score

# Create a multitrack tokenizer; read the docs to explore all the parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)

# Load a MIDI file, convert it to tokens, and convert the tokens back to a MIDI
midi = Score("path/to/your_midi.mid")
tokens = tokenizer(midi)  # calling the tokenizer will automatically detect MIDI files, paths and tokens
converted_back_midi = tokenizer(tokens)  # PyTorch, TensorFlow and NumPy tensors are supported
```
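
If you want to inspect the tokens or keep the decoded file, here is a short follow-up sketch. It assumes the config above produces a single token stream, i.e. one `TokSequence` with `.tokens` and `.ids` attributes, and uses Symusic's `dump_midi` writer; the output path is a placeholder:

```python
# Human-readable tokens and the corresponding integer ids to feed a model
print(tokens.tokens[:10])
print(tokens.ids[:10])

# Save the decoded MIDI back to disk with Symusic
converted_back_midi.dump_midi("path/to/decoded_copy.mid")
```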

Here is a complete yet concise example of how you can use MidiTok to train any PyTorch model. And [here](colab-notebooks/Full_Example_HuggingFace_GPT2_Transformer.ipynb) is a simple notebook example showing how to use Hugging Face models to generate music, with MidiTok taking care of tokenizing music files.

```python
from miditok import REMI, TokenizerConfig
from miditok.pytorch_data import DatasetMIDI, DataCollator
from miditok.utils import split_files_for_training
from torch.utils.data import DataLoader
from pathlib import Path

# Create a multitrack tokenizer; read the docs to explore all the parameters
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = REMI(config)

# Train the tokenizer with Byte Pair Encoding (BPE)
files_paths = list(Path("path", "to", "midis").glob("**/*.mid"))
tokenizer.train(vocab_size=30000, files_paths=files_paths)
tokenizer.save(Path("path", "to", "save", "tokenizer.json"))
# And pushing it to the Hugging Face hub (you can download it back with .from_pretrained)
tokenizer.push_to_hub("username/model-name", private=True, token="your_hf_token")

# Split MIDIs into smaller chunks for training
dataset_chunks_dir = Path("path", "to", "midi_chunks")
split_files_for_training(
    files_paths=files_paths,
    tokenizer=tokenizer,
    save_dir=dataset_chunks_dir,
    max_seq_len=1024,
)

# Create a Dataset, a DataLoader and a collator to train a model
dataset = DatasetMIDI(
    files_paths=list(dataset_chunks_dir.glob("**/*.mid")),
    tokenizer=tokenizer,
    max_seq_len=1024,
    bos_token_id=tokenizer["BOS_None"],
    eos_token_id=tokenizer["EOS_None"],
)
collator = DataCollator(tokenizer.pad_token_id, copy_inputs_as_labels=True)
dataloader = DataLoader(dataset, batch_size=64, collate_fn=collator)

# Iterate over the dataloader to train a model
for batch in dataloader:
    print("Train your model on this batch...")
```
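
To make the last step concrete, here is a minimal training-loop sketch. It assumes the collator yields `input_ids`, `attention_mask` and `labels` tensors, and uses a small Hugging Face GPT-2 model as a stand-in for "your model"; any causal language model accepting these arguments and returning a loss would work the same way:

```python
from torch.optim import AdamW
from transformers import GPT2Config, GPT2LMHeadModel

# A small causal language model sized to the tokenizer's vocabulary
model = GPT2LMHeadModel(GPT2Config(vocab_size=len(tokenizer), n_positions=1024))
optimizer = AdamW(model.parameters(), lr=1e-4)

model.train()
for batch in dataloader:
    outputs = model(**batch)  # labels are in the batch, so outputs.loss is computed
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```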

## Tokenizations

MidiTok implements the following tokenizations (the links point to the original papers):
* [REMI](https://dl.acm.org/doi/10.1145/3394171.3413671)
* [REMI+](https://openreview.net/forum?id=NyR8OZFHw6i)
* [MIDI-Like](https://link.springer.com/article/10.1007/s00521-018-3758-9)
* [TSD](https://arxiv.org/abs/2301.11975)
* [Structured](https://arxiv.org/abs/2107.05944)
* [CPWord](https://ojs.aaai.org/index.php/AAAI/article/view/16091)
* [Octuple](https://aclanthology.org/2021.findings-acl.70)
* [MuMIDI](https://dl.acm.org/doi/10.1145/3394171.3413721)
* [MMM](https://arxiv.org/abs/2008.06048)

You can find short presentations in the [documentation](https://miditok.readthedocs.io/en/latest/tokenizations.html).
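
All of these tokenizers share the same `TokenizerConfig` and the same calling interface, so switching strategy is a one-line change. Here is a sketch reusing the config from the usage example above (TSD is just one possible choice; as noted earlier, the tokenizer also accepts file paths directly):

```python
from miditok import TSD, TokenizerConfig

# The same config drives any of the tokenizations; only the class changes
config = TokenizerConfig(num_velocities=16, use_chords=True, use_programs=True)
tokenizer = TSD(config)

tokens = tokenizer("path/to/your_midi.mid")  # a file path is accepted directly
```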

## Contributions

Contributions are gratefully welcomed; feel free to open an issue or send a PR if you want to add a tokenization or speed up the code. You can read the [contribution guide](CONTRIBUTING.md) for details.

### Todos

* Support MusicXML files;
* `no_duration_drums` option, discarding duration tokens for drum notes;
* Control Change messages;
* Speed up global/track event parsing with Rust or C++ bindings.

## Citation

If you use MidiTok for your research, a citation in your manuscript would be greatly appreciated. ❤️

[**[MidiTok paper]**](https://arxiv.org/abs/2310.17202)
[**[MidiTok original ISMIR publication]**](https://archives.ismir.net/ismir2021/latebreaking/000005.pdf)
```bibtex
@inproceedings{miditok2021,
    title={{MidiTok}: A Python package for {MIDI} file tokenization},
    author={Fradet, Nathan and Briot, Jean-Pierre and Chhel, Fabien and El Fallah Seghrouchni, Amal and Gutowski, Nicolas},
    booktitle={Extended Abstracts for the Late-Breaking Demo Session of the 22nd International Society for Music Information Retrieval Conference},
    year={2021},
    url={https://archives.ismir.net/ismir2021/latebreaking/000005.pdf},
}
```

The BibTeX citations of all tokenizations can be found [in the documentation](https://miditok.readthedocs.io/en/latest/citations.html).


## Acknowledgments

@Natooz thanks his employers, who allowed him to develop this project, in chronological order: [Aubay](https://blog.aubay.com/index.php/language/en/home/?lang=en), the [LIP6 (Sorbonne University)](https://www.lip6.fr/?LANG=en), and the [Metacreation Lab (Simon Fraser University)](https://www.metacreation.net).

## All Thanks To Our Contributors

<a href="https://github.com/Natooz/MidiTok/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=Natooz/MidiTok" />
</a>

            
