| Field | Value |
|---|---|
| Name | maidi |
| Version | 0.15.3 |
| Summary | A python package for symbolic AI music inference |
| Author | Florian GARDIN |
| home_page | None |
| maintainer | None |
| docs_url | None |
| requires_python | None |
| license | None |
| upload_time | 2024-08-28 16:28:15 |
<div style="display:flex;">
<a href="https://codecov.io/gh/MusicLang/maidi" >
<img src="https://codecov.io/gh/MusicLang/maidi/graph/badge.svg?token=5VWDG7068F"/>
</a>
<a href="#" >
<img src="https://github.com/musiclang/maidi/actions/workflows/ci.yml/badge.svg"/>
</a>
<a href="https://snyk.io/test/github/musiclang/maidi">
<img src="https://snyk.io/test/github/musiclang/maidi/badge.svg" alt="Known Vulnerabilities" data-canonical-src="https://snyk.io/test/github/musiclang/maidi" style="max-width:100%;">
</a>
<a href="https://github.com/MusicLang/maidi/blob/main/LICENSE.md">
<img src="https://img.shields.io/github/license/MusicLang/maidi" alt="License" />
</a>
</div>
![M(AI)DI](assets/logo2.png)
M(AI)DI
=======
M(ai)di is an open source Python library that aims to highlight the capabilities and usefulness of **symbolic music GenAI**.
It interfaces with the best symbolic music AI models and APIs to **accelerate AI integration in music tech products**.
It grew out of the realization that artists need to manipulate MIDI, not only audio, in their composition workflow, yet tools are lacking in this area.

So here we are, providing a simple and efficient way to manipulate MIDI files and integrate with music AI models.
In a few lines of code you will be able to parse, analyze and generate MIDI files with the best music AI models available.
**Here is where M(ai)di shines:**
- **Midi Files Manipulation**: Load, save, edit, merge and analyze midi files with ease.
- **Music AI Models Integration**: Integrate with the best music AI models and APIs to generate music.
- **Automatic MIDI tagging**: Get the chords, tempo, time signature, and many other musical features for each bar/instrument of the midi file.
*Disclaimer: we focus on processing MIDI files and model inference calls. We don't implement audio features, model training, or tokenization.*
[Read the official documentation](https://maidi.readthedocs.io/en/latest/)
Getting Started
===============
Installation
------------
To install the package, you can use pip:
```bash
pip install maidi
```
Or to get the latest version from the repository, you can use:
```bash
pip install git+https://github.com/MusicLang/maidi.git
```
Usage
-----
A simple code snippet to load and analyze a MIDI file:
```python
from maidi import MidiScore, ScoreTagger, midi_library
from maidi.analysis import tags_providers

score = MidiScore.from_midi(midi_library.get_midi_file('drum_and_bass'))

tagger = ScoreTagger(
    [
        tags_providers.DensityTagsProvider(),
        tags_providers.MinMaxPolyphonyTagsProvider(),
        tags_providers.MinMaxRegisterTagsProvider(),
        tags_providers.SpecialNotesTagsProvider(),
    ]
)

tags = tagger.tag_score(score)
chords = score.get_chords()
print(tags)
print(chords)
```
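The tagger above composes independent feature providers: each provider extracts one kind of musical feature, and the tagger collects their outputs per bar. As a rough illustration of that pattern (a toy sketch with made-up classes, not maidi's actual implementation):

```python
# Toy illustration of the provider pattern: each provider turns a bar of
# MIDI pitches into one tag, and the tagger collects every provider's tag
# for every bar. These classes are hypothetical, not part of maidi.
class DensityProvider:
    def tag(self, bar_notes):
        return f"density_{len(bar_notes)}"

class RegisterProvider:
    def tag(self, bar_notes):
        return f"register_{min(bar_notes)}_{max(bar_notes)}"

class ToyTagger:
    def __init__(self, providers):
        self.providers = providers

    def tag_score(self, bars):
        return [[p.tag(bar) for p in self.providers] for bar in bars]

bars = [[60, 64, 67], [48, 60]]  # MIDI pitches per bar
tagger = ToyTagger([DensityProvider(), RegisterProvider()])
print(tagger.tag_score(bars))
# [['density_3', 'register_60_67'], ['density_2', 'register_48_60']]
```

The design keeps each feature extractor independent, so adding a new tag type means adding a provider rather than touching the tagger.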
Integrations
============
With MusicLang API
------------------
MusicLang is a co-pilot for music composition: a music AI model that can modify a MIDI score based on a prompt.
The API is integrated into M(AI)DI to provide a seamless experience for the user.

**A simple example: generate a 4-bar score** with the MusicLang masking model API.
Just set your `MUSICLANG_API_KEY` in the environment (or get one [here](https://www.musiclang.io)) and run the following code:
```python
import os

from maidi import MidiScore
from maidi import instrument
from maidi.integrations.api import MusicLangAPI

# Assuming MUSICLANG_API_KEY is set in the environment
MUSICLANG_API_KEY = os.getenv("MUSICLANG_API_KEY")

# Your choice of params for generation here
instruments = [
    instrument.DRUMS,
    instrument.ELECTRIC_BASS_FINGER,
]

# Create a 4 bar template with the given instruments
score = MidiScore.from_empty(
    instruments=instruments, nb_bars=4, ts=(4, 4), tempo=120
)
# Get the controls (the prompt) for this score
mask, tags, chords = score.get_empty_controls(prevent_silence=True)
mask[:, :] = 1  # Regenerate everything in the score

# Call the MusicLang API to predict the score
api = MusicLangAPI(api_key=MUSICLANG_API_KEY, verbose=True)
predicted_score = api.predict(
    score, mask, tags=tags, chords=chords, async_mode=False, polling_interval=5
)
predicted_score.write("predicted_score.mid")
```
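In these examples the mask appears to be a 2D array with one row per track and one column per bar, where a cell set to 1 asks the model to regenerate that track/bar. A plain-NumPy sketch of selective masking (the shape here is an illustrative assumption, independent of maidi):

```python
import numpy as np

# Hypothetical mask for 2 tracks x 4 bars: rows = tracks, columns = bars.
mask = np.zeros((2, 4), dtype=int)

mask[:, :] = 0   # keep everything as-is
mask[-1, :] = 1  # regenerate only the last track
mask[0, 2:] = 1  # also regenerate bars 3-4 of the first track

print(mask)
# [[0 0 1 1]
#  [1 1 1 1]]
```

Setting the whole mask to 1, as above, is just the degenerate case where every track/bar cell is regenerated.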
**Generate a new track in a score**: start from a MIDI file and add a track.
```python
import os

from maidi import MidiScore, instrument, midi_library
from maidi.integrations.api import MusicLangAPI

# Assuming MUSICLANG_API_KEY is set in the environment
MUSICLANG_API_KEY = os.getenv("MUSICLANG_API_KEY")

# Load a score from the built-in midi library
score = MidiScore.from_midi(midi_library.get_midi_file('drum_and_bass'))
# Add a clean guitar track and set the mask
score = score.add_instrument(instrument.CLEAN_GUITAR)
mask, _, _ = score.get_empty_controls(prevent_silence=True)
mask[-1, :] = 1  # Generate the last track

# Call the MusicLang API to predict the score
api = MusicLangAPI(api_key=MUSICLANG_API_KEY, verbose=True)
predicted_score = api.predict(
    score, mask, async_mode=False, polling_interval=3
)
predicted_score.write("predicted_score.mid")
```
**Generate a track with the same characteristics as an existing MIDI file**: start from a MIDI file and generate a new track with the same characteristics.
```python
import os

from maidi import MidiScore, ScoreTagger, midi_library
from maidi.analysis import tags_providers
from maidi.integrations.api import MusicLangAPI

# Assuming MUSICLANG_API_KEY is set in the environment
MUSICLANG_API_KEY = os.getenv("MUSICLANG_API_KEY")
# Load a midi file
score = MidiScore.from_midi(midi_library.get_midi_file('example1'))

# Get a score with the first track and the first 4 bars of the midi file
score = score[0, :4]

tagger = ScoreTagger(
    [
        tags_providers.DensityTagsProvider(),
        tags_providers.MinMaxPolyphonyTagsProvider(),
        tags_providers.MinMaxRegisterTagsProvider(),
        tags_providers.SpecialNotesTagsProvider(),
    ]
)
tags = tagger.tag_score(score)
chords = score.get_chords()
mask = score.get_mask()
mask[:, :] = 1  # Regenerate everything in the score

api = MusicLangAPI(api_key=MUSICLANG_API_KEY, verbose=True)
predicted_score = api.predict(
    score, mask, async_mode=False, polling_interval=3
)
predicted_score.write("predicted_score.mid")
```
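All three examples call the API with `async_mode=False` and a `polling_interval`, i.e. the client blocks and polls until the prediction is ready. The general pattern behind that (a generic sketch, not the actual maidi client code) looks like:

```python
import time

def poll_until_done(check, interval=3.0, timeout=120.0):
    """Call check() every `interval` seconds until it returns a
    non-None result, or raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("prediction did not finish in time")

# Toy check that simulates a job finishing on the third poll
state = {"polls": 0}
def fake_check():
    state["polls"] += 1
    return "predicted_score" if state["polls"] >= 3 else None

print(poll_until_done(fake_check, interval=0.01))
# predicted_score
```

A larger `polling_interval` means fewer requests to the server at the cost of noticing completion a little later.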
For more details on the API, please refer to the [MusicLang API documentation](https://api.musiclang.io/documentation).
With other tools and APIs
-------------------------
See [CONTRIBUTING.md](CONTRIBUTING.md) for more details.
Contributing
============
We welcome contributions to the project as long as they fit its main philosophy:

- Manipulate MIDI files in some way
- Integrate with music AI models (inference & symbolic only)
Please read [CONTRIBUTING.md](CONTRIBUTING.md) for more details.
Next steps
==========
- Add musiclang_predict song extension open source model
- Improve documentation and examples
- Add more integrations with other symbolic models
- Better handling of the chord progression and tags
License
=======
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE.md) file for details.