splifft

Name: splifft
Version: 0.0.3
Summary: Lightweight utilities for music source separation.
Author: undef13
Requires-Python: >=3.10
License: MIT
Keywords: artificial intelligence, audio, deep learning, music, source separation
Upload time: 2025-08-14 09:36:51
# SpliFFT

[![image](https://img.shields.io/pypi/v/splifft)](https://pypi.python.org/pypi/splifft)
[![image](https://img.shields.io/pypi/l/splifft)](https://pypi.python.org/pypi/splifft)
[![image](https://img.shields.io/pypi/pyversions/splifft)](https://pypi.python.org/pypi/splifft)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![MkDocs](https://shields.io/badge/MkDocs-documentation-informational)](https://undef13.github.io/splifft/)

Lightweight utilities for music source separation.

This library is a ground-up rewrite of [ZFTurbo's MSST repo](https://github.com/ZFTurbo/Music-Source-Separation-Training), with a strong focus on robustness, simplicity and extensibility. While MSST is a fantastic collection of models and training scripts, this rewrite adopts a different architecture to address common pain points in research code.

Key principles:

- **Configuration as code**: Pydantic models are used instead of untyped dictionaries or `ConfigDict`. This provides static type safety, runtime data validation, IDE autocompletion, and a single, clear source of truth for all parameters.
- **Data-oriented and functional core**: Complex class hierarchies and inheritance are avoided. The codebase is built on plain data structures (like `dataclasses`) and pure, stateless functions.
- **Semantic typing as documentation**: We leverage Python's type system to convey intent. Types like `RawAudioTensor` vs. `NormalizedAudioTensor` make function signatures self-documenting, reducing the need for verbose comments and helping ensure correctness.
- **Extensibility without modification**: New models can be integrated from external packages without altering the core library. The dynamic model-loading system allows easy plug-and-play, in keeping with the open/closed principle.
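The semantic-typing idea can be illustrated with stdlib `NewType`. This is a hypothetical sketch, not splifft's actual definitions: the real library types torch tensors, while plain numpy arrays stand in here.

```python
from typing import NewType

import numpy as np

# Hypothetical stand-ins for splifft's RawAudioTensor / NormalizedAudioTensor.
RawAudioTensor = NewType("RawAudioTensor", np.ndarray)
NormalizedAudioTensor = NewType("NormalizedAudioTensor", np.ndarray)

def normalize(audio: RawAudioTensor) -> NormalizedAudioTensor:
    """Peak-normalize. The return type records that normalization happened."""
    peak = float(np.abs(audio).max()) or 1.0  # guard against an all-zero signal
    return NormalizedAudioTensor(audio / peak)

def separate(mixture: NormalizedAudioTensor) -> NormalizedAudioTensor:
    """A model wrapper typed to reject un-normalized input at type-check time."""
    return mixture  # stand-in for the actual model forward pass

raw = RawAudioTensor(np.array([0.5, -2.0, 1.0]))
out = separate(normalize(raw))  # OK
# separate(raw) would be flagged by mypy: RawAudioTensor is not a
# NormalizedAudioTensor, even though both are plain ndarrays at runtime.
```

At runtime `NewType` is a no-op, so this documentation costs nothing; the guarantee comes from running a type checker such as mypy.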

⚠️ This is pre-alpha software; expect significant breaking changes.

## Features and Roadmap

**Short term (high priority)**

- [x] a robust, typed JSON configuration system powered by `pydantic`
- [x] inferencing:
    - [x] normalization and denormalization
    - [x] chunk generation: vectorized with `unfold`
    - [x] chunk stitching: vectorized overlap-add with `fold`
    - [x] flexible ruleset for deriving stems: add or subtract model outputs or any intermediate output (e.g., creating an `instrumental` track by subtracting `vocals` from the `mixture`).
- [x] web-based docs: generated with `mkdocs`, with excellent cross-references.
- [x] simple CLI for inferencing on a directory of audio files
- [ ] `BS-Roformer`: ensure bit-for-bit equivalence in PyTorch and strive for maximum performance.
  - [x] initial fp16 support
  - [ ] support `coremltools` and `torch.compile`
    - [ ] handroll complex multiplication implementation
    - [ ] isolate/handroll istft in forward pass
- [ ] evals: SDR, bleedless, fullness, etc.
- [ ] datasets: MUSDB18-HQ, moises
- [ ] proper benchmarking (MFU, memory...)
- [ ] port additional SOTA models from MSST (e.g. Mel Roformer, SCNet)
  - [ ] directly support popular models (e.g. those by [@unwa](https://huggingface.co/pcunwa), [gabox](https://huggingface.co/GaboxR67), and [@becruily](https://huggingface.co/becruily))
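The chunked inference steps above (chunk generation → model → overlap-add stitching → stem rules) can be sketched in miniature. This is a conceptual numpy stand-in: the library itself uses torch's vectorized `unfold`/`fold`, whereas the stitching loop below just shows the arithmetic.

```python
import numpy as np

def make_chunks(x: np.ndarray, chunk_size: int, hop: int) -> np.ndarray:
    """Slice a 1-D signal into overlapping chunks (cf. torch.Tensor.unfold)."""
    assert (x.shape[-1] - chunk_size) % hop == 0, "signal must tile exactly"
    return np.lib.stride_tricks.sliding_window_view(x, chunk_size)[::hop]

def overlap_add(chunks: np.ndarray, hop: int, length: int) -> np.ndarray:
    """Stitch chunks back together, averaging overlaps (cf. F.fold)."""
    out = np.zeros(length)
    norm = np.zeros(length)
    for i, c in enumerate(chunks):
        out[i * hop : i * hop + c.shape[-1]] += c
        norm[i * hop : i * hop + c.shape[-1]] += 1.0
    return out / norm

mixture = np.sin(np.linspace(0, 4 * np.pi, 24))
chunks = make_chunks(mixture, chunk_size=8, hop=4)  # shape (5, 8)
separated = chunks                                  # identity "model" stand-in
vocals = overlap_add(separated, hop=4, length=mixture.size)
# a derived-stem rule, e.g. instrumental = mixture - vocals
instrumental = mixture - vocals
```

With an identity model, stitching reproduces the mixture exactly and the derived `instrumental` is silence, which is a handy sanity check for the chunking math.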

**Long term (low priority)**

- [ ] model registry with simple file-based cache
- [ ] data augmentation
- [ ] implement a complete, configurable training loop
- [ ] [`max` kernels](#mojo)
- [ ] simple web-based GUI with FastAPI and Svelte.

**Contributing**: PRs are very welcome!

## Installation & Usage

- [I just want to run it](#cli)
- [I want to add it as a library to my Python project](#library)
- [I want to hack around](#development)

Documentation on the config (amongst other details) can be found [here](https://undef13.github.io/splifft/config/).
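For orientation, the shape of such a typed config can be sketched with stdlib dataclasses. The field names below are purely illustrative, not splifft's actual schema (the real config is a pydantic model; consult the linked docs):

```python
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class ChunkingConfig:
    # hypothetical fields -- see the linked config docs for the real schema
    chunk_size: int
    hop_size: int

    def __post_init__(self) -> None:
        # a validation rule runs once, at load time
        if self.hop_size > self.chunk_size:
            raise ValueError("hop_size must not exceed chunk_size")

raw = json.loads('{"chunk_size": 352800, "hop_size": 176400}')
cfg = ChunkingConfig(**raw)  # validated, typed, autocompletes in an IDE
```

Pydantic goes well beyond this sketch (type coercion, nested models, precise error messages), which is why the library builds on it rather than on plain dataclasses.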

### CLI

There are three steps. You do not need to have Python installed.

1. Install [`uv`](https://docs.astral.sh/uv/getting-started/installation/) if you haven't already. It is an excellent Python package and project manager with pip compatibility.
    ```sh
    # Linux / macOS
    wget -qO- https://astral.sh/uv/install.sh | sh
    # Windows
    powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
    ```

2. Open a new terminal and install the latest stable PyPI release as a [tool](https://docs.astral.sh/uv/concepts/tools/). This installs a Python interpreter and all necessary packages, and adds the `splifft` executable to your `PATH`:
    ```sh
    uv tool install "splifft[config,inference,cli]"
    ```
    <details>
      <summary>I want the latest bleeding-edge version</summary>

    This directly pulls from the `main` branch, which may be unstable:
    ```sh
    uv tool install "git+https://github.com/undef13/splifft.git[config,inference,cli]"
    ```
    </details>

3. Go into a new directory and place the [model checkpoint](https://github.com/undef13/splifft/releases/download/v0.0.1/roformer-fp16.pt) and [configuration](https://raw.githubusercontent.com/undef13/splifft/refs/heads/main/data/config/bs_roformer.json) inside it. Assuming your current directory has this structure (it doesn't have to match exactly):

    <details>
      <summary>Minimal reproduction: with example audio from YouTube</summary>

    ```sh
    uv tool install yt-dlp
    yt-dlp -f bestaudio -o data/audio/input/3BFTio5296w.flac 3BFTio5296w
    wget -P data/models/ https://huggingface.co/undef13/splifft/resolve/main/roformer-fp16.pt
    wget -P data/config/ https://raw.githubusercontent.com/undef13/splifft/refs/heads/main/data/config/bs_roformer.json
    ```
    </details>

    ```
    .
    └── data
        ├── audio
        │   ├── input
        │   │   └── 3BFTio5296w.flac
        │   └── output
        ├── config
        │   └── bs_roformer.json
        └── models
            └── roformer-fp16.pt
    ```

    Run:
    ```sh
    splifft separate data/audio/input/3BFTio5296w.flac --config data/config/bs_roformer.json --checkpoint data/models/roformer-fp16.pt
    ```
    <details>
      <summary>Console output</summary>

    ```php
    [00:00:41] INFO     using device=device(type='cuda')                                                 __main__.py:111
               INFO     loading configuration from                                                       __main__.py:113
                        config_path=PosixPath('data/config/bs_roformer.json')                                           
               INFO     loading model metadata `BSRoformer` from module `splifft.models.bs_roformer`     __main__.py:126
    [00:00:42] INFO     loading weights from checkpoint_path=PosixPath('data/models/roformer-fp16.pt')   __main__.py:127
               INFO     processing audio file:                                                           __main__.py:135
                        mixture_path=PosixPath('data/audio/input/3BFTio5296w.flac')                                     
    ⠙ processing chunks... ━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━  25% 0:00:10 (bs=4 • cuda • float16)
    [00:00:56] INFO     wrote stem `bass` to data/audio/output/3BFTio5296w/bass.flac                     __main__.py:158
               INFO     wrote stem `drums` to data/audio/output/3BFTio5296w/drums.flac                   __main__.py:158
               INFO     wrote stem `other` to data/audio/output/3BFTio5296w/other.flac                   __main__.py:158
    [00:00:57] INFO     wrote stem `vocals` to data/audio/output/3BFTio5296w/vocals.flac                 __main__.py:158
               INFO     wrote stem `guitar` to data/audio/output/3BFTio5296w/guitar.flac                 __main__.py:158
               INFO     wrote stem `piano` to data/audio/output/3BFTio5296w/piano.flac                   __main__.py:158
    [00:00:58] INFO     wrote stem `instrumental` to data/audio/output/3BFTio5296w/instrumental.flac     __main__.py:158
               INFO     wrote stem `drums_and_bass` to data/audio/output/3BFTio5296w/drums_and_bass.flac __main__.py:158
    ```
    </details>

    To update the tool:

    ```sh
    uv tool upgrade splifft --force-reinstall
    ```

### Library

Add `splifft` to your project:

```sh
# latest pypi version
uv add splifft
# latest bleeding edge
uv add git+https://github.com/undef13/splifft.git
```

This installs only the minimal core dependencies used under the `src/splifft/models` directory. Higher-level components (e.g. inference, training, or the CLI) **must** be installed via optional dependencies, as specified in the [`project.optional-dependencies` section of `pyproject.toml`](https://github.com/undef13/splifft/blob/main/pyproject.toml), for example:

```sh
# enable the built-in configuration, inference and CLI
uv add "splifft[config,inference,cli]"
```

Either command installs `splifft` into your project's virtual environment.
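Equivalently, the dependency can be declared directly in a consumer project's `pyproject.toml`. The extras names come from splifft's own `pyproject.toml`; the unversioned requirement below is illustrative (pin a version or git revision for reproducibility):

```toml
[project]
dependencies = [
    "splifft[config,inference,cli]",
]
```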

### Development

If you'd like to make local changes, it is recommended to enable all optional and developer group dependencies:

```sh
git clone https://github.com/undef13/splifft.git
cd splifft
uv venv
uv sync --all-extras --all-groups
```

Note that `uv sync` installs the project itself in editable mode by default. Check your code:

```sh
# lint
uv run ruff check src tests
# format
uv run ruff format --check src tests
# build & host documentation
uv run mkdocs serve
# type check
uv run mypy src tests
```

This repo is no longer compatible with ZFTurbo's repo; the last compatible version is [`v0.0.1`](https://github.com/undef13/splifft/tree/v0.0.1). To pin a specific version with `uv`, edit your `pyproject.toml`:

```toml
[tool.uv.sources]
splifft = { git = "https://github.com/undef13/splifft.git", rev = "287235e520f3bb927b58f9f53749fe3ccc248fac" }
```

## Mojo

While the primary goal is just to have a minimalist PyTorch-based inference engine, I will be using this project as an opportunity to learn more about heterogeneous computing, particularly with the [Mojo language](https://docs.modular.com/mojo/why-mojo/). The ultimate goal is to understand to what extent its compile-time metaprogramming and explicit memory-layout control can be used.

My approach will be incremental and bottom-up: I'll develop, test and benchmark small components against their PyTorch counterparts. The PyTorch implementation will **always** remain the "source of truth" and fully functional baseline, and will never be removed.
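That check-then-benchmark workflow can be sketched with a tiny harness. This is a hypothetical example, using the hand-rolled complex multiplication from the roadmap (decomposed into real ops, the kind of rewrite needed when a backend lacks complex support) and checking it against its numpy baseline:

```python
import time

import numpy as np

def complex_mul_baseline(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return a * b  # numpy's native complex multiply: the "source of truth"

def complex_mul_handrolled(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # (x + yi)(u + vi) = (xu - yv) + (xv + yu)i, using real ops only
    re = a.real * b.real - a.imag * b.imag
    im = a.real * b.imag + a.imag * b.real
    return re + 1j * im

rng = np.random.default_rng(0)
a = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)
b = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)

# 1. correctness against the baseline first
assert np.allclose(complex_mul_handrolled(a, b), complex_mul_baseline(a, b))

# 2. only then benchmark
for fn in (complex_mul_baseline, complex_mul_handrolled):
    t0 = time.perf_counter()
    for _ in range(100):
        fn(a, b)
    print(f"{fn.__name__}: {(time.perf_counter() - t0) * 1e3:.2f} ms / 100 runs")
```

The same pattern scales up: each Mojo kernel would first have to match its PyTorch counterpart numerically before its performance numbers mean anything.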

TODO:

- [ ] evaluate `pixi` in `pyproject.toml`.
- [ ] use `max.torch.CustomOpLibrary` to provide a callable from the PyTorch side
- [ ] use [`DeviceContext`](https://github.com/modular/modular/blob/main/mojo/stdlib/stdlib/gpu/host/device_context.mojo) to interact with the GPU
- [ ] [attention](https://github.com/modular/modular/blob/main/examples/custom_ops/kernels/fused_attention.mojo)
  - [ ] use [`LayoutTensor`](https://github.com/modular/modular/blob/main/max/kernels/src/layout/layout_tensor.mojo) for QKV
- [ ] rotary embedding
- [ ] feedforward
- [ ] transformer
- [ ] `BandSplit` & `MaskEstimator`
- [ ] full graph compilation

            
