dillwave

- Name: dillwave
- Version: 1.1.0
- Summary: dillwave
- Home page: https://dill.moe
- Author: Cross Nastasi
- License: Apache 2.0
- Keywords: dillwave, machine learning, neural vocoder, tts, speech
- Requires Python: not specified
- Upload time: 2024-05-29 19:44:19
- Requirements: none recorded
# DillWave

DillWave is a fast, high-quality neural vocoder and waveform synthesizer. It starts with Gaussian noise and converts it into speech via iterative refinement. The speech can be controlled by providing a conditioning signal (e.g. log-scaled Mel spectrogram). The model and architecture details are described in [DiffWave: A Versatile Diffusion Model for Audio Synthesis](https://arxiv.org/pdf/2009.09761.pdf).
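The iterative-refinement idea can be sketched with a toy, single-number example (illustrative only, not the dillwave API): the forward diffusion process blends a clean value with Gaussian noise according to a schedule value, and a reverse step with a perfect noise estimate recovers the clean value. The variable names and schedule value below are assumptions for illustration.

```python
import math
import random

random.seed(0)
x0 = 0.7                      # "clean" sample (one audio value)
eps = random.gauss(0.0, 1.0)  # Gaussian noise
abar = 0.3                    # cumulative noise-schedule value (illustrative)

# Forward process: mix signal and noise.
x_t = math.sqrt(abar) * x0 + math.sqrt(1 - abar) * eps

# Reverse step: with a perfect noise estimate, the clean sample is recovered.
x0_hat = (x_t - math.sqrt(1 - abar) * eps) / math.sqrt(abar)
print(round(x0_hat, 6))  # 0.7
```

In the real model, the noise estimate comes from a trained network conditioned on the mel spectrogram and is applied over many refinement steps rather than one.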

Credit to the original repo [here](https://github.com/lmnt-com/diffwave).

## Recommended Requirements

An NVIDIA GPU in the RTX 30 or 40 series range.

For training, 16+ GB of VRAM is recommended; for inference, at least 4 GB.

## Install

First, install [PyTorch](https://pytorch.org) (the GPU build is recommended). You will also need [Python](https://www.python.org); version 3.10.x is recommended for dillwave.

As a package:
```bash
pip install dillwave
```

From GitHub:
```bash
git clone https://github.com/dillfrescott/dillwave
pip install -e dillwave
```
or
```bash
pip install git+https://github.com/dillfrescott/dillwave
```
You need [Git](https://git-scm.com) installed for either of these "From GitHub" install methods to work.

### Training

```bash
python -m dillwave.preprocess /path/to/dir/containing/wavs # 48000 Hz, 1 channel; ~8-second clips recommended
python -m dillwave /path/to/model/dir /path/to/dir/containing/wavs

# in another shell to monitor training progress:
tensorboard --logdir /path/to/model/dir --bind_all
```
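Before preprocessing, it can help to verify that each clip matches the recommended format (48000 Hz, mono). A minimal stdlib check — `check_clip` is a hypothetical helper, not part of dillwave:

```python
import wave

def check_clip(path):
    """Return (sample_rate, channels, seconds) for a WAV file, so clips
    can be verified against the 48000 Hz / mono recommendation."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        channels = w.getnchannels()
        seconds = w.getnframes() / rate
    return rate, channels, seconds
```

Clips that come back with a different rate or channel count can be resampled/downmixed (e.g. with ffmpeg or sox) before running the preprocess step.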

### Inference CLI
```bash
python -m dillwave.inference /path/to/model --spectrogram_path /path/to/spectrogram -o output.wav [--fast]
```
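For scripting batch inference, the command above can be assembled programmatically. A hedged sketch — `inference_cmd` is a hypothetical convenience wrapper whose flags simply mirror the CLI invocation shown above:

```python
import subprocess

def inference_cmd(model_dir, spectrogram_path, out_wav="output.wav", fast=False):
    """Build the dillwave inference invocation as an argument list,
    suitable for subprocess.run."""
    cmd = ["python", "-m", "dillwave.inference", model_dir,
           "--spectrogram_path", spectrogram_path, "-o", out_wav]
    if fast:
        cmd.append("--fast")  # optional faster sampling, per the CLI above
    return cmd

# Example: subprocess.run(inference_cmd("/path/to/model", "/path/to/spec.npy"))
```

Building the command as a list (rather than one shell string) avoids quoting issues when paths contain spaces.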

Pretrained models will be released [here](https://github.com/dillfrescott/dillwave-model).

            

## Release files

- `dillwave-1.1.0-py3-none-any.whl` — 18,089 bytes, `bdist_wheel` for `py3`, uploaded 2024-05-29T19:44:19Z, not yanked
  - sha256: `2964e9209705f183d6931bf2a43d678a3211f22ff76c792b05f2955535ac34d1`
  - md5: `5ce0940aff2ea325d0affcdcd1549590`
  - URL: https://files.pythonhosted.org/packages/20/f0/11adb7f8e46faf8b448ab5e51683f6819f92954d4cfe7ab3353323595e9e/dillwave-1.1.0-py3-none-any.whl
- Author email: cross@dill.moe