julius

Name: julius
Version: 0.2.7
Home page: https://github.com/adefossez/julius
Summary: Nice DSP sweets: resampling, FFT Convolutions. All with PyTorch, differentiable and with CUDA support.
Upload time: 2022-09-19 16:13:34
Author: Alexandre Défossez
Requires Python: >=3.6.0
License: MIT License
            # Julius, fast PyTorch based DSP for audio and 1D signals

![linter badge](https://github.com/adefossez/julius/workflows/linter/badge.svg)
![tests badge](https://github.com/adefossez/julius/workflows/tests/badge.svg)
![cov badge](https://github.com/adefossez/julius/workflows/cov%3E90%25/badge.svg)

Julius contains different Digital Signal Processing algorithms implemented
with PyTorch, so that they are differentiable and available on CUDA.
Note that all the modules implemented here can be used with TorchScript.

For now, I have implemented:

- [julius.resample](https://adefossez.github.io/julius/julius/resample.html): fast sinc resampling.
- [julius.fftconv](https://adefossez.github.io/julius/julius/fftconv.html): FFT based convolutions.
- [julius.lowpass](https://adefossez.github.io/julius/julius/lowpass.html): FIR low pass filter banks.
- [julius.filters](https://adefossez.github.io/julius/julius/filters.html): FIR high pass and band pass filters.
- [julius.bands](https://adefossez.github.io/julius/julius/bands.html): Decomposition of a waveform signal over mel-scale frequency bands.

Along with those, you might find useful utilities in:

- [julius.core](https://adefossez.github.io/julius/julius/core.html): DSP related functions.
- [julius.utils](https://adefossez.github.io/julius/julius/utils.html): Generic utilities.

<p align="center">
<img src="./logo.png" alt="Representation of the convolution filters used for the efficient resampling."
width="500px"></p>

## News

- 19/09/2022: __`julius` 0.2.7 released:__ fixed ONNX compat (thanks @iver56). I know I missed the 0.2.6 one...
- 28/07/2021: __`julius` 0.2.5 released:__ support for setting a custom output length when resampling.
- 22/06/2021: __`julius` 0.2.4 released:__ added high pass and band pass filters.
  Extra linting and type checking of the code. New `unfold` implementation, up to
  x6 faster FFT convolutions and more efficient memory usage.
- 26/01/2021: __`julius` 0.2.2 released:__ fixed the normalization of filters in lowpass and resample to avoid leaking very low frequencies.
  Switched from zero padding to replicate padding (which uses the first/last value instead of 0) to avoid discontinuities that cause strong artifacts.
- 20/01/2021: `julius` implementation of resampling is now officially <a href="https://github.com/pytorch/audio/pull/1087">part of Torchaudio.</a>

## Installation

`julius` requires Python 3.6 or newer. To install:
```bash
pip3 install -U julius
```


## Usage

See the [Julius documentation][docs] for the usage of Julius. Hereafter you will find a few examples
to get you quickly started:

```python3
import julius
import torch

signal = torch.randn(6, 4, 1024)
# Resample from a sample rate of 100 to 70. The old and new sample rate must be integers,
# and resampling will be fast if they form an irreducible fraction with small numerator
# and denominator (here 10 and 7). Any shape is supported, last dim is time.
resampled_signal = julius.resample_frac(signal, 100, 70)

# Low pass filter with a `0.1 * sample_rate` cutoff frequency.
low_freqs = julius.lowpass_filter(signal, 0.1)

# Fast convolutions with FFT, useful for large kernels
conv = julius.FFTConv1d(4, 10, 512)
convolved = conv(signal)

# Decomposition over frequency bands in the waveform domain
bands = julius.split_bands(signal, n_bands=10, sample_rate=100)
# Decomposition with n_bands frequency bands evenly spaced in mel space.
# Input shape can be `[*, T]`, output will be `[n_bands, *, T]`.
random_eq = (torch.rand(10, 1, 1, 1) * bands).sum(0)
```

## Algorithms

### Resample

This is an implementation of the [sinc resample algorithm][resample] by Julius O. Smith.
It is the same algorithm as the one used in [resampy][resampy], but to run efficiently on GPU it
is limited to fractional changes of the sample rate. It will be fast if the old and new sample rate
are small after dividing them by their GCD. For instance going from a sample rate of 2000 to 3000 (2, 3 after removing the GCD)
will be extremely fast, while going from 20001 to 30001 will not.
Julius resampling is faster than resampy even on CPU, and when running on GPU it makes resampling a completely negligible part of your pipeline
(except of course for weird cases like going from a sample rate of 20001 to 30001).
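The speed claim above comes down to how small the two sample rates are after dividing by their GCD. A quick check with Python's standard library (my own illustration, not part of `julius`) shows why one pair is cheap and the other is not:

```python
import math

def reduced(old_sr: int, new_sr: int) -> tuple:
    """Reduce a sample-rate pair to the irreducible fraction that drives the cost."""
    g = math.gcd(old_sr, new_sr)
    return old_sr // g, new_sr // g

# Cheap: only 3 output phases need a sinc kernel.
fast = reduced(2000, 3000)    # -> (2, 3)
# Expensive: 20001 and 30001 are coprime, so nothing cancels.
slow = reduced(20001, 30001)  # -> (20001, 30001)
```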


### FFTConv1d

Computing convolutions with very large kernels (>= 128) and a stride of 1 can be much faster
using FFT. This implements the same API as `torch.nn.Conv1d` and `torch.nn.functional.conv1d`
but with a FFT backend. Dilation and groups are not supported.
FFTConv will be faster on CPU even for relatively small tensors (a few dozen channels, kernel size
of 128). On CUDA, due to the higher parallelism, regular convolution can be faster in many cases,
but for kernel sizes above 128, for a large number of channels or batch size, FFTConv1d
will eventually be faster (basically when you no longer have idle cores that can hide
the true complexity of the operation).
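The speedup rests on the convolution theorem: multiplying spectra is equivalent to convolving in time. Here is a minimal NumPy sketch of that identity (an illustration only, not `julius`'s actual implementation, which also handles batching and strides):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # signal
h = rng.standard_normal(128)    # large kernel

# Direct (full) linear convolution, O(n * k).
direct = np.convolve(x, h)

# Same result via FFT: zero-pad both to n + k - 1, multiply spectra, invert.
n = len(x) + len(h) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)
```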

### LowPass

Classical Finite Impulse Response windowed sinc lowpass filter. It will use FFT convolutions automatically
if the filter size is large enough. This is the basic block from which you can build
high pass and band pass filters (see `julius.filters`).
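To see how band pass filters fall out of lowpass ones, here is a hedged NumPy sketch (my own illustration of the standard construction, not `julius`'s code): subtracting two windowed-sinc lowpass kernels of the same length yields a band pass kernel.

```python
import numpy as np

def sinc_kernel(cutoff: float, half_size: int = 64) -> np.ndarray:
    """Hann-windowed sinc lowpass FIR; `cutoff` is a fraction of the sample rate."""
    t = np.arange(-half_size, half_size + 1)
    kernel = 2 * cutoff * np.sinc(2 * cutoff * t) * np.hanning(len(t))
    return kernel / kernel.sum()  # normalize to unit gain at DC

# Band pass = lowpass at the high cutoff minus lowpass at the low cutoff.
bandpass = sinc_kernel(0.2) - sinc_kernel(0.05)
```

The difference has zero gain at DC (both kernels are normalized to 1 there) and passes only frequencies between the two cutoffs.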

### Bands

Decomposition of a signal over frequency bands in the waveform domain. This can be useful for
instance to perform parametric EQ (see [Usage](#usage) above).
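The band edges are evenly spaced on the mel scale. As a sketch of what that spacing looks like (using the common HTK mel formula as an assumption; `julius`'s exact constants may differ):

```python
import math

def hz_to_mel(f: float) -> float:
    # Common HTK-style mel formula.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m: float) -> float:
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_cutoffs(n_bands: int, sample_rate: float) -> list:
    """n_bands - 1 cutoff frequencies evenly spaced on the mel scale up to Nyquist."""
    low, high = hz_to_mel(0.0), hz_to_mel(sample_rate / 2)
    mels = [low + (high - low) * i / n_bands for i in range(1, n_bands)]
    return [mel_to_hz(m) for m in mels]

cutoffs = mel_cutoffs(n_bands=10, sample_rate=100)
```

Low bands end up narrower in Hz than high bands, mirroring how the ear resolves frequency.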

## Benchmarks

You can find speed tests (and comparisons to reference implementations) on the
[benchmark][bench] page. The CPU benchmarks are run on a 2020 MacBook Pro with a 2.4 GHz
8-core Intel i9 CPU. The GPU benchmarks are run on an Nvidia V100 with 16GB of memory.
We also compare the validity of our implementations, as compared to reference ones like `resampy`
or `torch.nn.Conv1d`.



## Running tests

Clone this repository, then
```bash
pip3 install '.[dev]'
python3 tests.py
```

To run the benchmarks:
```bash
pip3 install '.[dev]'
python3 -m bench.gen
```


## License

`julius` is released under the MIT license.

## Thanks

This package is named in honor of
[Julius O. Smith](https://ccrma.stanford.edu/~jos/),
whose books and website were a gold mine of information for me when learning about DSP. Go check out his website if you want
to learn more about DSP.


[resample]: https://ccrma.stanford.edu/~jos/resample/resample.html
[resampy]: https://resampy.readthedocs.io/
[docs]:  https://adefossez.github.io/julius/julius/index.html
[bench]:  ./bench.md
            
