![torch-audiomentations](images/torch_audiomentations_logo.png)
---

![Build status](https://img.shields.io/github/actions/workflow/status/asteroid-team/torch-audiomentations/ci.yml?branch=main)
[![Code coverage](https://img.shields.io/codecov/c/github/asteroid-team/torch-audiomentations/main.svg)](https://codecov.io/gh/asteroid-team/torch-audiomentations)
[![Code Style: Black](https://img.shields.io/badge/code%20style-black-black.svg)](https://github.com/ambv/black)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.10628988.svg)](https://doi.org/10.5281/zenodo.10628988)

Audio data augmentation in PyTorch. Inspired by [audiomentations](https://github.com/iver56/audiomentations).

* Supports CPU and GPU (CUDA) - speed is a priority
* Supports batches of multichannel (or mono) audio
* Transforms extend `nn.Module`, so they can be integrated as part of a PyTorch neural network model
* Most transforms are differentiable
* Three modes: `per_batch`, `per_example` and `per_channel`
* Cross-platform compatibility
* Permissive MIT license
* Aiming for high test coverage

# Setup

![Python version support](https://img.shields.io/pypi/pyversions/torch-audiomentations)
[![PyPI version](https://img.shields.io/pypi/v/torch-audiomentations.svg?style=flat)](https://pypi.org/project/torch-audiomentations/)
[![Number of downloads from PyPI per month](https://img.shields.io/pypi/dm/torch-audiomentations.svg?style=flat)](https://pypi.org/project/torch-audiomentations/)

`pip install torch-audiomentations`

# Usage example

```python
import torch
from torch_audiomentations import Compose, Gain, PolarityInversion


# Initialize augmentation callable
apply_augmentation = Compose(
    transforms=[
        Gain(
            min_gain_in_db=-15.0,
            max_gain_in_db=5.0,
            p=0.5,
        ),
        PolarityInversion(p=0.5)
    ]
)

torch_device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Make an example tensor with white noise.
# This tensor represents 8 audio snippets with 2 channels (stereo) and 2 s of 16 kHz audio.
audio_samples = torch.rand(size=(8, 2, 32000), dtype=torch.float32, device=torch_device) - 0.5

# Apply augmentation. This varies the gain and polarity of (some of)
# the audio snippets in the batch independently.
perturbed_audio_samples = apply_augmentation(audio_samples, sample_rate=16000)
```

# Known issues

* Target data processing is still in an experimental state ([#3](https://github.com/asteroid-team/torch-audiomentations/issues/3)). Workaround: Use `freeze_parameters` and `unfreeze_parameters` for now if the target data is audio with the same shape as the input.
* Using torch-audiomentations in a multiprocessing context can lead to memory leaks ([#132](https://github.com/asteroid-team/torch-audiomentations/issues/132)). Workaround: If using torch-audiomentations in a multiprocessing context, it'll probably work better to run the transforms on CPU.
* Multi-GPU / DDP is not officially supported ([#136](https://github.com/asteroid-team/torch-audiomentations/issues/136)). The author does not have a multi-GPU setup to test & fix this. Get in touch if you want to donate some hardware for this. Workaround: Run the transforms on a single GPU instead.
* `PitchShift` does not support small pitch shifts, especially for low sample rates ([#151](https://github.com/asteroid-team/torch-audiomentations/issues/151)). Workaround: If you need small pitch shifts applied to low sample rates, use [PitchShift in audiomentations](https://iver56.github.io/audiomentations/waveform_transforms/pitch_shift/) or [torch-pitch-shift](https://github.com/KentoNishi/torch-pitch-shift/) directly without the function for calculating efficient pitch-shift targets.

# Contribute

Contributors are welcome!
[Join the Asteroid Slack](https://join.slack.com/t/asteroid-dev/shared_invite/zt-cn9y85t3-QNHXKD1Et7qoyzu1Ji5bcA)
to start discussing `torch-audiomentations` with us.

# Motivation: Speed

We don't want data augmentation to be a bottleneck in model training speed. Here is a
comparison of the time it takes to run 1D convolution:

![Convolve execution times](images/convolve_exec_time_plot.png)

Note: Not all transforms have a speedup this impressive compared to CPU. In general, running audio data augmentation on GPU is not always the best option. For more info, see this article: [https://iver56.github.io/audiomentations/guides/cpu_vs_gpu/](https://iver56.github.io/audiomentations/guides/cpu_vs_gpu/)

# Current state

torch-audiomentations is in an early development stage, so the APIs are subject to change.

# Waveform transforms

Every transform has `mode`, `p` and `p_mode`: the parameters that decide how the augmentation is performed.
- `mode` decides how the randomization of the augmentation is grouped and applied.
- `p` sets the probability of applying the augmentation.
- `p_mode` decides how the on/off decision of the augmentation is grouped and applied.

This visualization shows how different combinations of `mode` and `p_mode` would perform an augmentation.

![Explanation of mode, p and p_mode](images/visual_explanation_mode_etc.png)
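A rough sketch of how these parameters combine. The `Gain` parameters are taken from the usage example above; the assumption that this particular `mode`/`p_mode` pairing is accepted is based on the visualization:

```python
import torch
from torch_audiomentations import Gain

# Each channel gets its own random gain (mode="per_channel"), but the
# on/off decision is made once per example (p_mode="per_example").
gain = Gain(
    min_gain_in_db=-12.0,
    max_gain_in_db=12.0,
    mode="per_channel",
    p=0.5,
    p_mode="per_example",
)

audio = torch.rand(size=(8, 2, 32000)) - 0.5  # batch of 8 stereo snippets
augmented = gain(audio, sample_rate=16000)
```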
    

## AddBackgroundNoise

_Added in v0.5.0_

Add background noise to the input audio.
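A minimal sketch of typical usage; `"path/to/background_noises"` is a placeholder for your own folder (a file path, a list of files or a list of folders also works, per the v0.11.0 changelog entry below), and the SNR bounds are illustrative:

```python
import torch
from torch_audiomentations import AddBackgroundNoise

add_noise = AddBackgroundNoise(
    background_paths="path/to/background_noises",  # placeholder path
    min_snr_in_db=3.0,
    max_snr_in_db=30.0,
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
noisy = add_noise(audio, sample_rate=16000)
```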

## AddColoredNoise

_Added in v0.7.0_

Add colored noise to the input audio.
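A brief sketch; the `f_decay` bounds shown are illustrative and control the spectral slope of the generated noise (0 gives white noise; positive values tilt toward pink/brown, negative toward blue/violet):

```python
import torch
from torch_audiomentations import AddColoredNoise

add_colored_noise = AddColoredNoise(
    min_snr_in_db=3.0,
    max_snr_in_db=30.0,
    min_f_decay=-2.0,
    max_f_decay=2.0,
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
noisy = add_colored_noise(audio, sample_rate=16000)
```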

## ApplyImpulseResponse

_Added in v0.5.0_

Convolve the given audio with impulse responses.
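A minimal sketch; `"path/to/impulse_responses"` is a placeholder, and `compensate_for_propagation_delay` is the parameter mentioned in the v0.9.0 changelog entry below:

```python
import torch
from torch_audiomentations import ApplyImpulseResponse

apply_ir = ApplyImpulseResponse(
    ir_paths="path/to/impulse_responses",  # placeholder path
    compensate_for_propagation_delay=True,
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
reverberant = apply_ir(audio, sample_rate=16000)
```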

## BandPassFilter

_Added in v0.9.0_

Apply band-pass filtering to the input audio.

## BandStopFilter

_Added in v0.10.0_

Apply band-stop filtering to the input audio. Also known as a notch filter.
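A sketch under stated assumptions: the bandwidth-fraction parameter names appear in the v0.10.0 changelog entry below, while the center-frequency parameter names are assumptions:

```python
import torch
from torch_audiomentations import BandStopFilter

band_stop = BandStopFilter(
    min_center_frequency=200.0,   # assumed parameter name
    max_center_frequency=4000.0,  # assumed parameter name
    min_bandwidth_fraction=0.5,   # bandwidth as a fraction of the center frequency
    max_bandwidth_fraction=1.99,
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
filtered = band_stop(audio, sample_rate=16000)
```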

## Gain

_Added in v0.1.0_

Multiply the audio by a random amplitude factor to reduce or increase the volume. This
technique can help a model become somewhat invariant to the overall gain of the input audio.

Warning: This transform can return samples outside the [-1, 1] range, which may lead to
clipping or wrap distortion, depending on what you do with the audio in a later stage.
See also https://en.wikipedia.org/wiki/Clipping_(audio)#Digital_clipping

## HighPassFilter

_Added in v0.8.0_

Apply high-pass filtering to the input audio.

## Identity

_Added in v0.11.0_

This transform returns the input unchanged. It can be used for simplifying the code
in cases where data augmentation should be disabled.
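For example, a small sketch that toggles augmentation off via a hypothetical config flag without branching elsewhere in the code:

```python
from torch_audiomentations import Compose, Gain, Identity

use_augmentation = False  # hypothetical config flag

# The rest of the training code can call `augment` unconditionally.
augment = (
    Compose(transforms=[Gain(min_gain_in_db=-15.0, max_gain_in_db=5.0, p=0.5)])
    if use_augmentation
    else Identity()
)
```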

## LowPassFilter

_Added in v0.8.0_

Apply low-pass filtering to the input audio.
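A sketch, assuming the cutoff-frequency parameters are named `min_cutoff_freq` and `max_cutoff_freq`; setting them equal gives the constant cutoff mentioned in the v0.11.1 changelog entry below:

```python
import torch
from torch_audiomentations import LowPassFilter

low_pass = LowPassFilter(
    min_cutoff_freq=150.0,   # assumed parameter name
    max_cutoff_freq=7500.0,  # assumed parameter name
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
filtered = low_pass(audio, sample_rate=16000)
```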

## PeakNormalization

_Added in v0.2.0_

Apply a constant amount of gain, so that the highest signal level present in each audio
snippet in the batch becomes 0 dBFS, i.e. the loudest level allowed if all samples must be
between -1 and 1.

This transform has an alternative mode (apply_to="only_too_loud_sounds") where it only
applies to audio snippets that have extreme values outside the [-1, 1] range. This is useful
for avoiding digital clipping in audio that is too loud, while leaving other audio
untouched.
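A minimal sketch of the alternative mode described above:

```python
import torch
from torch_audiomentations import PeakNormalization

# Leaves snippets already within [-1, 1] untouched; only normalizes
# those that would otherwise clip.
normalize = PeakNormalization(apply_to="only_too_loud_sounds", p=1.0)

audio = 1.5 * (torch.rand(size=(4, 1, 32000)) - 0.5)  # may exceed [-1, 1]
safe_audio = normalize(audio, sample_rate=16000)
```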

## PitchShift

_Added in v0.9.0_

Pitch-shift sounds up or down without changing the tempo.
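A sketch, assuming the transposition range is given in semitones via `min_transpose_semitones` and `max_transpose_semitones` (the parameter names are assumptions):

```python
import torch
from torch_audiomentations import PitchShift

pitch_shift = PitchShift(
    min_transpose_semitones=-4.0,  # assumed parameter name
    max_transpose_semitones=4.0,   # assumed parameter name
    sample_rate=16000,  # sample_rate may be given in __init__ (see v0.5.0 changelog)
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
shifted = pitch_shift(audio)
```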

## PolarityInversion

_Added in v0.1.0_

Flip the audio samples upside-down, reversing their polarity. In other words, multiply the
waveform by -1, so negative values become positive, and vice versa. The result sounds the
same as the original when played back in isolation. However, when mixed with other audio
sources, the result may be different. This waveform inversion technique is sometimes used
for audio cancellation or for obtaining the difference between two waveforms. In the context
of audio data augmentation, this transform can be useful when training phase-aware machine
learning models.

## Shift

_Added in v0.5.0_

Shift the audio forwards or backwards, with or without rollover.
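A sketch under assumed parameter names: shift by a random fraction of the snippet length, wrapping the shifted-out samples around when rollover is enabled:

```python
import torch
from torch_audiomentations import Shift

shift = Shift(
    min_shift=-0.25,  # assumed parameter name; fraction of the snippet length
    max_shift=0.25,   # assumed parameter name
    rollover=True,    # assumed parameter name; wrap shifted-out samples around
    p=0.5,
)

audio = torch.rand(size=(4, 1, 32000)) - 0.5
shifted = shift(audio, sample_rate=16000)
```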

## ShuffleChannels

_Added in v0.6.0_

Given multichannel audio input (e.g. stereo), shuffle the channels, e.g. so left can become right and vice versa.
This transform can help combat positional bias in machine learning models that take multichannel waveforms as input.

If the input audio is mono, this transform does nothing except emit a warning.

## TimeInversion

_Added in v0.10.0_

Reverse (invert) the audio along the time axis, similar to a random flip of an image in
the visual domain. This can be relevant in the context of audio classification. It was
successfully applied in the paper
[AudioCLIP: Extending CLIP to Image, Text and Audio](https://arxiv.org/pdf/2106.13043.pdf).


# Changelog

## Unreleased

### Added

* Add new transforms: `Mix`, `Padding`, `RandomCrop` and `SpliceOut`

## [v0.11.1] - 2024-02-07

### Changed

* Add support for constant cutoff frequency in `LowPassFilter` and `HighPassFilter`
* Add support for min_f_decay==max_f_decay in `AddColoredNoise`
* Bump torchaudio dependency from >=0.7.0 to >=0.9.0

### Fixed

* Fix inaccurate type hints in `Shift`
* Remove `set_backend` to avoid `UserWarning` from torchaudio

## [v0.11.0] - 2022-06-29

### Added

* Add new transform: `Identity`
* Add API for processing targets alongside inputs. Some transforms experimentally
  support this feature already.

### Changed

* Add `ObjectDict` output type as alternative to `torch.Tensor`. This alternative is opt-in for
  now (for backwards-compatibility), but note that the old output type (`torch.Tensor`) is
  deprecated and support for it will be removed in a future version.
* Allow specifying a file path, a folder path, a list of files or a list of folders to
  `AddBackgroundNoise` and `ApplyImpulseResponse`
* Require newer version of `torch-pitch-shift` to ensure support for torchaudio 0.11 in `PitchShift`

### Fixed

* Fix a bug where `BandPassFilter` didn't work on GPU

## [v0.10.1] - 2022-03-24

### Added

* Add support for min SNR == max SNR in `AddBackgroundNoise`
* Add support for librosa 0.9.0

### Fixed

* Fix a bug where loaded audio snippets were sometimes resampled to an incompatible
 length in `AddBackgroundNoise`

## [v0.10.0] - 2022-02-11

### Added

* Implement `OneOf` and `SomeOf` for applying one or more of a given set of transforms
* Implement new transforms: `BandStopFilter` and `TimeInversion`

### Changed

* Put `ir_paths` in transform_parameters in `ApplyImpulseResponse` so it is possible
 to inspect what impulse responses were used. This also gives `freeze_parameters()`
 the expected behavior.

### Fixed

* Fix a bug where the actual bandwidth was twice as large as expected in
 `BandPassFilter`. The default values have been updated accordingly.
 If you were previously specifying `min_bandwidth_fraction` and/or `max_bandwidth_fraction`,
 you now need to double those numbers to get the same behavior as before.

## [v0.9.1] - 2021-12-20

### Added

* Officially mark python>=3.9 as supported

## [v0.9.0] - 2021-10-11

### Added

* Add parameter `compensate_for_propagation_delay` in `ApplyImpulseResponse`
* Implement `BandPassFilter`
* Implement `PitchShift`

### Removed

* Support for torchaudio<=0.6 has been removed

## [v0.8.0] - 2021-06-15

### Added

* Implement `HighPassFilter` and `LowPassFilter`

### Deprecated

* Support for torchaudio<=0.6 is deprecated and will be removed in the future

### Removed

* Support for pytorch<=1.6 has been removed

## [v0.7.0] - 2021-04-16

### Added

* Implement `AddColoredNoise`

### Deprecated

* Support for pytorch<=1.6 is deprecated and will be removed in the future

## [v0.6.0] - 2021-02-22

### Added

* Implement `ShuffleChannels`

## [v0.5.1] - 2020-12-18

### Fixed

* Fix a bug where `AddBackgroundNoise` did not work on CUDA
* Fix a bug where symlinked audio files/folders were not found when looking for audio files
* Use `torch.fft.rfft` instead of `torch.rfft` (deprecated in PyTorch 1.7) when possible. As a
bonus, the change also improves performance in `ApplyImpulseResponse`.

## [v0.5.0] - 2020-12-08

### Added

* Release `AddBackgroundNoise` and `ApplyImpulseResponse`
* Implement `Shift`

### Changed

* Make `sample_rate` optional. Allow specifying `sample_rate` in `__init__` instead of `forward`. This means torchaudio transforms can be used in `Compose` now.

### Removed

* Remove support for 1-dimensional and 2-dimensional audio tensors. Only 3-dimensional audio
 tensors are supported now.

### Fixed

* Fix a bug where one could not use the `parameters` method of the `nn.Module` subclass
* Fix a bug where files with uppercase filename extension were not found

## [v0.4.0] - 2020-11-10

### Added

* Implement `Compose` for applying multiple transforms
* Implement utility functions `from_dict` and `from_yaml` for loading data augmentation
configurations from dict, json or yaml
* Officially support differentiability in most transforms

## [v0.3.0] - 2020-10-27

### Added

* Add support for alternative modes `per_batch` and `per_channel`

### Changed

* Transforms now return the input unchanged when they are in eval mode

## [v0.2.0] - 2020-10-19

### Added

* Implement `PeakNormalization`
* Expose `convolve` in the API

### Changed

* Simplify API for using CUDA tensors. The device is now inferred from the input tensor.

## [v0.1.0] - 2020-10-12

### Added

* Initial release with `Gain` and `PolarityInversion`

# Development

## Setup

A GPU-enabled development environment for torch-audiomentations can be created with conda:

* `conda env create`

## Run tests

`pytest`

## Conventions

* Format python code with [black](https://github.com/psf/black)
* Use [Google-style docstrings](https://google.github.io/styleguide/pyguide.html#381-docstrings)
* Use explicit relative imports, not absolute imports

# Acknowledgements

The development of torch-audiomentations is kindly backed by [Nomono](https://nomono.co/).

Thanks to [all contributors](https://github.com/asteroid-team/torch-audiomentations/graphs/contributors) who help improve torch-audiomentations.

            
