fft-conv-pytorch 1.2.0

Summary: Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch.
Home page: https://github.com/fkodom/fft-conv-pytorch
Author: Frank Odom
Requires Python: >=3.7
Upload time: 2023-09-28 16:06:32

# fft-conv-pytorch

Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch.  
* Faster than direct convolution for large kernels.
* **Much slower** than direct convolution for small kernels.
* In my local tests, FFT convolution is faster when the kernel has >100 or so elements.
    * Dependent on machine and PyTorch version.
    * Also see the quick timing sketch below and the benchmarks section.
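
As a rough way to check this trade-off on your own machine, here is a minimal timing sketch (not part of the package) that compares `fft_conv` against `torch.nn.functional.conv1d` for one small and one large kernel. The `avg_time` helper is defined here only for illustration, and the exact numbers and crossover point will vary with hardware and PyTorch version.

```python
import time

import torch
import torch.nn.functional as F

from fft_conv_pytorch import fft_conv


def avg_time(fn, *args, repeats: int = 10) -> float:
    """Average wall-clock time of fn(*args) in seconds, after one warm-up call."""
    fn(*args)
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats


signal = torch.randn(1, 4, 4096)
for kernel_size in (3, 257):  # one small kernel, one large kernel
    kernel = torch.randn(4, 4, kernel_size)
    direct_out = F.conv1d(signal, kernel)
    fft_out = fft_conv(signal, kernel)
    print(
        f"kernel_size={kernel_size}: "
        f"direct={avg_time(F.conv1d, signal, kernel):.2e}s  "
        f"fft={avg_time(fft_conv, signal, kernel):.2e}s  "
        f"max|diff|={(direct_out - fft_out).abs().max():.1e}"
    )
```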


## Install

Using `pip`:
```bash
pip install fft-conv-pytorch
```

From source:
```bash
git clone https://github.com/fkodom/fft-conv-pytorch.git
cd fft-conv-pytorch
pip install .
```

## Example Usage

```python
import torch
from fft_conv_pytorch import fft_conv, FFTConv1d

# Create dummy data.
#     Data shape:   (batch, channels, length)
#     Kernel shape: (out_channels, in_channels, kernel_size)
#     Bias shape:   (out_channels,)
# For ordinary 1D convolution, simply set batch=1.
signal = torch.randn(3, 3, 1024 * 1024)
kernel = torch.randn(2, 3, 128)
bias = torch.randn(2)

# Functional execution.  (Easiest for generic use cases.)
out = fft_conv(signal, kernel, bias=bias)

# Object-oriented execution.  (Requires some extra work, since the
# defined classes were designed for use in neural networks.)
# A distinct variable name avoids shadowing the imported `fft_conv` function.
fft_conv_layer = FFTConv1d(3, 2, 128, bias=True)
fft_conv_layer.weight = torch.nn.Parameter(kernel)
fft_conv_layer.bias = torch.nn.Parameter(bias)
out = fft_conv_layer(signal)
```
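
The 2D and 3D variants follow the same pattern. Below is a short sketch with `FFTConv2d`; it assumes (as the 1D example above suggests) that the constructor mirrors the basic `torch.nn.Conv2d` arguments such as `kernel_size` and `padding`.

```python
import torch

from fft_conv_pytorch import FFTConv2d

# Image-like input: (batch, channels, height, width)
images = torch.randn(3, 4, 512, 512)

# Constructor arguments assumed to mirror torch.nn.Conv2d:
# in_channels, out_channels, kernel_size, plus padding / stride / bias.
conv = FFTConv2d(4, 8, kernel_size=11, padding=5, bias=True)
out = conv(images)
print(out.shape)  # expected: torch.Size([3, 8, 512, 512])
```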

## Benchmarks

FFT convolution is benchmarked against PyTorch's direct convolution in 1D, 2D,
and 3D. The exact times depend heavily on your local machine, but the relative
scaling with kernel size follows the same pattern.

Dimensions | Input Size   | Input Channels | Output Channels | Bias | Padding | Stride | Dilation
-----------|--------------|----------------|-----------------|------|---------|--------|---------
1          | (4096)       | 4              | 4               | True | 0       | 1      | 1
2          | (512, 512)   | 4              | 4               | True | 0       | 1      | 1
3          | (64, 64, 64) | 4              | 4               | True | 0       | 1      | 1

![Benchmark Plot](doc/benchmark.png)
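
For reference, here is a minimal sketch of how the 1D configuration above could be timed locally with `torch.utils.benchmark`. The batch size and the swept kernel sizes are assumptions for illustration, not the exact values behind the plot.

```python
import torch
import torch.nn.functional as F
from torch.utils import benchmark

from fft_conv_pytorch import fft_conv

# 1D row of the table: input size 4096, 4 input channels, 4 output channels, bias.
signal = torch.randn(1, 4, 4096)
bias = torch.randn(4)

for kernel_size in (8, 64, 256, 1024):  # illustrative sweep
    kernel = torch.randn(4, 4, kernel_size)
    for sub_label, fn in (("direct", F.conv1d), ("fft", fft_conv)):
        timer = benchmark.Timer(
            stmt="fn(signal, kernel, bias=bias)",
            globals={"fn": fn, "signal": signal, "kernel": kernel, "bias": bias},
            label=f"kernel_size={kernel_size}",
            sub_label=sub_label,
        )
        print(timer.blocked_autorange(min_run_time=0.2))
```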

            
