pytorch-wavelets

Name: pytorch-wavelets
Version: 1.3.0
Home page: https://github.com/fbcotter/pytorch_wavelets
Summary: A port of the DTCWT toolbox to run on pytorch
Upload time: 2023-04-13 06:26:52
Author: Fergal Cotter
License: Free To Use
Keywords: pytorch, dwt, dtcwt, wavelet, complex wavelet
Requirements: No requirements were recorded.

2D Wavelet Transforms in Pytorch
================================

|build-status| |docs| |doi|

.. |build-status| image:: https://travis-ci.org/fbcotter/pytorch_wavelets.png?branch=master
    :alt: build status
    :scale: 100%
    :target: https://travis-ci.org/fbcotter/pytorch_wavelets

.. |docs| image:: https://readthedocs.org/projects/pytorch-wavelets/badge/?version=latest
    :target: https://pytorch-wavelets.readthedocs.io/en/latest/?badge=latest
    :alt: Documentation Status

.. |doi| image:: https://zenodo.org/badge/146817005.svg
   :target: https://zenodo.org/badge/latestdoi/146817005
   
The full documentation is also available `here`__.

__ http://pytorch-wavelets.readthedocs.io/

This package provides support for computing the 2D discrete wavelet and
the 2D dual-tree complex wavelet transforms, their inverses, and passing
gradients through both using pytorch.

The implementation is designed to be used with batches of multichannel images.
We use the standard pytorch 'NCHW' data format (batch, channels, height, width).

We have also added layers to compute the 2-D DTCWT based scatternet. This is
similar to the Morlet based scatternet in `KymatIO`__, but is roughly 10 times faster.

If you use this repo, please cite my PhD thesis, chapter 3: https://doi.org/10.17863/CAM.53748.

__ https://github.com/kymatio/kymatio

New in version 1.3.0
~~~~~~~~~~~~~~~~~~~~

- Added 1D DWT support

.. code:: python

    import torch
    from pytorch_wavelets import DWT1DForward, DWT1DInverse  # or simply DWT1D, IDWT1D
    dwt = DWT1DForward(wave='db6', J=3)
    X = torch.randn(10, 5, 100)
    yl, yh = dwt(X)
    print(yl.shape)
    >>> torch.Size([10, 5, 22])
    print(yh[0].shape)
    >>> torch.Size([10, 5, 55])
    print(yh[1].shape)
    >>> torch.Size([10, 5, 33])
    print(yh[2].shape)
    >>> torch.Size([10, 5, 22])
    idwt = DWT1DInverse(wave='db6')
    x = idwt((yl, yh))

New in version 1.2.0
~~~~~~~~~~~~~~~~~~~~

- Added a DTCWT based ScatterNet

.. code:: python

    import torch
    from pytorch_wavelets import ScatLayer
    scat = ScatLayer()
    X = torch.randn(10,5,64,64)
    # A first order scatternet with 6 orientations and one lowpass channel
    # gives 7 times the input channel dimension
    Z = scat(X)
    print(Z.shape)
    >>> torch.Size([10, 35, 32, 32])
    # A second order scatternet with 6 orientations and one lowpass channel
    # gives 7^2 times the input channel dimension
    scat2 = torch.nn.Sequential(ScatLayer(), ScatLayer())
    Z = scat2(X)
    print(Z.shape)
    >>> torch.Size([10, 245, 16, 16])
    # We also have a slightly more specialized, but slower, second order scatternet
    from pytorch_wavelets import ScatLayerj2
    scat2a = ScatLayerj2()
    Z = scat2a(X)
    print(Z.shape)
    >>> torch.Size([10, 245, 16, 16])
    # These all of course work with cuda
    scat2a.cuda()
    Z = scat2a(X.cuda())

New in version 1.1.0
~~~~~~~~~~~~~~~~~~~~

- Fixed memory problem with dwt 
- Fixed the backend code for the dtcwt calculation - much cleaner now but similar performance
- Both dtcwt and dwt should be more memory efficient/aware now. 
- Removed need to specify number of scales for DTCWTInverse

New in version 1.0.0
~~~~~~~~~~~~~~~~~~~~
Version 1.0.0 adds support for separable DWT calculation and more
padding schemes, such as symmetric, zero and periodization.

Also, you no longer need to specify the number of channels when creating the
wavelet transform classes.
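
For example, the padding scheme is selected with the ``mode`` argument. Below is
a minimal sketch (the mode names follow the list above; exact coefficient sizes
and boundary behaviour depend on the wavelet chosen):

.. code:: python

    import torch
    from pytorch_wavelets import DWTForward, DWTInverse

    X = torch.randn(8, 3, 64, 64)
    # 'zero', 'symmetric' and 'periodization' are the padding schemes listed above
    for mode in ('zero', 'symmetric', 'periodization'):
        xfm = DWTForward(J=2, wave='db4', mode=mode)
        ifm = DWTInverse(wave='db4', mode=mode)
        Yl, Yh = xfm(X)
        # crop in case the reconstruction is slightly larger than the input
        Y = ifm((Yl, Yh))[..., :X.shape[-2], :X.shape[-1]]
        print(mode, float((Y - X).abs().max()))  # reconstruction error should be tiny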

Speed Tests
~~~~~~~~~~~
We compare doing the dtcwt with the ``dtcwt`` python package and doing the dwt with
PyWavelets to doing both in pytorch_wavelets, using a GTX1080. The numpy methods
were run on a 14-core Xeon Phi machine using Intel's parallel python. For the
dtcwt we use the `near_sym_a` filters for the first scale and the `qshift_a`
filters for subsequent scales. For the dwt we use the `db4` filters.

For a fixed input size, but varying the number of scales (from 1 to 4) we have
the following speeds (averaged over 5 runs):

.. image:: docs/scale.png

For an input with height and width of 512, we also vary the batch size
for a 3-scale transform. The resulting speeds were:

.. image:: docs/batchsize.png
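
A minimal sketch of how similar timings might be gathered on the GPU (this is
illustrative only, not the benchmark script used to produce the plots above):

.. code:: python

    import time
    import torch
    from pytorch_wavelets import DTCWTForward

    def time_forward(xfm, X, repeats=5):
        # Average a few forward passes; synchronize so the GPU work is included
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(repeats):
            xfm(X)
        torch.cuda.synchronize()
        return (time.time() - start) / repeats

    if torch.cuda.is_available():
        X = torch.randn(16, 3, 512, 512).cuda()
        for J in range(1, 5):
            xfm = DTCWTForward(J=J, biort='near_sym_a', qshift='qshift_a').cuda()
            print(J, time_forward(xfm, X))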

Installation
````````````
The easiest way to install ``pytorch_wavelets`` is to clone the repo and pip install
it. Later versions will be released on PyPI, but the docs need to be updated first::

    $ git clone https://github.com/fbcotter/pytorch_wavelets
    $ cd pytorch_wavelets
    $ pip install .

(Although the `develop` command may be more useful if you intend to perform any
significant modification to the library.) A test suite is provided so that you
may verify the code works on your system::

    $ pip install -r tests/requirements.txt
    $ pytest tests/

Example Use
```````````
For the DWT - note that the highpass output has an extra dimension, in which we
stack the (lh, hl, hh) coefficients. Also note that the Yh output has the
finest detail coefficients first and the coarsest last (the opposite of
PyWavelets); a short comparison sketch follows the example below.

.. code:: python

    import torch
    from pytorch_wavelets import DWTForward, DWTInverse
    xfm = DWTForward(J=3, wave='db3', mode='zero')
    X = torch.randn(10,5,64,64)
    Yl, Yh = xfm(X) 
    print(Yl.shape)
    >>> torch.Size([10, 5, 12, 12])
    print(Yh[0].shape) 
    >>> torch.Size([10, 5, 3, 34, 34])
    print(Yh[1].shape)
    >>> torch.Size([10, 5, 3, 19, 19])
    print(Yh[2].shape)
    >>> torch.Size([10, 5, 3, 12, 12])
    ifm = DWTInverse(wave='db3', mode='zero')
    Y = ifm((Yl, Yh))
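
For comparison, a minimal sketch of the PyWavelets ordering (this assumes
PyWavelets is installed; it is not a dependency of this package):

.. code:: python

    import numpy as np
    import pywt

    coeffs = pywt.wavedec2(np.random.randn(64, 64), 'db3', mode='zero', level=3)
    # coeffs = [cA3, (cH3, cV3, cD3), (cH2, cV2, cD2), (cH1, cV1, cD1)]
    # i.e. the coarsest details come first, whereas Yh above lists the finest
    # first: Yh[0] holds the level-1 (finest) details and Yh[-1] the coarsest.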

For the DTCWT:

.. code:: python

    import torch
    from pytorch_wavelets import DTCWTForward, DTCWTInverse
    xfm = DTCWTForward(J=3, biort='near_sym_b', qshift='qshift_b')
    X = torch.randn(10,5,64,64)
    Yl, Yh = xfm(X) 
    print(Yl.shape)
    >>> torch.Size([10, 5, 16, 16])
    print(Yh[0].shape) 
    >>> torch.Size([10, 5, 6, 32, 32, 2])
    print(Yh[1].shape)
    >>> torch.Size([10, 5, 6, 16, 16, 2])
    print(Yh[2].shape)
    >>> torch.Size([10, 5, 6, 8, 8, 2])
    ifm = DTCWTInverse(biort='near_sym_b', qshift='qshift_b')
    Y = ifm((Yl, Yh))

Some initial notes:

- The returned Yh is a tuple. There are two extra dimensions: the first comes
  between the channel dimension of the input and the row dimension, and holds
  the 6 orientations of the DTCWT. The second is the final dimension, which
  holds the real and imaginary parts (complex numbers are not native to
  pytorch); a conversion sketch follows below.
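
If a native complex tensor is more convenient, the trailing real/imaginary pair
can be reinterpreted with ``torch.view_as_complex`` (a minimal sketch, assuming
a PyTorch version that provides it):

.. code:: python

    import torch
    from pytorch_wavelets import DTCWTForward

    xfm = DTCWTForward(J=3, biort='near_sym_b', qshift='qshift_b')
    X = torch.randn(10, 5, 64, 64)
    Yl, Yh = xfm(X)
    # Yh[0] has shape [N, C, 6, H, W, 2]; view the trailing (real, imag) pair
    # as a complex tensor (contiguous() gives the layout view_as_complex needs)
    Yh0 = torch.view_as_complex(Yh[0].contiguous())
    print(Yh0.shape)        # torch.Size([10, 5, 6, 32, 32])
    print(Yh0.abs().shape)  # magnitudes of the complex coefficients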

Running on the GPU
~~~~~~~~~~~~~~~~~~
This should come as no surprise to pytorch users. The DWT and DTCWT transforms
support being run on CUDA tensors:

.. code:: python

    import torch
    from pytorch_wavelets import DTCWTForward, DTCWTInverse
    xfm = DTCWTForward(J=3, biort='near_sym_b', qshift='qshift_b').cuda()
    X = torch.randn(10,5,64,64).cuda()
    Yl, Yh = xfm(X) 
    ifm = DTCWTInverse(biort='near_sym_b', qshift='qshift_b').cuda()
    Y = ifm((Yl, Yh))

The automated tests cannot cover the gpu functionality, but they do check cpu
running. To test whether the repo is working on your gpu, download the repo,
ensure you have pytorch with cuda enabled (the tests check whether
:code:`torch.cuda.is_available()` returns true), and run the following from the
base of the repo:

.. code:: 

    pip install -r tests/requirements.txt
    pytest tests/


Backpropagation
~~~~~~~~~~~~~~~
It is possible to pass gradients through both the forward and inverse
transforms. All you need to do is ensure that the input to each has its
``requires_grad`` attribute set to ``True``.
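
A minimal sketch of what this looks like in practice:

.. code:: python

    import torch
    from pytorch_wavelets import DWTForward

    xfm = DWTForward(J=2, wave='db3', mode='zero')
    X = torch.randn(4, 3, 64, 64, requires_grad=True)
    Yl, Yh = xfm(X)
    # Any scalar function of the coefficients can be backpropagated through
    loss = Yl.pow(2).mean() + sum(h.pow(2).mean() for h in Yh)
    loss.backward()
    print(X.grad.shape)  # torch.Size([4, 3, 64, 64])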



Provenance
~~~~~~~~~~
Based on the Dual-Tree Complex Wavelet Transform Pack for MATLAB by Nick
Kingsbury, Cambridge University. The original README can be found in
ORIGINAL_README.txt.  This file outlines the conditions of use of the original
MATLAB toolbox.

Further information on the DT CWT can be obtained from papers
downloadable from my website (given below). The best tutorial is in
the 1999 Royal Society Paper. In particular this explains the conversion
between 'real' quad-number subimages and pairs of complex subimages. 
The Q-shift filters are explained in the ICIP 2000 paper and in more detail
in the May 2001 paper for the Journal on Applied and Computational 
Harmonic Analysis.

This code is copyright and is supplied free of charge for research
purposes only. In return for supplying the code, all I ask is that, if
you use the algorithms, you give due reference to this work in any
papers that you write and that you let me know if you find any good
applications for the DT CWT. If the applications are good, I would be
very interested in collaboration. I accept no liability arising from use
of these algorithms.

Nick Kingsbury, 
Cambridge University, June 2003.

Dr N G Kingsbury,
Dept. of Engineering, University of Cambridge,
Trumpington St., Cambridge CB2 1PZ, UK., or
Trinity College, Cambridge CB2 1TQ, UK.
Phone: (0 or +44) 1223 338514 / 332647;  Home: 1954 211152;
Fax: 1223 338564 / 332662;  E-mail: ngk@eng.cam.ac.uk
Web home page: http://www.eng.cam.ac.uk/~ngk/

.. vim:sw=4:sts=4:et

            
