MIRTorch

Name: MIRTorch
Version: 0.1.2
Summary: a PyTorch-based image reconstruction toolbox
Author: Guanhua Wang <guanhuaw@umich.edu>
Repository: https://github.com/guanhuaw/MIRTorch
Upload time: 2024-08-04 22:55:51
Requires Python: >=3.9
License: BSD-3-Clause
Keywords: signal processing, inverse problems
# MIRTorch

![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/guanhuaw/mirtorch?include_prereleases)
![Read the Docs](https://img.shields.io/readthedocs/mirtorch)

A Py***Torch***-based differentiable ***I***mage ***R***econstruction ***T***oolbox, developed at the University of ***M***ichigan.

The work is inspired by [MIRT](https://github.com/JeffFessler/mirt), an acclaimed toolbox for medical image reconstruction.

The main objective is to facilitate rapid prototyping of data-driven medical image reconstruction on CPUs and GPUs. Researchers can conveniently develop new model-based and learning-based methods (e.g., unrolled neural networks) on top of its abstraction layers. Auto-differentiation enables optimization of imaging protocols and reconstruction parameters with gradient-based methods.

Documentation: https://mirtorch.readthedocs.io/en/latest/

------

### Installation

We recommend [installing `PyTorch`](https://pytorch.org/) first.
Then run `pip install mirtorch` to install the package.
To install `MIRTorch` locally (for example, to modify the package), clone the repo and run `pip install -e .`.
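
A quick way to confirm the installation is the minimal sanity check below; the CUDA line simply reports whether a GPU is visible to PyTorch.

```python
# Verify that MIRTorch and PyTorch import correctly and report GPU availability.
import torch
import mirtorch

print("MIRTorch location:", mirtorch.__file__)
print("CUDA available:", torch.cuda.is_available())
```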

------

### Features

#### Linear maps

The `LinearMap` class overloads common matrix operations, such as `+`, `-`, and `*`.

Instances include basic linear operations (such as convolution), classical image processing operators, and MRI system matrices (Cartesian and non-Cartesian, with sensitivity- and B0-informed models). ***NEW!*** MIRTorch recently added support for SPECT and CT.

Since the Jacobian of a linear operator is the operator itself, the toolbox computes these Jacobians explicitly during backpropagation, avoiding the large memory cost that generic auto-differentiation would incur.

When defining linear operators, please make sure that all torch tensors are on the same device and have compatible dtypes. For example, `torch.cfloat` is compatible with `torch.float` but not with `torch.double`; similarly, `torch.chalf` is compatible with `torch.half`.
For image data, two conventional shapes are used: `[num_batch, num_channel, nx, ny, (nz)]` and `[nx, ny, (nz)]`.
Some `LinearMap`s provide a boolean `batchmode` flag to select between them.
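
To illustrate what a `LinearMap` encapsulates, the sketch below pairs a forward FFT with its adjoint in plain PyTorch and runs the standard dot-product (adjoint) test. MIRTorch's own classes wrap such forward/adjoint pairs behind a common interface, so treat this as a conceptual sketch rather than the package API.

```python
import torch

# A linear map pairs a forward model A with its adjoint A^H.
# Example: an orthonormal 2D FFT (forward) and inverse FFT (adjoint).
def A(x):        # image -> k-space
    return torch.fft.fft2(x, norm="ortho")

def A_adj(y):    # k-space -> image
    return torch.fft.ifft2(y, norm="ortho")

# Dot-product test: <A x, y> must equal <x, A^H y> for a valid adjoint.
x = torch.randn(8, 8, dtype=torch.cfloat)
y = torch.randn(8, 8, dtype=torch.cfloat)
lhs = torch.vdot(A(x).flatten(), y.flatten())
rhs = torch.vdot(x.flatten(), A_adj(y).flatten())
print(torch.allclose(lhs, rhs, atol=1e-4))  # expected: True (up to round-off)
```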

#### Proximal operators

The toolbox contains common proximal operators such as soft thresholding. These operators also support regularizers that involve multiplication by diagonal or unitary matrices, such as orthogonal wavelets.
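
For reference, soft thresholding is the proximal operator of $\lambda\|\cdot\|_1$: $\operatorname{prox}_{\lambda\|\cdot\|_1}(v) = \operatorname{sign}(v)\max(|v|-\lambda, 0)$, applied entry-wise. Below is a plain-PyTorch sketch; MIRTorch's `Prox` classes expose this behind their own interface, whose exact names may differ.

```python
import torch

def soft_threshold(v: torch.Tensor, lam: float) -> torch.Tensor:
    """Proximal operator of lam * ||.||_1: shrink each entry toward zero by lam."""
    return torch.sign(v) * torch.clamp(v.abs() - lam, min=0.0)

v = torch.tensor([-2.0, -0.3, 0.0, 0.5, 3.0])
print(soft_threshold(v, 1.0))  # entries with |v| <= 1 go to 0; others shrink toward 0 by 1

# For a regularizer composed with a unitary transform W (e.g., orthogonal
# wavelets), the prox is W^H soft_threshold(W x, lam): the same shrinkage
# applied in the transform domain.
```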

#### Iterative reconstruction (MBIR) algorithms

Currently, the package includes the conjugate gradient (CG), fast iterative shrinkage-thresholding (FISTA), proximal optimized gradient (POGM), and forward-backward primal-dual (FBPD) algorithms for image reconstruction.
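
The sketch below spells out the FISTA iterations for the LASSO problem $\arg\min_x \|Ax-y\|_2^2 + \lambda\|x\|_1$ in plain PyTorch. MIRTorch's `Alg` implementations wrap the same pattern around `LinearMap` and `Prox` objects, so this is a conceptual illustration rather than the package interface.

```python
import torch

# FISTA for argmin_x ||A x - y||_2^2 + lam * ||x||_1 on a small random problem.
torch.manual_seed(0)
A = torch.randn(64, 128)                       # toy wide system matrix
x_true = torch.zeros(128); x_true[::16] = 1.0  # sparse ground truth
y = A @ x_true
lam = 0.1
L = 2 * torch.linalg.matrix_norm(A, ord=2) ** 2   # Lipschitz constant of the gradient

def soft(v, t):
    return torch.sign(v) * torch.clamp(v.abs() - t, min=0.0)

x = torch.zeros(128); z = x.clone(); t = 1.0
for _ in range(200):
    grad = 2 * A.T @ (A @ z - y)                 # gradient of the data-fit term at z
    x_new = soft(z - grad / L, lam / L)          # proximal (shrinkage) step
    t_new = (1 + (1 + 4 * t ** 2) ** 0.5) / 2
    z = x_new + (t - 1) / t_new * (x_new - x)    # Nesterov-style momentum
    x, t = x_new, t_new

print("objective:", (torch.sum((A @ x - y) ** 2) + lam * x.abs().sum()).item())
```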

#### Dictionary learning

For dictionary learning-based reconstruction, we implemented an efficient dictionary learning algorithm ([SOUP-DIL](https://arxiv.org/abs/1511.06333)) and orthogonal matching pursuit ([OMP](https://ieeexplore.ieee.org/abstract/document/342465/?casa_token=aTDkQVCM9WEAAAAA:5rXu9YikP822bCBvkhYxKWlBTJ6Fn6baTQJ9kuNrU7K-64EmGOAczYvF2dTW3al3PfPdwJAiYw)). Due to PyTorch’s limited support for sparse matrices, we use SciPy as the backend.
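
For context, the core of orthogonal matching pursuit is a greedy atom-selection loop followed by a least-squares refit; below is a generic NumPy sketch of the algorithm, not MIRTorch's SciPy-backed implementation.

```python
import numpy as np

def omp(D: np.ndarray, y: np.ndarray, sparsity: int) -> np.ndarray:
    """Greedy OMP: repeatedly pick the atom most correlated with the residual,
    then refit the signal on the selected atoms by least squares."""
    residual, support = y.astype(float), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ x_s
    coeffs[support] = x_s
    return coeffs

# Toy usage: random unit-norm dictionary and a 3-sparse coefficient vector.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64)); D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(64); x_true[[3, 17, 42]] = 1.0
x_hat = omp(D, D @ x_true, 3)
print("selected atoms:", sorted(np.nonzero(x_hat)[0].tolist()))
```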

#### Multi-GPU support

Currently, MIRTorch uses `torch.nn.DataParallel` to support multiple GPUs. One may wrap a `LinearMap`, `Prox`, or `Alg` inside a `torch.nn.Module` to enable data parallelism, as sketched below. See [this tutorial](https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html) for details.
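
A minimal sketch of that wrapping pattern in plain PyTorch; the module below uses a placeholder inverse FFT in place of an actual MIRTorch `LinearMap`/`Alg` pipeline.

```python
import torch
import torch.nn as nn

class ReconModule(nn.Module):
    """Wrap a reconstruction step in an nn.Module so that nn.DataParallel can
    split the batch dimension across the visible GPUs."""
    def forward(self, kspace: torch.Tensor) -> torch.Tensor:
        # Placeholder: in practice, apply a LinearMap adjoint and/or an Alg here.
        return torch.fft.ifft2(kspace, norm="ortho")

device = "cuda" if torch.cuda.is_available() else "cpu"
model = ReconModule()
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)      # replicate the module on each GPU
model = model.to(device)

kspace = torch.randn(8, 1, 64, 64, dtype=torch.cfloat, device=device)
image = model(kspace)                   # the batch of 8 is split across GPUs
print(image.shape)
```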

------

### Usage and examples

Generally, MIRTorch solves image reconstruction problems with cost functions of the form $\arg\min_{x} \|Ax-y\|_2^2 + \lambda R(x)$, where $A$ denotes the system matrix (when it is linear, a `LinearMap` applies it efficiently), $y$ denotes the measurements, and $R(\cdot)$ denotes the regularizer, which determines which `Alg` to use. One may refer to [1](https://web.eecs.umich.edu/~fessler/book/), [2](https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf) and [3](https://www.youtube.com/watch?v=J6_5rPYnr_s) for more tutorials on optimization.
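
For example, with a quadratic penalty $R(x) = \|x\|_2^2$, the minimizer satisfies the normal equations $(A^{\mathsf{H}}A + \lambda I)\,x = A^{\mathsf{H}}y$, which CG solves using only forward and adjoint products — the same access pattern a `LinearMap` provides. A plain-PyTorch sketch under that assumption:

```python
import torch

# Conjugate gradient for (A^T A + lam I) x = A^T y, using only mat-vec products.
torch.manual_seed(0)
A = torch.randn(100, 50)
y = torch.randn(100)
lam = 0.05

def H(x):                                # Hessian apply: (A^T A + lam I) x
    return A.T @ (A @ x) + lam * x

b = A.T @ y
x = torch.zeros(50)
r = b - H(x)
p = r.clone()
rs = r @ r
for _ in range(50):
    Hp = H(p)
    alpha = rs / (p @ Hp)
    x = x + alpha * p
    r = r - alpha * Hp
    rs_new = r @ r
    if rs_new.sqrt() < 1e-10:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print("normal-equation residual:", (H(x) - b).norm().item())
```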

Here we provide several notebook tutorials focused on MRI, where $A$ is FFT or NUFFT.

- `/example/demo_mnist.ipynb` shows LASSO on MNIST solved with FISTA and POGM.
- `/example/demo_mri.ipynb` contains SENSE (CG-SENSE) and **B0**-informed reconstruction with penalized weighted least squares (*PWLS*).
- `/example/demo_3d.ipynb` contains 3D non-Cartesian MR reconstruction. *New!* Try the Toeplitz-embedding version of B0-informed reconstruction, which reduces an hour-long reconstruction to about 5 seconds.
- `/example/demo_cs.ipynb` shows compressed sensing reconstruction of under-determined MRI signals.
- `/example/demo_dl.ipynb` exhibits dictionary learning results.
- `/example/demo_mlem` showcases SPECT reconstruction algorithms, including EM and CNN-based methods.

Since MIRTorch is differentiable, one may use auto-differentiation (AD) to update many parameters, for example the weights of a reconstruction neural network. More importantly, one may update the imaging system itself via gradient-based, data-driven methods. As a use case, the [Bjork repo](https://github.com/guanhuaw/Bjork) contains MRI sampling pattern optimization examples, where the reconstruction loss serves as the objective for jointly optimizing the reconstruction algorithm and the sampling pattern. See [this video](https://www.youtube.com/watch?v=sLFOf5EvVAs) on how to jointly optimize reconstruction and acquisition.
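
As a toy illustration of that idea (not BJORK's actual trajectory parameterization), the snippet below unrolls a few gradient steps of a weighted-FFT reconstruction and backpropagates an image-domain loss to learnable k-space line weights.

```python
import torch

torch.manual_seed(0)
x_true = torch.randn(32, 32, dtype=torch.cfloat)   # ground-truth image
weights = torch.rand(32, requires_grad=True)       # learnable per-line weights

def forward(img, w):
    # Acquisition model: orthonormal 2D FFT followed by per-row weighting.
    return w[:, None] * torch.fft.fft2(img, norm="ortho")

y = forward(x_true, weights)                       # simulated measurements
x = torch.zeros_like(x_true)
for _ in range(10):                                # unrolled gradient-descent recon
    grad = torch.fft.ifft2(weights[:, None] * (forward(x, weights) - y), norm="ortho")
    x = x - 0.5 * grad

loss = (x - x_true).abs().pow(2).mean()            # reconstruction error
loss.backward()                                    # AD gives d(loss)/d(weights)
print(weights.grad.shape)                          # torch.Size([32])
```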

------

### Acknowledgments

This work is inspired by (but not limited to):

* SigPy: https://github.com/mikgroup/sigpy

* MIRT: https://github.com/JeffFessler/mirt

* MIRT.jl: https://github.com/JeffFessler/MIRT.jl

* PyLops: https://github.com/PyLops/pylops

If the code is useful to your research, please consider citing:

```bibtex
@article{wang:22:bjork,
  author={Wang, Guanhua and Luo, Tianrui and Nielsen, Jon-Fredrik and Noll, Douglas C. and Fessler, Jeffrey A.},
  journal={IEEE Transactions on Medical Imaging},
  title={B-spline Parameterized Joint Optimization of Reconstruction and K-space Trajectories ({BJORK}) for Accelerated {2D} {MRI}},
  year={2022},
  pages={1-1},
  doi={10.1109/TMI.2022.3161875}}
```

```bibtex
@inproceedings{wang:22:mirtorch,
  title={{MIRTorch}: A {PyTorch}-powered Differentiable Toolbox for Fast Image Reconstruction and Scan Protocol Optimization},
  author={Wang, Guanhua and Shah, Neel and Zhu, Keyue and Noll, Douglas C. and Fessler, Jeffrey A.},
  booktitle={Proc. Intl. Soc. Magn. Resonance. Med. (ISMRM)},
  pages={4982},
  year={2022}
}
```
If you use the SPECT code, please consider citing:

```bibtex
@article{li:23:tet,
  author={Li, Zongyu and Dewaraja, Yuni K. and Fessler, Jeffrey A.},
  journal={IEEE Transactions on Radiation and Plasma Medical Sciences},
  title={Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction},
  year={2023},
  volume={7},
  number={4},
  pages={410-420},
  doi={10.1109/TRPMS.2023.3240934}}
```


------

### License

This package uses the BSD-3-Clause license.

            
