Name | torchTT |
Version | 0.3 |
home_page | None |
Summary | Tensor-Train decomposition in pytorch. |
upload_time | 2024-12-09 21:37:00 |
maintainer | None |
docs_url | None |
author | None |
requires_python | >=3.7 |
license | MIT License Copyright (c) 2021 ion-g-ion Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
keywords | pytorch, tensor-train decomposition |
VCS | |
bugtrack_url | |
requirements | No requirements were recorded. |
Travis-CI | No Travis. |
coveralls test coverage | No coveralls. |
<p align="center">
<img src="https://github.com/ion-g-ion/torchTT/blob/main/logo.png?raw=true" width="400px" >
</p>
# torchTT
Tensor-Train decomposition in `pytorch`
A Tensor-Train decomposition package written in Python on top of `pytorch`, with support for GPU acceleration and automatic differentiation.
It also contains routines for solving linear systems in the TT format and performing adaptive cross approximation (the AMEN solver/cross interpolation is inspired by the [MATLAB TT-Toolbox](https://github.com/oseledets/TT-Toolbox)).
Some routines are implemented in C++ for increased execution speed.
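As a quick orientation, here is a minimal sketch of decomposing a full tensor; it assumes the `TT` constructor, the `full()` method, and the rank attribute `R` as described in the linked documentation, so check the docs before relying on the exact names:
```
import torch
import torchtt as tntt

# Build a full tensor with exact TT rank 1 and decompose it.
x = torch.einsum('i,j,k->ijk', torch.rand(32), torch.rand(32), torch.rand(32))
x_tt = tntt.TT(x, eps=1e-12)  # TT decomposition with relative accuracy eps

print(x_tt.R)  # TT ranks of the decomposition
print(torch.linalg.norm(x_tt.full() - x) / torch.linalg.norm(x))  # reconstruction error
```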
## Installation
### Requirements
The following requirements are needed:
- `python>=3.7`
- `torch>=1.7.0`
- `numpy>=1.18`
- [`opt_einsum`](https://pypi.org/project/opt-einsum/)
Installing the GPU version of pytorch is recommended if a GPU is available. Read the [official installation guide](https://pytorch.org/get-started/locally/) for further info.
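To verify that the GPU build is active, one can check CUDA availability with standard `torch` calls (shown here only as a small sketch):
```
import torch

# Use the GPU if the CUDA build of pytorch is installed and a device is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
```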
### Using pip
You can install the package using the `pip` command:
```
pip install torchTT
```
The latest GitHub version can be installed using:
```
pip install git+https://github.com/ion-g-ion/torchTT
```
One can also clone the repository and manually install the package:
```
git clone https://github.com/ion-g-ion/torchTT
cd torchTT
python setup.py install
```
### Using conda
**TODO**
## Components
The main modules/submodules that can be accessed after importing `torchtt` are briefly described in the following table.
A detailed description can be found [here](https://ion-g-ion.github.io/torchTT/index.html).
| Component | Description |
| --- | --- |
| [`torchtt`](https://ion-g-ion.github.io/torchTT/torchtt/torchtt.html) | Basic TT class and basic linear algebra functions. |
| [`torchtt.solvers`](https://ion-g-ion.github.io/torchTT/torchtt/solvers.html) | Implementation of the AMEN solver. |
| [`torchtt.grad`](https://ion-g-ion.github.io/torchTT/torchtt/grad.html) | Wrapper for automatic differentiation. |
| [`torchtt.manifold`](https://ion-g-ion.github.io/torchTT/torchtt/manifold.html) | Riemannian gradient and projection onto manifolds of tensors with fixed TT rank. |
| [`torchtt.nn`](https://ion-g-ion.github.io/torchTT/torchtt/nn.html) | Basic TT neural network layer. |
| [`torchtt.interpolate`](https://ion-g-ion.github.io/torchTT/torchtt/interpolate.html) | Cross approximation routines. |
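As a hedged illustration of how these components interact, the sketch below builds a small linear system and solves it with the AMEN solver; the names `tntt.eye`, `tntt.randn`, `tntt.solvers.amen_solve`, the `@` operator, and `norm()` follow the linked documentation and should be treated as assumptions here:
```
import torchtt as tntt

# Illustrative only: identity TT matrix, random TT right-hand side, AMEN solve.
A = tntt.eye([8, 8, 8])                       # identity operator in the TT format
b = tntt.randn([8, 8, 8], [1, 2, 2, 1])       # random TT tensor with ranks [1, 2, 2, 1]

x = tntt.solvers.amen_solve(A, b, eps=1e-10)  # solve A x = b in the TT format
print((A @ x - b).norm() / b.norm())          # relative residual
```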
## Tests
The directory [tests/](tests/) in the root folder contains all the unit tests. To run them, use the command:
```
pytest tests/
```
## Documentation and examples
The documentation can be found [here](https://ion-g-ion.github.io/torchTT/index.html).
The following example scripts (as well as Python notebooks) are also provided as part of the documentation:
 * [basic_tutorial.py](examples/basic_tutorial.py) / [basic_tutorial.ipynb](examples/basic_tutorial.ipynb): This contains a basic tutorial on decomposing full tensors in the TT format as well as performing rank rounding and slicing ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_tutorial.ipynb)).
 * [basic_linalg.py](examples/basic_linalg.py) / [basic_linalg.ipynb](examples/basic_linalg.ipynb): This tutorial presents all the algebra operations that can be performed in the TT format; a short sketch follows this list ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_linalg.ipynb)).
 * [efficient_linalg.py](examples/efficient_linalg.py) / [efficient_linalg.ipynb](examples/efficient_linalg.ipynb): Contains the DMRG for fast matrix-vector products and AMEN for elementwise inversion in the TT format ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/efficient_linalg.ipynb)).
 * [automatic_differentiation.py](examples/automatic_differentiation.py) / [automatic_differentiation.ipynb](examples/automatic_differentiation.ipynb): Basic tutorial on AD in `torchtt` ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/automatic_differentiation.ipynb)).
 * [cross_interpolation.py](examples/cross_interpolation.py) / [cross_interpolation.ipynb](examples/cross_interpolation.ipynb): In this script, the cross interpolation method is exemplified ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/cross_interpolation.ipynb)).
 * [system_solvers.py](examples/system_solvers.py) / [system_solvers.ipynb](examples/system_solvers.ipynb): This contains the basic usage of the multilinear solvers ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/system_solvers.ipynb)).
 * [cuda.py](examples/cuda.py) / [cuda.ipynb](examples/cuda.ipynb): This provides an example of how to use the GPU acceleration ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/cuda.ipynb)).
 * [basic_nn.py](examples/basic_nn.py) / [basic_nn.ipynb](examples/basic_nn.ipynb): This provides an example of how to use the TT neural network layers ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_nn.ipynb)).
* [mnist_nn.py](examples/mnist_nn.py) / [mnist_nn.ipynb](examples/mnist_nn.ipynb): Example of TT layers used for image classification ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/mnist_nn.ipynb)).
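A short hedged sketch of the elementary TT arithmetic covered in the basic_linalg tutorial; the overloaded `+`, scalar multiplication, and the `round()` recompression method are assumed to behave as in the documentation:
```
import torchtt as tntt

# Illustrative only: elementary TT arithmetic followed by rank rounding.
x = tntt.randn([10, 12, 14], [1, 3, 3, 1])   # random TT tensor with ranks [1, 3, 3, 1]
y = tntt.randn([10, 12, 14], [1, 2, 2, 1])

z = 2 * x + y        # linear combination; the TT ranks add up
z = z.round(1e-10)   # recompress to (quasi-)optimal ranks for the given accuracy
print(z.R)           # resulting TT ranks
```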
The documentation is generated using `sphinx` with:
```
make html
```
after installing the packages:
```
pip install sphinx sphinx_rtd_theme
```
## Author
Ion Gabriel Ion, e-mail: ion.ion.gabriel@gmail.com
Raw data
{
"_id": null,
"home_page": null,
"name": "torchTT",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "pytorch, tensor-train decomposition",
"author": null,
"author_email": "Ion Gabriel Ion <ion.ion.gabriel@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/9e/fd/7020c0128be303812dfc96426b6b295706ebd3da948b0464803be09e3fa9/torchtt-0.3.tar.gz",
"platform": null,
"description": "\n<p align=\"center\">\n<img src=\"https://github.com/ion-g-ion/torchTT/blob/main/logo.png?raw=true\" width=\"400px\" >\n</p>\n\n# torchTT\nTensor-Train decomposition in `pytorch`\n\nTensor-Train decomposition package written in Python on top of `pytorch`. Supports GPU acceleration and automatic differentiation.\nIt also contains routines for solving linear systems in the TT format and performing adaptive cross approximation (the AMEN solver/cross interpolation is inspired form the [MATLAB TT-Toolbox](https://github.com/oseledets/TT-Toolbox)).\nSome routines are implemented in C++ for an increased execution speed.\n\n\n## Installation\n\n### Requirements\nFollowing requirements are needed:\n\n- `python>=3.6`\n- `torch>=1.7.0`\n- `numpy>=1.18`\n- [`opt_einsum`](https://pypi.org/project/opt-einsum/)\n\nThe GPU (if available) version of pytorch is recommended to be installed. Read the [official installation guide](https://pytorch.org/get-started/locally/) for further info.\n\n### Using pip\nYou can install the package using the `pip` command:\n\n```\npip install torchTT\n```\n\nThe latest github version can be installed using:\n\n```\npip install git+https://github.com/ion-g-ion/torchTT\n```\n\nOne can also clone the repository and manually install the package: \n\n```\ngit clone https://github.com/ion-g-ion/torchTT\ncd torchTT\npython setup.py install\n``` \n\n### Using conda\n\n**TODO**\n\n## Components\n\nThe main modules/submodules that can be accessed after importing `torchtt` are briefly desctibed in the following table.\nDetailed description can be found [here](https://ion-g-ion.github.io/torchTT/index.html).\n\n| Component | Description |\n| --- | --- |\n| [`torchtt`](https://ion-g-ion.github.io/torchTT/torchtt/torchtt.html) | Basic TT class and basic linear algebra functions. |\n| [`torchtt.solvers`](https://ion-g-ion.github.io/torchTT/torchtt/solvers.html) | Implementation of the AMEN solver. |\n| [`torchtt.grad`](https://ion-g-ion.github.io/torchTT/torchtt/grad.html) | Wrapper for automatic differentiation. |\n| [`torchtt.manifold`](https://ion-g-ion.github.io/torchTT/torchtt/manifold.html) | Riemannian gradient and projection onto manifolds of tensors with fixed TT rank. |\n| [`torchtt.nn`](https://ion-g-ion.github.io/torchTT/torchtt/nn.html) | Basic TT neural network layer. |\n| [`torchtt.interpolate`](https://ion-g-ion.github.io/torchTT/torchtt/interpolate.html) | Cross approximation routines. |\n\n## Tests \n\nThe directory [tests/](tests/) from the root folder contains all the `unittests`. To run them use the command:\n\n```\npytest tests/\n```\n\n\n## Documentation and examples\nThe documentation can be found [here](https://ion-g-ion.github.io/torchTT/index.html).\nFollowing example scripts (as well as python notebooks) are also provied provided as part of the documentation:\n\n * [basic_tutorial.py](examples/basic_tutorial.py) / [basic_tutorial.ipynp](examples/basic_tutorial.ipynb): This contains a basic tutorial on decomposing full tensors in the TT format as well as performing rank rounding, slicing ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_tutorial.ipynb)). \n * [basic_linalg.py](examples/basic_linalg.py) / [basic_linalg.ipynp](examples/basic_linalg.ipynb): This tutorial presents all the algebra operations that can be performed in the TT format ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_linalg.ipynb)). 
\n * [efficient_linalg.py](examples/efficient_linalg.py) / [efficient_linalg.ipynb](examples/efficient_linalg.ipynb): contains the DMRG for fast matves and AMEN for elementwise inversion in the TT format ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/efficient_linalg.ipynb)). \n * [automatic_differentiation.py](examples/automatic_differentiation.py) / [automatic_differentiation.ipynp](examples/automatic_differentiation.ipynb): Basic tutorial on AD in `torchtt` ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/automatic_differentiation.ipynb)). \n * [cross_interpolation.py](examples/cross_interpolation.py) / [cross_interpolation.ipynb](examples/cross_interpolation.ipynb): In this script, the cross interpolation emthod is exemplified ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/cross_interpolation.ipynb)). \n * [system_solvers.py](examples/system_solvers.py) / [system_solvers.ipynb](examples/system_solvers.ipynb): This contains the bais ussage of the multilinear solvers ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/system_solvers.ipynb)). \n * [cuda.py](examples/cuda.py) / [cuda.ipynb](examples/cuda.ipynb): This provides an example on how to use the GPU acceleration ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/cuda.ipynb)). \n * [basic_nn.py](examples/basic_nn.py) / [basic_nn.ipynb](examples/basic_nn.ipynb): This provides an example on how to use the TT neural network layers ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/basic_nn.ipynb)). \n * [mnist_nn.py](examples/mnist_nn.py) / [mnist_nn.ipynb](examples/mnist_nn.ipynb): Example of TT layers used for image classification ([Try on Google Colab](https://colab.research.google.com/github/ion-g-ion/torchTT/blob/main/examples/mnist_nn.ipynb)). \n \n The documentation is generated using `shpinx` with:\n\n ```\n make html\n ```\n\n after installing the packages\n\n ```\n pip install sphinx sphinx_rtd_theme\n ```\n\n## Author \nIon Gabriel Ion, e-mail: ion.ion.gabriel@gmail.com\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2021 ion-g-ion Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "Tensor-Train decomposition in pytorch.",
"version": "0.3",
"project_urls": null,
"split_keywords": [
"pytorch",
" tensor-train decomposition"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "9efd7020c0128be303812dfc96426b6b295706ebd3da948b0464803be09e3fa9",
"md5": "69a1873e082955392722f382fdab0b1e",
"sha256": "f317f57eec5b2eda6af11f7eed8bbf97b18bb1fac8f00f1d22c97703fc1bd4d9"
},
"downloads": -1,
"filename": "torchtt-0.3.tar.gz",
"has_sig": false,
"md5_digest": "69a1873e082955392722f382fdab0b1e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 524583,
"upload_time": "2024-12-09T21:37:00",
"upload_time_iso_8601": "2024-12-09T21:37:00.138916Z",
"url": "https://files.pythonhosted.org/packages/9e/fd/7020c0128be303812dfc96426b6b295706ebd3da948b0464803be09e3fa9/torchtt-0.3.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-12-09 21:37:00",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "torchtt"
}