lowrank 1.0.0

- Summary: A Python Package for Advanced Tensor Learning Methods
- Home page: https://github.com/rmsolgi/tensorlearn.git
- Author and maintainer: Ryan Solgi
- Upload time: 2023-08-11 18:45:32
- Keywords: tensor, decomposition, tensor-train, rank, auto-rank, CANDECOMP, PARAFAC, CP
- Requirements: none recorded
# TensorLearn

TensorLearn is a Python library distributed on [PyPI](https://pypi.org) that implements tensor learning methods.

The project is under active development, but the methods listed below are final and functional. The only requirement is [NumPy](https://numpy.org).

    
## Installation

Use the package manager [pip](https://pip.pypa.io/en/stable/) to install tensorlearn:

```bash
pip install tensorlearn
```

## Methods
### Decomposition Methods
- [auto_rank_tt](#autoranktt-id)

- [cp_als_rand_init](#cpalsrandinit-id)

### Tensor Operations for Tensor-Train 
- [tt_to_tensor](#tttotensor-id)

- [tt_compression_ratio](#ttcr-id)

### Tensor Operations for CANDECOMP/PARAFAC (CP)
- [cp_to_tensor](#cptotensor-id)

- [cp_compression_ratio](#cpcr-id)

### Tensor Operations
- [tensor_resize](#tensorresize-id)

- [unfold](#unfold-id)

- [tensor_frobenius_norm](#tfronorm-id)

### Matrix Operations
- [error_truncated_svd](#etsvd-id)

- [column_wise_kronecker](#colwisekron-id)

---


## <a name="autoranktt-id"></a>auto_rank_tt

```python
tensorlearn.auto_rank_tt(tensor, epsilon)
```

This implementation of [tensor-train decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition) determines the ranks automatically from a given error bound, following [Oseledets (2011)](https://epubs.siam.org/doi/10.1137/090752286). The user therefore does not need to specify the ranks; instead, the user specifies an upper error bound (epsilon) that bounds the error of the decomposition. For more details, see the page [tensor-train decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition).


### Arguments 
- tensor < array >: The given tensor to be decomposed.

- epsilon < float >: [The error bound of decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition#epsilon-id) in the range \[0,1\].

### Return
- TT factors < list of arrays >: A list of NumPy arrays containing the factors (TT cores) of the TT decomposition. The length of the list equals the number of dimensions of the given tensor.

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/Tensor-Train%20Decomposition/example_tt.py)
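
A minimal usage sketch, assuming tensorlearn is installed; the random tensor and the value of epsilon below are only illustrative:

```python
import numpy as np
import tensorlearn

# Illustrative 4-way tensor; any NumPy array works.
tensor = np.random.rand(4, 5, 6, 7)

# Decompose with an upper error bound of 5%; the TT ranks are chosen automatically.
tt_factors = tensorlearn.auto_rank_tt(tensor, 0.05)

print(len(tt_factors))  # one TT core per dimension of the input tensor
```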

---
## <a name="cpalsrandinit-id"></a>cp_als_rand_init

```python
tensorlearn.cp_als_rand_init(tensor, rank, iteration, random_seed=None)
```

This is an implementation of [CANDECOMP/PARAFAC (CP) decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/CP_decomposition) using the [alternating least squares (ALS) algorithm](https://arxiv.org/abs/2112.10855) with random initialization of the factor matrices.

### Arguments 
- tensor < array >: the given tensor to be decomposed

- rank < int >: the rank of the CP decomposition (the number of rank-one components)

- iteration < int >: the number of iterations of the ALS algorithm

- random_seed < int >: the seed of the random number generator used for the random initialization of the factor matrices


### Return
- weights < array >: the vector of normalization weights (lambda) in CP decomposition

- factors < list of arrays >: factor matrices of the CP decomposition

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/CP_decomposition/CP_example.py)
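
A minimal usage sketch; the tensor, rank, and iteration count below are only illustrative:

```python
import numpy as np
import tensorlearn

# Illustrative 3-way tensor.
tensor = np.random.rand(6, 7, 8)

# CP decomposition with rank 4, 100 ALS iterations, and a fixed seed.
weights, factors = tensorlearn.cp_als_rand_init(tensor, 4, 100, random_seed=0)

print(weights.shape)               # one weight per rank-one component
print([f.shape for f in factors])  # one factor matrix per dimension
```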

---





## <a name="tttotensor-id"></a>tt_to_tensor

```python
tensorlearn.tt_to_tensor(factors)
```

Returns the full tensor reconstructed from the given TT factors.


### Arguments
- factors < list of numpy arrays >: TT factors

### Return
- full tensor < numpy array >

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/Tensor-Train%20Decomposition/example_tt.py)
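
A minimal sketch of a round trip through auto_rank_tt and tt_to_tensor; the tensor and epsilon are illustrative, and the relative error is checked with tensor_frobenius_norm:

```python
import numpy as np
import tensorlearn

x = np.random.rand(4, 5, 6)
factors = tensorlearn.auto_rank_tt(x, 0.1)

# Rebuild the full tensor from its TT cores.
x_hat = tensorlearn.tt_to_tensor(factors)

rel_error = tensorlearn.tensor_frobenius_norm(x - x_hat) / tensorlearn.tensor_frobenius_norm(x)
print(rel_error)  # expected to be no larger than the epsilon passed to auto_rank_tt
```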



---

## <a name="ttcr-id"></a>tt_compression_ratio

```python
tensorlearn.tt_compression_ratio(factors)
```

Returns the [data compression ratio](https://en.wikipedia.org/wiki/Data_compression_ratio) for the [tensor-train decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition)

### Arguments
- factors < list of numpy arrays >: TT factors

### Return
- Compression ratio < float >

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/Tensor-Train%20Decomposition/example_tt.py)
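
A minimal sketch; the tensor and epsilon are illustrative:

```python
import numpy as np
import tensorlearn

x = np.random.rand(10, 10, 10, 10)
factors = tensorlearn.auto_rank_tt(x, 0.1)

# Data compression ratio achieved by the TT representation of x.
print(tensorlearn.tt_compression_ratio(factors))
```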

---

## <a name="cptotensor-id"></a>cp_to_tensor

Returns the full tensor reconstructed from the given CP factor matrices and weights.


```python
tensorlearn.cp_to_tensor(weights, factors)
```

### Arguments
- weights < array >: the vector of normalization weights (lambda) in CP decomposition

- factors < list of arrays >: factor matrices of the CP decomposition

### Return
- full tensor < array >

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/CP_decomposition/CP_example.py)
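
A minimal sketch; the tensor, rank, and iteration count are illustrative:

```python
import numpy as np
import tensorlearn

x = np.random.rand(6, 7, 8)
weights, factors = tensorlearn.cp_als_rand_init(x, 4, 100)

# Rebuild the full (approximate) tensor from the weights and factor matrices.
x_hat = tensorlearn.cp_to_tensor(weights, factors)
print(x_hat.shape)  # matches the shape of the original tensor
```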


---


## <a name="cpcr-id"></a>cp_compression_ratio

Returns the [data compression ratio](https://en.wikipedia.org/wiki/Data_compression_ratio) for the [CP decomposition](https://github.com/rmsolgi/TensorLearn/tree/main/CP_decomposition)

```python
tensorlearn.cp_compression_ratio(weights, factors)
```

### Arguments
- weights < array >: the vector of normalization weights (lambda) in CP decomposition

- factors < list of arrays >: factor matrices of the CP decomposition

### Return

- Compression ratio < float >

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/CP_decomposition/CP_example.py)
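
A minimal sketch; the tensor, rank, and iteration count are illustrative:

```python
import numpy as np
import tensorlearn

x = np.random.rand(6, 7, 8)
weights, factors = tensorlearn.cp_als_rand_init(x, 4, 100)

# Data compression ratio achieved by the CP representation of x.
print(tensorlearn.cp_compression_ratio(weights, factors))
```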

---

## <a name="tensorresize-id"></a>tensor_resize

```python
tensorlearn.tensor_resize(tensor, new_shape)
```

This method resizes the given tensor to a new shape. The new size (number of elements) must be greater than or equal to the original size. If the new shape results in a tensor with more elements, the extra entries are filled with zeros. This works similarly to [numpy.ndarray.resize()](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.resize.html).

### Arguments
- tensor < array >: the given tensor

- new_shape < tuple >: new shape 

### Return
- tensor < array >: tensor with new given shape
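
A minimal sketch; the tensor and target shape are illustrative:

```python
import numpy as np
import tensorlearn

t = np.random.rand(3, 4)                          # 12 elements
t_resized = tensorlearn.tensor_resize(t, (3, 5))  # 15 elements; extra entries padded with zeros
print(t_resized.shape)
```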

---

## <a name="unfold-id"></a>unfold
```python
tensorlearn.unfold(tensor, n)
```
Unfolds the tensor with respect to dimension n (mode-n unfolding).

### Arguments
- tensor < array >: tensor to be unfolded

- n < int >: dimension based on which the tensor is unfolded

### Return
- matrix < array >: unfolded tensor with respect to dimension n
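
A minimal sketch; the tensor is illustrative, and the expected shape assumes the usual mode-n unfolding convention in which dimension n indexes the rows:

```python
import numpy as np
import tensorlearn

t = np.random.rand(3, 4, 5)

# Mode-0 unfolding of a 3 x 4 x 5 tensor.
m = tensorlearn.unfold(t, 0)
print(m.shape)  # expected (3, 20)
```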

---

## <a name="tfronorm-id"></a>tensor_frobenius_norm

```python
tensorlearn.tensor_frobenius_norm(tensor)
```

Calculates the [Frobenius norm](https://mathworld.wolfram.com/FrobeniusNorm.html) of the given tensor.

### Arguments
- tensor < array >: the given tensor

### Return
- frobenius norm < float >

[Example](https://github.com/rmsolgi/TensorLearn/blob/main/Tensor-Train%20Decomposition/example_tt.py)
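
A minimal sketch; the tensor is illustrative, and the result is checked against the equivalent NumPy computation:

```python
import numpy as np
import tensorlearn

t = np.random.rand(3, 4, 5)
norm = tensorlearn.tensor_frobenius_norm(t)

# The Frobenius norm equals the 2-norm of the flattened tensor.
print(np.isclose(norm, np.linalg.norm(t.ravel())))
```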

---


## <a name="etsvd-id"></a>error_truncated_svd

```python
tensorlearn.error_truncated_svd(x, error)
```
This method computes a [compact SVD](https://en.wikipedia.org/wiki/Singular_value_decomposition) and returns the [sigma (error)-truncated SVD](https://langvillea.people.cofc.edu/DISSECTION-LAB/Emmie%27sLSI-SVDModule/p5module.html) of a given matrix. It is implemented with [numpy.linalg.svd](https://numpy.org/doc/stable/reference/generated/numpy.linalg.svd.html) using full_matrices=False, and is used by the [TT-SVD algorithm](https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition#ttsvd-id) in [auto_rank_tt](#autoranktt-id).

### Arguments
- x < 2D array >: the given matrix to be decomposed

- error < float >: the given error in the range \[0,1\]

### Return
- r, u, s, vh < int, numpy array, numpy array, numpy array >: the truncation rank and the SVD factors (left singular vectors, singular values, and right singular vectors, following the numpy.linalg.svd naming)
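
A minimal sketch; the matrix and error value are illustrative. It assumes s is a 1-D array of singular values as produced by numpy.linalg.svd, and the slicing keeps the reconstruction valid whether or not the returned factors are already truncated to rank r:

```python
import numpy as np
import tensorlearn

x = np.random.rand(20, 15)
r, u, s, vh = tensorlearn.error_truncated_svd(x, 0.1)

# Rank-r approximation of x built from the truncated factors.
x_approx = u[:, :r] @ np.diag(s[:r]) @ vh[:r, :]
print(r, np.linalg.norm(x - x_approx) / np.linalg.norm(x))
```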


---

## <a name="colwisekron-id"></a>column_wise_kronecker

```python
tensorlearn.column_wise_kronecker(a, b)
```
Returns the [column-wise Kronecker product (also known as the Khatri–Rao product)](https://en.wikipedia.org/wiki/Khatri–Rao_product) of two given matrices.

### Arguments

- a, b < 2D array >: the given matrices (they must have the same number of columns)

### Return

- column-wise Kronecker product < array >
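
A minimal sketch; the matrices are illustrative:

```python
import numpy as np
import tensorlearn

a = np.random.rand(4, 3)
b = np.random.rand(5, 3)  # a and b must have the same number of columns

c = tensorlearn.column_wise_kronecker(a, b)
print(c.shape)  # expected (20, 3): each column is the Kronecker product of the matching columns
```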



            
