torch-sparse-carate


Name: torch-sparse-carate
Version: 0.6.17
Home page: https://codeberg.org/sail.black/pytorch_sparse_carate.git
Summary: PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
Upload time: 2023-03-25 20:20:10
Author: Julian M. Kleber
Requires Python: >=3.7
Keywords: pytorch, sparse, sparse-matrices, autograd
Requirements: No requirements were recorded.
# Badges


[![Downloads](https://static.pepy.tech/personalized-badge/carate?period=total&units=international_system&left_color=black&right_color=orange&left_text=Downloads)](https://pepy.tech/project/find_the_right_one)
[![License: GPL v3](https://img.shields.io/badge/License-GPL_v3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
![Python Versions](https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11-blue)
![Style Black](https://img.shields.io/badge/code%20style-black-000000.svg)

The repo itself was not modified; however, its quality may be significantly improved using
[lia](https://codeberg.org/cap_jmk/lia.git).


# Badges Parent Package

[pypi-image]: https://badge.fury.io/py/torch-sparse.svg
[pypi-url]: https://pypi.python.org/pypi/torch-sparse
[testing-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml/badge.svg
[testing-url]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml
[linting-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/linting.yml/badge.svg
[linting-url]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/linting.yml
[coverage-image]: https://codecov.io/gh/rusty1s/pytorch_sparse/branch/master/graph/badge.svg
[coverage-url]: https://codecov.io/github/rusty1s/pytorch_sparse?branch=master

# PyTorch Sparse

[![PyPI Version][pypi-image]][pypi-url]
[![Testing Status][testing-image]][testing-url]
[![Linting Status][linting-image]][linting-url]
[![Code Coverage][coverage-image]][coverage-url]

--------------------------------------------------------------------------------

This package consists of a small extension library of optimized sparse matrix operations with autograd support.
It currently provides the following methods:

* **[Coalesce](#coalesce)**
* **[Transpose](#transpose)**
* **[Sparse Dense Matrix Multiplication](#sparse-dense-matrix-multiplication)**
* **[Sparse Sparse Matrix Multiplication](#sparse-sparse-matrix-multiplication)**

All included operations work on varying data types and are implemented for both CPU and GPU.
To avoid the hassle of creating [`torch.sparse_coo_tensor`](https://pytorch.org/docs/stable/torch.html?highlight=sparse_coo_tensor#torch.sparse_coo_tensor), this package defines operations on sparse tensors by simply passing `index` and `value` tensors as arguments ([with the same shapes as defined in PyTorch](https://pytorch.org/docs/stable/sparse.html)).
Note that only `value` comes with autograd support, as `index` is discrete and therefore not differentiable.
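
For example, a sparse matrix is described by a `2 × nnz` `index` tensor holding row and column coordinates plus an `nnz`-long `value` tensor (a minimal illustration of the representation, not a library call):

```python
import torch

# The 2 x 3 matrix [[0, 1, 0],
#                   [2, 0, 0]] has two non-zero entries.
index = torch.tensor([[0, 1],   # row coordinates
                      [1, 0]])  # column coordinates
value = torch.tensor([1.0, 2.0])

# The torch.sparse_coo_tensor this representation replaces:
sparse = torch.sparse_coo_tensor(index, value, (2, 3))
```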

## Installation

### Anaconda

**Update:** You can now install `pytorch-sparse` via [Anaconda](https://anaconda.org/pyg/pytorch-sparse) for all major OS/PyTorch/CUDA combinations 🤗
Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run

```
conda install pytorch-sparse -c pyg
```
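
If the install succeeded, the package should be importable; as a quick sanity check (assuming the module exposes `__version__`, as upstream `torch_sparse` does):

```
python -c "import torch_sparse; print(torch_sparse.__version__)"
```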

### Binaries

We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations, see [here](https://data.pyg.org/whl).

#### PyTorch 2.0

To install the binaries for PyTorch 2.0.0, simply run

```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.0.0+${CUDA}.html
```

where `${CUDA}` should be replaced by either `cpu`, `cu117`, or `cu118` depending on your PyTorch installation.

|             | `cpu` | `cu117` | `cu118` |
|-------------|-------|---------|---------|
| **Linux**   | ✅    | ✅      | ✅      |
| **Windows** | ✅    | ✅      | ✅      |
| **macOS**   | ✅    |         |         |
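
For example, a CPU-only installation of the PyTorch 2.0.0 wheels substitutes `cpu` for `${CUDA}`:

```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.0.0+cpu.html
```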

#### PyTorch 1.13

To install the binaries for PyTorch 1.13.0, simply run

```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-1.13.0+${CUDA}.html
```

where `${CUDA}` should be replaced by either `cpu`, `cu116`, or `cu117` depending on your PyTorch installation.

|             | `cpu` | `cu116` | `cu117` |
|-------------|-------|---------|---------|
| **Linux**   | ✅    | ✅      | ✅      |
| **Windows** | ✅    | ✅      | ✅      |
| **macOS**   | ✅    |         |         |

**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0 and PyTorch 1.12.0/1.12.1 (following the same procedure).
For older versions, you need to explicitly specify the latest supported version number or install via `pip install --no-index` in order to prevent pip from falling back to an installation from source.
You can look up the latest supported version number [here](https://data.pyg.org/whl).

### From source

Ensure that at least PyTorch 1.7.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:

```
$ python -c "import torch; print(torch.__version__)"
>>> 1.7.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
```

If you want to additionally build `torch-sparse` with METIS support, *e.g.* for partitioning, please download and install the [METIS library](https://web.archive.org/web/20211119110155/http://glaros.dtc.umn.edu/gkhome/metis/metis/download) by following the instructions in the `Install.txt` file.
Note that METIS needs to be installed with a 64-bit `IDXTYPEWIDTH` by changing `include/metis.h`.
Afterwards, set the environment variable `WITH_METIS=1`.
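
For example, in a POSIX shell:

```
export WITH_METIS=1
```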

Then run:

```
pip install torch-scatter torch-sparse
```

When running in a docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:

```
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```

## Functions

### Coalesce

```
torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
```

Row-wise sorts `index` and removes duplicate entries.
Duplicate entries are merged by scattering their values together.
For scattering, any operation from [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.

#### Parameters

* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)

#### Returns

* **index** *(LongTensor)* - The coalesced index tensor of the sparse matrix.
* **value** *(Tensor)* - The coalesced value tensor of the sparse matrix.

#### Example

```python
import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = coalesce(index, value, m=3, n=2)
```

```
print(index)
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
        [7.0, 9.0],
        [3.0, 4.0],
        [5.0, 6.0]])
```
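
Because duplicates are reduced with `torch_scatter`, other reductions can be selected via `op`; a minimal sketch with the same inputs as above, taking the element-wise maximum instead of the sum:

```python
import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

# With op="max", the two duplicate entries at (1, 0) reduce to [6, 7]
# rather than summing to [7, 9].
index, value = coalesce(index, value, m=3, n=2, op="max")
```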

### Transpose

```
torch_sparse.transpose(index, value, m, n, coalesced=True) -> (torch.LongTensor, torch.Tensor)
```

Transposes dimensions 0 and 1 of a sparse matrix.

#### Parameters

* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **coalesced** *(bool, optional)* - If set to `False`, the output will not be coalesced. (default: `True`)

#### Returns

* **index** *(LongTensor)* - The transposed index tensor of the sparse matrix.
* **value** *(Tensor)* - The transposed value tensor of the sparse matrix.

#### Example

```python
import torch
from torch_sparse import transpose

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = transpose(index, value, 3, 2)
```

```
print(index)
tensor([[0, 0, 1, 1],
        [1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
        [5.0, 6.0],
        [6.0, 8.0],
        [3.0, 4.0]])
```

### Sparse Dense Matrix Multiplication

```
torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
```

Matrix product of a sparse matrix with a dense matrix.

#### Parameters

* **index** *(LongTensor)* - The index tensor of the sparse matrix.
* **value** *(Tensor)* - The value tensor of the sparse matrix.
* **m** *(int)* - The first dimension of the sparse matrix.
* **n** *(int)* - The second dimension of the sparse matrix.
* **matrix** *(Tensor)* - The dense matrix.

#### Returns

* **out** *(Tensor)* - The dense output matrix.

#### Example

```python
import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)
```

```
print(out)
tensor([[7.0, 16.0],
        [8.0, 20.0],
        [7.0, 19.0]])
```
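
As a cross-check, `spmm` matches the dense product obtained by materializing the sparse matrix first (a sketch continuing the example above, not part of the API):

```python
# Dense equivalent of the spmm call: A @ matrix with A materialized.
dense = torch.sparse_coo_tensor(index, value, (3, 3)).to_dense()
assert torch.allclose(dense @ matrix, out)
```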

### Sparse Sparse Matrix Multiplication

```
torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n, coalesced=False) -> (torch.LongTensor, torch.Tensor)
```

Matrix product of two sparse tensors.
Both input sparse matrices need to be **coalesced** (set the `coalesced` argument to `True` to force coalescing).

#### Parameters

* **indexA** *(LongTensor)* - The index tensor of the first sparse matrix.
* **valueA** *(Tensor)* - The value tensor of the first sparse matrix.
* **indexB** *(LongTensor)* - The index tensor of the second sparse matrix.
* **valueB** *(Tensor)* - The value tensor of the second sparse matrix.
* **m** *(int)* - The first dimension of the first sparse matrix.
* **k** *(int)* - The second dimension of the first sparse matrix and the first dimension of the second sparse matrix.
* **n** *(int)* - The second dimension of the second sparse matrix.
* **coalesced** *(bool, optional)* - If set to `True`, both input sparse matrices will be coalesced. (default: `False`)

#### Returns

* **index** *(LongTensor)* - The index tensor of the output sparse matrix.
* **value** *(Tensor)* - The value tensor of the output sparse matrix.

#### Example

```python
import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
```

```
print(indexC)
tensor([[0, 1, 2],
        [0, 1, 1]])
print(valueC)
tensor([8.0, 6.0, 8.0])
```
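
When the operands may contain unsorted or duplicate entries, they can either be coalesced explicitly beforehand or `spspmm` can be asked to do it via the documented `coalesced` flag (a sketch continuing the example above, where the inputs happen to be coalesced already):

```python
from torch_sparse import coalesce

# Explicitly coalesce both operands before multiplying.
indexA, valueA = coalesce(indexA, valueA, m=3, n=3)
indexB, valueB = coalesce(indexB, valueB, m=3, n=2)

# Equivalent: let spspmm coalesce the inputs itself.
indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2, coalesced=True)
```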

## Running tests

```
pytest
```

## C++ API

`torch-sparse` also offers a C++ API that contains the C++ equivalents of the Python functions.
For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):

```
mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install
```

            
