![logo](https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/logo.png) PartialTorch ![Build Wheels Status](https://img.shields.io/github/actions/workflow/status/inspiros/partialtorch/build_wheels.yml) ![License](https://img.shields.io/github/license/inspiros/partialtorch)
=============

**PartialTorch** is a thin C++ wrapper around **PyTorch**'s operators that adds support for masked and partial semantics.

## Main Features

### Masked Pair

We use a custom C++ extension class called `partialtorch.MaskedPair` to store ``data`` and ``mask`` (an optional
``Tensor`` of the same shape as ``data``, containing ``0/1`` values indicating the availability of the corresponding
element in ``data``).

The advantage of `MaskedPair` is that it is statically typed yet unpackable like a `namedtuple`,
and, more importantly, it is accepted by `torch.jit.script` functions as an argument or return type.
This container is a temporary substitute for `torch.masked.MaskedTensor` and may change in the future.

This table compares the two in some aspects:

|                                     |                             ``torch.masked.MaskedTensor``                              |                      ``partialtorch.MaskedPair``                       |
|:------------------------------------|:--------------------------------------------------------------------------------------:|:----------------------------------------------------------------------:|
| **Backend**                         |                                         Python                                         |                                  C++                                   |
| **Nature**                          |          Is a subclass of ``Tensor`` with ``mask`` as an additional attribute          |                Is a container of ``data`` and ``mask``                 |
| **Supported layouts**               |                                   Strided and Sparse                                   |                              Only Strided                              |
| **Mask types**                      |                                  ``torch.BoolTensor``                                  |       ``Optional[torch.BoolTensor]`` (may support other dtypes)        |
| **Ops Coverage**                    | Listed [here](https://pytorch.org/docs/stable/masked.html) (with lots of restrictions) |  All masked ops that ``torch.masked.MaskedTensor`` supports and more   |
| **``torch.jit.script``-able**       |          Yes✔️ (Python ops seem not to be jit-compiled but merely encapsulated)           |                                 Yes✔️                                  |
| **Supports ``Tensor``'s methods**   |                                         Yes✔️                                          |                             Only a few[^1]                             |
| **Supports ``__torch_function__``** |                                         Yes✔️                                          |                                No❌[^1]                                 |
| **Performance**                     |           Slow and sometimes buggy (e.g. try calling ``.backward`` 3 times)            | Faster, not prone to bugs related to ``autograd`` as it is a container |

[^1]: We blame ``torch`` 😅

More details about the differences will be discussed below.

### Masked Operators

<p align="center">
    <img src="https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/torch_masked_binary.png" width="600">
</p>

<p align="center">
    <img src="https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/masked_binary.png" width="600">
</p>

**Masked operators** are the counterparts of those found in the ``torch.masked``
package (_which is, unfortunately, still in the prototype stage_).

Our semantics differ from those of ``torch.masked`` for non-unary operators.

- ``torch.masked``: Requires operands to share an identical mask
  (see this [link](https://pytorch.org/docs/stable/masked.html)), which is not always the case when we have to deal
  with missing data.
- ``partialtorch``: Allows operands to have different masks; the output mask is the _bitwise all_
  of the input masks' values (see the sketch below).
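
A minimal sketch of the ``partialtorch`` behavior, assuming the masked counterpart of ``torch.add`` is exposed
as ``partialtorch.add`` (masked operators inherit their native ops' names, as noted in the next section) and using
``partialtorch.rand_mask`` from the Usage section:

```python
import torch
import partialtorch

torch.manual_seed(0)
pa = partialtorch.rand_mask(torch.rand(3), 0.5)
pb = partialtorch.rand_mask(torch.rand(3), 0.5)

# unlike torch.masked, the operands' masks may differ;
# an output position is present only where BOTH operands are present (bitwise all)
pout = partialtorch.add(pa, pb)
```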

### Partial Operators

<p align="center">
    <img src="https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/partial_binary.png" width="600">
</p>

Similar to masked operators, **partial operators** allow non-uniform masks, but instead of using _bitwise all_
to compute the output mask, they use _bitwise any_.
That means the output at any position with at least one present operand is NOT considered missing.

In detail, before forwarding to the regular ``torch`` native operators, the masked positions of each operand are filled
with an _identity value_.
The identity value is defined as the value satisfying the property ``op(op_identity, value) = value``.
For example, the identity value of element-wise addition is ``0``.

<p align="center">
    <img src="https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/regular_binary.png" width="600">
</p>

All partial operators have the prefix ``partial_`` prepended to their names (e.g. ``partialtorch.partial_add``),
while masked operators inherit their native ops' names.
Reduction operators are excluded from this rule as they can be considered unary partial ops, and some of them
are already available in ``torch.masked``.
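
To illustrate the difference from masked operators, here is a hedged sketch mirroring the earlier example
(same assumptions as before; ``partialtorch.partial_add`` is named above):

```python
import torch
import partialtorch

torch.manual_seed(0)
pa = partialtorch.rand_mask(torch.rand(3), 0.5)
pb = partialtorch.rand_mask(torch.rand(3), 0.5)

# masked add: output mask = bitwise all of the input masks
pout_masked = partialtorch.add(pa, pb)
# partial add: masked positions are first filled with the identity value 0,
# so output mask = bitwise any of the input masks
pout_partial = partialtorch.partial_add(pa, pb)
```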

#### Scaled Partial Operators

<p align="center">
    <img src="https://raw.githubusercontent.com/inspiros/partialtorch/master/resources/scaled_binary.png" width="800">
</p>

Some partial operators that involve addition/subtraction are extended to have _rescaling semantics_.
We call them **scaled partial operators**.
In essence, they rescale the output by the ratio of present operands in the computation of the output.
The idea is similar to ``torch.dropout`` rescaling by $\frac{1}{1-p}$,
or more precisely the way [**Partial Convolution**](https://arxiv.org/abs/1804.07723) works.

Programmatically, all scaled partial operators share the same signature as their non-scaled counterparts,
and are dispatched to when the keyword-only argument ``scaled=True`` is passed:

```python
pout = partialtorch.partial_add(pa, pb, scaled=True)
```
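
To make the rescaling concrete, here is a minimal pure-``torch`` sketch of the semantics described above for
binary addition (an illustration of the idea only, not the library's actual C++ implementation):

```python
import torch

def scaled_partial_add_sketch(a: torch.Tensor, a_mask: torch.Tensor,
                              b: torch.Tensor, b_mask: torch.Tensor):
    # fill masked positions with the identity value of addition (0), then add
    filled = a.masked_fill(~a_mask, 0) + b.masked_fill(~b_mask, 0)
    # count present operands at each position
    present = a_mask.int() + b_mask.int()
    out_mask = present > 0  # bitwise any
    # rescale by (total operands) / (present operands), e.g. x2 where only one operand is present
    scale = 2.0 / present.clamp(min=1)
    return torch.where(out_mask, filled * scale, filled), out_mask
```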

### Torch Ops Coverage

We found out that the workload is a behemoth for a team of one person, as it involves manually reimplementing all
native functors under the ``at::_ops`` namespace (guess how many there are).
Therefore, we try to cover as many primitive operators as possible, as well as a few other operators relevant to our
work.
The full list of all registered signatures can be found in this [file](resources/partialtorch_ops.yaml).

If you want an operator to be added, please contact me.
However, if it falls into one of the following categories, the port may take long or may never happen:

- Ops that do not have a meaningful masked semantic (e.g. ``torch.det``).
- Ops that cannot be implemented easily by calling native ops and require writing custom kernels (e.g. ``torch.mode``).
- Ops that accept output as an input, a.k.a. _out_ ops (e.g.
  ``aten::mul.out(self: Tensor, other: Tensor, *, out: Tensor(a!)) -> Tensor(a!)``).
- Ops for tensors with unsupported properties (e.g. named tensors, sparse/quantized layouts).
- Ops with any input/return type that does not have a ``pybind11`` type conversion predefined by ``torch``'s C++ backend.

Also, everyone is welcome to contribute.

## Requirements

- ``torch>=2.1.0`` _(this version of **PyTorch** brought a number of changes that are not backward compatible)_

## Installation

#### From TestPyPI

[partialtorch](https://test.pypi.org/project/partialtorch/) has wheels hosted at **TestPyPI**
(it is not likely to reach a stable state anytime soon):

```bash
pip install -i https://test.pypi.org/simple/ partialtorch
```

The Linux and Windows wheels are built with **CUDA 12.1**.
If you cannot find a wheel for your arch/Python/CUDA combination, or if there is any problem with library linking when
importing, proceed to the [instructions to build from source](#from-source).

|                  |             Linux/Windows             |     MacOS      |
|------------------|:-------------------------------------:|:--------------:|
| Python version:  |               3.8-3.11                |    3.8-3.11    |
| PyTorch version: |            `torch==2.1.0`             | `torch==2.1.0` |
| CUDA version:    |                 12.1                  |       -        |
| GPU CCs:         | `5.0,6.0,6.1,7.0,7.5,8.0,8.6,9.0+PTX` |       -        |

#### From Source

To install from source, you need a C++17 compiler (`gcc`/`msvc`) and a CUDA compiler (`nvcc`) installed.
Then, clone this repo and execute:

```bash
pip install .
```

## Usage

### Initializing a ``MaskedPair``

While ``MaskedPair`` is almost as simple as a ``namedtuple``, there are also a few supporting creation ops:

```python
import torch, partialtorch

x = torch.rand(3, 3)
x_mask = torch.bernoulli(torch.full_like(x, 0.5)).bool()  # x_mask must have dtype torch.bool

px = partialtorch.masked_pair(x, x_mask)  # with 2 inputs data and mask
px = partialtorch.masked_pair(x)  # with data only (mask = None)
px = partialtorch.masked_pair(x, None)  # explicitly define mask = None
px = partialtorch.masked_pair(x, True)  # explicitly define mask = True (equivalent to None)
px = partialtorch.masked_pair((x, x_mask))  # from tuple

# this new random function conveniently does the work of the above steps
px = partialtorch.rand_mask(x, 0.5)
```

Note that ``MaskedPair`` is not a subclass of ``Tensor`` like ``MaskedTensor`` is,
so we only support a very limited number of methods.
This is mostly because of the current limitations of the C++ backend for custom classes[^1], such as:

- Unable to overload methods with the same name
- Unable to define custom type conversions from a Python type (``Tensor``) or to a custom Python type
  (which would allow defining custom methods such as ``__str__``, as ``Tensor`` does)
- Unable to define ``__torch_function__``

In the meantime, please consider ``MaskedPair`` purely a fast container and use
``partialtorch.op(pair, ...)`` instead of ``pair.op(...)`` whenever a method is not available.

**Note:** You cannot index ``MaskedPair`` with ``pair[..., 1:-1]`` as it acts like a tuple of two elements when indexed; see the sketch below.
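
A short sketch of the tuple-like behavior (assuming ``px`` was created as above and its ``mask`` is not ``None``):

```python
# MaskedPair unpacks like a 2-tuple of (data, mask)
data, mask = px
data, mask = px[0], px[1]  # tuple-style integer indexing

# to slice, operate on the unpacked tensors and re-wrap
px_sliced = partialtorch.masked_pair(data[..., 1:-1], mask[..., 1:-1])
```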

### Operators

All registered ops can be accessed like any of torch's custom C++ operators by calling ``torch.ops.partialtorch.[op_name]``
(the same way we call native ATen functions with ``torch.ops.aten.[op_name]``).
Overloaded versions that accept ``Tensor`` are also registered for convenience
(but the return type is always converted to ``MaskedPair``).

<table>
<tr>
<th>torch</th>
<th>partialtorch</th>
</tr>

<tr>
<td>
<sub>

```python
import torch

torch.manual_seed(1)
x = torch.rand(5, 5)

y = torch.sum(x, 0, keepdim=True)
```

</sub>
<td>
<sub>

```python
import torch
import partialtorch

torch.manual_seed(1)
x = torch.rand(5, 5)
px = partialtorch.rand_mask(x, 0.5)

# standard extension ops calling
pout = torch.ops.partialtorch.sum(px, 0, keepdim=True)
# all exposed ops are also aliased inside partialtorch.ops
pout = partialtorch.ops.sum(px, 0, keepdim=True)
```

</sub>
</td>
</tr>

</table>

Furthermore, we inherit the naming convention for inplace ops: a trailing ``_`` character appended to their
names (e.g. ``partialtorch.relu`` and ``partialtorch.relu_``).
They modify both the data and the mask of the first operand in place, as shown below.
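
A small example using the ops named above:

```python
import torch
import partialtorch

px = partialtorch.rand_mask(torch.randn(3, 3), 0.5)

pout = partialtorch.relu(px)  # out-of-place: returns a new MaskedPair
partialtorch.relu_(px)        # inplace: modifies px's data and mask directly
```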

The usage is kept as close to the corresponding ``Tensor`` ops as possible,
so further explanation would be redundant.

### Neural Network Layers

Currently, only a number of modules are implemented in the ``partialtorch.nn`` subpackage as masked equivalents
of those in ``torch.nn``.
This is the list of submodules inside ``partialtorch.nn.modules`` and the layers they provide:

- [`partialtorch.nn.modules.activation`](partialtorch/nn/modules/activation.py): All activations
  except ``torch.nn.MultiheadAttention``
- [`partialtorch.nn.modules.batchnorm`](partialtorch/nn/modules/batchnorm.py): ``BatchNormNd``
- [`partialtorch.nn.modules.channelshuffle`](partialtorch/nn/modules/channelshuffle.py): ``ChannelShuffle``
- [`partialtorch.nn.modules.conv`](partialtorch/nn/modules/conv.py): ``PartialConvNd``, ``PartialConvTransposeNd``
- [`partialtorch.nn.modules.dropout`](partialtorch/nn/modules/dropout.py): ``DropoutNd``, ``AlphaDropout``, ``FeatureAlphaDropout``
- [`partialtorch.nn.modules.flatten`](partialtorch/nn/modules/flatten.py): ``Flatten``, ``Unflatten``
- [`partialtorch.nn.modules.fold`](partialtorch/nn/modules/fold.py): ``Fold``, ``Unfold``
- [`partialtorch.nn.modules.instancenorm`](partialtorch/nn/modules/instancenorm.py): ``InstanceNormNd``
- [`partialtorch.nn.modules.normalization`](partialtorch/nn/modules/normalization.py): ``LayerNorm``
- [`partialtorch.nn.modules.padding`](partialtorch/nn/modules/padding.py): ``CircularPadNd``, ``ConstantPadNd``, ``ReflectionPadNd``, ``ReplicationPadNd``, ``ZeroPadNd``
- [`partialtorch.nn.modules.pixelshuffle`](partialtorch/nn/modules/pixelshuffle.py): ``PixelShuffle``, ``PixelUnshuffle``
- [`partialtorch.nn.modules.pooling`](partialtorch/nn/modules/pooling.py): ``MaxPoolNd``, ``AvgPoolNd``, ``FractionalMaxPoolNd``, ``LpPoolNd``, ``AdaptiveMaxPoolNd``, ``AdaptiveAvgPoolNd``
- [`partialtorch.nn.modules.upsampling`](partialtorch/nn/modules/upsampling.py): ``Upsample``, ``UpsamplingNearest2d``, ``UpsamplingBilinear2d``, ``PartialUpsample``, ``PartialUpsamplingBilinear2d``

The steps for declaring your custom module are identical, except that we now use the classes inside ``partialtorch.nn``,
which take and return ``MaskedPair``.
Note that to make them scriptable, you may have to explicitly annotate input and output types.

<table>
<tr>
<th>torch</th>
<th>partialtorch</th>
</tr>

<tr>
<td>
<sub>

```python
import torch.nn as nn
import torch.nn.functional as F

from torch import Tensor


class ConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels,
                              out_channels,
                              kernel_size=(3, 3))
        self.bn = nn.BatchNorm2d(out_channels)
        self.pool = nn.MaxPool2d(kernel_size=(2, 2))

    def forward(self, x: Tensor) -> Tensor:
        x = self.conv(x)
        x = F.relu(x)
        x = self.bn(x)
        x = self.pool(x)
        return x
```

</sub>
<td>
<sub>

```python
import torch.nn as nn

import partialtorch.nn as partial_nn
import partialtorch.nn.functional as partial_F

from partialtorch import MaskedPair


class PartialConvBlock(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = partial_nn.PartialConv2d(in_channels,
                                             out_channels,
                                             kernel_size=(3, 3))
        self.bn = partial_nn.BatchNorm2d(out_channels)
        self.pool = partial_nn.MaxPool2d(kernel_size=(2, 2))

    def forward(self, x: MaskedPair) -> MaskedPair:
        x = self.conv(x)
        x = partial_F.relu(x)
        x = self.bn(x)
        x = self.pool(x)
        return x
```

</sub>
</td>
</tr>

</table>
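
For instance, thanks to the explicit ``MaskedPair`` annotations, the block on the right should be scriptable
directly (a hedged sketch using the ``PartialConvBlock`` defined above):

```python
import torch
import partialtorch

block = PartialConvBlock(in_channels=3, out_channels=16)
scripted = torch.jit.script(block)  # possible because forward is annotated with MaskedPair

px = partialtorch.rand_mask(torch.rand(1, 3, 32, 32), 0.5)
pout = scripted(px)
```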

A few other examples can be found in the [examples](examples) folder.

## Citation

This code is part of another project of ours. A citation will be added in the future.

## Acknowledgements

Part of the codebase is modified from the following repositories:

- https://github.com/pytorch/pytorch
- https://github.com/NVIDIA/partialconv

## License

The code is released under the MIT license. See [`LICENSE.txt`](LICENSE.txt) for details.

            
