# continual-inference

- **Name**: continual-inference
- **Version**: 1.2.3
- **Home page**: https://github.com/lukashedegaard/continual-inference
- **Summary**: A Python library for Continual Inference Networks in PyTorch
- **Upload time**: 2023-06-16 09:28:28
- **Author**: Lukas Hedegaard
- **Keywords**: deep learning, pytorch, AI, online, inference, continual
<img src="https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/logo/logo_name.svg" style="width: 400px;">

__A Python library for Continual Inference Networks in PyTorch__

[Quick-start](https://continual-inference.readthedocs.io/en/latest/generated/README.html#quick-start) • 
[Docs](https://continual-inference.readthedocs.io/en/latest/generated/README.html) • 
[Principles](https://continual-inference.readthedocs.io/en/latest/generated/README.html#library-principles) • 
[Paper](https://arxiv.org/abs/2204.03418) • 
[Examples](https://continual-inference.readthedocs.io/en/latest/generated/README.html#composition-examples) • 
[Modules](https://continual-inference.readthedocs.io/en/latest/common/modules.html) • 
[Model Zoo](https://continual-inference.readthedocs.io/en/latest/generated/README.html#model-zoo-and-benchmarks) • 
[Contribute](https://continual-inference.readthedocs.io/en/latest/generated/CONTRIBUTING.html) • 
[License](https://github.com/LukasHedegaard/continual-inference/blob/main/LICENSE)

<div>
  <a href="https://pypi.org/project/continual-inference/" style="display:inline-block;">
    <img src="https://img.shields.io/pypi/pyversions/continual-inference" height="20" >
  </a>
  <a href="https://badge.fury.io/py/continual-inference" style="display:inline-block;">
    <img src="https://badge.fury.io/py/continual-inference.svg" height="20" >
  </a>
  <a href="https://continual-inference.readthedocs.io/en/latest/generated/README.html" style="display:inline-block;">
    <img src="https://readthedocs.org/projects/continual-inference/badge/?version=latest" alt="Documentation Status" height="20"/>
  </a>
  <a href="https://pepy.tech/project/continual-inference" style="display:inline-block;">
    <img src="https://pepy.tech/badge/continual-inference" height="20">
  </a>
  <a href="https://codecov.io/gh/LukasHedegaard/continual-inference" style="display:inline-block;">
    <img src="https://codecov.io/gh/LukasHedegaard/continual-inference/branch/main/graph/badge.svg?token=XW1UQZSEOG" height="20"/>
  </a>
  <a href="https://opensource.org/licenses/Apache-2.0" style="display:inline-block;">
    <img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" height="20">
  </a>
  <!-- <a href="https://arxiv.org/abs/2204.03418" style="display:inline-block;">
    <img src="http://img.shields.io/badge/paper-arxiv.2204.03418-B31B1B.svg" height="20" >
  </a> -->
  <a href="https://github.com/psf/black" style="display:inline-block;">
    <img src="https://img.shields.io/badge/code%20style-black-000000.svg" height="20">
  </a>
  <a href="https://www.codefactor.io/repository/github/lukashedegaard/continual-inference/overview/main" style="display:inline-block;">
    <img src="https://www.codefactor.io/repository/github/lukashedegaard/continual-inference/badge/main" alt="We match PyTorch interfaces exactly. Method arguments named 'input' reduce the codefactor to 'A-'" height="20" />
  </a>
</div>

## Continual Inference Networks ensure efficient stream processing
Many of our favorite Deep Neural Network architectures (e.g., [CNNs](https://arxiv.org/abs/2106.00050) and [Transformers](https://arxiv.org/abs/2201.06268)) were built for offline processing: rather than processing inputs one sequence element at a time, they require the whole (spatio-)temporal sequence to be passed as a single input.
Yet, **many important real-life applications need online predictions on a continual input stream**. 
While CNNs and Transformers can be applied by re-assembling and passing sequences within a sliding window, this is _inefficient_ due to the redundant intermediary computations from overlapping clips.

**Continual Inference Networks** (CINs) are built to ensure efficient stream processing by employing an alternative computational ordering, which allows sequential computations without the use of sliding-window processing.
In general, CINs require approximately _L_ × fewer FLOPs per prediction compared to sliding-window-based inference with non-CINs, where _L_ is the corresponding sequence length of a non-CIN network. For more details, check out the videos below describing Continual 3D CNNs [[1](https://arxiv.org/abs/2106.00050)] and Transformers [[2](https://arxiv.org/abs/2201.06268)].


<div align="center">
  <a href="http://www.youtube.com/watch?feature=player_embedded&v=Jm2A7dVEaF4" target="_blank">
     <img src="http://img.youtube.com/vi/Jm2A7dVEaF4/hqdefault.jpg" alt="Presentation of Continual 3D CNNs" style="width:240px;height:auto;" />
  </a>
  <a href="http://www.youtube.com/watch?feature=player_embedded&v=gy802Tlp-eQ" target="_blank">
     <img src="http://img.youtube.com/vi/gy802Tlp-eQ/hqdefault.jpg" alt="Presentation of Continual Transformers" style="width:240px;height:auto;" />
  </a>
</div>

## News
- 2022-12-02: ONNX compatibility for all modules is available from v1.0.0. See [test_onnx.py](tests/continual/test_onnx.py) for examples.


## Quick-start

### Install 
```bash
pip install continual-inference
```



### Example
`co` modules are weight-compatible drop-in replacements for their `torch.nn` counterparts, enhanced with the capability of efficient _continual inference_:

```python3
import torch
import continual as co
                                                           
#                      B, C, T, H, W
example = torch.randn((1, 1, 5, 3, 3))

conv = co.Conv3d(in_channels=1, out_channels=1, kernel_size=(3, 3, 3))

# Same exact computation as torch.nn.Conv3d ✅
output = conv(example)

# But can also perform online inference efficiently 🚀
firsts = conv.forward_steps(example[:, :, :4])
last = conv.forward_step(example[:, :, 4])

assert torch.allclose(output[:, :, : conv.delay], firsts)
assert torch.allclose(output[:, :, conv.delay], last)

# Temporal properties
assert conv.receptive_field == 3
assert conv.delay == 2
```

See the [network composition](#composition) and [model zoo](#model-zoo-and-benchmarks) sections for additional examples.

## Library principles

### Forward modes
The library components feature three distinct forward modes, which are handy for different situations, namely `forward`, `forward_step`, and `forward_steps`:

#### `forward(input)`
Performs a forward computation over multiple time-steps. This function is identical to the corresponding module in _torch.nn_, ensuring cross-compatibility. Moreover, it's handy for efficient training on clip-based data.

```
         O            (O: output)
         ↑ 
         N            (N: network module)
         ↑ 
 -----------------    (-: aggregation)
 P   I   I   I   P    (I: input frame, P: padding)
```


#### `forward_step(input, update_state=True)`
Performs a forward computation for a single frame and (optionally) updates internal states accordingly. This function performs efficient continual inference.

```
O+S O+S O+S O+S   (O: output, S: updated internal state)
 ↑   ↑   ↑   ↑ 
 N   N   N   N    (N: network module)
 ↑   ↑   ↑   ↑ 
 I   I   I   I    (I: input frame)
```
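
As a minimal sketch of stream processing with `forward_step` (module choice and sizes are arbitrary illustrations, not part of the library API description):

```python3
import torch
import continual as co

conv = co.Conv1d(in_channels=1, out_channels=1, kernel_size=3)  # delay = 2
stream = torch.randn(1, 1, 10)  # (batch, channel, time)

outputs = []
for t in range(stream.shape[2]):
    y = conv.forward_step(stream[:, :, t])  # one frame of shape (batch, channel)
    if y is not None:  # None is returned until the module delay is filled
        outputs.append(y)
```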

#### `forward_steps(input, pad_end=False, update_state=True)`
Performs a forward computation across multiple time-steps while updating internal states for continual inference (if `update_state=True`).
Start-padding is always accounted for, but end-padding is omitted by default in anticipation of the next input step. It can be added by specifying `pad_end=True`; in that case, the output-to-input mapping is exactly the same as that of `forward`.
```
         O            (O: output)
         ↑ 
 -----------------    (-: aggregation)
 O  O+S O+S O+S  O    (O: output, S: updated internal state)
 ↑   ↑   ↑   ↑   ↑
 N   N   N   N   N    (N: network module)
 ↑   ↑   ↑   ↑   ↑
 P   I   I   I   P    (I: input frame, P: padding)
```
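
A minimal sketch of the `pad_end` behavior, assuming a freshly initialized module with empty state (module choice and sizes are arbitrary):

```python3
import torch
import continual as co

conv = co.Conv1d(in_channels=1, out_channels=1, kernel_size=3, padding=1)
clip = torch.randn(1, 1, 8)

full = conv.forward(clip)                       # offline computation
steps = conv.forward_steps(clip, pad_end=True)  # step-wise, with end-padding added

assert torch.allclose(full, steps)
```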

#### `__call__`
By default, the `__call__` function operates identically to _torch.nn_ and executes `forward`. We supply two options for changing this behavior: the _call_mode_ property and the _call_mode_ context manager. An example of their use follows:

```python
import torch
import continual as co

# Example setup (hypothetical; any co module works here)
batch, channel, time = 1, 3, 16
net = co.Conv1d(channel, channel, kernel_size=3)

timeseries = torch.randn(batch, channel, time)
timestep = timeseries[:, :, 0]

net(timeseries)  # Invokes net.forward(timeseries)

# Assign permanent call_mode property
net.call_mode = "forward_step"
net(timestep)  # Invokes net.forward_step(timestep)

# Assign temporary call_mode with context manager
with co.call_mode("forward_steps"):
    net(timeseries)  # Invokes net.forward_steps(timeseries)

net(timestep)  # Invokes net.forward_step(timestep) again
```

### Composition

Continual Inference Networks require strict handling of internal data delays to guarantee correspondence between [forward modes](#forward-modes). While it is possible to compose neural networks by defining _forward_, _forward_step_, and _forward_steps_ manually, correct handling of delays is cumbersome and time-consuming. Instead, we provide a rich interface of container modules, which handle delays automatically. On top of `co.Sequential` (a drop-in replacement for _torch.nn.Sequential_), we provide modules for handling parallel and conditional dataflow.

- [`co.Sequential`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Sequential.html): Invoke modules sequentially, passing the output of one module onto the next.
- [`co.Broadcast`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Broadcast.html): Broadcast one stream to multiple.
- [`co.Parallel`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Parallel.html): Invoke modules in parallel, each on its own input stream.
- [`co.ParallelDispatch`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.ParallelDispatch.html): Dispatch multiple input streams to multiple output streams flexibly.
- [`co.Reduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reduce.html): Reduce multiple input streams to one.
- [`co.BroadcastReduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.BroadcastReduce.html): Shorthand for Sequential(Broadcast, Parallel, Reduce).
- [`co.Residual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Residual.html): Residual connection.
- [`co.Conditional`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conditional.html): Conditionally invoke a module (or an alternative) at runtime.


#### Composition examples:

<details>
<summary><b>Residual module</b></summary>

Short-hand:
```python3
residual = co.Residual(co.Conv3d(32, 32, kernel_size=3, padding=1))
```

Explicit:
```python3
residual = co.Sequential(
    co.Broadcast(2),
    co.Parallel(
        co.Conv3d(32, 32, kernel_size=3, padding=1),
        co.Delay(2),
    ),
    co.Reduce("sum"),
)
```

</details>

<details>
<summary><b>3D MobileNetV2 Inverted residual block</b></summary>

Continual 3D version of the [MobileNetV2 Inverted residual block](https://arxiv.org/pdf/1801.04381.pdf).

<div align="center">
  <img src="https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/mb_conv.png" style="width: 15vw; min-width: 200px;">
  <br>
  MobileNetV2 Inverted residual block. Source: https://arxiv.org/pdf/1801.04381.pdf
</div>

```python3
import continual as co
from torch import nn

mb_conv = co.Residual(
    co.Sequential(
      co.Conv3d(32, 64, kernel_size=(1, 1, 1)),
      nn.BatchNorm3d(64),
      nn.ReLU6(),
      co.Conv3d(64, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1), groups=64),
      nn.ReLU6(),
      co.Conv3d(64, 32, kernel_size=(1, 1, 1)),
      nn.BatchNorm3d(32),
    )
)
```

</details>

<details>
<summary><b>3D Squeeze-and-Excitation module</b></summary>

Continual 3D version of the [Squeeze-and-Excitation module](https://arxiv.org/pdf/1709.01507.pdf)

<div align="center">
  <img src="https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/se_block.png" style="width: 15vw; min-width: 200px;">
  <br>
  Squeeze-and-Excitation block. 
  Scale refers to a broadcasted element-wise multiplication.
  Adapted from: https://arxiv.org/pdf/1709.01507.pdf
</div>

```python3
from collections import OrderedDict

import continual as co
from torch import nn

se = co.Residual(
    co.Sequential(
        OrderedDict([
            ("pool", co.AdaptiveAvgPool3d((1, 1, 1), kernel_size=7)),
            ("down", co.Conv3d(256, 16, kernel_size=1)),
            ("act1", nn.ReLU()),
            ("up", co.Conv3d(16, 256, kernel_size=1)),
            ("act2", nn.Sigmoid()),
        ])
    ),
    reduce="mul",
)
```

</details>

<details>
<summary><b>3D Inception module</b></summary>

Continual 3D version of the [Inception module](https://arxiv.org/pdf/1409.4842v1.pdf):
<div align="center">
  <img src="https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/inception_block.png" style="width: 25vw; min-width: 350px;">
  <br>
  Inception module. Source: https://arxiv.org/pdf/1409.4842v1.pdf
   
</div>

```python3
import continual as co
from torch import nn


def norm_relu(module, channels):
    return co.Sequential(
        module,
        nn.BatchNorm3d(channels),
        nn.ReLU(),
    )

inception_module = co.BroadcastReduce(
    co.Conv3d(192, 64, kernel_size=1),
    co.Sequential(
        norm_relu(co.Conv3d(192, 96, kernel_size=1), 96),
        norm_relu(co.Conv3d(96, 128, kernel_size=3, padding=1), 128),
    ),
    co.Sequential(
        norm_relu(co.Conv3d(192, 16, kernel_size=1), 16),
        norm_relu(co.Conv3d(16, 32, kernel_size=5, padding=2), 32),
    ),
    co.Sequential(
        co.MaxPool3d(kernel_size=(1, 3, 3), padding=(0, 1, 1), stride=1),
        norm_relu(co.Conv3d(192, 32, kernel_size=1), 32),
    ),
    reduce="concat",
)
```
</details>


### Input shapes
We enforce a unified ordering of input dimensions for all library modules, namely:

    (batch, channel, time, optional_dim2, optional_dim3)
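
For instance, a sketch of how this ordering maps onto 1D, 2D, and 3D modules (sizes are arbitrary):

```python3
import torch

x1d = torch.randn(1, 16, 32)          # (B, C, T),             e.g. for co.Conv1d
x2d = torch.randn(1, 16, 32, 24)      # (B, C, T, dim2),       e.g. for co.Conv2d
x3d = torch.randn(1, 16, 32, 24, 24)  # (B, C, T, dim2, dim3), e.g. for co.Conv3d
```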

### Outputs
The outputs produced by `forward_step` and `forward_steps` are identical to those of `forward`, provided the same data was input beforehand and state updates were enabled. As in regular PyTorch, the input and output shapes of `forward` aren't necessarily the same; they generally depend on the padding, stride, and receptive field of a module.

For the `forward_step` function, this manifests as some `None`-valued outputs. Specifically, modules with a _delay_ (i.e., with receptive fields larger than the padding + 1) will produce `None` until the input count exceeds the delay. Moreover, a _stride_ > 1 will produce `Tensor` outputs every _stride_ steps and `None` in the remaining steps. A visual example is shown below:

<div align="center">
  <img src="https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/continual/continual-stride.png" style="width:300px;height:auto;"/>
  </br>
  A mixed example of delay and outputs under padding and stride. Here, we illustrate the step-wise operation of two co module layers, l1 with receptive_field = 3, padding = 2, and stride = 2, and l2 with receptive_field = 3, no padding, and stride = 1. ⧇ denotes a padded zero, ■ is a non-zero step-feature, and ☒ is an empty output.
</div>
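
A rough sketch of the step-wise `None` pattern for a strided module (module choice and sizes are arbitrary):

```python3
import torch
import continual as co

conv = co.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=2)  # delay = 2
stream = torch.randn(1, 1, 8)

outputs = [conv.forward_step(stream[:, :, t]) for t in range(stream.shape[2])]
# The first `conv.delay` entries are None; afterwards, a Tensor appears every
# `stride` steps with None in between.
```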

For more information, please see the [library paper](https://arxiv.org/abs/2204.03418).

### Handling state
During stream processing, network modules which operate over multiple time-steps, e.g., a convolution with `kernel_size > 1` in the temporal dimension, aggregate and cache state internally. Each module has its own local state, which can be inspected using `module.get_state()`. During `forward_step` and `forward_steps`, the state is updated unless the call is made with `update_state=False`.

A __state cleanup__ can be accomplished via `module.clean_state()`.
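
A minimal sketch of state inspection and cleanup (module choice and sizes are arbitrary):

```python3
import torch
import continual as co

conv = co.Conv3d(in_channels=1, out_channels=1, kernel_size=(3, 3, 3))
frame = torch.randn(1, 1, 3, 3)  # one step: (batch, channel, height, width)

# Peek at a step without committing it to the internal state
peek = conv.forward_step(frame, update_state=False)

conv.forward_step(frame)   # regular step: caches the frame internally
state = conv.get_state()   # inspect the module-local state

conv.clean_state()         # reset internal buffers before processing a new stream
```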


## Module library
_Continual Inference_ features a rich collection of modules for defining Continual Inference Networks. Specific care was taken to create CIN versions of the PyTorch modules found in [_torch.nn_](https://pytorch.org/docs/stable/nn.html):

<details>
<summary><b>Convolutions</b></summary>

- [`co.Conv1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv1d.html)
- [`co.Conv2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv2d.html)
- [`co.Conv3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv3d.html)

</details>

<details>
<summary><b>Pooling</b></summary>

  - [`co.AvgPool1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool1d.html)
  - [`co.AvgPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool2d.html)
  - [`co.AvgPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool3d.html)
  - [`co.MaxPool1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool1d.html)
  - [`co.MaxPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool2d.html)
  - [`co.MaxPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool3d.html)
  - [`co.AdaptiveAvgPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveAvgPool2d.html)
  - [`co.AdaptiveAvgPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveAvgPool3d.html)
  - [`co.AdaptiveMaxPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveMaxPool2d.html)
  - [`co.AdaptiveMaxPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveMaxPool3d.html)

</details>

<details>
<summary><b>Linear</b></summary>

  - [`co.Linear`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Linear.html)
  - [`co.Identity`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Identity.html): Maps input to output without modification.
  - [`co.Add`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Add.html): Adds a constant value.
  - [`co.Multiply`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Multiply.html): Multiplies with a constant factor.

</details>

<details>
<summary><b>Recurrent</b></summary>

  - [`co.RNN`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RNN.html)
  - [`co.LSTM`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.LSTM.html)
  - [`co.GRU`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.GRU.html)

</details>

<details>
<summary><b>Transformers</b></summary>

  - [`co.TransformerEncoder`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.TransformerEncoder.html)
  - [`co.TransformerEncoderLayerFactory`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.TransformerEncoderLayerFactory.html): Factory function corresponding to `nn.TransformerEncoderLayer`.
  - [`co.SingleOutputTransformerEncoderLayer`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.SingleOutputTransformerEncoderLayer.html): SingleOutputMHA version of `nn.TransformerEncoderLayer`.
  - [`co.RetroactiveTransformerEncoderLayer`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RetroactiveTransformerEncoderLayer.html): RetroactiveMHA version of `nn.TransformerEncoderLayer`.
  - [`co.RetroactiveMultiheadAttention`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RetroactiveMultiheadAttention.html): Retroactive version of `nn.MultiheadAttention`.
  - [`co.SingleOutputMultiheadAttention`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.SingleOutputMultiheadAttention.html): Single-output version of `nn.MultiheadAttention`.
  - [`co.RecyclingPositionalEncoding`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RecyclingPositionalEncoding.html): Positional Encoding used for Continual Transformers.

</details>


The library also provides modules for composing and converting networks. Both _composition_ and _utility_ modules can be used when defining regular PyTorch modules as well.

<details>
<summary><b>Composition modules</b></summary>

  - [`co.Sequential`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Sequential.html): Invoke modules sequentially, passing the output of one module onto the next.
  - [`co.Broadcast`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Broadcast.html): Broadcast one stream to multiple.
  - [`co.Parallel`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Parallel.html): Invoke modules in parallel, each on its own input stream.
  - [`co.ParallelDispatch`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.ParallelDispatch.html): Dispatch multiple input streams to multiple output streams flexibly.
  - [`co.Reduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reduce.html): Reduce multiple input streams to one.
  - [`co.BroadcastReduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.BroadcastReduce.html): Shorthand for Sequential(Broadcast, Parallel, Reduce).
  - [`co.Residual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Residual.html): Residual connection.
  - [`co.Conditional`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conditional.html): Conditionally invoke a module (or an alternative) at runtime.

</details>

<details>
<summary><b>Utility modules</b></summary>

  - [`co.Delay`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Delay.html): Pure delay module (e.g. needed in residuals).
  - [`co.Skip`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Skip.html): Skip a predefined number of input steps.
  - [`co.Reshape`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reshape.html): Reshape non-temporal dimensions.
  - [`co.Lambda`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Lambda.html): Lambda module which wraps any function.
  - [`co.Constant`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Constant.html): Maps input to an output of constant value.
  - [`co.Zero`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Zero.html): Maps input to output of zeros.
  - [`co.One`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.One.html): Maps input to output of ones.

</details>

<details>
<summary><b>Converters</b></summary>

  - [`co.continual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.continual.html): Conversion function from `torch.nn` modules to `co` modules.
  - [`co.forward_stepping`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.forward_stepping.html): Functional wrapper that enhances temporally local `torch.nn` modules with the step-wise forward functions.

</details>
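
A minimal sketch of converting an existing `torch.nn` definition, assuming `co.continual` also handles `nn.Sequential` containers (module choices and sizes are arbitrary):

```python3
import continual as co
from torch import nn

net = nn.Sequential(
    nn.Conv3d(2, 4, kernel_size=(3, 1, 1), padding=(1, 0, 0)),
    nn.ReLU(),
)
co_net = co.continual(net)  # convert modules (and weights) to continual counterparts

relu_step = co.forward_stepping(nn.ReLU())  # add step-wise forwards to a temporally local module
```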

We support drop-in interoperability with the following _torch.nn_ modules (a short composition sketch follows the lists):

<details>
<summary><b>Activation</b></summary>

  - `nn.Threshold`
  - `nn.ReLU`
  - `nn.RReLU`
  - `nn.Hardtanh`
  - `nn.ReLU6`
  - `nn.Sigmoid`
  - `nn.Hardsigmoid`
  - `nn.Tanh`
  - `nn.SiLU`
  - `nn.Hardswish`
  - `nn.ELU`
  - `nn.CELU`
  - `nn.SELU`
  - `nn.GLU`
  - `nn.GELU`
  - `nn.Hardshrink`
  - `nn.LeakyReLU`
  - `nn.LogSigmoid`
  - `nn.Softplus`
  - `nn.Softshrink`
  - `nn.PReLU`
  - `nn.Softsign`
  - `nn.Tanhshrink`
  - `nn.Softmin`
  - `nn.Softmax`
  - `nn.Softmax2d`
  - `nn.LogSoftmax`

</details>

<details>
<summary><b>Normalization</b></summary>

  - `nn.BatchNorm1d`
  - `nn.BatchNorm2d`
  - `nn.BatchNorm3d`
  - `nn.GroupNorm`
  - `nn.InstanceNorm1d` (affine=True, track_running_stats=True required)
  - `nn.InstanceNorm2d` (affine=True, track_running_stats=True required)
  - `nn.InstanceNorm3d` (affine=True, track_running_stats=True required)
  - `nn.LayerNorm` (only non-temporal dimensions must be specified)

</details>

<details>
<summary><b>Dropout</b></summary>

  - `nn.Dropout`
  - `nn.Dropout1d`
  - `nn.Dropout2d`
  - `nn.Dropout3d`
  - `nn.AlphaDropout`
  - `nn.FeatureAlphaDropout`

</details>
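
As an illustration of this drop-in interoperability, a minimal sketch mixing _torch.nn_ activation, normalization, and dropout modules directly into a `co` container (module choices and sizes are arbitrary):

```python3
import continual as co
from torch import nn

net = co.Sequential(
    co.Conv1d(2, 4, kernel_size=3),
    nn.BatchNorm1d(4),
    nn.ReLU(),
    nn.Dropout(p=0.1),
)
```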


## Model Zoo and Benchmarks

### Continual 3D CNNs

Benchmark results for 1-view testing on __Kinetics400__. For reference, _X3D-L_ scores 69.3% top-1 accuracy with 19.2 GFLOPs per prediction.

Arch     | Avg. pool size | Top 1 (%) | FLOPs (G) per step | FLOPs reduction | Params (M) | Code                                                                   | Weights
-------- | -------------- | --------- | ------------------ | --------------- | ---------- | ---------------------------------------------------------------------- | ---- 
CoX3D-L  | 64             | 71.6      | 1.25               | 15.3x           | 6.2        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D\_L.pyth)
CoX3D-M  | 64             | 71.0      | 0.33               | 15.1x           | 3.8        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D\_M.pyth)
CoX3D-S  | 64             | 64.7      | 0.17               | 12.1x           | 3.8        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D\_S.pyth)
CoSlow   | 64             | 73.1      | 6.90               |  8.0x           | 32.5       | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/coslow) | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/SLOW\_8x8\_R50.pyth)
CoI3D    | 64             | 64.0      | 5.68               |  5.0x           | 28.0       | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/coi3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/I3D\_8x8\_R50.pyth)

FLOPs reduction is noted relative to non-continual inference.
Note that [on-hardware inference](https://arxiv.org/abs/2106.00050) doesn't reach the speedups that the "FLOPs reductions" might suggest, due to the overhead of state reads and writes. This overhead is less significant for large batch sizes. This applies to all models in the model zoo.

### Continual ST-GCNs

Benchmark results on __NTU RGB+D 60__ for the joint modality. For reference, _ST-GCN_ achieves 86% X-Sub and 93.4% X-View accuracy with 16.73 GFLOPs per prediction.

Arch      | Receptive field | X-Sub Acc (%) | X-View Acc (%) | FLOPs (G) per step | FLOPs reduction | Params (M) | Code                                                                  
--------  | --------------- | ------------- | -------------- | ------------------ | --------------- | ---------- | -----
CoST-GCN  | 300             | 86.3          | 93.8           | 0.16               | 107.7x          | 3.1        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/cost_gcn_mod/cost_gcn_mod.py)
CoA-GCN   | 300             | 84.1          | 92.6           | 0.17               | 108.7x          | 3.5        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/coa_gcn_mod/coa_gcn_mod.py)
CoS-TR    | 300             | 86.3          | 92.4           | 0.15               | 107.6x          | 3.1        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/cos_tr_mod/cos_tr_mod.py)

[Here](https://drive.google.com/drive/u/4/folders/1m6aV5Zv8tAytvxF6qY4m9nyqlkKv0y72), you can download pre-trained model weights for the above architectures on NTU RGB+D 60, NTU RGB+D 120, and Kinetics-400 for the joint and bone modalities.


### Continual Transformers

Benchmark results on __THUMOS14__, computed on top of features extracted with a TSN-ResNet50 backbone pre-trained on Kinetics400. For reference, _OadTR_ achieves 64.4% mAP with 2.5 GFLOPs per prediction.

Arch        | Receptive field | mAP (%) | FLOPs (G) per step |  Params (M) | Code                                                                  
----------  | --------------- | ------- | ------------------ |  ---------- | -----
CoOadTR-b1  | 64              | 64.2    | 0.41               |  15.9       | [link](https://github.com/LukasHedegaard/CoOadTR)
CoOadTR-b2  | 64              | 64.4    | 0.01               |   9.6       | [link](https://github.com/LukasHedegaard/CoOadTR)

The library features complete implementations of the [one](https://github.com/LukasHedegaard/continual-inference/blob/9895344f50a93ebb5cf5c4f26ecfdf27b6a3fe75/tests/continual/test_transformer.py#L8)- and [two](https://github.com/LukasHedegaard/continual-inference/blob/9895344f50a93ebb5cf5c4f26ecfdf27b6a3fe75/tests/continual/test_transformer.py#L59)-block continual transformer encoders as well.


## Compatibility
The library modules are built to integrate seamlessly with other PyTorch projects.
Specifically, extra care was taken to ensure out-of-the-box compatibility with:
- [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning)
- [ptflops](https://github.com/sovrasov/flops-counter.pytorch)
- [ride](https://github.com/LukasHedegaard/ride)
- [onnx](https://github.com/onnx/onnx)
<!-- - [onnxruntime](https://github.com/microsoft/onnxruntime) -->


## Citation
<a href="https://arxiv.org/abs/2204.03418" style="display:inline-block;">
  <img src="http://img.shields.io/badge/paper-arxiv.2204.03418-B31B1B.svg" height="20" >
</a>

```bibtex
@inproceedings{hedegaard2022colib,
  title={Continual Inference: A Library for Efficient Online Inference with Deep Neural Networks in PyTorch},
  author={Lukas Hedegaard and Alexandros Iosifidis},
  booktitle={European Conference on Computer Vision Workshops (ECCVW)},
  year={2022}
}
```


## Acknowledgement
This work has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR).



            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/lukashedegaard/continual-inference",
    "name": "continual-inference",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "deep learning,pytorch,AI,online,inference,continual",
    "author": "Lukas Hedegaard",
    "author_email": "lukasxhedegaard@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/f1/62/1075ec917fcc27b5606c544e27424f9d66384076543279906f2987bbe71e/continual-inference-1.2.3.tar.gz",
    "platform": null,
    "description": "<img src=\"https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/logo/logo_name.svg\" style=\"width: 400px;\">\n\n__A Python library for Continual Inference Networks in PyTorch__\n\n[Quick-start](https://continual-inference.readthedocs.io/en/latest/generated/README.html#quick-start) \u2022 \n[Docs](https://continual-inference.readthedocs.io/en/latest/generated/README.html) \u2022 \n[Principles](https://continual-inference.readthedocs.io/en/latest/generated/README.html#library-principles) \u2022 \n[Paper](https://arxiv.org/abs/2204.03418) \u2022 \n[Examples](https://continual-inference.readthedocs.io/en/latest/generated/README.html#composition-examples) \u2022 \n[Modules](https://continual-inference.readthedocs.io/en/latest/common/modules.html) \u2022 \n[Model Zoo](https://continual-inference.readthedocs.io/en/latest/generated/README.html#model-zoo-and-benchmarks) \u2022 \n[Contribute](https://continual-inference.readthedocs.io/en/latest/generated/CONTRIBUTING.html) \u2022 \n[License](https://github.com/LukasHedegaard/continual-inference/blob/main/LICENSE)\n\n<div>\n  <a href=\"https://pypi.org/project/continual-inference/\" style=\"display:inline-block;\">\n    <img src=\"https://img.shields.io/pypi/pyversions/continual-inference\" height=\"20\" >\n  </a>\n  <a href=\"https://badge.fury.io/py/continual-inference\" style=\"display:inline-block;\">\n    <img src=\"https://badge.fury.io/py/continual-inference.svg\" height=\"20\" >\n  </a>\n  <a href=\"https://continual-inference.readthedocs.io/en/latest/generated/README.html\" style=\"display:inline-block;\">\n    <img src=\"https://readthedocs.org/projects/continual-inference/badge/?version=latest\" alt=\"Documentation Status\" height=\"20\"/>\n  </a>\n  <a href=\"https://pepy.tech/project/continual-inference\" style=\"display:inline-block;\">\n    <img src=\"https://pepy.tech/badge/continual-inference\" height=\"20\">\n  </a>\n  <a href=\"https://codecov.io/gh/LukasHedegaard/continual-inference\" style=\"display:inline-block;\">\n    <img src=\"https://codecov.io/gh/LukasHedegaard/continual-inference/branch/main/graph/badge.svg?token=XW1UQZSEOG\" height=\"20\"/>\n  </a>\n  <a href=\"https://opensource.org/licenses/Apache-2.0\" style=\"display:inline-block;\">\n    <img src=\"https://img.shields.io/badge/License-Apache%202.0-blue.svg\" height=\"20\">\n  </a>\n  <!-- <a href=\"https://arxiv.org/abs/2204.03418\" style=\"display:inline-block;\">\n    <img src=\"http://img.shields.io/badge/paper-arxiv.2204.03418-B31B1B.svg\" height=\"20\" >\n  </a> -->\n  <a href=\"https://github.com/psf/black\" style=\"display:inline-block;\">\n    <img src=\"https://img.shields.io/badge/code%20style-black-000000.svg\" height=\"20\">\n  </a>\n  <a href=\"https://www.codefactor.io/repository/github/lukashedegaard/continual-inference/overview/main\" style=\"display:inline-block;\">\n    <img src=\"https://www.codefactor.io/repository/github/lukashedegaard/continual-inference/badge/main\" alt=\"We match PyTorch interfaces exactly. Method arguments named 'input' reduce the codefactor to 'A-'\" height=\"20\" />\n  </a>\n</div>\n\n## Continual Inference Networks ensure efficient stream processing\nMany of our favorite Deep Neural Network architectures (e.g., [CNNs](https://arxiv.org/abs/2106.00050) and [Transformers](https://arxiv.org/abs/2201.06268)) were built with offline-processing for offline processing. 
Rather than processing inputs one sequence element at a time, they require the whole (spatio-)temporal sequence to be passed as a single input.\nYet, **many important real-life applications need online predictions on a continual input stream**. \nWhile CNNs and Transformers can be applied by re-assembling and passing sequences within a sliding window, this is _inefficient_ due to the redundant intermediary computations from overlapping clips.\n\n**Continual Inference Networks** (CINs) are built to ensure efficient stream processing by employing an alternative computational ordering, which allows sequential computations without the use of sliding window processing.\nIn general, CINs requires approx. _L_ \u00d7  fewer FLOPs per prediction compared to sliding window-based inference with non-CINs, where _L_ is the corresponding sequence length of a non-CIN network. For more details, check out the videos below describing Continual 3D CNNs [[1](https://arxiv.org/abs/2106.00050)] and Transformers [[2](https://arxiv.org/abs/2201.06268)].\n\n\n<div align=\"center\">\n  <a href=\"http://www.youtube.com/watch?feature=player_embedded&v=Jm2A7dVEaF4\" target=\"_blank\">\n     <img src=\"http://img.youtube.com/vi/Jm2A7dVEaF4/hqdefault.jpg\" alt=\"Presentation of Continual 3D CNNs\" style=\"width:240px;height:auto;\" />\n  </a>\n  <a href=\"http://www.youtube.com/watch?feature=player_embedded&v=gy802Tlp-eQ\" target=\"_blank\">\n     <img src=\"http://img.youtube.com/vi/gy802Tlp-eQ/hqdefault.jpg\" alt=\"Presentation of Continual Transformers\" style=\"width:240px;height:auto;\" />\n  </a>\n</div>\n\n## News\n- 2022-12-02: ONNX compatibility for all modules is available from v1.0.0. See [test_onnx.py](tests/continual/test_onnx.py) for examples.\n\n\n## Quick-start\n\n### Install \n```bash\npip install continual-inference\n```\n\n\n\n### Example\n`co` modules are weight-compatible drop-in replacement for `torch.nn`, enhanced with the capability of efficient _continual inference_:\n\n```python3\nimport torch\nimport continual as co\n                                                           \n#                      B, C, T, H, W\nexample = torch.randn((1, 1, 5, 3, 3))\n\nconv = co.Conv3d(in_channels=1, out_channels=1, kernel_size=(3, 3, 3))\n\n# Same exact computation as torch.nn.Conv3d \u2705\noutput = conv(example)\n\n# But can also perform online inference efficiently \ud83d\ude80\nfirsts = conv.forward_steps(example[:, :, :4])\nlast = conv.forward_step(example[:, :, 4])\n\nassert torch.allclose(output[:, :, : conv.delay], firsts)\nassert torch.allclose(output[:, :, conv.delay], last)\n\n# Temporal properties\nassert conv.receptive_field == 3\nassert conv.delay == 2\n```\n\nSee the [network composition](#composition) and [model zoo](#model-zoo-and-benchmarks) sections for additional examples.\n\n## Library principles\n\n### Forward modes\nThe library components feature three distinct forward modes, which are handy for different situations, namely `forward`, `forward_step`, and `forward_steps`:\n\n#### `forward(input)`\nPerforms a forward computation over multiple time-steps. This function is identical to the corresponding module in _torch.nn_, ensuring cross-compatibility. 
Moreover, it's handy for efficient training on clip-based data.\n\n```\n         O            (O: output)\n         \u2191 \n         N            (N: network module)\n         \u2191 \n -----------------    (-: aggregation)\n P   I   I   I   P    (I: input frame, P: padding)\n```\n\n\n#### `forward_step(input, update_state=True)`\nPerforms a forward computation for a single frame and (optionally) updates internal states accordingly. This function performs efficient continual inference.\n\n```\nO+S O+S O+S O+S   (O: output, S: updated internal state)\n \u2191   \u2191   \u2191   \u2191 \n N   N   N   N    (N: network module)\n \u2191   \u2191   \u2191   \u2191 \n I   I   I   I    (I: input frame)\n```\n\n#### `forward_steps(input, pad_end=False, update_state=True)`\nPerforms a forward computation across multiple time-steps while updating internal states for continual inference (if update_state=True).\nStart-padding is always accounted for, but end-padding is omitted per default in expectance of the next input step. It can be added by specifying pad_end=True. If so, the output-input mapping the exact same as that of forward.\n```\n         O            (O: output)\n         \u2191 \n -----------------    (-: aggregation)\n O  O+S O+S O+S  O    (O: output, S: updated internal state)\n \u2191   \u2191   \u2191   \u2191   \u2191\n N   N   N   N   N    (N: network module)\n \u2191   \u2191   \u2191   \u2191   \u2191\n P   I   I   I   P    (I: input frame, P: padding)\n```\n\n#### `__call__`\nPer default, the `__call__` function operates identically to _torch.nn_ and executes forward. We supply two options for changing this behavior, namely the _call_mode_ property and the _call_mode_ context manager. An example of their use follows:\n\n```python\ntimeseries = torch.randn(batch, channel, time)\ntimestep = timeseries[:, :, 0]\n\nnet(timeseries)  # Invokes net.forward(timeseries)\n\n# Assign permanent call_mode property\nnet.call_mode = \"forward_step\"\nnet(timestep)  # Invokes net.forward_step(timestep)\n\n# Assign temporary call_mode with context manager\nwith co.call_mode(\"forward_steps\"):\n    net(timeseries)  # Invokes net.forward_steps(timeseries)\n\nnet(timestep)  # Invokes net.forward_step(timestep) again\n```\n\n### Composition\n\nContinual Inference Networks require strict handling of internal data delays to guarantee correspondence between [forward modes](#forward-modes). While it is possible to compose neural networks by defining _forward_, _forward_step_, and _forward_steps_ manually, correct handling of delays is cumbersome and time-consuming. Instead, we provide a rich interface of container modules, which handles delays automatically. On top of `co.Sequential` (which is a drop-in replacement of _torch.nn.Sequential_), we provide modules for handling parallel and conditional dataflow. 
\n\n- [`co.Sequential`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Sequential.html): Invoke modules sequentially, passing the output of one module onto the next.\n- [`co.Broadcast`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Broadcast.html): Broadcast one stream to multiple.\n- [`co.Parallel`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Parallel.html): Invoke modules in parallel given each their input.\n- [`co.ParallelDispatch`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.ParallelDispatch.html): Dispatch multiple input streams to multiple output streams flexibly.\n- [`co.Reduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reduce.html): Reduce multiple input streams to one.\n- [`co.BroadcastReduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.BroadcastReduce.html): Shorthand for Sequential(Broadcast, Parallel, Reduce).\n- [`co.Residual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Residual.html): Residual connection.\n- [`co.Conditional`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conditional.html): Conditionally checks whether to invoke a module (or another) at runtime.\n\n\n#### Composition examples:\n\n<details>\n<summary><b>Residual module</b></summary>\n\nShort-hand:\n```python3\nresidual = co.Residual(co.Conv3d(32, 32, kernel_size=3, padding=1))\n```\n\nExplicit:\n```python3\nresidual = co.Sequential(\n    co.Broadcast(2),\n    co.Parallel(\n        co.Conv3d(32, 32, kernel_size=3, padding=1),\n        co.Delay(2),\n    ),\n    co.Reduce(\"sum\"),\n)\n```\n\n</details>\n\n<details>\n<summary><b>3D MobileNetV2 Inverted residual block</b></summary>\n\nContinual 3D version of the [MobileNetV2 Inverted residual block](https://arxiv.org/pdf/1801.04381.pdf).\n\n<div align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/mb_conv.png\" style=\"width: 15vw; min-width: 200px;\">\n  <br>\n  MobileNetV2 Inverted residual block. Source: https://arxiv.org/pdf/1801.04381.pdf\n</div>\n\n```python3\nmb_conv = co.Residual(\n    co.Sequential(\n      co.Conv3d(32, 64, kernel_size=(1, 1, 1)),\n      nn.BatchNorm3d(64),\n      nn.ReLU6(),\n      co.Conv3d(64, 64, kernel_size=(3, 3, 3), padding=(1, 1, 1), groups=64),\n      nn.ReLU6(),\n      co.Conv3d(64, 32, kernel_size=(1, 1, 1)),\n      nn.BatchNorm3d(32),\n    )\n)\n```\n\n</details>\n\n<details>\n<summary><b>3D Squeeze-and-Excitation module</b></summary>\n\nContinual 3D version of the [Squeeze-and-Excitation module](https://arxiv.org/pdf/1709.01507.pdf)\n\n<div align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/se_block.png\" style=\"width: 15vw; min-width: 200px;\">\n  <br>\n  Squeeze-and-Excitation block. 
\n  Scale refers to a broadcasted element-wise multiplication.\n  Adapted from: https://arxiv.org/pdf/1709.01507.pdf\n</div>\n\n```python3\nse = co.Residual(\n    co.Sequential(\n        OrderedDict([\n            (\"pool\", co.AdaptiveAvgPool3d((1, 1, 1), kernel_size=7)),\n            (\"down\", co.Conv3d(256, 16, kernel_size=1)),\n            (\"act1\", nn.ReLU()),\n            (\"up\", co.Conv3d(16, 256, kernel_size=1)),\n            (\"act2\", nn.Sigmoid()),\n        ])\n    ),\n    reduce=\"mul\",\n)\n```\n\n</details>\n\n<details>\n<summary><b>3D Inception module</b></summary>\n\nContinual 3D version of the [Inception module](https://arxiv.org/pdf/1409.4842v1.pdf):\n<div align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/examples/inception_block.png\" style=\"width: 25vw; min-width: 350px;\">\n  <br>\n  Inception module. Source: https://arxiv.org/pdf/1409.4842v1.pdf\n   \n</div>\n\n```python3\ndef norm_relu(module, channels):\n    return co.Sequential(\n        module,\n        nn.BatchNorm3d(channels),\n        nn.ReLU(),\n    )\n\ninception_module = co.BroadcastReduce(\n    co.Conv3d(192, 64, kernel_size=1),\n    co.Sequential(\n        norm_relu(co.Conv3d(192, 96, kernel_size=1), 96),\n        norm_relu(co.Conv3d(96, 128, kernel_size=3, padding=1), 128),\n    ),\n    co.Sequential(\n        norm_relu(co.Conv3d(192, 16, kernel_size=1), 16),\n        norm_relu(co.Conv3d(16, 32, kernel_size=5, padding=2), 32),\n    ),\n    co.Sequential(\n        co.MaxPool3d(kernel_size=(1, 3, 3), padding=(0, 1, 1), stride=1),\n        norm_relu(co.Conv3d(192, 32, kernel_size=1), 32),\n    ),\n    reduce=\"concat\",\n)\n```\n</details>\n\n\n### Input shapes\nWe enforce a unified ordering of input dimensions for all library modules, namely:\n\n    (batch, channel, time, optional_dim2, optional_dim3)\n\n### Outputs\nThe outputs produces by `forward_step` and `forward_steps` are identical to those of `forward`, provided the same data was input beforehand and state update was enabled. We know that input and output shapes aren't necessarily the same when using `forward` in the PyTorch library, and  generally depends on padding, stride and receptive field of a module. \n\nFor the `forward_step` function, this comes to show by some `None`-valued outputs. Specifically, modules with a _delay_ (i.e. with receptive fields larger than the padding + 1) will produce `None` until the input count exceeds the delay. Moreover, _stride_ > 1 will produce `Tensor` outputs every _stride_ steps and `None` the remaining steps. A visual example is shown below:\n\n<div align=\"center\">\n  <img src=\"https://raw.githubusercontent.com/LukasHedegaard/continual-inference/main/figures/continual/continual-stride.png\" style=\"width:300px;height:auto;\"/>\n  </br>\n  A mixed example of delay and outputs under padding and stride. Here, we illustrate the step-wise operation of two co module layers, l1 with with receptive_field = 3, padding = 2, and stride = 2 and l2 with receptive_field = 3, no padding and stride = 1. \u29c7 denotes a padded zero, \u25a0 is a non-zero step-feature, and \u2612 is an empty output.\n</div>\n\nFor more information, please see the [library paper](https://arxiv.org/abs/2204.03418).\n\n### Handling state\nDuring stream processing, network modules which operate over multiple time-steps, e.g., a convolution with `kernel_size > 1` in the temporal dimension, will aggregate and cache state internally. 
Each module has its own local state, which can be inspected using `module.get_state()`. During `forward_step` and `forward_steps`, the state is updated unless the forward_step(s) is invoked with an `update_state = False` argument.\n\nA __state cleanup__ can be accomplished via `module.clean_state()`.\n\n\n## Module library\n_Continual Inference_ features a rich collection of modules for defining Continual Inference Networks. Specific care was taken to create CIN versions of the PyTorch modules found in [_torch.nn_](https://pytorch.org/docs/stable/nn.html):\n\n<details>\n<summary><b>Convolutions</b></summary>\n\n- [`co.Conv1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv1d.html)\n- [`co.Conv2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv2d.html)\n- [`co.Conv3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conv3d.html)\n\n</details>\n\n<details>\n<summary><b>Pooling</b></summary>\n\n  - [`co.AvgPool1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool1d.html)\n  - [`co.AvgPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool2d.html)\n  - [`co.AvgPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AvgPool3d.html)\n  - [`co.MaxPool1d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool1d.html)\n  - [`co.MaxPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool2d.html)\n  - [`co.MaxPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.MaxPool3d.html)\n  - [`co.AdaptiveAvgPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveAvgPool2d.html)\n  - [`co.AdaptiveAvgPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveAvgPool3d.html)\n  - [`co.AdaptiveMaxPool2d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveMaxPool2d.html)\n  - [`co.AdaptiveMaxPool3d`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.AdaptiveMaxPool3d.html)\n\n</details>\n\n<details>\n<summary><b>Linear</b></summary>\n\n  - [`co.Linear`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Linear.html)\n  - [`co.Identity`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Identity.html): Maps input to output without modification.\n  - [`co.Add`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Add.html): Adds a constant value.\n  - [`co.Multiply`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Multiply.html): Multiplies with a constant factor.\n\n</details>\n\n<details>\n<summary><b>Recurrent</b></summary>\n\n  - [`co.RNN`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RNN.html)\n  - [`co.LSTM`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.LSTM.html)\n  - [`co.GRU`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.GRU.html)\n\n</details>\n\n<details>\n<summary><b>Transformers</b></summary>\n\n  - [`co.TransformerEncoder`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.TransformerEncoder.html)\n  - 
[`co.TransformerEncoderLayerFactory`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.TransformerEncoderLayerFactory.html): Factory function corresponding to `nn.TransformerEncoderLayer`.\n  - [`co.SingleOutputTransformerEncoderLayer`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.SingleOutputTransformerEncoderLayer.html): SingleOutputMHA version of `nn.TransformerEncoderLayer`.\n  - [`co.RetroactiveTransformerEncoderLayer`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RetroactiveTransformerEncoderLayer.html): RetroactiveMHA version of `nn.TransformerEncoderLayer`.\n  - [`co.RetroactiveMultiheadAttention`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.retroactive_mha.html.RetroactiveMultiheadAttention): Retroactive version of `nn.MultiheadAttention`.\n  - [`co.SingleOutputMultiheadAttention`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.single_output_mha.html.SingleOutputMultiheadAttention): Single-output version of `nn.MultiheadAttention`.\n  - [`co.RecyclingPositionalEncoding`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.RecyclingPositionalEncoding.html): Positional Encoding used for Continual Transformers.\n\n</details>\n\n\nModules for composing and converting networks. Both _composition_ and _utility_ modules can be used for regular definition of PyTorch modules as well.\n\n<details>\n<summary><b>Composition modules</b></summary>\n\n  - [`co.Sequential`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Sequential.html): Invoke modules sequentially, passing the output of one module onto the next.\n  - [`co.Broadcast`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Broadcast.html): Broadcast one stream to multiple.\n  - [`co.Parallel`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Parallel.html): Invoke modules in parallel given each their input.\n  - [`co.ParallelDispatch`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.ParallelDispatch.html): Dispatch multiple input streams to multiple output streams flexibly.\n  - [`co.Reduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reduce.html): Reduce multiple input streams to one.\n  - [`co.BroadcastReduce`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.BroadcastReduce.html): Shorthand for Sequential(Broadcast, Parallel, Reduce).\n  - [`co.Residual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Residual.html): Residual connection.\n  - [`co.Conditional`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Conditional.html): Conditionally checks whether to invoke a module (or another) at runtime.\n\n</details>\n\n<details>\n<summary><b>Utility modules</b></summary>\n\n  - [`co.Delay`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Delay.html): Pure delay module (e.g. 
needed in residuals).\n  - [`co.Skip`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Skip.html): Skip a predefined number of input steps.\n  - [`co.Reshape`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Reshape.html): Reshape non-temporal dimensions.\n  - [`co.Lambda`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Lambda.html): Lambda module which wraps any function.\n  - [`co.Constant`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Constant.html): Maps input to and output with constant value.\n  - [`co.Zero`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.Zero.html): Maps input to output of zeros.\n  - [`co.One`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.One.html): Maps input to output of ones.\n\n</details>\n\n<details>\n<summary><b>Converters</b></summary>\n\n  - [`co.continual`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.continual.html): conversion function from `torch.nn` modules to `co` modules.\n  - [`co.forward_stepping`](https://continual-inference.readthedocs.io/en/latest/common/generated/continual.forward_stepping.html): functional wrapper, which enhances temporally local `torch.nn` modules with the forward_stepping functions.\n\n</details>\n\nWe support drop-in interoperability with with the following _torch.nn_ modules:\n\n<details>\n<summary><b>Activation</b></summary>\n\n  - `nn.Threshold`\n  - `nn.ReLU`\n  - `nn.RReLU`\n  - `nn.Hardtanh`\n  - `nn.ReLU6`\n  - `nn.Sigmoid`\n  - `nn.Hardsigmoid`\n  - `nn.Tanh`\n  - `nn.SiLU`\n  - `nn.Hardswish`\n  - `nn.ELU`\n  - `nn.CELU`\n  - `nn.SELU`\n  - `nn.GLU`\n  - `nn.GELU`\n  - `nn.Hardshrink`\n  - `nn.LeakyReLU`\n  - `nn.LogSigmoid`\n  - `nn.Softplus`\n  - `nn.Softshrink`\n  - `nn.PReLU`\n  - `nn.Softsign`\n  - `nn.Tanhshrink`\n  - `nn.Softmin`\n  - `nn.Softmax`\n  - `nn.Softmax2d`\n  - `nn.LogSoftmax`\n\n</details>\n\n<details>\n<summary><b>Normalization</b></summary>\n\n  - `nn.BatchNorm1d`\n  - `nn.BatchNorm2d`\n  - `nn.BatchNorm3d`\n  - `nn.GroupNorm`,\n  - `nn.InstanceNorm1d` (affine=True, track_running_stats=True required)\n  - `nn.InstanceNorm2d` (affine=True, track_running_stats=True required)\n  - `nn.InstanceNorm3d` (affine=True, track_running_stats=True required)\n  - `nn.LayerNorm` (only non-temporal dimensions must be specified)\n\n</details>\n\n<details>\n<summary><b>Dropout</b></summary>\n\n  - `nn.Dropout`\n  - `nn.Dropout1d`\n  - `nn.Dropout2d`\n  - `nn.Dropout3d`\n  - `nn.AlphaDropout`\n  - `nn.FeatureAlphaDropout`\n\n</details>\n\n\n## Model Zoo and Benchmarks\n\n### Continual 3D CNNs\n\nBenchmark results for 1-view testing on __Kinetics400__. For reference, _X3D-L_ scores 69.3% top-1 acc with 19.2 GFLOPs per prediction. \n\nArch     | Avg. 
## Model Zoo and Benchmarks

### Continual 3D CNNs

Benchmark results for 1-view testing on __Kinetics400__. For reference, _X3D-L_ scores 69.3% top-1 acc with 19.2 GFLOPs per prediction.

Arch     | Avg. pool size | Top 1 (%) | FLOPs (G) per step | FLOPs reduction | Params (M) | Code                                                                   | Weights
-------- | -------------- | --------- | ------------------ | --------------- | ---------- | ---------------------------------------------------------------------- | ----
CoX3D-L  | 64             | 71.6      | 1.25               | 15.3x           | 6.2        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D_L.pyth)
CoX3D-M  | 64             | 71.0      | 0.33               | 15.1x           | 3.8        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D_M.pyth)
CoX3D-S  | 64             | 64.7      | 0.17               | 12.1x           | 3.8        | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/cox3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/X3D_S.pyth)
CoSlow   | 64             | 73.1      | 6.90               |  8.0x           | 32.5       | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/coslow) | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/SLOW_8x8_R50.pyth)
CoI3D    | 64             | 64.0      | 5.68               |  5.0x           | 28.0       | [link](https://github.com/LukasHedegaard/co3d/tree/main/models/coi3d)  | [link](https://dl.fbaipublicfiles.com/pytorchvideo/model_zoo/kinetics/I3D_8x8_R50.pyth)

FLOPs reduction is noted relative to non-continual inference.
Note that [on-hardware inference](https://arxiv.org/abs/2106.00050) doesn't reach the speedups that the "FLOPs reductions" might suggest, due to the overhead of state reads and writes. This overhead matters less for large batch sizes. This applies to all models in the model zoo.

### Continual ST-GCNs

Benchmark results on __NTU RGB+D 60__ for the joint modality. For reference, _ST-GCN_ achieves 86% X-Sub and 93.4% X-View accuracy with 16.73 GFLOPs per prediction.

Arch      | Receptive field | X-Sub Acc (%) | X-View Acc (%) | FLOPs (G) per step | FLOPs reduction | Params (M) | Code
--------  | --------------- | ------------- | -------------- | ------------------ | --------------- | ---------- | -----
CoST-GCN  | 300             | 86.3          | 93.8           | 0.16               | 107.7x          | 3.1        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/cost_gcn_mod/cost_gcn_mod.py)
CoA-GCN   | 300             | 84.1          | 92.6           | 0.17               | 108.7x          | 3.5        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/coa_gcn_mod/coa_gcn_mod.py)
CoS-TR    | 300             | 86.3          | 92.4           | 0.15               | 107.6x          | 3.1        | [link](https://github.com/LukasHedegaard/continual-skeletons/blob/main/models/cos_tr_mod/cos_tr_mod.py)

[Here](https://drive.google.com/drive/u/4/folders/1m6aV5Zv8tAytvxF6qY4m9nyqlkKv0y72), you can download pre-trained model weights for the above architectures on NTU RGB+D 60, NTU RGB+D 120, and Kinetics-400 for the joint and bone modalities.


### Continual Transformers

Benchmark results on __THUMOS14__ on top of features extracted using a TSN-ResNet50 backbone pre-trained on Kinetics400. For reference, _OadTR_ achieves 64.4% mAP with 2.5 GFLOPs per prediction.

Arch        | Receptive field | mAP (%) | FLOPs (G) per step |  Params (M) | Code
----------  | --------------- | ------- | ------------------ |  ---------- | -----
CoOadTR-b1  | 64              | 64.2    | 0.41               |  15.9       | [link](https://github.com/LukasHedegaard/CoOadTR)
CoOadTR-b2  | 64              | 64.4    | 0.01               |   9.6       | [link](https://github.com/LukasHedegaard/CoOadTR)

The library also features complete implementations of the [one](https://github.com/LukasHedegaard/continual-inference/blob/9895344f50a93ebb5cf5c4f26ecfdf27b6a3fe75/tests/continual/test_transformer.py#L8)- and [two](https://github.com/LukasHedegaard/continual-inference/blob/9895344f50a93ebb5cf5c4f26ecfdf27b6a3fe75/tests/continual/test_transformer.py#L59)-block continual transformer encoders.
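The "FLOPs (G) per step" columns above refer to step-wise operation on a stream. As a rough, illustrative sketch of what that looks like in code (using a small stand-in `co.Sequential` model rather than one of the model-zoo networks, and assuming the `forward_step`/`forward_steps` stepping interface that the converters above provide):

```python
import torch
from torch import nn
import continual as co

# Stand-in for a model-zoo network; continual models expose the same stepping interface.
model = co.Sequential(
    co.Conv3d(3, 8, kernel_size=(5, 3, 3), padding=(0, 1, 1)),
    nn.ReLU(),
)
model.eval()  # streaming inference is typically done in eval mode

with torch.no_grad():
    # Prime the internal state with an initial clip...
    clip = torch.randn(1, 3, 5, 16, 16)  # (batch, channel, time, height, width)
    _ = model.forward_steps(clip)

    # ...then process the stream one frame at a time, reusing cached intermediate results.
    for _ in range(10):
        frame = torch.randn(1, 3, 16, 16)  # a single step has no time dimension
        prediction = model.forward_step(frame)
```

This per-step mode is where the FLOPs savings in the tables materialise, subject to the state read/write overhead noted above.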
## Compatibility

The library modules are built to integrate seamlessly with other PyTorch projects.
Specifically, extra care was taken to ensure out-of-the-box compatibility with:
- [pytorch-lightning](https://github.com/PyTorchLightning/pytorch-lightning)
- [ptflops](https://github.com/sovrasov/flops-counter.pytorch)
- [ride](https://github.com/LukasHedegaard/ride)
- [onnx](https://github.com/onnx/onnx)
<!-- - [onnxruntime](https://github.com/microsoft/onnxruntime) -->


## Citation

<a href="https://arxiv.org/abs/2204.03418" style="display:inline-block;">
  <img src="http://img.shields.io/badge/paper-arxiv.2204.03418-B31B1B.svg" height="20" >
</a>

```bibtex
@inproceedings{hedegaard2022colib,
  title={Continual Inference: A Library for Efficient Online Inference with Deep Neural Networks in PyTorch},
  author={Lukas Hedegaard and Alexandros Iosifidis},
  booktitle={European Conference on Computer Vision Workshops (ECCVW)},
  year={2022}
}
```


## Acknowledgement

This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 871449 (OpenDR).
    "bugtrack_url": null,
    "license": "",
    "summary": "A Python library for Continual Inference Networks in PyTorch",
    "version": "1.2.3",
    "project_urls": {
        "Homepage": "https://github.com/lukashedegaard/continual-inference"
    },
    "split_keywords": [
        "deep learning",
        "pytorch",
        "ai",
        "online",
        "inference",
        "continual"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "07dad507b7ef6ab0ad6fbeaab1e7c902b85d3d567dc0bb0243dec5f7413f1cb8",
                "md5": "d1a68b37df2d343c76e00d4858b35dbb",
                "sha256": "69e8b3b09cc0b321c59bd115975b75e08fd128956cd4ec265f05130b08037f8a"
            },
            "downloads": -1,
            "filename": "continual_inference-1.2.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "d1a68b37df2d343c76e00d4858b35dbb",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 81132,
            "upload_time": "2023-06-16T09:28:26",
            "upload_time_iso_8601": "2023-06-16T09:28:26.397565Z",
            "url": "https://files.pythonhosted.org/packages/07/da/d507b7ef6ab0ad6fbeaab1e7c902b85d3d567dc0bb0243dec5f7413f1cb8/continual_inference-1.2.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f1621075ec917fcc27b5606c544e27424f9d66384076543279906f2987bbe71e",
                "md5": "b90376ff03749e8982f799c592a6d0c7",
                "sha256": "c92ac2b2562ea7343b6116eb6ca76e97bf47530ea845c44ce956a01043c8513f"
            },
            "downloads": -1,
            "filename": "continual-inference-1.2.3.tar.gz",
            "has_sig": false,
            "md5_digest": "b90376ff03749e8982f799c592a6d0c7",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 76404,
            "upload_time": "2023-06-16T09:28:28",
            "upload_time_iso_8601": "2023-06-16T09:28:28.166073Z",
            "url": "https://files.pythonhosted.org/packages/f1/62/1075ec917fcc27b5606c544e27424f9d66384076543279906f2987bbe71e/continual-inference-1.2.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-06-16 09:28:28",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "lukashedegaard",
    "github_project": "continual-inference",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "lcname": "continual-inference"
}
        