# nobuco

- **Version:** 0.14.3
- **Summary:** Pytorch to Tensorflow conversion made intuitive
- **Upload time:** 2024-05-16 17:00:28
- **Requires Python:** >=3.7
- **Keywords:** pytorch, tensorflow, keras, converter

            <p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/nobuco.png">
<sup><a href="https://www.behance.net/diliajl">diliajl</a></sup>
</p>

**No** **Bu**llshit **Co**nverter is a tool that helps you translate Pytorch models into Keras/Tensorflow graphs without losing your mind.

- Supports a wide range of architectures
  - [x] Control flow ops (If, For, While)
  - [x] Recurrent layers (LSTM, GRU)
  - [x] Arbitrary torch functions
- Simple
- Flexible
- Efficient
- Sanity-preserving, with clear mistake messaging

> [!IMPORTANT]  
> Nobuco only supports Keras 2 at the moment. If you'd like to use the new multi-backend Keras 3, please bump up the related issue: https://github.com/keras-team/keras/issues/19314

## Installation <img src="https://img.shields.io/pypi/v/nobuco?color=blue&style=flat-square">
<img src="https://img.shields.io/badge/PyTorch-2.1-EE4C2C.svg?style=flat&logo=pytorch"> <img src="https://img.shields.io/badge/TensorFlow-2.15-FF6F00.svg?style=flat&logo=tensorflow">

```bash
pip install -U nobuco
```

<!-- toc -->

## Table of Contents
- [Essentials](#essentials)
- [Channel order wizardry](#channel-order-wizardry)
- [Implementation mismatch: pick your poison](#implementation-mismatch-pick-your-poison)
- [Going dynamic](#going-dynamic)
  - [Control flows](#control-flows)
  - [Dynamic shapes](#dynamic-shapes)
- [In-place operations](#in-place-operations)
- [A little white lie: tracing mode](#a-little-white-lie-tracing-mode)
- [Ad hoc modifications](#ad-hoc-modifications) 
- [So we put a converter inside your converter](#so-we-put-a-converter-inside-your-converter)
- [But was it worth it?](#but-was-it-worth-it)
- [Nobuco knowledge base](#nobuco-knowledge-base)

<!-- tocstop -->

<!-- toc -->

- [Deep dive](#deep-dive)
  - [Aggressive transposition removal: fighting fire with fire](#aggressive-transposition-removal-fighting-fire-with-fire)

<!-- tocstop -->

## Essentials

Suppose we want to convert a Pytorch module similar to this one:

````python
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=(3, 3), padding=(1, 1), stride=(2, 2))

    def forward(self, x):
        x = self.conv(x)
        x = nn.Hardsigmoid()(x)
        x = 1 - x[:, ::2] * x[:, 1::2]
        return x
````

The process is exactly what you would expect. Instantiate the module, create dummy inputs and call the magic function:

```python
import torch
from torch import nn
import torch.nn.functional as F
import tensorflow as tf

import nobuco
from nobuco import ChannelOrder, ChannelOrderingStrategy
from nobuco.layers.weight import WeightLayer
```

````python
dummy_image = torch.rand(size=(1, 3, 256, 256))
pytorch_module = MyModule().eval()

keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[dummy_image], kwargs=None,
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW
)
````

Aaaand done! That's all it takes to... hold on, what's that?

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials1.svg" width="100%">

Nobuco says it doesn't know how to handle hard sigmoid.
Apparently, it's our job to provide a node converter for either `F.hardsigmoid` or the enclosing `Hardsigmoid` module (or the entire `MyModule`, but that makes little sense). Here, we'll go for the former.

Conversion is done directly. No layers upon layers of abstraction, no obscure intermediate representation. 
A node converter is just a `Callable` that takes the same arguments as the corresponding node in Pytorch and outputs an equivalent node in Tensorflow. 
The converted node preserves the original node's signature, but with Pytorch tensors replaced by their Tensorflow counterparts (be that `tf.Tensor`, `KerasTensor`, `tf.Variable`, or `ResourceVariable`).

This should do the trick:

````python
@nobuco.converter(F.hardsigmoid, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)
def hardsigmoid(input: torch.Tensor, inplace: bool = False):
    return lambda input, inplace=False: tf.keras.activations.hard_sigmoid(input)
````

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials2.svg" width="100%">

It works, but the outputs don't quite match. Perhaps we should check how [Pytorch](https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html) and [Tensorflow](https://www.tensorflow.org/api_docs/python/tf/keras/activations/hard_sigmoid) define hard sigmoid. 
And sure enough, their implementations differ. Have to type in the formula manually, I guess...

````python
@nobuco.converter(F.hardsigmoid, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)
def hardsigmoid(input: torch.Tensor, inplace: bool = False):
    return lambda input, inplace=False: tf.clip_by_value(input/6 + 1/2, clip_value_min=0, clip_value_max=1)
````
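
To double-check the formula before trusting it, we can compare Pytorch's implementation against ours on a small grid of values (a quick standalone sanity check, not part of the converter):

```python
import numpy as np
import torch
import torch.nn.functional as F
import tensorflow as tf

x = np.linspace(-4, 4, num=9, dtype=np.float32)
y_pt = F.hardsigmoid(torch.from_numpy(x)).numpy()
y_tf = tf.clip_by_value(tf.constant(x) / 6 + 1 / 2, 0, 1).numpy()
print(np.allclose(y_pt, y_tf))  # True: the formula matches Pytorch exactly
```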

And the happy result:

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials3.svg" width="100%">

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tutorial.png" width="30%">
</p>
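
Once the outputs match, `keras_model` behaves like any other Keras 2 model. For instance, it can be exported to TFLite via the standard Tensorflow API (nothing Nobuco-specific here):

```python
converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()

with open('my_module.tflite', 'wb') as f:
    f.write(tflite_model)
```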

The example above is artificial, but it illustrates the point.
It's not feasible to provide a node converter for every existing Pytorch op. There are literally [thousands](https://dev-discuss.pytorch.org/t/where-do-the-2000-pytorch-operators-come-from-more-than-you-wanted-to-know/) of them! 
The best way to avoid a converter that constantly lacks essential functionality, is riddled with bugs, does weird stuff, and breaks apart with every other PT/TF release
is to keep the tool simple and customizable, make it clear where a problem comes from, and let the _user_ sort things out.
Usually it's easy for a human to translate an isolated operation from one framework to another.
Reproducing the graph structure is a different matter entirely. For that, Nobuco has you covered!

Nobuco lets you intervene in conversion at each step, asks for help where needed and doesn't bother you with routine stuff.

https://user-images.githubusercontent.com/2457934/233740603-cc11acc5-cd6b-48c8-b089-ff3ead772dd0.mp4

<p align="center"><em>
With an IDE, you can jump right where the node was [I]nvoked, [D]efined and [C]onverted
</em></p>

## Channel order wizardry

Some operations assume their input tensors have a channel dimension. 
And as you probably know, Pytorch and Tensorflow do not agree on the layout of such tensors.
Pytorch adopts channel-first layout (_B**C**H_, _B**C**HW_, etc.) 
while Tensorflow works efficiently with channel-last tensors (_BH**C**_, _BHW**C**_, ...).
Transposing tensors between the two layouts incurs non-trivial overhead, as the tensor data generally must be physically rearranged.
In an effort to keep that overhead to the minimum, Nobuco does layout coercions _lazily_. 
A couple of things are needed to make it possible:

- Tensorflow tensors are augmented with an additional property which stores their channel order, either Pytorch (channel first) or Tensorflow (channel last) style.
- Node converters have requirements on what channel order their inputs must have. Said requirements are expressed with `channel_ordering_strategy` argument. 

The channel ordering strategies are:
- `FORCE_TENSORFLOW_ORDER`
  - Input tensors will be coerced to Tensorflow channel order.
  - Convenient for converting channel-aware operations (convolution, batchnorm).
- `FORCE_PYTORCH_ORDER`
  - Input tensors entering the node will look exactly as they do in the original Pytorch graph. 
  - Use it when the node does not interpret its input tensors as having a channel dimension (linear, matmul). 
- `MINIMUM_TRANSPOSITIONS`
  - The channel order is decided by a majority vote (whichever prevails among the inputs). This way the number of coercions (i.e. tensor transpositions) is kept to the minimum.
  It also means whenever there's only one input, it will be left untouched.
  - Best choice for element-wise ops (most activations).
- `MANUAL`
  - You are on your own. In exchange for unrestricted freedom, you take responsibility for coercing input tensors to a suitable channel order and for annotating output tensors with their order. See the sketch below.
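
For a taste of what `MANUAL` entails, here's a minimal sketch of a converter for an element-wise op that does the bookkeeping itself. It assumes the `get_channel_order`/`set_channel_order` helpers that Nobuco's own [node_converters](https://github.com/AlexanderLutsenko/nobuco/tree/master/nobuco/node_converters) rely on (the exact import path is an assumption):

```python
# Assumed location of the channel-order helpers
from nobuco.converters.channel_ordering import get_channel_order, set_channel_order

@nobuco.converter(torch.sign, channel_ordering_strategy=ChannelOrderingStrategy.MANUAL)
def converter_sign(input: torch.Tensor):
    def func(input):
        # An element-wise op works in any layout: accept the input as-is
        # and propagate its channel order to the output ourselves.
        order = get_channel_order(input)
        return set_channel_order(tf.sign(input), order)
    return func
```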

The simple lazy approach works wonders in most situations, but sometimes it produces suboptimal graphs.
Consider the code below. Imagine this is some sort of text processing network. 
It first applies a GRU layer which assumes the inputs do not have a channel dimension, so its input/output layouts are the same in both Pytorch and Tensorflow.
But then, the outputs are passed to a couple of 1D convolutions which are channel-aware. 
Because of that, a transpose op must be put in the converted graph.

```python
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(32, 128, num_layers=1, batch_first=True, bidirectional=False)
        self.conv1 = nn.Conv1d(12, 40, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(12, 60, kernel_size=1, padding=0)

    def forward(self, x):
        x, hx = self.gru(x)
        x1 = self.conv1(x)
        x2 = self.conv2(x)
        return x1, x2

pytorch_module = MyModule().eval()

inputs = [
    torch.normal(0, 1, size=(1, 12, 32)),
]
keras_model = nobuco.pytorch_to_keras(
    pytorch_module, inputs,
    inputs_channel_order=ChannelOrder.PYTORCH,
)
```

The laziness shoots us in the foot here, and we get not one transpose but two:

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/channel_ordering.png" width="30%">
</p>

For such occasions, there are two brethren functions: `force_tensorflow_order` and `force_pytorch_order`.

```python
x, hx = self.gru(x)
x = nobuco.force_tensorflow_order(x)
x1 = self.conv1(x)
x2 = self.conv2(x)
```

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/channel_ordering_forced.png" width="30%">
</p>

In case you are curious, the implementation is trivial:

```python
@nobuco.traceable
def force_tensorflow_order(inputs):
    return inputs


@nobuco.converter(force_tensorflow_order, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_force_tensorflow_order(inputs):
    return lambda inputs: inputs
```

`force_pytorch_order` is defined analogously.

## Implementation mismatch: pick your poison

Sometimes, Pytorch and Tensorflow just don't go along. 
In case of `hardsigmoid`, it's a mere inconvenience, but it can be much more sinister.

Take the model below, for example.

```python
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.factor = 4
        self.conv = nn.Conv2d(3*self.factor**2, 3*self.factor**2, kernel_size=1)

    def forward(self, x):
        x = nn.PixelUnshuffle(self.factor)(x)
        x = self.conv(x)
        x = nn.PixelShuffle(self.factor)(x)
        return x
```

Ideally, there would only be three nodes in the converted graph. That's not what we get, though.

Tensorflow does not have `pixel_unshuffle`/`pixel_shuffle`.
Their closest counterparts, `tf.nn.space_to_depth`/`tf.nn.depth_to_space`,
do almost the same thing but not quite: output channels are in a different order.
The order must be fixed with a pricey `transpose`, no way around that. Or is there?

<p align="center">
    <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/pixelshuffle1.png" width="20%">
</p>

Instead of emulating an absent Pytorch op in Tensorflow, 
we might do the procedure in reverse: provide a Pytorch implementation for the Tensorflow node we want to convert to.
The overhead would be carried by the original Pytorch model, leaving the converted graph nice and clean.

```python
from nobuco.addons.torch.space_to_depth import SpaceToDepth
from nobuco.addons.torch.depth_to_space import DepthToSpace


class MyModuleTFOptimized(nn.Module):
    def __init__(self):
        super().__init__()
        self.factor = 4
        self.conv = nn.Conv2d(3*self.factor**2, 3*self.factor**2, kernel_size=1)

    def forward(self, x):
        x = SpaceToDepth(self.factor)(x)
        x = self.conv(x)
        x = DepthToSpace(self.factor)(x)
        return x
```

<p align="center">
    <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/pixelshuffle2.png" width="20%">
</p>

<table align="center">
<tr>
<th>Torch-optimized</th>
<th>Tensorflow-optimized</th>
</tr>
<tr>
<td>

**Torch implementation**

```python
F.pixel_unshuffle
```

**Tensorflow converter**
```python
@nobuco.converter(F.pixel_unshuffle, 
                  channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_pixel_unshuffle(input: Tensor, downscale_factor: _int):
    def func(input, downscale_factor):
        x = tf.nn.space_to_depth(input, downscale_factor)
        x = channel_interleave2d(x, downscale_factor, reverse=True)
        return x
    return func


def channel_interleave2d(x, block_size: int, reverse: bool):
    b, h, w, c = x.shape
    n_blocks = block_size ** 2

    if reverse:
        x = tf.reshape(x, (b, h, w, n_blocks, c // n_blocks))
    else:
        x = tf.reshape(x, (b, h, w, c // n_blocks, n_blocks))

    x = tf.transpose(x, (0, 1, 2, 4, 3))
    x = tf.reshape(x, (b, h, w, c))
    return x
```

</td>
<td>

**Torch implementation**
```python
class SpaceToDepth(nn.Module):
    def __init__(self, block_size):
        super().__init__()
        self.block_size = block_size

    def forward(self, input):
        x = F.pixel_unshuffle(input, self.block_size)
        x = channel_interleave2d(x, self.block_size, reverse=False)
        return x

    
def channel_interleave2d(x: torch.Tensor, block_size: int, reverse: bool) -> torch.Tensor:
    b, c, h, w = x.shape
    n_blocks = block_size ** 2

    if reverse:
        x = x.view(b, n_blocks, c // n_blocks, h, w)
    else:
        x = x.view(b, c // n_blocks, n_blocks, h, w)

    x = x.transpose(1, 2).reshape(b, c, h, w)
    return x
```

**Tensorflow converter**
```python
@nobuco.converter(SpaceToDepth, 
                  channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_space_to_depth(self, input: torch.Tensor):
    return lambda input: tf.nn.space_to_depth(input, self.block_size)
```

</td>
</tr>
</table>
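
If in doubt that the two routes really agree, a quick numeric cross-check settles it (a sketch reusing the Tensorflow `channel_interleave2d` from the left column):

```python
import numpy as np
import torch
import torch.nn.functional as F
import tensorflow as tf

factor = 4
x = torch.arange(1 * 3 * 8 * 8, dtype=torch.float32).reshape(1, 3, 8, 8)

# Reference: Pytorch pixel_unshuffle, converted to channel-last for comparison
ref = F.pixel_unshuffle(x, factor).permute(0, 2, 3, 1).numpy()

# Tensorflow route: space_to_depth, then reorder the channels
x_tf = tf.constant(x.permute(0, 2, 3, 1).numpy())
y = channel_interleave2d(tf.nn.space_to_depth(x_tf, factor), factor, reverse=True)

print(np.allclose(ref, y.numpy()))  # True
```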

## Going dynamic

### Control flows
Introducing python control flow statements into the compute graph is no easy feat.
Tensorflow can do so via `tf.autograph`, but at a cost of [system's complexity](https://www.youtube.com/watch?v=NIEgzljyDyI) and with some notable [limitations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md).
Stuff like that is way above Nobuco's paygrade, so the following module cannot be properly handled without human intervention.

```python
class ControlIf(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_pre = nn.Conv2d(3, 16, kernel_size=(1, 1))
        self.conv_true = nn.Conv2d(16, 32, kernel_size=(1, 1))
        self.conv_false = nn.Conv2d(16, 32, kernel_size=(1, 1))
        self.conv_shared = nn.Conv2d(32, 32, kernel_size=(1, 1))

    def forward(self, x):
        x = self.conv_pre(x)
        if x.mean() > 0:
            x = self.conv_true(x)
            x = torch.tanh(x)
            x = self.conv_shared(x)
            x = x + 1
        else:
            x = self.conv_false(x)
            x = torch.sigmoid(x)
            x = self.conv_shared(x)
            x = x - 1
        x = self.conv_shared(x)
        return x
```

Of course, it's possible to translate the dynamic module into a Tensorflow layer
(don't forget to decorate it with `@tf.function` for autograph to kick in).
But what if it contains inner modules? Do you replicate them all in Tensorflow by hand?
Not unless you want to! 
Just convert them separately and use the resulting graphs inside the parent layer.

```python
class ControlIfKeras(tf.keras.layers.Layer):
    def __init__(self, conv_pre, conv_true, conv_false, conv_shared, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.conv_pre = conv_pre
        self.conv_true = conv_true
        self.conv_false = conv_false
        self.conv_shared = conv_shared

    def get_config(self):
        config = super().get_config()
        config.update({
            "conv_pre": self.conv_pre,
            "conv_true": self.conv_true,
            "conv_false": self.conv_false,
            "conv_shared": self.conv_shared,
        })
        return config

    @tf.function
    def call(self, x):
        x = self.conv_pre(x)
        if tf.reduce_mean(x) > 0:
            x = self.conv_true(x)
            x = tf.tanh(x)
            x = self.conv_shared(x)
            x = x + 1
        else:
            x = self.conv_false(x)
            x = tf.sigmoid(x)
            x = self.conv_shared(x)
            x = x - 1
        x = self.conv_shared(x)
        return x


@nobuco.converter(ControlIf, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_ControlIf(self, x):
    order = ChannelOrder.TENSORFLOW
    kwargs = {'inputs_channel_order': order, 'outputs_channel_order': order, 'return_outputs_pt': True}
    
    conv_pre, out_pre = nobuco.pytorch_to_keras(self.conv_pre, [x], **kwargs)
    conv_true, out_true = nobuco.pytorch_to_keras(self.conv_true, [out_pre], **kwargs)
    conv_false, out_false = nobuco.pytorch_to_keras(self.conv_false, [out_pre], **kwargs)
    conv_shared, _ = nobuco.pytorch_to_keras(self.conv_shared, [out_true], **kwargs)
    layer = ControlIfKeras(conv_pre, conv_true, conv_false, conv_shared)
    return layer
```
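
With the converter registered, the parent module goes through the usual conversion call, no special treatment required (a sketch):

```python
x = torch.normal(0, 1, size=(1, 3, 128, 128))
pytorch_module = ControlIf().eval()

keras_model = nobuco.pytorch_to_keras(
    pytorch_module, args=[x],
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)
```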

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/control_if.png" width="25%">
</p>

See [examples](/examples) for other ways to convert control flow ops.

### Dynamic shapes

What if we wanted our module to accept images of arbitrary height and width?
Can we have that? Let's try:

```python
class DynamicShape(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=(1, 1))

    def forward(self, x):
        x = self.conv(x)

        # Produces static shape
        b, c, h, w = x.shape

        x = x[:, :, h//3:, w//3:]
        return x


input = torch.normal(0, 1, size=(1, 3, 128, 128))
pytorch_module = DynamicShape().eval()

keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[input],
    input_shapes={input: (None, 3, None, None)}, # Annotate dynamic axes with None
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)
```

Something's not right. We don't see shape extraction ops in the debug output or the graph:

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape1.svg" width="100%">

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape1.png" width="15%">
</p>

That's not surprising, actually. 
In Pytorch, tensor shape is a tuple of regular integers, not tensors, so it's quite difficult to track them.
`nobuco.shape` solves this problem.
This function returns tensors, much like [`tf.shape`](https://www.tensorflow.org/api_docs/python/tf/shape) does:

```python
# Allows for dynamic shape
b, c, h, w = nobuco.shape(x)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape2.svg" width="100%">

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape2.png" width="30%">
</p>

It's also possible to automatically substitute every `.shape`/`.size` call with `nobuco.shape` during the tracing phase by setting the `trace_shape` flag:

```python
keras_model = nobuco.pytorch_to_keras(
  # ...
  trace_shape=True
)
```
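
To verify the graph really is dynamic, feed the converted model inputs of different sizes; channel-last, since we requested `ChannelOrder.TENSORFLOW` above (a quick check, assuming the `DynamicShape` model was converted with `nobuco.shape`):

```python
import numpy as np

out_small = keras_model(np.random.rand(1, 128, 128, 3).astype(np.float32))
out_large = keras_model(np.random.rand(2, 256, 192, 3).astype(np.float32))
print(out_small.shape, out_large.shape)  # Different spatial sizes, same graph
```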

## In-place operations

Nobuco can handle most situations where tensors are modified in-place. For instance, this will work just fine:

```python
class MyModule(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] *= 2
        torch.relu_(x)
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace1.svg" width="100%">

However, applying an in-place operation to a slice yields an incorrect result. What gives?

```python
class MyModule(nn.Module):
    def forward(self, x):
        torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace2.svg" width="100%">

You see, Tensorflow graphs (and many other formats like ONNX) do not support in-place ops.
So when we take a slice (`x[:, 1:2, 16:25, 8::2]`) in TF/ONNX, the result is not a view of the original tensor but a copy. 
This copy is then passed to `relu` (which is not in-place either), and its result is not used anywhere. 
As you can see above, the output tensors of `__getitem__` and `relu_` are <span style="color:gray">grayed out</span>, and these operations are excluded from the graph.
In fact, the converted graph is empty:

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace_empty.png" width="30%">
</p>

The easiest way of fixing this is to explicitly assign the result to the slice.
Conveniently enough, most standard in-place operations in Pytorch do return their modified arguments as outputs.

```python
class MyModule(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace3.svg" width="100%">

## A little white lie: tracing mode

The flexibility and dynamic nature of Pytorch graphs can make it quite challenging to directly translate them not only to Tensorflow but to ONNX as well.
How are these types of problems solved for real-world models?
Once again, we'll learn by example:

```python
pytorch_module = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights=SSDLite320_MobileNet_V3_Large_Weights.DEFAULT).eval()

x = torch.rand(size=(1, 3, 320, 320))

keras_model = nobuco.pytorch_to_keras(
    pytorch_module, 
    args=[x],
)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tracing1.svg" width="100%">

Is that Nobuco's failure to handle in-place `copy_`? Yes, but there's more to the story.
Let's peek into the model's source code (set `debug_traces=nobuco.TraceLevel.ALWAYS` for easier navigation). Here's the culprit:

```python
def batch_images(self, images: List[Tensor], size_divisible: int = 32) -> Tensor:
    if torchvision._is_tracing():
        # batch_images() does not export well to ONNX
        # call _onnx_batch_images() instead
        return self._onnx_batch_images(images, size_divisible) # <- Alternative ONNX-friendly implementation

    max_size = self.max_by_axis([list(img.shape) for img in images])
    stride = float(size_divisible)
    max_size = list(max_size)
    max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)
    max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)

    batch_shape = [len(images)] + max_size
    batched_imgs = images[0].new_full(batch_shape, 0)
    for i in range(batched_imgs.shape[0]):
        img = images[i]
        batched_imgs[i, : img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) # <- In-place copy

    return batched_imgs
```

This method is certainly not fit for tracing. 
The model's authors knew it and provided an alternative implementation in case we'd want to export it to ONNX (works for Keras, too!).
`torchvision._is_tracing()` returns True whenever the model is being traced (e.g. invoked inside `torch.onnx.export(...)`).
This state is not directly controllable by the user, yet Nobuco can gaslight the model into thinking it's being traced by Pytorch itself:

```python
keras_model = nobuco.pytorch_to_keras(
    # ...
    enable_torch_tracing=True
)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tracing2.svg" width="100%">

Why is it not enabled by default, then? 
You see, the other effect of tracing mode is equivalent to that of `trace_shape=True`: `shape` calls return tensors instead of ints.
Many Pytorch models were never meant to be converted to anything, and tracing mode may break them.
Nobuco tries to minimize silent/obscure errors and user confusion, and not being too smart is a reasonable tradeoff.

## Ad hoc modifications

Let's say, for illustrative purposes, that we prefer putting batchnorm _before_ convolution.
We were sure `TFLiteConverter` would fuse these two linear operations into one.
Alas, it failed to meet our expectations. 
Can we still get the fusion to work without re-training or messing around with the model checkpoint?

```python
class FusibleModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm2d(3)
        self.conv = nn.Conv2d(3, 16, kernel_size=(3, 3), padding=(0, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.bn(x)
        x = self.conv(x)
        x = self.act(x)
        return x
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion1.png" width="20%">
</p>

Here's one way to do it:
- Wrap the two ops in a `Callable`. Decorate it with `@nobuco.traceable`. 
- Make a custom converter for it which does the desired optimization. 

```python
class FusibleModule(nn.Module):
    # ...

    @nobuco.traceable
    def bn_conv(self, x):
        x = self.bn(x)
        x = self.conv(x)
        return x

    def forward(self, x):
        x = self.bn_conv(x)
        x = self.act(x)
        return x
```

```python
@nobuco.converter(FusibleModule.bn_conv, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_bn_conv(self, x):
    order = ChannelOrder.TENSORFLOW
    bn, out_bn = nobuco.pytorch_to_keras(self.bn, [x], inputs_channel_order=order, outputs_channel_order=order, return_outputs_pt=True)
    conv = nobuco.pytorch_to_keras(self.conv, [out_bn], inputs_channel_order=order, outputs_channel_order=order)

    gamma, beta, moving_mean, moving_variance = bn.get_weights()
    kernel, bias = conv.get_weights()
    eps = self.bn.eps

    '''
    y = gamma * (x - moving_mean) / sqrt(moving_variance + eps) + beta
    z = kernel * y + bias
    =>
    z = kernel_fused * x + bias_fused WHERE
    kernel_fused = kernel * gamma / sqrt(moving_variance + eps)
    bias_fused = -kernel_fused * moving_mean + kernel * beta + bias
    '''
    kernel_fused = kernel * (gamma / np.sqrt(moving_variance + eps))[None, None, :, None]
    bias_fused = (-kernel_fused * moving_mean[None, None, :, None] + kernel * beta[None, None, :, None]).sum(axis=(0, 1, 2)).flatten() + bias
    conv.set_weights([kernel_fused, bias_fused])
    return lambda self, x: conv(x)
```
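
A quick numeric check confirms the fused convolution is faithful to the original two-op module (a sketch, assuming the module and converter above are defined):

```python
x = torch.normal(0, 1, size=(1, 3, 32, 32))
pytorch_module = FusibleModule().eval()

keras_model = nobuco.pytorch_to_keras(
    pytorch_module, args=[x],
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)

with torch.no_grad():
    out_pt = pytorch_module(x).permute(0, 2, 3, 1).numpy()  # to channel-last
out_tf = keras_model(x.permute(0, 2, 3, 1).numpy()).numpy()
print(np.abs(out_pt - out_tf).max())  # small, on the order of 1e-6
```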

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion2.svg" width="100%">

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion2.png" width="20%">
</p>

## So we put a converter inside your converter

As we've learned, Nobuco gets confused when an in-place operation is applied to a slice.
There's a way to fix that, but let's not do it now.
Instead, we'll use it as an excuse to explain the concept of nested converters.
So, for this module, conversion will give us an incorrect result:

```python
class SliceReLU(nn.Module):
    def forward(self, x):
        # Gives incorrect result after conversion
        torch.relu_(x[:, 1:2, 16:25, 8::2])
        # That's the recommended approach, but we're not going for it now
        # x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=(3, 3), padding=(1, 1))

    def forward(self, x):
        x = self.conv(x)
        SliceReLU()(x)
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/converter_inside_converter1.svg" width="100%">

We've seen it's possible to invoke a Nobuco converter inside another Nobuco converter.
Can we embed some third-party converter? You bet! Why? Because it might just do what we need.
Let's consider the standard route: Pytorch -> ONNX -> Tensorflow, with the latter step done with [onnx-tf](https://github.com/onnx/onnx-tensorflow).
This library likes transposing stuff so much that converting the whole graph with it may introduce intolerable inference overhead. Nonetheless, it does the job.
A sensible tradeoff would be to wrap the problematic operation into its own `nn.Module` and give it special treatment, while handling everything else with Nobuco.

```python
import onnx
from onnx_tf.backend import prepare


@nobuco.converter(SliceReLU, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_PYTORCH_ORDER, reusable=False)
def converter_SliceReLU(self, x):
    model_path = 'slice_relu'
    onnx_path = model_path + '.onnx'

    # NB: torch.onnx.export is implemented via tracing, i.e. it may modify the inputs!
    torch.onnx.export(self, (x,), onnx_path, opset_version=12, input_names=['input'],
                      dynamic_axes={'input': [0, 1, 2, 3]}
                      )
    onnx_model = onnx.load(onnx_path)
    tf_rep = prepare(onnx_model)
    tf_rep.export_graph(model_path)
    model = tf.keras.models.load_model(model_path)
    return keras.layers.Lambda(lambda x: model(input=x))
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/converter_inside_converter2.svg" width="100%">

## But was it worth it?

Let's cut to the chase; here are the numbers.

**mobilenet_v3_large** (26.8 Mb)

|                   | nobuco  | onnx_tf  | speedup |
|-------------------|---------|----------|---------|
| x86 (XNNPACK)     | 11.1 ms | 14.7 ms  | 1.3x    |
| Arm CPU (XNNPACK) | 24.3 ms | 40.3 ms  | 1.6x    |
| Arm GPU (OpenCL)  | 21.3 ms | 192.6 ms | 9x      |

**deeplabv3_resnet50** (158.5 Mb)

|                   | nobuco | onnx_tf | speedup |
|-------------------|--------|---------|---------|
| x86 (XNNPACK)     | 1.25 s | 1.34 s  | 1.07x   |
| Arm CPU (XNNPACK) | 2.0 s  | 2.7 s   | 1.35x   |
| Arm GPU (OpenCL)  | 1.6 s  | 2.6 s   | 1.62x   |

As we can see, redundant transpositions may completely ruin the performance, especially on a GPU.
But that's not the only issue.
Let's test this:

```python
class SliceReLU(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

|                   | nobuco     | onnx_tf    | speedup |
|-------------------|------------|------------|---------|
| x86 (XNNPACK)     | 0.40 ms    | 1.57 ms    | 3.9x    |
| Arm CPU           | 4.6 ms     | **2.9** ms | 0.6x    |
| Arm CPU (XNNPACK) | **2.1** ms | FAIL       | —       |
| Arm GPU (OpenCL)  | 21.8 ms    | FAIL       | —       |

Again, the graph obtained with `onnx_tf` is much slower on x86 CPU.
Worse yet, on a mobile processor, the optimized TFLite delegates for both GPU and CPU failed.
No transpose ops were added this time, so who's to blame?
Just look what `torch.onnx.export` gives us:

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_onnx.png" width="100%">
  <b>slice_relu.onnx</b>
</p>

`onnx_tf` does a fair job optimizing the monstrosity of a graph it's given,
but combining consecutive `slice` ops seems too much to ask.
It also sometimes leaves garbage nodes behind (note the free-floating `While` in this example).

Nobuco evades these types of problems by simply not dealing with `onnx`.

<table align="center">
  <tr>
    <th>slice_relu_nobuco</th>
    <th>slice_relu_onnxtf</th>
  </tr>
  <tr>
    <td>
      <p align="center">
        <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_nobuco.png" width="60%">
      </p>
    </td>
    <td>
      <p align="center">
        <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_onnxtf.png" width="60%">
      </p>
    </td>
  </tr>
</table>

## Nobuco knowledge base

Don't want to convert anything but looking for a Tensorflow equivalent of a certain Pytorch node (operation or module)?
Nobuco already implements quite a few node converters, most written in a concise and, hopefully, understandable way.
These are located in [nobuco/node_converters](https://github.com/AlexanderLutsenko/nobuco/tree/master/nobuco/node_converters),
and there's a utility function to help you find what you need:

```python
node = torch.Tensor.repeat
# node = F.relu_
# node = nn.LSTM

location_link, source_code = nobuco.locate_converter(node)
print('Converter location:')
print(location_link)
print('Converter source code:')
print(source_code)
```

```console
Converter location:
File "/home/user/anaconda3/envs/nb/lib/python3.9/site-packages/nobuco/node_converters/tensor_manipulation.py", line 141

Converter source code:
@converter(torch.Tensor.repeat, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)
def converter_repeat(self, *sizes):
    def func(self, *sizes):
        if get_channel_order(self) == ChannelOrder.TENSORFLOW:
            sizes = permute_pytorch2keras(sizes)
        return tf.tile(self, sizes)
    return func
```

---

## Deep dive

<details>
<summary><h3>Aggressive transposition removal: fighting fire with fire</h3></summary>

Despite trying its best, Nobuco may produce outrageously inefficient graphs:

```python
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(3, 3, kernel_size=1)
        self.conv2 = nn.Conv1d(6, 6, kernel_size=1)
    
                                    ################################
                                    # How it's translated to Keras #
                                    ################################
    def forward(self, x):           #
        x = self.conv1(x)           # <- Output is in TENSORFLOW order
                                    #
        x = x.reshape(-1, 6, 2)     # <- Expects input in PYTORCH order, transposition needed
                                    #    Output is in PYTORCH order
                                    #
        x = self.conv2(x)           # <- Expects input in TENSORFLOW order, transposition needed 
        return x                    #
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/reshape1.png" width="15%">
</p>

Two transpositions out of nowhere?! How could Nobuco fail so miserably?

Let's try it ourselves, then. First, the original operation, implemented in Numpy so as not to be partial to either of the two frameworks:

```python
import numpy as np

x_torch = np.asarray([
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12]
])
print('x_torch:\n', x_torch)

y_torch = x_torch.reshape((6, 2))
print('y_torch:\n', y_torch)
```

```console
x_torch:
 [[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
y_torch:
 [[ 1  2]
 [ 3  4]
 [ 5  6]
 [ 7  8]
 [ 9 10]
 [11 12]]
```

But remember the catch: in Keras, the inputs to `reshape` are transposed relative to the original Pytorch implementation, because that's how `conv1` returns them.

```python
x_keras = x_torch.transpose()
print('x_keras:\n', x_keras)
```

```console
x_keras:
 [[ 1  5  9]
 [ 2  6 10]
 [ 3  7 11]
 [ 4  8 12]]
```

So, if we just permute the `shape` parameter accordingly, will that work? No, the result is scrambled beyond recognition!


```python
def reshape_keras_incorrect(x_keras, shape_torch):
    shape_keras = list(reversed(shape_torch))
    return x_keras.reshape(shape_keras)

y_keras = reshape_keras_incorrect(x_keras, (6, 2))
print('y_keras:\n', y_keras)
print('Is correct:', np.array_equal(y_keras.transpose(), y_torch))
```

```console
y_keras:
 [[ 1  5  9  2  6 10]
 [ 3  7 11  4  8 12]]
Is correct: False
```

To get it to work correctly for all shapes, we have to perform `reshape` on the original non-transposed tensor, i.e. prepare the input beforehand.
We also transpose the output so it can later be passed to the Keras convolution. 


```python
def reshape_keras(x_keras, shape_torch):
    x_torch = x_keras.transpose()
    y_torch = x_torch.reshape(shape_torch)
    y_keras = y_torch.transpose()
    return y_keras

y_keras = reshape_keras(x_keras, (6, 2))
print('y_keras:\n', y_keras)
print('Is correct:', np.array_equal(y_keras.transpose(), y_torch))
```

```console
y_keras:
 [[ 1  3  5  7  9 11]
 [ 2  4  6  8 10 12]]
Is correct: True
```

Is there a better way? Yes, if we can afford to modify the Pytorch model. Remember the [pick-your-poison](#implementation-mismatch-pick-your-poison) section? Same thing.

> :dart:
> Here's the general recipe to get rid of redundant transpositions: 
> 1) permute inputs to Tensorflow channel order
> 2) define the subgraph as you want to see it in the converted model
> 3) permute outputs back to Pytorch channel order

Solving the transposition problem with more transpositions, huh?
No mystery here, two adjacent permutations are easily fused into one. Being opposites, `pytorch->tensorflow` and `tensorflow->pytorch` permutations just cancel each other out.

But wait, is Nobuco sophisticated enough to perform global optimization? It's not, and it doesn't.
Instead, when it sees a `permute` op, it checks whether the op can be construed as a transposition from Pytorch to Keras or vice versa. 
If so, no work is done on the input tensor, only its metadata (`channel_order` field) is changed.

```python
class MyModule(nn.Module):          ################################
    # ...                           # How it's translated to Keras #
                                    ################################
    def forward(self, x):           #
        x = self.conv1(x)           # <- Output is in TENSORFLOW order
                                    #
        # BCH -> BHC                #
        x = x.permute(0, 2, 1)      # <- No actual transposition done, just order marked as PYTORCH
        # Reshape transposed input  #
        x = x.reshape(-1, 2, 6)     # <- Expects input in PYTORCH order, no transposition needed
                                    #    Output is in PYTORCH order
        # BHC -> BCH                #
        x = x.permute(0, 2, 1)      # <- No actual transposition done, just order marked as TENSORFLOW
                                    #
        x = self.conv2(x)           # <- Expects input in TENSORFLOW order, no transposition needed 
        return x                    #
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/reshape2.png" width="15%">
</p>

</details>

---

### Acknowledgements

Slice assign converter is based on [Zaccharie Ramzi's tf-slice-assign script](https://github.com/zaccharieramzi/tf-slice-assign).

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "nobuco",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.7",
    "maintainer_email": null,
    "keywords": "pytorch, tensorflow, keras, converter",
    "author": null,
    "author_email": "Alexander Lutsenko <lex.lutsenko@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/14/57/d6ad652dc4f88d9e5a07cb0ab019dd8a33113b15c2eb9890a47725037975/nobuco-0.14.3.tar.gz",
    "platform": null,
    "description": "<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/nobuco.png\">\n<sup><a href=\"https://www.behance.net/diliajl\">diliajl</a></sup>\n</p>\n\n**No** **Bu**llshit **Co**nverter is a tool that helps you translate Pytorch models into Keras/Tensorflow graphs without losing your mind.\n\n- Supports a wide range of architectures\n  - [x] Control flow ops (If, For, While)\n  - [x] Recurrent layers (LSTM, GRU)\n  - [x] Arbitrary torch functions\n- Simple\n- Flexible\n- Efficient\n- Sanity-preserving, with clear mistake messaging\n\n> [!IMPORTANT]  \n> Nobuco only supports Keras 2 at the moment. If you'd like to use the new multi-backend Keras 3, please bump up the related issue: https://github.com/keras-team/keras/issues/19314\n\n## Installation <img src=\"https://img.shields.io/pypi/v/nobuco?color=blue&style=flat-square\">\n<img src=\"https://img.shields.io/badge/PyTorch-2.1-EE4C2C.svg?style=flat&logo=pytorch\"> <img src=\"https://img.shields.io/badge/TensorFlow-2.15-FF6F00.svg?style=flat&logo=tensorflow\">\n\n```bash\npip install -U nobuco\n```\n\n<!-- toc -->\n\n## Table of Contents\n- [Essentials](#essentials)\n- [Channel order wizardry](#channel-order-wizardry)\n- [Implementation mismatch: pick your poison](#implementation-mismatch-pick-your-poison)\n- [Going dynamic](#going-dynamic)\n  - [Control flows](#control-flows)\n  - [Dynamic shapes](#dynamic-shapes)\n- [In-place operations](#in-place-operations)\n- [A little white lie: tracing mode](#a-little-white-lie-tracing-mode)\n- [Ad hoc modifications](#ad-hoc-modifications) \n- [So we put a converter inside your converter](#so-we-put-a-converter-inside-your-converter)\n- [But was it worth it?](#but-was-it-worth-it)\n- [Nobuco knowledge base](#nobuco-knowledge-base)\n\n<!-- tocstop -->\n\n<!-- toc -->\n\n- [Deep dive](#deep-dive)\n  - [Aggressive transposition removal: fighting fire with fire](#aggressive-transposition-removal-fighting-fire-with-fire)\n\n<!-- tocstop -->\n\n## Essentials\n\nSuppose we want to convert a Pytorch module similar to this one:\n\n````python\nclass MyModule(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv = nn.Conv2d(3, 16, kernel_size=(3, 3), padding=(1, 1), stride=(2, 2))\n\n    def forward(self, x):\n        x = self.conv(x)\n        x = nn.Hardsigmoid()(x)\n        x = 1 - x[:, ::2] * x[:, 1::2]\n        return x\n````\n\nThe process is exactly what you would expect. Instantiate the module, create dummy inputs and call the magic function:\n\n```python\nimport nobuco\nfrom nobuco import ChannelOrder, ChannelOrderingStrategy\nfrom nobuco.layers.weight import WeightLayer\n```\n\n````python\ndummy_image = torch.rand(size=(1, 3, 256, 256))\npytorch_module = MyModule().eval()\n\nkeras_model = nobuco.pytorch_to_keras(\n    pytorch_module,\n    args=[dummy_image], kwargs=None,\n    inputs_channel_order=ChannelOrder.TENSORFLOW,\n    outputs_channel_order=ChannelOrder.TENSORFLOW\n)\n````\n\nAaaand done! That's all it takes to... hold on, what's that?\n\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials1.svg\" width=\"100%\">\n\nNobuco says it doesn't know how to handle hard sigmoid.\nApparently, it's our job to provide a node converter for either `F.hardsigmoid` or the enclosing `Hardsigmoid` module (or the entire `MyModule`, but that makes little sense). Here, we'll go for the former.\n\nConversion is done directly. 
No layers upon layers of abstraction, no obscure intermediate representation. \nA node converter is just a `Callable` that takes the same arguments as the corresponding node in Pytorch and outputs an equivalent node in Tensorflow. \nThe converted node preserves the original node's signature, but Pytorch tensors replaced with Tensorflow counterparts (be that `tf.Tensor`, `KerasTensor`, `tf.Variable`, or `ResourceVariable`).\n\nThis should do the trick:\n\n````python\n@nobuco.converter(F.hardsigmoid, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)\ndef hardsigmoid(input: torch.Tensor, inplace: bool = False):\n    return lambda input, inplace=False: tf.keras.activations.hard_sigmoid(input)\n````\n\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials2.svg\" width=\"100%\">\n\nIt works, but the outputs don't quite match. Perhaps we should check how [Pytorch](https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html) and [Tensorflow](https://www.tensorflow.org/api_docs/python/tf/keras/activations/hard_sigmoid) define hard sigmoid. \nAnd sure enough, their implementations differ. Have to type in the formula manually, I guess...\n\n````python\n@nobuco.converter(F.hardsigmoid, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)\ndef hardsigmoid(input: torch.Tensor, inplace: bool = False):\n    return lambda input, inplace=False: tf.clip_by_value(input/6 + 1/2, clip_value_min=0, clip_value_max=1)\n````\n\nAnd the happy result:\n\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/essentials3.svg\" width=\"100%\">\n\n<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tutorial.png\" width=\"30%\">\n</p>\n\nThe example above is artificial, but it illustrates the point.\nIt's not feasible to provide a node converter for every existing Pytorch op. There are literally [thousands](https://dev-discuss.pytorch.org/t/where-do-the-2000-pytorch-operators-come-from-more-than-you-wanted-to-know/) of them! \nBest we can do without the converter constantly lacking essential functionality, being riddled with bugs, doing weird stuff and breaking apart with every other PT/TF release \nis to keep the tool simple and customizable, make it clear where a problem comes from and let the _user_ sort things out.\nUsually it's easy for a human to translate an isolated operation from one framework to another.\nReproducing the graph structure is a different matter entirely. For that, Nobuco has you covered!\n\nNobuco lets you intervene in conversion at each step, asks for help where needed and doesn't bother you with routine stuff.\n\nhttps://user-images.githubusercontent.com/2457934/233740603-cc11acc5-cd6b-48c8-b089-ff3ead772dd0.mp4\n\n<p align=\"center\"><em>\nWith an IDE, you can jump right where the node was [I]nvoked, [D]efined and [C]onverted\n</em></p>\n\n## Channel order wizardry\n\nSome operations assume its input tensors have a channel dimension. \nAnd as you probably know, Pytorch and Tensorflow do not agree on the layout of such tensors.\nPytorch adopts channel-first layout (_B**C**H_, _B**C**HW_, etc.) \nwhile Tensorflow works efficiently with channel-last tensors (_BH**C**_, _BHW**C**_, ...).\nTransposing tensors between the two layouts incurs non-trivial overhead as generally, tensor data must be physically rearranged.\nIn an effort to keep that overhead to the minimum, Nobuco does layout coercions _lazily_. 
\nA couple of things are needed to make it possible:\n\n- Tensorflow tensors are augmented with an additional property which stores their channel order, either Pytorch (channel first) or Tensorflow (channel last) style.\n- Node converters have requirements on what channel order their inputs must have. Said requirements are expressed with `channel_ordering_strategy` argument. \n\nChannel ordering strategies are\n- `FORCE_TENSORFLOW_ORDER`\n  - Input tensors will be coerced to Tensorflow channel order.\n  - Convenient for converting channel-aware operations (convolution, batchnorm).\n- `FORCE_PYTORCH_ORDER`\n  - Input tensors entering the node will look exactly as they do in the original Pytorch graph. \n  - Use it when the node does not interpret its input tensors as having a channel dimension (linear, matmul). \n- `MINIMUM_TRANSPOSITIONS`\n  - The channel order is decided by a majority vote (whichever prevails among the inputs). This way the number of coercions (i.e. tensor transpositions) is kept to the minimum.\n  It also means whenever there's only one input, it will be left untouched.\n  - Best choice for element-wise ops (most activations).\n- `MANUAL`\n  - You are on your own. In exchange for unrestricted freedom, you take responsibility to coerce input tensors to suitable channel order and to also annotate output tensors with their order.\n\nThe simple lazy approach makes wonders in most situations, but sometimes it produces suboptimal graphs.\nConsider the code below. Imagine this is some sort of text processing network. \nIt first applies a GRU layer which assumes the inputs do not have a channel dimension, so its input/output layouts are the same in both Pytorch and Tensorflow.\nBut then, the outputs are passed to a couple of 1D convolutions which are channel-aware. 
\nBecause of that, a transpose op must be put in the converted graph.\n\n```python\nclass MyModule(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.gru = nn.GRU(32, 128, num_layers=1, batch_first=True, bidirectional=False)\n        self.conv1 = nn.Conv1d(12, 40, kernel_size=3, padding=1)\n        self.conv2 = nn.Conv1d(12, 60, kernel_size=1, padding=0)\n\n    def forward(self, x):\n        x, hx = self.gru(x)\n        x1 = self.conv1(x)\n        x2 = self.conv2(x)\n        return x1, x2\n\npytorch_module = MyModule().eval()\n\ninputs = [\n    torch.normal(0, 1, size=(1, 12, 32)),\n]\nkeras_model = nobuco.pytorch_to_keras(\n    pytorch_module, inputs,\n    inputs_channel_order=ChannelOrder.PYTORCH,\n)\n```\n\nThe laziness shoots us in the foot here, and we get not one transpose but two:\n\n<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/channel_ordering.png\" width=\"30%\">\n</p>\n\nFor such occasions, there's two brethren functions: `force_tensorflow_order` and `force_pytorch_order`.\n\n```python\nx, hx = self.gru(x)\nx = nobuco.force_tensorflow_order(x)\nx1 = self.conv1(x)\nx2 = self.conv2(x)\n```\n\n<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/channel_ordering_forced.png\" width=\"30%\">\n</p>\n\nIn case you are curious, the implementation is trivial:\n\n```python\n@nobuco.traceable\ndef force_tensorflow_order(inputs):\n    return inputs\n\n\n@nobuco.converter(force_tensorflow_order, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)\ndef converter_force_tensorflow_order(inputs):\n    return lambda inputs: inputs\n```\n\n`force_pytorch_order` is defined analogously.\n\n## Implementation mismatch: pick your poison\n\nSometimes, Pytorch and Tensorflow just don't go along. \nIn case of `hardsigmoid`, it's a mere inconvenience, but it can be much more sinister.\n\nTake the model below, for example.\n\n```python\nclass MyModule(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.factor = 4\n        self.conv = nn.Conv2d(3*self.factor**2, 3*self.factor**2, kernel_size=1)\n\n    def forward(self, x):\n        x = nn.PixelUnshuffle(self.factor)(x)\n        x = self.conv(x)\n        x = nn.PixelShuffle(self.factor)(x)\n        return x\n```\n\nIdeally, there would only be three nodes in the converted graph. That's not what we get, though.\n\nTensorflow does not have `pixel_unshuffle`/`pixel_shuffle`.\nTheir closest counterparts, `tf.nn.space_to_depth`/`tf.nn.depth_to_space`,\ndo almost the same thing but not quite: output channels are in a different order.\nThe order must be fixed with a pricey `transpose`, no way around that. 
Or is there?\n\n<p align=\"center\">\n    <img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/pixelshuffle1.png\" width=\"20%\">\n</p>\n\nInstead of emulating an absent Pytorch op in Tensorflow, \nwe might do the procedure in reverse: provide a Pytorch implementation for the Tensorflow node we want to convert to.\nThe overhead would be carried by the original Pytorch model leaving the converted graph nice and clean.\n\n```python\nfrom nobuco.addons.torch.space_to_depth import SpaceToDepth\nfrom nobuco.addons.torch.depth_to_space import DepthToSpace\n\n\nclass MyModuleTFOptimized(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.factor = 4\n        self.conv = nn.Conv2d(3*self.factor**2, 3*self.factor**2, kernel_size=1)\n\n    def forward(self, x):\n        x = SpaceToDepth(self.factor)(x)\n        x = self.conv(x)\n        x = DepthToSpace(self.factor)(x)\n        return x\n```\n\n<p align=\"center\">\n    <img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/pixelshuffle2.png\" width=\"20%\">\n</p>\n\n<table align=\"center\">\n<tr>\n<th>Torch-optimized</th>\n<th>Tensorflow-optimized</th>\n</tr>\n<tr>\n<td>\n\n**Torch implementation**\n\n```python\nF.pixel_unshuffle\n```\n\n**Tensorflow converter**\n```python\n@nobuco.converter(F.pixel_unshuffle, \n                  channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)\ndef converter_pixel_unshuffle(input: Tensor, downscale_factor: _int):\n    def func(input, downscale_factor):\n        x = tf.nn.space_to_depth(input, downscale_factor)\n        x = channel_interleave2d(x, downscale_factor, reverse=True)\n        return x\n    return func\n\n\ndef channel_interleave2d(x, block_size: int, reverse: bool):\n    b, h, w, c = x.shape\n    n_blocks = block_size ** 2\n\n    if reverse:\n        x = tf.reshape(x, (b, h, w, n_blocks, c // n_blocks))\n    else:\n        x = tf.reshape(x, (b, h, w, c // n_blocks, n_blocks))\n\n    x = tf.transpose(x, (0, 1, 2, 4, 3))\n    x = tf.reshape(x, (b, h, w, c))\n    return x\n```\n\n</td>\n<td>\n\n**Torch implementation**\n```python\nclass SpaceToDepth(nn.Module):\n    def __init__(self, block_size):\n        super().__init__()\n        self.block_size = block_size\n\n    def forward(self, input):\n        x = F.pixel_unshuffle(input, self.block_size)\n        x = channel_interleave2d(x, self.block_size, reverse=False)\n        return x\n\n    \ndef channel_interleave2d(x: torch.Tensor, block_size: int, reverse: bool) -> torch.Tensor:\n    b, c, h, w = x.shape\n    n_blocks = block_size ** 2\n\n    if reverse:\n        x = x.view(b, n_blocks, c // n_blocks, h, w)\n    else:\n        x = x.view(b, c // n_blocks, n_blocks, h, w)\n\n    x = x.transpose(1, 2).reshape(b, c, h, w)\n    return x\n```\n\n**Tensorflow converter**\n```python\n@nobuco.converter(SpaceToDepth, \n                  channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)\ndef converter_space_to_depth(self, input: torch.Tensor):\n    return lambda input: tf.nn.space_to_depth(input, self.block_size)\n```\n\n</td>\n</tr>\n</table>\n\n## Going dynamic\n\n### Control flows\nIntroducing python control flow statements into the compute graph is no easy feat.\nTensorflow can do so via `tf.autograph`, but at a cost of [system's complexity](https://www.youtube.com/watch?v=NIEgzljyDyI) and with some notable 
[limitations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/control_flow.md).\nStuff like that is way above Nobuco's paygrade, so the following module cannot be properly handled without human intervention.\n\n```python\nclass ControlIf(nn.Module):\n    def __init__(self):\n        super().__init__()\n        self.conv_pre = nn.Conv2d(3, 16, kernel_size=(1, 1))\n        self.conv_true = nn.Conv2d(16, 32, kernel_size=(1, 1))\n        self.conv_false = nn.Conv2d(16, 32, kernel_size=(1, 1))\n        self.conv_shared = nn.Conv2d(32, 32, kernel_size=(1, 1))\n\n    def forward(self, x):\n        x = self.conv_pre(x)\n        if x.mean() > 0:\n            x = self.conv_true(x)\n            x = torch.tanh(x)\n            x = self.conv_shared(x)\n            x = x + 1\n        else:\n            x = self.conv_false(x)\n            x = torch.sigmoid(x)\n            x = self.conv_shared(x)\n            x = x - 1\n        x = self.conv_shared(x)\n        return x\n```\n\nOf course, it's possible to translate the dynamic module into a Tensorflow layer\n(don't forget to decorate it with `@tf.function` for autograph to kick in).\nBut what if it contains inner modules, do you replicate them in Tensorflow all by hand?\nNot unless you want to! \nJust convert them separately and use the resulting graphs inside the parent layer.\n\n```python\nclass ControlIfKeras(tf.keras.layers.Layer):\n    def __init__(self, conv_pre, conv_true, conv_false, conv_shared, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.conv_pre = conv_pre\n        self.conv_true = conv_true\n        self.conv_false = conv_false\n        self.conv_shared = conv_shared\n\n    def get_config(self):\n        config = super().get_config()\n        config.update({\n            \"conv_pre\": self.conv_pre,\n            \"conv_true\": self.conv_true,\n            \"conv_false\": self.conv_false,\n            \"conv_shared\": self.conv_shared,\n        })\n        return config\n\n    @tf.function\n    def call(self, x):\n        x = self.conv_pre(x)\n        if tf.reduce_mean(x) > 0:\n            x = self.conv_true(x)\n            x = tf.tanh(x)\n            x = self.conv_shared(x)\n            x = x + 1\n        else:\n            x = self.conv_false(x)\n            x = tf.sigmoid(x)\n            x = self.conv_shared(x)\n            x = x - 1\n        x = self.conv_shared(x)\n        return x\n\n\n@nobuco.converter(ControlIf, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)\ndef converter_ControlIf(self, x):\n    order = ChannelOrder.TENSORFLOW\n    kwargs = {'inputs_channel_order': order, 'outputs_channel_order': order, 'return_outputs_pt': True}\n    \n    conv_pre, out_pre = nobuco.pytorch_to_keras(self.conv_pre, [x], **kwargs)\n    conv_true, out_true = nobuco.pytorch_to_keras(self.conv_true, [out_pre], **kwargs)\n    conv_false, out_false = nobuco.pytorch_to_keras(self.conv_false, [out_pre], **kwargs)\n    conv_shared, _ = nobuco.pytorch_to_keras(self.conv_shared, [out_true], **kwargs)\n    layer = ControlIfKeras(conv_pre, conv_true, conv_false, conv_shared)\n    return layer\n```\n\n<p align=\"center\">\n<img src=\"https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/control_if.png\" width=\"25%\">\n</p>\n\nSee [examples](/examples) for other ways to convert control flow ops.\n\n### Dynamic shapes\n\nWhat if we wanted our module to accept images of arbitrary height and width?\nCan we have that? 
### Dynamic shapes

What if we wanted our module to accept images of arbitrary height and width?
Can we have that? Let's try:

```python
class DynamicShape(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=(1, 1))

    def forward(self, x):
        x = self.conv(x)

        # Produces static shape
        b, c, h, w = x.shape

        x = x[:, :, h//3:, w//3:]
        return x


input = torch.normal(0, 1, size=(1, 3, 128, 128))
pytorch_module = DynamicShape().eval()

keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[input],
    input_shapes={input: (None, 3, None, None)}, # Annotate dynamic axes with None
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)
```

Something's not right. We don't see shape extraction ops in the debug output or the graph:

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape1.svg" width="100%">

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape1.png" width="15%">
</p>

That's not surprising, actually.
In Pytorch, a tensor's shape is a tuple of regular integers, not tensors, so it's quite difficult to track them.
`nobuco.shape` solves this problem.
This function returns tensors, much like [`tf.shape`](https://www.tensorflow.org/api_docs/python/tf/shape) does:

```python
# Allows for dynamic shape
b, c, h, w = nobuco.shape(x)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape2.svg" width="100%">

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/dynamic_shape2.png" width="30%">
</p>

It's also possible to automatically substitute every `.shape`/`.size` call with `nobuco.shape` during the tracing phase by setting the `trace_shape` flag:

```python
keras_model = nobuco.pytorch_to_keras(
  # ...
  trace_shape=True
)
```
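A quick way to see the dynamic axes in action is to feed the converted model inputs of different spatial sizes. An illustrative check, assuming the `nobuco.shape` fix above has been applied; inputs are channels-last since we requested Tensorflow channel order:

```python
import numpy as np

for h, w in [(96, 128), (160, 160)]:
    out = keras_model(np.random.rand(1, h, w, 3).astype(np.float32))
    print(out.shape)  # spatial dims follow the input size
```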
## In-place operations

Nobuco can handle most situations where tensors are modified in-place. For instance, these will work just fine:

```python
class MyModule(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] *= 2
        torch.relu_(x)
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace1.svg" width="100%">

However, applying an in-place operation to a slice yields an incorrect result. What gives?

```python
class MyModule(nn.Module):
    def forward(self, x):
        torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace2.svg" width="100%">

You see, Tensorflow graphs (and many other formats like ONNX) do not support in-place ops.
So when we take a slice (`x[:, 1:2, 16:25, 8::2]`) in TF/ONNX, the result is not a view of the original tensor but a copy.
This copy is then passed to `relu` (which is not in-place either), and its result is not used anywhere.
As you can see above, the output tensors of `__getitem__` and `relu_` are <span style="color:gray">grayed out</span>, and these operations are excluded from the graph.
In fact, it's empty:

<p align="center">
<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace_empty.png" width="30%">
</p>

The easiest way of fixing this is to explicitly assign the result to the slice.
Conveniently enough, most standard in-place operations in Pytorch do return their modified arguments as outputs.

```python
class MyModule(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/inplace3.svg" width="100%">
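If in doubt whether such a rewrite changes the module's behavior, a few lines of plain Pytorch settle it (an illustrative check; the non-in-place `torch.relu` is used on the right-hand side, which is equivalent here):

```python
x = torch.randn(1, 3, 32, 32)
a, b = x.clone(), x.clone()

torch.relu_(a[:, 1:2, 16:25, 8::2])                          # in-place on a view
b[:, 1:2, 16:25, 8::2] = torch.relu(b[:, 1:2, 16:25, 8::2])  # explicit assignment

print(torch.equal(a, b))  # True: the rewrite preserves semantics
```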
## A little white lie: tracing mode

The flexibility and dynamic nature of Pytorch graphs can make it quite challenging to directly translate them not only to Tensorflow but to ONNX as well.
How are these types of problems solved for real-world models?
Once again, we'll learn by example:

```python
pytorch_module = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights=SSDLite320_MobileNet_V3_Large_Weights.DEFAULT).eval()

x = torch.rand(size=(1, 3, 320, 320))

keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[x],
)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tracing1.svg" width="100%">

Is that Nobuco's failure to handle in-place `copy_`? Yes, but there's more to the story.
Let's peek into the model's source code (set `debug_traces=nobuco.TraceLevel.ALWAYS` for easier navigation). Here's the culprit:

```python
def batch_images(self, images: List[Tensor], size_divisible: int = 32) -> Tensor:
    if torchvision._is_tracing():
        # batch_images() does not export well to ONNX
        # call _onnx_batch_images() instead
        return self._onnx_batch_images(images, size_divisible) # <- Alternative ONNX-friendly implementation

    max_size = self.max_by_axis([list(img.shape) for img in images])
    stride = float(size_divisible)
    max_size = list(max_size)
    max_size[1] = int(math.ceil(float(max_size[1]) / stride) * stride)
    max_size[2] = int(math.ceil(float(max_size[2]) / stride) * stride)

    batch_shape = [len(images)] + max_size
    batched_imgs = images[0].new_full(batch_shape, 0)
    for i in range(batched_imgs.shape[0]):
        img = images[i]
        batched_imgs[i, : img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) # <- In-place copy

    return batched_imgs
```

This method is certainly not fit for tracing.
The model's authors knew it and provided an alternative implementation in case we'd want to export it to ONNX (works for Keras, too!).
`torchvision._is_tracing()` returns True whenever the model is being traced (e.g. invoked inside `torch.onnx.export(...)`).
This state is not directly controllable by the user, yet Nobuco can gaslight the model into thinking it's being traced by Pytorch itself:

```python
keras_model = nobuco.pytorch_to_keras(
    # ...
    enable_torch_tracing=True
)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/tracing2.svg" width="100%">

Why is it not enabled by default, then?
You see, the other effect of tracing mode is equivalent to that of `trace_shape=True`: `shape` calls return tensors instead of ints.
Many Pytorch models were never meant to be converted to anything, and tracing mode may break them.
Nobuco tries to minimize silent/obscure errors and user confusion, and not being too smart is a reasonable tradeoff.
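For completeness, the full call for the SSDLite example above then looks like this (a sketch; `pytorch_module` and `x` are as defined earlier in this section):

```python
keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[x],
    enable_torch_tracing=True,
)
```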
## Ad hoc modifications

Let's say, for illustrative purposes, that we prefer putting batchnorm _before_ convolution.
We were sure `TFLiteConverter` would fuse these two linear operations into one.
Alas, it failed to meet our expectations.
Can we still get the fusion to work without re-training or messing around with the model checkpoint?

```python
class FusibleModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm2d(3)
        self.conv = nn.Conv2d(3, 16, kernel_size=(3, 3), padding=(0, 0))
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.bn(x)
        x = self.conv(x)
        x = self.act(x)
        return x
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion1.png" width="20%">
</p>

Here's one way to do it:
- Wrap the two ops in a `Callable`. Decorate it with `@nobuco.traceable`.
- Make a custom converter for it that does the desired optimization.

```python
class FusibleModule(nn.Module):
    # ...

    @nobuco.traceable
    def bn_conv(self, x):
        x = self.bn(x)
        x = self.conv(x)
        return x

    def forward(self, x):
        x = self.bn_conv(x)
        x = self.act(x)
        return x
```

```python
@nobuco.converter(FusibleModule.bn_conv, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_TENSORFLOW_ORDER)
def converter_bn_conv(self, x):
    order = ChannelOrder.TENSORFLOW
    bn, out_bn = nobuco.pytorch_to_keras(self.bn, [x], inputs_channel_order=order, outputs_channel_order=order, return_outputs_pt=True)
    conv = nobuco.pytorch_to_keras(self.conv, [out_bn], inputs_channel_order=order, outputs_channel_order=order)

    gamma, beta, moving_mean, moving_variance = bn.get_weights()
    kernel, bias = conv.get_weights()
    eps = self.bn.eps

    '''
    y = gamma * (x - moving_mean) / sqrt(moving_variance + eps) + beta
    z = kernel * y + bias
    =>
    z = kernel_fused * x + bias_fused WHERE
    kernel_fused = kernel * gamma / sqrt(moving_variance + eps)
    bias_fused = -kernel_fused * moving_mean + kernel * beta + bias
    '''
    kernel_fused = kernel * (gamma / np.sqrt(moving_variance + eps))[None, None, :, None]
    bias_fused = (-kernel_fused * moving_mean[None, None, :, None] + kernel * beta[None, None, :, None]).sum(axis=(0, 1, 2)).flatten() + bias
    conv.set_weights([kernel_fused, bias_fused])
    return lambda self, x: conv(x)
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion2.svg" width="100%">

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/fusion2.png" width="20%">
</p>
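The fusion algebra above is easy to sanity-check in plain Pytorch, independent of the converter. An illustrative sketch: it uses the first `FusibleModule` definition and randomizes the batchnorm running statistics, since the fresh (0, 1) defaults would make the check trivially pass.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
m = FusibleModule().eval()
m.bn.running_mean.normal_()          # fresh BN stats would hide bugs
m.bn.running_var.uniform_(0.5, 1.5)

with torch.no_grad():
    a = m.bn.weight / torch.sqrt(m.bn.running_var + m.bn.eps)  # per-channel scale
    b = m.bn.bias - a * m.bn.running_mean                      # per-channel shift

    # Same formulas as in the converter, just in Pytorch's OIHW weight layout
    kernel_fused = m.conv.weight * a[None, :, None, None]
    bias_fused = m.conv.bias + (m.conv.weight * b[None, :, None, None]).sum(dim=(1, 2, 3))

    x = torch.rand(1, 3, 32, 32)
    y_ref = m(x)
    y_fused = torch.relu(F.conv2d(x, kernel_fused, bias_fused))

print(torch.allclose(y_ref, y_fused, atol=1e-6))  # expect True
```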
## So we put a converter inside your converter

As we've learned, Nobuco gets confused when an in-place operation is applied to a slice.
There's a way to fix that, but let's not do it now.
Instead, we'll use it as an excuse to explain the concept of nested converters.
So, for this module, conversion will give us an incorrect result:

```python
class SliceReLU(nn.Module):
    def forward(self, x):
        # Gives incorrect result after conversion
        torch.relu_(x[:, 1:2, 16:25, 8::2])
        # That's the recommended approach, but we're not going for it now
        # x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x


class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=(3, 3), padding=(1, 1))

    def forward(self, x):
        x = self.conv(x)
        SliceReLU()(x)
        return x
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/converter_inside_converter1.svg" width="100%">

We've seen it's possible to invoke a Nobuco converter inside another Nobuco converter.
Can we embed some third-party converter? You bet! Why? Because it might just do what we need.
Let's consider the standard route: Pytorch -> ONNX -> Tensorflow, with the latter step done with [onnx-tf](https://github.com/onnx/onnx-tensorflow).
This library likes transposing stuff so much that converting the whole graph with it may introduce intolerable inference overhead. Nonetheless, it does the job.
A sensible tradeoff would be to wrap the problematic operation into its own `nn.Module` and give it special treatment, while handling everything else with Nobuco.

```python
import onnx
from onnx_tf.backend import prepare


@nobuco.converter(SliceReLU, channel_ordering_strategy=ChannelOrderingStrategy.FORCE_PYTORCH_ORDER, reusable=False)
def converter_SliceReLU(self, x):
    model_path = 'slice_relu'
    onnx_path = model_path + '.onnx'

    # NB: torch.onnx.export is implemented via tracing, i.e. it may modify the inputs!
    torch.onnx.export(self, (x,), onnx_path, opset_version=12, input_names=['input'],
                      dynamic_axes={'input': [0, 1, 2, 3]}
                      )
    onnx_model = onnx.load(onnx_path)
    tf_rep = prepare(onnx_model)
    tf_rep.export_graph(model_path)
    model = tf.keras.models.load_model(model_path)
    return keras.layers.Lambda(lambda x: model(input=x))
```

<img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/converter_inside_converter2.svg" width="100%">
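No special wiring is needed at the top level: once registered, the nested converter kicks in whenever `SliceReLU` is encountered during tracing. A sketch; the input shape is an arbitrary choice:

```python
x = torch.rand(size=(1, 3, 64, 64))
pytorch_module = MyModule().eval()

# SliceReLU goes through ONNX / onnx-tf, everything else through Nobuco
keras_model = nobuco.pytorch_to_keras(
    pytorch_module,
    args=[x],
    inputs_channel_order=ChannelOrder.TENSORFLOW,
    outputs_channel_order=ChannelOrder.TENSORFLOW,
)
```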
## But was it worth it?

Let's cut to the chase; here are the numbers.

**mobilenet_v3_large** (26.8 Mb)

|                   | nobuco  | onnx_tf  | speedup |
|-------------------|---------|----------|---------|
| x86 (XNNPACK)     | 11.1 ms | 14.7 ms  | 1.3x    |
| Arm CPU (XNNPACK) | 24.3 ms | 40.3 ms  | 1.6x    |
| Arm GPU (OpenCL)  | 21.3 ms | 192.6 ms | 9x      |

**deeplabv3_resnet50** (158.5 Mb)

|                   | nobuco | onnx_tf | speedup |
|-------------------|--------|---------|---------|
| x86 (XNNPACK)     | 1.25 s | 1.34 s  | 1.07x   |
| Arm CPU (XNNPACK) | 2.0 s  | 2.7 s   | 1.35x   |
| Arm GPU (OpenCL)  | 1.6 s  | 2.6 s   | 1.62x   |

As we can see, redundant transpositions may completely ruin the performance, especially on a GPU.
But that's not the only issue.
Let's test this:

```python
class SliceReLU(nn.Module):
    def forward(self, x):
        x[:, 1:2, 16:25, 8::2] = torch.relu_(x[:, 1:2, 16:25, 8::2])
        return x
```

|                   | nobuco     | onnx_tf    | speedup |
|-------------------|------------|------------|---------|
| x86 (XNNPACK)     | 0.40 ms    | 1.57 ms    | 3.9x    |
| Arm CPU           | 4.6 ms     | **2.9** ms | 0.6x    |
| Arm CPU (XNNPACK) | **2.1** ms | FAIL       | —       |
| Arm GPU (OpenCL)  | 21.8 ms    | FAIL       | —       |

Again, the graph obtained with `onnx_tf` is much slower on x86 CPU.
Worse yet, on a mobile processor, the optimized TFLite delegates for both GPU and CPU failed.
No transpose ops were added this time, so who's to blame?
Just look what `torch.onnx.export` gives us:

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_onnx.png" width="100%">
  <b>slice_relu.onnx</b>
</p>

`onnx_tf` does a fair job optimizing the monstrosity of a graph it's given,
but combining consecutive `slice` ops seems too much to ask.
It also leaves garbage nodes behind sometimes (note the free-floating `While` in this example).

Nobuco evades these types of problems by simply not dealing with `onnx`.

<table align="center">
  <tr>
    <th>slice_relu_nobuco</th>
    <th>slice_relu_onnxtf</th>
  </tr>
  <tr>
    <td>
      <p align="center">
        <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_nobuco.png" width="60%">
      </p>
    </td>
    <td>
      <p align="center">
        <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/slice_relu_onnxtf.png" width="60%">
      </p>
    </td>
  </tr>
</table>

## Nobuco knowledge base

Don't want to convert anything but looking for a Tensorflow equivalent of a certain Pytorch node (operation or module)?
Nobuco already implements quite a few node converters, most written in a concise and, hopefully, understandable way.
These are located in [nobuco/node_converters](https://github.com/AlexanderLutsenko/nobuco/tree/master/nobuco/node_converters),
and there's a utility function to help you find what you need:

```python
node = torch.Tensor.repeat
# node = F.relu_
# node = nn.LSTM

location_link, source_code = nobuco.locate_converter(node)
print('Converter location:')
print(location_link)
print('Converter source code:')
print(source_code)
```

```console
Converter location:
File "/home/user/anaconda3/envs/nb/lib/python3.9/site-packages/nobuco/node_converters/tensor_manipulation.py", line 141

Converter source code:
@converter(torch.Tensor.repeat, channel_ordering_strategy=ChannelOrderingStrategy.MINIMUM_TRANSPOSITIONS)
def converter_repeat(self, *sizes):
    def func(self, *sizes):
        if get_channel_order(self) == ChannelOrder.TENSORFLOW:
            sizes = permute_pytorch2keras(sizes)
        return tf.tile(self, sizes)
    return func
```

---

## Deep dive

<details>
<summary><h3>Aggressive transposition removal: fighting fire with fire</h3></summary>

Despite trying its best, Nobuco may produce outrageously inefficient graphs:

```python
class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(3, 3, kernel_size=1)
        self.conv2 = nn.Conv1d(6, 6, kernel_size=1)

                                    ################################
                                    # How it's translated to Keras #
                                    ################################
    def forward(self, x):           #
        x = self.conv1(x)           # <- Output is in TENSORFLOW order
                                    #
        x = x.reshape(-1, 6, 2)     # <- Expects input in PYTORCH order, transposition needed
                                    #    Output is in PYTORCH order
                                    #
        x = self.conv2(x)           # <- Expects input in TENSORFLOW order, transposition needed
        return x                    #
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/reshape1.png" width="15%">
</p>

Two transpositions out of nowhere?! How could Nobuco fail so miserably?

Let's try it ourselves, then. First, the original operation (implemented in Numpy, so as not to favor either of the two frameworks):

```python
import numpy as np

x_torch = np.asarray([
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12]
])
print('x_torch:\n', x_torch)

y_torch = x_torch.reshape((6, 2))
print('y_torch:\n', y_torch)
```

```console
x_torch:
 [[ 1  2  3  4]
 [ 5  6  7  8]
 [ 9 10 11 12]]
y_torch:
 [[ 1  2]
 [ 3  4]
 [ 5  6]
 [ 7  8]
 [ 9 10]
 [11 12]]
```

But remember the catch: in Keras, the inputs to `reshape` are transposed relative to the original Pytorch implementation, because that's how `conv1` returns them.

```python
x_keras = x_torch.transpose()
print('x_keras:\n', x_keras)
```

```console
x_keras:
 [[ 1  5  9]
 [ 2  6 10]
 [ 3  7 11]
 [ 4  8 12]]
```

So, if we just permute the `shape` parameter accordingly, will that work? No, the result is scrambled beyond recognition!

```python
def reshape_keras_incorrect(x_keras, shape_torch):
    shape_keras = list(reversed(shape_torch))
    return x_keras.reshape(shape_keras)

y_keras = reshape_keras_incorrect(x_keras, (6, 2))
print('y_keras:\n', y_keras)
print('Is correct:', np.array_equal(y_keras.transpose(), y_torch))
```

```console
y_keras:
 [[ 1  5  9  2  6 10]
 [ 3  7 11  4  8 12]]
Is correct: False
```

To get it to work correctly for all shapes, we have to perform `reshape` on the original, non-transposed tensor, i.e. prepare the input beforehand.
We also transpose the output so it can later be passed to a Keras convolution.

```python
def reshape_keras(x_keras, shape_torch):
    x_torch = x_keras.transpose()
    y_torch = x_torch.reshape(shape_torch)
    y_keras = y_torch.transpose()
    return y_keras

y_keras = reshape_keras(x_keras, (6, 2))
print('y_keras:\n', y_keras)
print('Is correct:', np.array_equal(y_keras.transpose(), y_torch))
```

```console
y_keras:
 [[ 1  3  5  7  9 11]
 [ 2  4  6  8 10 12]]
Is correct: True
```

Is there a better way? Yes, if we can afford to modify the Pytorch model. Remember the [pick-your-poison](#implementation-mismatch-pick-your-poison) section? Same thing.

> :dart:
> Here's the general recipe to get rid of redundant transpositions:
> 1) permute inputs to Tensorflow channel order
> 2) define the subgraph as you want to see it in the converted model
> 3) permute outputs back to Pytorch channel order

Solving the transposition problem with more transpositions, huh?
No mystery here: two adjacent permutations are easily fused into one. Being opposites, `pytorch->tensorflow` and `tensorflow->pytorch` permutations just cancel each other out.

But wait, is Nobuco sophisticated enough to perform global optimization? It's not, and it doesn't.
Instead, when it sees a `permute` op, it checks whether the op can be construed as a transposition from Pytorch to Keras or vice versa.
If so, no work is done on the input tensor; only its metadata (the `channel_order` field) is changed.

```python
class MyModule(nn.Module):          ################################
    # ...                           # How it's translated to Keras #
                                    ################################
    def forward(self, x):           #
        x = self.conv1(x)           # <- Output is in TENSORFLOW order
                                    #
        # BCH -> BHC                #
        x = x.permute(0, 2, 1)      # <- No actual transposition done, just order marked as PYTORCH
        # Reshape transposed input  #
        x = x.reshape(-1, 2, 6)     # <- Expects input in PYTORCH order, no transposition needed
                                    #    Output is in PYTORCH order
        # BHC -> BCH                #
        x = x.permute(0, 2, 1)      # <- No actual transposition done, just order marked as TENSORFLOW
                                    #
        x = self.conv2(x)           # <- Expects input in TENSORFLOW order, no transposition needed
        return x                    #
```

<p align="center">
  <img src="https://raw.githubusercontent.com/AlexanderLutsenko/nobuco/master/docs/reshape2.png" width="15%">
</p>
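For reference, here's the annotated sketch written out as a complete module (a plain restatement; the name `MyModuleFast` is our own, and note the reshape target becomes `(-1, 2, 6)` because it now operates on the transposed layout):

```python
class MyModuleFast(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv1d(3, 3, kernel_size=1)
        self.conv2 = nn.Conv1d(6, 6, kernel_size=1)

    def forward(self, x):
        x = self.conv1(x)
        x = x.permute(0, 2, 1)   # BCH -> BHC: cancels conv1's output transposition
        x = x.reshape(-1, 2, 6)  # reshape the transposed layout directly
        x = x.permute(0, 2, 1)   # BHC -> BCH: cancels conv2's input transposition
        x = self.conv2(x)
        return x
```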
</details>

---

### Acknowledgements

The slice-assign converter is based on [Zaccharie Ramzi's tf-slice-assign script](https://github.com/zaccharieramzi/tf-slice-assign).
    "bugtrack_url": null,
    "license": null,
    "summary": "Pytorch to Tensorflow conversion made intuitive",
    "version": "0.14.3",
    "project_urls": {
        "Bug Tracker": "https://github.com/AlexanderLutsenko/nobuco/issues",
        "Homepage": "https://github.com/AlexanderLutsenko/nobuco"
    },
    "split_keywords": [
        "pytorch",
        " tensorflow",
        " keras",
        " converter"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "09d1e6253509b9d1454100ba3083ce3f70e209fdf1c0d43215618715bfbc2238",
                "md5": "8b10bdf8513de0096643af6d7c793e99",
                "sha256": "97e013c5db06b2744ed54d42a04222eb5b6e6ed1dd6e1abde77f4a62bd00f798"
            },
            "downloads": -1,
            "filename": "nobuco-0.14.3-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "8b10bdf8513de0096643af6d7c793e99",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 78619,
            "upload_time": "2024-05-16T17:00:24",
            "upload_time_iso_8601": "2024-05-16T17:00:24.666360Z",
            "url": "https://files.pythonhosted.org/packages/09/d1/e6253509b9d1454100ba3083ce3f70e209fdf1c0d43215618715bfbc2238/nobuco-0.14.3-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "1457d6ad652dc4f88d9e5a07cb0ab019dd8a33113b15c2eb9890a47725037975",
                "md5": "724892fbd5015a90773bf9a6aff8e1ee",
                "sha256": "b222e5c7075dcc9716bc18b07d1b552644c30ed3e34659936839e14b4d4e73e2"
            },
            "downloads": -1,
            "filename": "nobuco-0.14.3.tar.gz",
            "has_sig": false,
            "md5_digest": "724892fbd5015a90773bf9a6aff8e1ee",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 81790,
            "upload_time": "2024-05-16T17:00:28",
            "upload_time_iso_8601": "2024-05-16T17:00:28.375653Z",
            "url": "https://files.pythonhosted.org/packages/14/57/d6ad652dc4f88d9e5a07cb0ab019dd8a33113b15c2eb9890a47725037975/nobuco-0.14.3.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-05-16 17:00:28",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "AlexanderLutsenko",
    "github_project": "nobuco",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [],
    "lcname": "nobuco"
}
        
Elapsed time: 0.38743s