torch-tensorrt


Name: torch-tensorrt
Version: 2.2.0
Summary: Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch
Upload time: 2024-02-14 01:49:39
Requires Python: >=3.8
License: Copyright (c) 2020-present, NVIDIA CORPORATION. All rights reserved. Copyright (c) Meta Platforms, Inc. and affiliates. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Keywords: pytorch, torch, tensorrt, trt, ai, artificial intelligence, ml, machine learning, dl, deep learning, compiler, dynamo, torchscript, inference
# torch_tensorrt

> Ahead of Time (AOT) compiling for PyTorch JIT

Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler: before you deploy your TorchScript code, you go through an explicit compile step that converts a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate seamlessly into the JIT runtime. After compilation, using the optimized graph should feel no different from running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you can specify the operating precision (FP32/FP16/INT8) and other settings for your module.

## Example Usage

``` python
import torch_tensorrt

...

trt_ts_module = torch_tensorrt.compile(torch_script_module,
    inputs = [example_tensor, # Provide example tensor for input shape or...
        torch_tensorrt.Input( # Specify input object with shape and dtype
            min_shape=[1, 3, 224, 224],
            opt_shape=[1, 3, 512, 512],
            max_shape=[1, 3, 1024, 1024],
            # For a static size use shape=[1, 3, 224, 224]
            dtype=torch.half) # Datatype of input tensor. Allowed options: torch.(float|half|int8|int32|bool)
    ],
    enabled_precisions = {torch.half}) # Run with FP16

result = trt_ts_module(input_data) # run inference
torch.jit.save(trt_ts_module, "trt_torchscript_module.ts") # save the TRT-embedded TorchScript

```
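For deployment, the saved module can later be reloaded without the original model definition. Below is a minimal sketch; the file name matches the `torch.jit.save` call above, while the input shape, dtype, and device are assumptions:

``` python
import torch
import torch_tensorrt  # importing registers the TensorRT engine ops needed to deserialize

# Reload the TRT-embedded TorchScript module saved above
trt_ts_module = torch.jit.load("trt_torchscript_module.ts").cuda()

# Assumed example input: FP16, inside the [min_shape, max_shape] range compiled above
input_data = torch.rand(1, 3, 512, 512, dtype=torch.half, device="cuda")
result = trt_ts_module(input_data)
```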

## Installation

| ABI / Platform                          | Installation command                                             |
| --------------------------------------- | ---------------------------------------------------------------- |
| Pre-CXX11 ABI (Linux x86_64)            | `python3 setup.py install`                                       |
| CXX11 ABI (Linux x86_64)                | `python3 setup.py install --use-cxx11-abi`                       |
| Pre-CXX11 ABI (Jetson platform aarch64) | `python3 setup.py install --jetpack-version 4.6`                 |
| CXX11 ABI (Jetson platform aarch64)     | `python3 setup.py install --jetpack-version 4.6 --use-cxx11-abi` |

On the Linux x86_64 platform, PyTorch libraries default to the pre-CXX11 ABI, so please use `python3 setup.py install`.

On Jetson platforms, NVIDIA hosts <a href="https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048">pre-built PyTorch wheel files</a>. These wheels are built with the CXX11 ABI, so on Jetson platforms please use `python3 setup.py install --jetpack-version 4.6 --use-cxx11-abi`.
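
Once installed, a quick sanity check is to import the package and print its version; an ABI mismatch between the wheel and your local libtorch typically surfaces here as an import error:

``` python
import torch
import torch_tensorrt

print(torch.__version__)           # the PyTorch build the wheel must match
print(torch_tensorrt.__version__)  # e.g. "2.2.0"
```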

## Under the Hood

When a traced module is provided to Torch-TensorRT, the compiler takes the internal representation and transforms it into one like this:

```
graph(%input.2 : Tensor):
    %2 : Float(84, 10) = prim::Constant[value=<Tensor>]()
    %3 : Float(120, 84) = prim::Constant[value=<Tensor>]()
    %4 : Float(576, 120) = prim::Constant[value=<Tensor>]()
    %5 : int = prim::Constant[value=-1]() # x.py:25:0
    %6 : int[] = prim::Constant[value=annotate(List[int], [])]()
    %7 : int[] = prim::Constant[value=[2, 2]]()
    %8 : int[] = prim::Constant[value=[0, 0]]()
    %9 : int[] = prim::Constant[value=[1, 1]]()
    %10 : bool = prim::Constant[value=1]() # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
    %11 : int = prim::Constant[value=1]() # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0
    %12 : bool = prim::Constant[value=0]() # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0
    %self.classifer.fc3.bias : Float(10) = prim::Constant[value= 0.0464  0.0383  0.0678  0.0932  0.1045 -0.0805 -0.0435 -0.0818  0.0208 -0.0358 [ CUDAFloatType{10} ]]()
    %self.classifer.fc2.bias : Float(84) = prim::Constant[value=<Tensor>]()
    %self.classifer.fc1.bias : Float(120) = prim::Constant[value=<Tensor>]()
    %self.feat.conv2.weight : Float(16, 6, 3, 3) = prim::Constant[value=<Tensor>]()
    %self.feat.conv2.bias : Float(16) = prim::Constant[value=<Tensor>]()
    %self.feat.conv1.weight : Float(6, 1, 3, 3) = prim::Constant[value=<Tensor>]()
    %self.feat.conv1.bias : Float(6) = prim::Constant[value= 0.0530 -0.1691  0.2802  0.1502  0.1056 -0.1549 [ CUDAFloatType{6} ]]()
    %input0.4 : Tensor = aten::_convolution(%input.2, %self.feat.conv1.weight, %self.feat.conv1.bias, %9, %8, %9, %12, %8, %11, %12, %12, %10) # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
    %input0.5 : Tensor = aten::relu(%input0.4) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0
    %input1.2 : Tensor = aten::max_pool2d(%input0.5, %7, %6, %8, %9, %12) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0
    %input0.6 : Tensor = aten::_convolution(%input1.2, %self.feat.conv2.weight, %self.feat.conv2.bias, %9, %8, %9, %12, %8, %11, %12, %12, %10) # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0
    %input2.1 : Tensor = aten::relu(%input0.6) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0
    %x.1 : Tensor = aten::max_pool2d(%input2.1, %7, %6, %8, %9, %12) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0
    %input.1 : Tensor = aten::flatten(%x.1, %11, %5) # x.py:25:0
    %27 : Tensor = aten::matmul(%input.1, %4)
    %28 : Tensor = trt::const(%self.classifer.fc1.bias)
    %29 : Tensor = aten::add_(%28, %27, %11)
    %input0.2 : Tensor = aten::relu(%29) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0
    %31 : Tensor = aten::matmul(%input0.2, %3)
    %32 : Tensor = trt::const(%self.classifer.fc2.bias)
    %33 : Tensor = aten::add_(%32, %31, %11)
    %input1.1 : Tensor = aten::relu(%33) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0
    %35 : Tensor = aten::matmul(%input1.1, %2)
    %36 : Tensor = trt::const(%self.classifer.fc3.bias)
    %37 : Tensor = aten::add_(%36, %35, %11)
    return (%37)
(CompileGraph)
```

The graph has now been transformed from a collection of modules (much like how your PyTorch modules are collections of modules, each managing their own parameters) into a single graph
with the parameters inlined and all of the operations laid out. Torch-TensorRT has also run a number of optimizations and mappings to make the graph easier to translate
to TensorRT. From here the compiler can assemble the TensorRT engine by following the dataflow through the graph.
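
For orientation, a module that would produce a dump like the one above is sketched below. This is a hypothetical reconstruction inferred from the weight shapes in the graph (e.g. `Float(576, 120)` for `fc1`, `Float(6, 1, 3, 3)` for `conv1`), not the original source:

``` python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Features(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)   # matches %self.feat.conv1.weight : Float(6, 1, 3, 3)
        self.conv2 = nn.Conv2d(6, 16, 3)  # matches %self.feat.conv2.weight : Float(16, 6, 3, 3)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        return F.max_pool2d(F.relu(self.conv2(x)), 2)

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(576, 120)  # 576 = 16 * 6 * 6 after the conv/pool stack
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = torch.flatten(x, 1, -1)     # the aten::flatten seen in the dump
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

class LeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.feat = Features()
        self.classifer = Classifier()   # attribute name spelled as in the dump

    def forward(self, x):
        return self.classifer(self.feat(x))

# Tracing on a 32x32 single-channel CUDA input yields a graph like the one above
traced = torch.jit.trace(LeNet().eval().cuda(), torch.rand(1, 1, 32, 32).cuda())
print(traced.graph)
```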

When the graph construction phase is complete, Torch-TensorRT produces a serialized TensorRT engine. From here, depending on the API, this engine is either returned to the user or moved back into the graph
construction phase, where Torch-TensorRT creates a JIT module to execute the TensorRT engine; that engine will be instantiated and managed by the Torch-TensorRT runtime.
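
The "returned to the user" path corresponds to APIs such as `torch_tensorrt.convert_method_to_trt_engine`, which hands back the serialized engine (bytes in recent releases) rather than a wrapped module. A hedged sketch, reusing `torch_script_module` from the earlier example; the output file name is arbitrary:

``` python
import torch
import torch_tensorrt

# Convert only the "forward" method and receive the serialized TensorRT
# engine itself instead of a TorchScript module wrapping it
serialized_engine = torch_tensorrt.convert_method_to_trt_engine(
    torch_script_module,  # the torch.jit.ScriptModule from the earlier example
    method_name="forward",
    inputs=[torch_tensorrt.Input(shape=[1, 3, 224, 224], dtype=torch.half)],
    enabled_precisions={torch.half},
)
with open("forward.engine", "wb") as f:
    f.write(serialized_engine)
```

The alternative path, shown next, embeds the engine in a TorchScript graph instead.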

Here is the graph that you get back after compilation is complete:

```
graph(%self.1 : __torch__.___torch_mangle_10.LeNet_trt,
    %2 : Tensor):
    %1 : int = prim::Constant[value=94106001690080]()
    %3 : Tensor = trt::execute_engine(%1, %2)
    return (%3)
(AddEngineToGraph)
```

You can see the call where the engine is executed: a constant holding the engine's ID tells the JIT how to find the engine, and the input tensor is fed to TensorRT.
The engine represents exactly the same calculations as running the original PyTorch module, but optimized to run on your GPU.

Torch-TensorRT converts from TorchScript by generating layers or subgraphs in correspondence with the instructions seen in the graph. Converters are small modules of code used to map one specific
operation to a layer or subgraph in TensorRT. Not all operations are supported, but if you need to implement one, you can do so in C++.

## Registering Custom Converters

Operations are mapped to TensorRT through the use of modular converters: functions that take a node from the JIT graph and produce an equivalent layer or subgraph in TensorRT. Torch-TensorRT
ships with a library of these converters stored in a registry, and the appropriate converter is executed depending on the node being parsed. For instance, an `aten::relu(%input0.4)` instruction will trigger the
ReLU converter, producing an activation layer in the TensorRT graph. But since this library is not exhaustive, you may need to write your own converters to get Torch-TensorRT to support your module.

Shipped with the Torch-TensorRT distribution are the internal core API headers. You can therefore access the converter registry and add a converter for the op you need.

For example, if we try to compile a graph with a build of Torch-TensorRT that doesn’t support the flatten operation (`aten::flatten`), we may see this error:

```
terminate called after throwing an instance of 'torch_tensorrt::Error'
what():  [enforce fail at core/conversion/conversion.cpp:109] Expected converter to be true but got false
Unable to convert node: %input.1 : Tensor = aten::flatten(%x.1, %11, %5) # x.py:25:0 (conversion.AddLayer)
Schema: aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)
Converter for aten::flatten requested, but no such converter was found.
If you need a converter for this operator, you can try implementing one yourself
or request a converter: https://www.github.com/NVIDIA/Torch-TensorRT/issues
```

We can register a converter for this operator in our application. All of the tools required to build a converter can be imported by including `Torch-TensorRT/core/conversion/converters/converters.h`.
We start by creating an instance of the self-registering class `torch_tensorrt::core::conversion::converters::RegisterNodeConversionPatterns()`, which registers converters in the global converter
registry. It associates a function schema like `aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)` with a lambda that takes the state of the conversion, the
node/operation to convert, and all of the inputs to the node, and produces a new layer in the TensorRT network as a side effect. Arguments are passed as a vector of inspectable unions
of TensorRT `ITensor`s and Torch `IValue`s, in the order the arguments are listed in the schema.

Below is an implementation of an `aten::flatten` converter that we can use in our application. You have full access to the Torch and TensorRT libraries in the converter implementation, so, for example,
we can quickly get the output size by just running the operation in PyTorch instead of implementing the full calculation ourselves, as we do below for this flatten converter.

```c++
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"
#include "torch_tensorrt/core/conversion/converters/converters.h"

// Self-registering pattern: constructing this static object adds the converter
// to the global registry when the library is loaded
static auto flatten_converter = torch_tensorrt::core::conversion::converters::RegisterNodeConversionPatterns()
    .pattern({
        "aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)",
        [](torch_tensorrt::core::conversion::ConversionCtx* ctx,
           const torch::jit::Node* n,
           torch_tensorrt::core::conversion::converters::args& args) -> bool {
            // Arguments arrive in schema order: self, start_dim, end_dim
            auto in = args[0].ITensor();
            auto start_dim = args[1].unwrapToInt();
            auto end_dim = args[2].unwrapToInt();
            // Compute the output shape by running flatten on a dummy tensor in PyTorch
            auto in_shape = torch_tensorrt::core::util::toVec(in->getDimensions());
            auto out_shape = torch::flatten(torch::rand(in_shape), start_dim, end_dim).sizes();

            // A flatten is just a reshape, expressed in TensorRT as a shuffle layer
            auto shuffle = ctx->net->addShuffle(*in);
            shuffle->setReshapeDimensions(torch_tensorrt::core::util::toDims(out_shape));
            shuffle->setName(torch_tensorrt::core::util::node_info(n).c_str());

            // Map the node's output value to the new layer's output tensor
            auto out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle->getOutput(0));
            return true;
        }
    });
```

To use this converter in Python, it is recommended to use PyTorch’s [C++ / CUDA Extension](https://pytorch.org/tutorials/advanced/cpp_extension.html#custom-c-and-cuda-extensions) template to wrap
your library of converters into a `.so` that you can load with `ctypes.CDLL()` in your Python application.
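
A minimal sketch of that loading step; `libflatten_converter.so` is a hypothetical name for whatever your extension build produces:

``` python
import ctypes

import torch_tensorrt

# Loading the shared library runs its static initializers, which register the
# flatten converter in the global converter registry as a side effect.
# "libflatten_converter.so" is a hypothetical placeholder for your built library.
ctypes.CDLL("libflatten_converter.so")

# Subsequent torch_tensorrt.compile(...) calls can now convert aten::flatten nodes.
```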

You can find more information on all the details of writing converters in the contributor documentation ([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html#writing-converters)). If you
find yourself with a large library of converter implementations, do consider upstreaming them; PRs are welcome, and it would be great for the community to benefit as well.

            

Raw data

            {
    "_id": null,
    "home_page": "",
    "name": "torch-tensorrt",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "",
    "keywords": "pytorch,torch,tensorrt,trt,ai,artificial intelligence,ml,machine learning,dl,deep learning,compiler,dynamo,torchscript,inference",
    "author": "",
    "author_email": "NVIDIA Corporation <narens@nvidia.com>",
    "download_url": "",
    "platform": null,
    "description": "# torch_tensorrt\n\n> Ahead of Time (AOT) compiling for PyTorch JIT\n\nTorch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA's TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch's Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into an module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extention and compiles modules that integrate into the JIT runtime seamlessly. After compilation using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT's suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.\n\n## Example Usage\n\n``` python\nimport torch_tensorrt\n\n...\n\ntrt_ts_module = torch_tensorrt.compile(torch_script_module,\n    inputs = [example_tensor, # Provide example tensor for input shape or...\n        torch_tensorrt.Input( # Specify input object with shape and dtype\n            min_shape=[1, 3, 224, 224],\n            opt_shape=[1, 3, 512, 512],\n            max_shape=[1, 3, 1024, 1024],\n            # For static size shape=[1, 3, 224, 224]\n            dtype=torch.half) # Datatype of input tensor. Allowed options torch.(float|half|int8|int32|bool)\n    ],\n    enabled_precisions = {torch.half}, # Run with FP16)\n\nresult = trt_ts_module(input_data) # run inference\ntorch.jit.save(trt_ts_module, \"trt_torchscript_module.ts\") # save the TRT embedded Torchscript\n\n```\n\n## Installation\n\n| ABI / Platform                          | Installation command                                         |\n| --------------------------------------- | ------------------------------------------------------------ |\n| Pre CXX11 ABI (Linux x86_64)            | python3 setup.py install                                     |\n| CXX ABI  (Linux x86_64)                 | python3 setup.py install --use-cxx11-abi                     |\n| Pre CXX11 ABI (Jetson platform aarch64) | python3 setup.py install --jetpack-version 4.6               |\n| CXX11 ABI (Jetson platform aarch64)     | python3 setup.py install --jetpack-version 4.6 --use-cxx11-abi |\n\nFor Linux x86_64 platform, Pytorch libraries default to pre cxx11 abi. So, please use `python3 setup.py install`.\n\nOn Jetson platforms, NVIDIA hosts <a href=\"https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048\">pre-built Pytorch wheel files</a>. These wheel files are built with CXX11 ABI. 
So on jetson platforms, please use `python3 setup.py install --jetpack-version 4.6 --use-cxx11-abi`\n\n## Under the Hood\n\nWhen a traced module is provided to Torch-TensorRT, the compiler takes the internal representation and transforms it into one like this:\n\n```\ngraph(%input.2 : Tensor):\n    %2 : Float(84, 10) = prim::Constant[value=<Tensor>]()\n    %3 : Float(120, 84) = prim::Constant[value=<Tensor>]()\n    %4 : Float(576, 120) = prim::Constant[value=<Tensor>]()\n    %5 : int = prim::Constant[value=-1]() # x.py:25:0\n    %6 : int[] = prim::Constant[value=annotate(List[int], [])]()\n    %7 : int[] = prim::Constant[value=[2, 2]]()\n    %8 : int[] = prim::Constant[value=[0, 0]]()\n    %9 : int[] = prim::Constant[value=[1, 1]]()\n    %10 : bool = prim::Constant[value=1]() # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0\n    %11 : int = prim::Constant[value=1]() # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0\n    %12 : bool = prim::Constant[value=0]() # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0\n    %self.classifer.fc3.bias : Float(10) = prim::Constant[value= 0.0464  0.0383  0.0678  0.0932  0.1045 -0.0805 -0.0435 -0.0818  0.0208 -0.0358 [ CUDAFloatType{10} ]]()\n    %self.classifer.fc2.bias : Float(84) = prim::Constant[value=<Tensor>]()\n    %self.classifer.fc1.bias : Float(120) = prim::Constant[value=<Tensor>]()\n    %self.feat.conv2.weight : Float(16, 6, 3, 3) = prim::Constant[value=<Tensor>]()\n    %self.feat.conv2.bias : Float(16) = prim::Constant[value=<Tensor>]()\n    %self.feat.conv1.weight : Float(6, 1, 3, 3) = prim::Constant[value=<Tensor>]()\n    %self.feat.conv1.bias : Float(6) = prim::Constant[value= 0.0530 -0.1691  0.2802  0.1502  0.1056 -0.1549 [ CUDAFloatType{6} ]]()\n    %input0.4 : Tensor = aten::_convolution(%input.2, %self.feat.conv1.weight, %self.feat.conv1.bias, %9, %8, %9, %12, %8, %11, %12, %12, %10) # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0\n    %input0.5 : Tensor = aten::relu(%input0.4) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0\n    %input1.2 : Tensor = aten::max_pool2d(%input0.5, %7, %6, %8, %9, %12) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0\n    %input0.6 : Tensor = aten::_convolution(%input1.2, %self.feat.conv2.weight, %self.feat.conv2.bias, %9, %8, %9, %12, %8, %11, %12, %12, %10) # ~/.local/lib/python3.6/site-packages/torch/nn/modules/conv.py:346:0\n    %input2.1 : Tensor = aten::relu(%input0.6) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0\n    %x.1 : Tensor = aten::max_pool2d(%input2.1, %7, %6, %8, %9, %12) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:539:0\n    %input.1 : Tensor = aten::flatten(%x.1, %11, %5) # x.py:25:0\n    %27 : Tensor = aten::matmul(%input.1, %4)\n    %28 : Tensor = trt::const(%self.classifer.fc1.bias)\n    %29 : Tensor = aten::add_(%28, %27, %11)\n    %input0.2 : Tensor = aten::relu(%29) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0\n    %31 : Tensor = aten::matmul(%input0.2, %3)\n    %32 : Tensor = trt::const(%self.classifer.fc2.bias)\n    %33 : Tensor = aten::add_(%32, %31, %11)\n    %input1.1 : Tensor = aten::relu(%33) # ~/.local/lib/python3.6/site-packages/torch/nn/functional.py:1063:0\n    %35 : Tensor = aten::matmul(%input1.1, %2)\n    %36 : Tensor = trt::const(%self.classifer.fc3.bias)\n    %37 : Tensor = aten::add_(%36, %35, %11)\n    return (%37)\n(CompileGraph)\n```\n\nThe graph has now been transformed from a 
collection of modules much like how your PyTorch Modules are collections of modules, each managing their own parameters into a single graph\nwith the parameters inlined into the graph and all of the operations laid out. Torch-TensorRT has also executed a number of optimizations and mappings to make the graph easier to translate\nto TensorRT. From here the compiler can assemble the TensorRT engine by following the dataflow through the graph.\n\nWhen the graph construction phase is complete, Torch-TensorRT produces a serialized TensorRT engine. From here depending on the API, this engine is returned to the user or moves into the graph\nconstruction phase. Here Torch-TensorRT creates a JIT Module to execute the TensorRT engine which will be instantiated and managed by the Torch-TensorRT runtime.\n\nHere is the graph that you get back after compilation is complete:\n\n```\ngraph(%self.1 : __torch__.___torch_mangle_10.LeNet_trt,\n    %2 : Tensor):\n    %1 : int = prim::Constant[value=94106001690080]()\n    %3 : Tensor = trt::execute_engine(%1, %2)\n    return (%3)\n(AddEngineToGraph)\n```\n\nYou can see the call where the engine is executed, based on a constant which is the ID of the engine, telling JIT how to find the engine and the input tensor which will be fed to TensorRT.\nThe engine represents the exact same calculations as what is done by running a normal PyTorch module but optimized to run on your GPU.\n\nTorch-TensorRT converts from TorchScript by generating layers or subgraphs in correspondance with instructions seen in the graph. Converters are small modules of code used to map one specific\noperation to a layer or subgraph in TensorRT. Not all operations are support, but if you need to implement one, you can in C++.\n\n## Registering Custom Converters\n\nOperations are mapped to TensorRT through the use of modular converters, a function that takes a node from a the JIT graph and produces an equivalent layer or subgraph in TensorRT. Torch-TensorRT\nships with a library of these converters stored in a registry, that will be executed depending on the node being parsed. For instance a `aten::relu(%input0.4)` instruction will trigger the\nrelu converter to be run on it, producing an activation layer in the TensorRT graph. But since this library is not exhaustive you may need to write your own to get Torch-TensorRT to support your module.\n\nShipped with the Torch-TensorRT distribution are the internal core API headers. You can therefore access the converter registry and add a converter for the op you need.\n\nFor example, if we try to compile a graph with a build of Torch-TensorRT that doesn\u2019t support the flatten operation (`aten::flatten`) you may see this error:\n\n```\nterminate called after throwing an instance of 'torch_tensorrt::Error'\nwhat():  [enforce fail at core/conversion/conversion.cpp:109] Expected converter to be true but got false\nUnable to convert node: %input.1 : Tensor = aten::flatten(%x.1, %11, %5) # x.py:25:0 (conversion.AddLayer)\nSchema: aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)\nConverter for aten::flatten requested, but no such converter was found.\nIf you need a converter for this operator, you can try implementing one yourself\nor request a converter: https://www.github.com/NVIDIA/Torch-TensorRT/issues\n```\n\nWe can register a converter for this operator in our application. 
All of the tools required to build a converter can be imported by including `Torch-TensorRT/core/conversion/converters/converters.h`.\nWe start by creating an instance of the self-registering `class torch_tensorrt::core::conversion::converters::RegisterNodeConversionPatterns()` which will register converters in the global converter\nregistry, associating a function schema like `aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)` with a lambda that will take the state of the conversion, the\nnode/operation in question to convert and all of the inputs to the node and produces as a side effect a new layer in the TensorRT network. Arguments are passed as a vector of inspectable unions\nof TensorRT ITensors and Torch IValues in the order arguments are listed in the schema.\n\nBelow is a implementation of a `aten::flatten` converter that we can use in our application. You have full access to the Torch and TensorRT libraries in the converter implementation. So for example\nwe can quickly get the output size by just running the operation in PyTorch instead of implementing the full calculation outself like we do below for this flatten converter.\n\n```c++\n#include \"torch/script.h\"\n#include \"torch_tensorrt/torch_tensorrt.h\"\n#include \"torch_tensorrt/core/conversion/converters/converters.h\"\n\nstatic auto flatten_converter = torch_tensorrt::core::conversion::converters::RegisterNodeConversionPatterns()\n    .pattern({\n        \"aten::flatten.using_ints(Tensor self, int start_dim=0, int end_dim=-1) -> (Tensor)\",\n        [](torch_tensorrt::core::conversion::ConversionCtx* ctx,\n           const torch::jit::Node* n,\n           torch_tensorrt::core::conversion::converters::args& args) -> bool {\n            auto in = args[0].ITensor();\n            auto start_dim = args[1].unwrapToInt();\n            auto end_dim = args[2].unwrapToInt();\n            auto in_shape = torch_tensorrt::core::util::toVec(in->getDimensions());\n            auto out_shape = torch::flatten(torch::rand(in_shape), start_dim, end_dim).sizes();\n\n            auto shuffle = ctx->net->addShuffle(*in);\n            shuffle->setReshapeDimensions(torch_tensorrt::core::util::toDims(out_shape));\n            shuffle->setName(torch_tensorrt::core::util::node_info(n).c_str());\n\n            auto out_tensor = ctx->AssociateValueAndTensor(n->outputs()[0], shuffle->getOutput(0));\n            return true;\n        }\n    });\n```\n\nTo use this converter in Python, it is recommended to use PyTorch\u2019s [C++ / CUDA Extention](https://pytorch.org/tutorials/advanced/cpp_extension.html#custom-c-and-cuda-extensions) template to wrap\nyour library of converters into a `.so` that you can load with `ctypes.CDLL()` in your Python application.\n\nYou can find more information on all the details of writing converters in the contributors documentation ([Writing Converters](https://nvidia.github.io/Torch-TensorRT/contributors/writing_converters.html#writing-converters)). If you\nfind yourself with a large library of converter implementations, do consider upstreaming them, PRs are welcome and it would be great for the community to benefit as well.\n",
    "bugtrack_url": null,
    "license": "Copyright (c) 2020-present, NVIDIA CORPORATION. All rights reserved. Copyright (c) Meta Platforms, Inc. and affiliates. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.",
    "summary": "Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch",
    "version": "2.2.0",
    "project_urls": {
        "Changelog": "https://github.com/pytorch/tensorrt/releases",
        "Documentation": "https://pytorch.org/tensorrt",
        "Homepage": "https://pytorch.org/tensorrt",
        "Repository": "https://github.com/pytorch/tensorrt.git"
    },
    "split_keywords": [
        "pytorch",
        "torch",
        "tensorrt",
        "trt",
        "ai",
        "artificial intelligence",
        "ml",
        "machine learning",
        "dl",
        "deep learning",
        "compiler",
        "dynamo",
        "torchscript",
        "inference"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "c8841bbad2b6d243ecff277410fa8f502019f435686f8f1a879eb0e9f0b9c17c",
                "md5": "6a4e2fcc20d781a6a454f29b5e6e6b79",
                "sha256": "536742bdff257d2c26a52a1fd79712043a73feff5aeffe71fe3fb98b740425a5"
            },
            "downloads": -1,
            "filename": "torch_tensorrt-2.2.0-cp310-cp310-manylinux_2_34_x86_64.whl",
            "has_sig": false,
            "md5_digest": "6a4e2fcc20d781a6a454f29b5e6e6b79",
            "packagetype": "bdist_wheel",
            "python_version": "cp310",
            "requires_python": ">=3.8",
            "size": 18433957,
            "upload_time": "2024-02-14T01:49:39",
            "upload_time_iso_8601": "2024-02-14T01:49:39.684714Z",
            "url": "https://files.pythonhosted.org/packages/c8/84/1bbad2b6d243ecff277410fa8f502019f435686f8f1a879eb0e9f0b9c17c/torch_tensorrt-2.2.0-cp310-cp310-manylinux_2_34_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "13b6b8db0755aa9bd8f6f2f321a1173f6ba768f71db40218d61f2756ad9e687d",
                "md5": "04031f66554eb60283ded8aa0eb7fada",
                "sha256": "7c66e44b760104bb14207467a0f831b28ac37cd3a24e3372b6df8fd77f6dd798"
            },
            "downloads": -1,
            "filename": "torch_tensorrt-2.2.0-cp311-cp311-manylinux_2_34_x86_64.whl",
            "has_sig": false,
            "md5_digest": "04031f66554eb60283ded8aa0eb7fada",
            "packagetype": "bdist_wheel",
            "python_version": "cp311",
            "requires_python": ">=3.8",
            "size": 18455721,
            "upload_time": "2024-02-14T01:49:43",
            "upload_time_iso_8601": "2024-02-14T01:49:43.705834Z",
            "url": "https://files.pythonhosted.org/packages/13/b6/b8db0755aa9bd8f6f2f321a1173f6ba768f71db40218d61f2756ad9e687d/torch_tensorrt-2.2.0-cp311-cp311-manylinux_2_34_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "8b69d36751c9953d036cab3c468cb5b00e737f084057f367bc6691b860a328c7",
                "md5": "1e5b25ef7dc6650e24383a47db403669",
                "sha256": "e3ec27dc5344e1b25ed94733f9fd3be16aeb5ddbd30b105bfd960fbf5f307abd"
            },
            "downloads": -1,
            "filename": "torch_tensorrt-2.2.0-cp38-cp38-manylinux_2_34_x86_64.whl",
            "has_sig": false,
            "md5_digest": "1e5b25ef7dc6650e24383a47db403669",
            "packagetype": "bdist_wheel",
            "python_version": "cp38",
            "requires_python": ">=3.8",
            "size": 18418508,
            "upload_time": "2024-02-14T01:49:47",
            "upload_time_iso_8601": "2024-02-14T01:49:47.479688Z",
            "url": "https://files.pythonhosted.org/packages/8b/69/d36751c9953d036cab3c468cb5b00e737f084057f367bc6691b860a328c7/torch_tensorrt-2.2.0-cp38-cp38-manylinux_2_34_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "abe2542acf99d918df4151287e5afd5bf8bc9710fda4ed3333886d9f2ae6208e",
                "md5": "8f877a4c89f484ec996f242fc6fe6fc2",
                "sha256": "915556ea76ecf4ea43b80bfe249e97cc45009628e5112d53c381b588dbb73285"
            },
            "downloads": -1,
            "filename": "torch_tensorrt-2.2.0-cp39-cp39-manylinux_2_34_x86_64.whl",
            "has_sig": false,
            "md5_digest": "8f877a4c89f484ec996f242fc6fe6fc2",
            "packagetype": "bdist_wheel",
            "python_version": "cp39",
            "requires_python": ">=3.8",
            "size": 18412224,
            "upload_time": "2024-02-14T01:49:51",
            "upload_time_iso_8601": "2024-02-14T01:49:51.192079Z",
            "url": "https://files.pythonhosted.org/packages/ab/e2/542acf99d918df4151287e5afd5bf8bc9710fda4ed3333886d9f2ae6208e/torch_tensorrt-2.2.0-cp39-cp39-manylinux_2_34_x86_64.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-02-14 01:49:39",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "pytorch",
    "github_project": "tensorrt",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "torch-tensorrt"
}
        