torch-tensorrt


Name: torch-tensorrt
Version: 2.5.0
Summary: Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch
Upload time: 2024-10-18 01:21:50
Requires Python: >=3.8
Keywords: pytorch, torch, tensorrt, trt, ai, artificial intelligence, ml, machine learning, dl, deep learning, compiler, dynamo, torchscript, inference
License: Copyright (c) 2020-present, NVIDIA CORPORATION. All rights reserved. Copyright (c) Meta Platforms, Inc. and affiliates. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Requirements: no requirements were recorded.
            <div align="center">

Torch-TensorRT
===========================
<h4> Easily achieve the best inference performance for any PyTorch model on the NVIDIA platform. </h4>

[![Documentation](https://img.shields.io/badge/docs-master-brightgreen)](https://nvidia.github.io/Torch-TensorRT/)
[![pytorch](https://img.shields.io/badge/PyTorch-2.4-green)](https://www.python.org/downloads/release/python-31013/)
[![cuda](https://img.shields.io/badge/CUDA-12.4-green)](https://developer.nvidia.com/cuda-downloads)
[![trt](https://img.shields.io/badge/TensorRT-10.3.0-green)](https://github.com/nvidia/tensorrt-llm)
[![license](https://img.shields.io/badge/license-BSD--3--Clause-blue)](./LICENSE)
[![linux_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-linux.yml)
[![windows_tests](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml/badge.svg)](https://github.com/pytorch/TensorRT/actions/workflows/build-test-windows.yml)

---
<div align="left">

Torch-TensorRT brings the power of TensorRT to PyTorch. Accelerate inference latency by up to 5x compared to eager execution in just one line of code.
</div></div>

## Installation
Stable versions of Torch-TensorRT are published on PyPI:
```bash
pip install torch-tensorrt
```

Nightly versions of Torch-TensorRT are published on the PyTorch package index:
```bash
pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu124
```

Torch-TensorRT is also distributed in the ready-to-run [NVIDIA NGC PyTorch Container](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch), which includes all dependencies at the proper versions, along with example notebooks.

For more advanced installation methods, please see [here](https://pytorch.org/TensorRT/getting_started/installation.html).

## Quickstart

### Option 1: torch.compile
You can use Torch-TensorRT anywhere you use `torch.compile`:

```python
import torch
import torch_tensorrt

model = MyModel().eval().cuda() # define your model here
x = torch.randn((1, 3, 224, 224)).cuda() # define what the inputs to the model will look like

optimized_model = torch.compile(model, backend="tensorrt")
optimized_model(x) # compiled on first run

optimized_model(x) # this will be fast!
```
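To quantify the speedup, you can time the compiled module against eager execution. Below is a minimal stdlib-only timing helper; the `benchmark` function is our own sketch, not part of the Torch-TensorRT API, and for GPU models you would additionally need to synchronize (e.g. `torch.cuda.synchronize()`) before reading the clock.

```python
import time

def benchmark(fn, *args, warmup=3, iters=10):
    """Return the mean wall-clock latency of fn(*args) in seconds."""
    # Warm-up runs: the first call triggers compilation for
    # torch.compile-style backends, so it must not be timed.
    for _ in range(warmup):
        fn(*args)
    start = time.perf_counter()
    for _ in range(iters):
        fn(*args)
    return (time.perf_counter() - start) / iters
```

Comparing `benchmark(model, x)` against `benchmark(optimized_model, x)` gives a rough picture of the latency improvement on your own hardware.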

### Option 2: Export
If you want to optimize your model ahead-of-time and/or deploy in a C++ environment, Torch-TensorRT provides an export-style workflow that serializes an optimized module. This module can be deployed in PyTorch or with libtorch (i.e. without a Python dependency).

#### Step 1: Optimize + serialize
```python
import torch
import torch_tensorrt

model = MyModel().eval().cuda() # define your model here
inputs = [torch.randn((1, 3, 224, 224)).cuda()] # define a list of representative inputs here

trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
torch_tensorrt.save(trt_gm, "trt.ep", inputs=inputs) # PyTorch only supports Python runtime for an ExportedProgram. For C++ deployment, use a TorchScript file
torch_tensorrt.save(trt_gm, "trt.ts", output_format="torchscript", inputs=inputs)
```

#### Step 2: Deploy
##### Deployment in PyTorch:
```python
import torch
import torch_tensorrt

inputs = [torch.randn((1, 3, 224, 224)).cuda()] # your inputs go here

# You can run this in a new python session!
model = torch.export.load("trt.ep").module()
# model = torch_tensorrt.load("trt.ep").module() # this also works
model(*inputs)
```
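After reloading a serialized module, it is worth sanity-checking that its outputs match the original model within a tolerance, since TensorRT may use fused or reduced-precision kernels that differ slightly from eager PyTorch. A framework-agnostic sketch of such a check (our own helper, assuming the outputs have been flattened to plain lists of floats, e.g. via `tensor.flatten().tolist()`):

```python
import math

def outputs_close(ref, test, rel_tol=1e-3, abs_tol=1e-3):
    """Element-wise closeness check between two flat lists of floats."""
    if len(ref) != len(test):
        return False
    # Loose tolerances: TensorRT kernels are not bit-identical to eager mode.
    return all(
        math.isclose(r, t, rel_tol=rel_tol, abs_tol=abs_tol)
        for r, t in zip(ref, test)
    )
```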

##### Deployment in C++:
```cpp
#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"

auto trt_mod = torch::jit::load("trt.ts");
auto input_tensor = [...]; // fill this with your inputs
auto results = trt_mod.forward({input_tensor});
```

## Further resources
- [Up to 50% faster Stable Diffusion inference with one line of code](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/torch_compile_stable_diffusion.html#sphx-glr-tutorials-rendered-examples-dynamo-torch-compile-stable-diffusion-py)
- [Optimize LLMs from Hugging Face with Torch-TensorRT]() \[coming soon\]
- [Run your model in FP8 with Torch-TensorRT](https://pytorch.org/TensorRT/tutorials/_rendered_examples/dynamo/vgg16_fp8_ptq.html)
- [Tools to resolve graph breaks and boost performance]() \[coming soon\]
- [Tech Talk (GTC '23)](https://www.nvidia.com/en-us/on-demand/session/gtcspring23-s51714/)
- [Documentation](https://nvidia.github.io/Torch-TensorRT/)


## Platform Support

| Platform            | Support                                          |
| ------------------- | ------------------------------------------------ |
| Linux AMD64 / GPU   | **Supported**                                    |
| Windows / GPU       | **Supported (Dynamo only)**                      |
| Linux aarch64 / GPU | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** |
| Linux aarch64 / DLA | **Native Compilation Supported on JetPack-4.4+ (use v1.0.0 for the time being)** |
| Linux ppc64le / GPU | Not supported                                    |

> Note: Refer to the [NVIDIA L4T PyTorch NGC container](https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch) for PyTorch libraries on JetPack.

### Dependencies

The following dependencies were used to verify the test cases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.

- Bazel 6.3.2
- Libtorch 2.5.0.dev (latest nightly) (built with CUDA 12.4)
- CUDA 12.4
- TensorRT 10.3.0.26

## Deprecation Policy

Deprecation is used to inform developers that some APIs and tools are no longer recommended for use. Beginning with version 2.3, Torch-TensorRT has the following deprecation policy:

Deprecation notices are communicated in the Release Notes. Deprecated API functions will have a statement in the source documenting when they were deprecated. Deprecated methods and classes will issue deprecation warnings at runtime, if they are used. Torch-TensorRT provides a 6-month migration period after the deprecation. APIs and tools continue to work during the migration period. After the migration period ends, APIs and tools are removed in a manner consistent with semantic versioning.
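The runtime-warning behavior described above can be illustrated with a small decorator. This is a hypothetical sketch, not Torch-TensorRT's actual implementation; the decorator name and its parameters are our own:

```python
import functools
import warnings

def deprecated(since, remove_in):
    """Mark a function deprecated; callers get a DeprecationWarning at runtime."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            warnings.warn(
                f"{fn.__name__} is deprecated since {since} "
                f"and will be removed in {remove_in}",
                DeprecationWarning,
                stacklevel=2,  # point the warning at the caller, not this wrapper
            )
            return fn(*args, **kwargs)
        return inner
    return wrap

@deprecated(since="2.3", remove_in="2.5")
def old_api(x):
    return x + 1
```

The decorated function keeps working throughout the migration period; only the warning changes, which matches the policy's intent.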

## Contributing

Take a look at the [CONTRIBUTING.md](CONTRIBUTING.md)


## License

The Torch-TensorRT license can be found in the [LICENSE](./LICENSE) file. It is licensed under a BSD-style license.

            
