| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| tensorrt-cu11-bindings | 10.0.0b6 | A high performance deep learning inference library | 2024-03-26 22:19:39 |
| torch-tensorrt | 2.2.0 | Torch-TensorRT is a package which allows users to automatically compile PyTorch and TorchScript modules to TensorRT while remaining in PyTorch | 2024-02-14 01:49:39 |
| tritony | 0.0.16 | Tiny configuration for Triton Inference Server | 2023-12-15 04:15:36 |
| trtpg | 1.3.0 | Generate TensorRT plugins on the fly | 2023-07-11 06:25:58 |
| pyotritonclient | 0.2.6 | A lightweight HTTP client library for communicating with NVIDIA Triton Inference Server (with Pyodide support in the browser) | 2023-06-06 08:48:27 |
| tensorrt-bindings | 8.6.1 | A high performance deep learning inference library | 2023-05-03 00:36:33 |
| tensorrt-lean-bindings | 8.6.1 | A high performance deep learning inference library | 2023-05-03 00:26:49 |
| tensorrt-dispatch-bindings | 8.6.1 | A high performance deep learning inference library | 2023-05-03 00:18:44 |
| nn-sdk | 1.8.26 | Inference engine for TensorFlow (v1, v2), ONNX, TensorRT, and fastText models | 2023-03-28 09:14:05 |
| nvidia-tensorrt | 99.0.0 | A high performance deep learning inference library | 2023-01-27 23:01:29 |
| nvidia-tao-deploy | 4.0.0.1 | NVIDIA's package for deploying models from TAO Toolkit | 2022-12-12 19:44:39 |