Name | Version | Summary | Date |
--- | --- | --- | --- |
torch-workflow-archiver | 0.2.15 | Torch Workflow Archiver is used for creating archives of workflows designed using trained neural net models that can be consumed by TorchServe inference | 2024-09-30 18:57:48 |
torch-model-archiver | 0.12.0 | Torch Model Archiver is used for creating archives of trained neural net models that can be consumed by TorchServe inference (usage sketch below the table) | 2024-09-30 18:57:46 |
torchserve | 0.12.0 | TorchServe is a tool for serving neural net models for inference | 2024-09-30 18:57:42 |
multi-model-server-gpu | 0.0.2 | (Altered for multiprocessing GPU Inference) Multi Model Server is a tool for serving neural net models for inference | 2024-09-09 19:18:04 |
causallib | 0.9.7 | A Python package for flexible and modular causal inference modeling | 2024-07-31 11:25:59 |
deepsparse-ent | 1.8.0 | An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application | 2024-07-19 16:32:32 |
deepsparse | 1.8.0 | An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application (usage sketch below the table) | 2024-07-19 16:29:01 |
sparsezoo | 1.8.1 | Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes | 2024-07-19 16:25:22 |
sparseml-nightly | 1.8.0.20240630 | Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models | 2024-06-30 20:38:38 |
sparseml | 1.8.0 | Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models | 2024-06-21 15:53:30 |
gt4sd | 1.4.2 | Generative Toolkit for Scientific Discovery (GT4SD) | 2024-06-13 13:23:55 |
dame-flame | 0.71 | Causal Inference Covariate Matching | 2024-06-10 19:43:50 |
scripro | 1.1.30 | Single-cell gene regulation network inference by large-scale data integration Pro | 2024-06-05 16:54:40 |
deepsparse-nightly | 1.8.0.20240502 | An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application | 2024-05-06 19:42:19 |
sparsezoo-nightly | 1.8.0.20240506 | Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes | 2024-05-06 19:37:51 |
sagemaker-schema-inference-artifacts | 0.0.5 | Open source library for Hugging Face Task Sample Inputs and Outputs | 2024-04-08 22:06:12 |
torchlayers-nightly | 1711930087 | Input shape inference and SOTA custom layers for PyTorch | 2024-04-01 00:08:27 |
model-infer-utils | 0.0.3 | A utils package for model-inference | 2024-03-20 07:10:36 |
new-ai-benchmark | 2.7.0 | AI Benchmark is an open source Python library for evaluating AI performance of various hardware platforms, including CPUs, GPUs and TPUs | 2024-03-10 18:53:48 |
sparsify-nightly | 1.7.0.20240304 | Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint | 2024-03-05 13:38:57 |
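Several entries above belong to the TorchServe ecosystem: torch-model-archiver bundles a trained model into a `.mar` archive, and torchserve loads archives from a model store and serves them. The following is a minimal sketch of that flow, driven from Python via `subprocess` because both tools are command-line programs; the model name, file names, handler choice, and `model_store` directory are hypothetical placeholders, not values taken from the listing above.

```python
# Minimal sketch: package a trained model with torch-model-archiver, then
# serve it with torchserve. All model and file names are hypothetical.
import os
import subprocess

os.makedirs("model_store", exist_ok=True)

# Bundle the model definition, trained weights, and a built-in handler
# into a TorchServe .mar archive written to ./model_store.
subprocess.run(
    [
        "torch-model-archiver",
        "--model-name", "my_classifier",     # hypothetical model name
        "--version", "1.0",
        "--model-file", "model.py",          # hypothetical model definition
        "--serialized-file", "weights.pth",  # hypothetical trained weights
        "--handler", "image_classifier",     # built-in TorchServe handler
        "--export-path", "model_store",
    ],
    check=True,
)

# Start TorchServe and register the archive from the model store.
subprocess.run(
    [
        "torchserve",
        "--start",
        "--model-store", "model_store",
        "--models", "my_classifier=my_classifier.mar",
    ],
    check=True,
)
```

torch-workflow-archiver plays the analogous role for workflow archives that chain several served models together, per its summary in the table.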
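The deepsparse entries describe a CPU inference runtime with a Python API. Below is a minimal sketch, assuming the deepsparse 1.x `Pipeline` interface (`Pipeline.create` with `task` and `model_path` arguments); the ONNX model path and the input text are placeholders, and a SparseZoo model stub could stand in for the local file.

```python
# Minimal sketch of CPU inference with deepsparse's Pipeline API.
# "model.onnx" is a hypothetical path to a sparse/quantized ONNX model,
# e.g. one obtained from SparseZoo.
from deepsparse import Pipeline

pipeline = Pipeline.create(
    task="text-classification",
    model_path="model.onnx",
)

# Run inference on the CPU; input and output schemas depend on the task.
result = pipeline(sequences=["An example sentence to classify."])
print(result)
```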