| Name | Version | Summary | Date |
|------|---------|---------|------|
| ant-ray-nightly | 3.0.0.dev20250122 | Ray provides a simple, universal API for building distributed applications. | 2025-01-22 03:58:42 |
| torch-workflow-archiver-nightly | 2025.1.19 | Torch Workflow Archiver is used for creating archives of workflows, designed using trained neural net models, that can be consumed by TorchServe for inference. | 2025-01-19 11:25:54 |
| torch-model-archiver-nightly | 2025.1.19 | Torch Model Archiver is used for creating archives of trained neural net models that can be consumed by TorchServe for inference. | 2025-01-19 11:25:51 |
| torchserve-nightly | 2025.1.19 | TorchServe is a tool for serving neural net models for inference. | 2025-01-19 11:25:46 |
| ant-ray | 3.0.0.dev2 | Ray provides a simple, universal API for building distributed applications. | 2025-01-15 13:35:39 |
| jina | 3.33.0 | Multimodal AI services & pipelines with a cloud-native stack: gRPC, Kubernetes, Docker, OpenTelemetry, Prometheus, Jaeger, etc. | 2024-12-20 12:13:56 |
| ray-nightly | 3.0.0.dev20241219 | Ray provides a simple, universal API for building distributed applications. | 2024-12-19 14:00:26 |
| ray-cpp | 2.40.0 | A subpackage of Ray which provides the Ray C++ API. | 2024-12-03 23:47:52 |
| ray | 2.40.0 | Ray provides a simple, universal API for building distributed applications. | 2024-12-03 23:45:31 |
| tensorflow-serving-api-gpu | 2.18.0 | TensorFlow Serving Python API. | 2024-10-31 00:37:41 |
| tensorflow-serving-api | 2.18.0 | TensorFlow Serving Python API. | 2024-10-31 00:37:39 |
| torch-workflow-archiver | 0.2.15 | Torch Workflow Archiver is used for creating archives of workflows, designed using trained neural net models, that can be consumed by TorchServe for inference. | 2024-09-30 18:57:48 |
| torch-model-archiver | 0.12.0 | Torch Model Archiver is used for creating archives of trained neural net models that can be consumed by TorchServe for inference. | 2024-09-30 18:57:46 |
| torchserve | 0.12.0 | TorchServe is a tool for serving neural net models for inference. | 2024-09-30 18:57:42 |
| multi-model-server-gpu | 0.0.2 | (Altered for multiprocessing GPU inference) Multi Model Server is a tool for serving neural net models for inference. | 2024-09-09 19:18:04 |
| mlModelSaver | 1.0.33 | Makes it easier to save and serve ML models. | 2024-06-20 07:30:14 |
| iog-sdk | 0.0.1 | IOG SDK - Internet of GPUs - IO.net | 2024-04-29 22:27:48 |
| marie-ai | 3.0.28 | Python library to integrate AI-powered features into your applications. | 2024-02-15 12:51:28 |
| torchserve-ag | 0.8.2b20230918 | TorchServe is a tool for serving neural net models for inference. | 2023-09-18 18:01:25 |
| taichu-serve | 2.0.32 | Taichu Serve is a tool for serving deep learning inference. | 2023-09-12 03:39:44 |
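
Several entries above (`ray`, `ray-nightly`, `ant-ray`) share the summary "a simple, universal API for building distributed applications," which refers to Ray's remote-task API. As a minimal sketch, assuming the stable `ray` 2.40.0 release listed in the table, distributing a function looks roughly like this:

```python
import ray

ray.init()  # start (or connect to) a local Ray runtime

@ray.remote
def square(x):
    # decorated functions run as tasks scheduled by Ray
    return x * x

# submit tasks in parallel and block until all results are ready
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

ray.shutdown()
```

The same decorator-based pattern extends to stateful actors (`@ray.remote` on a class), which is why the packages describe the API as "universal."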