Name | Version | Summary | Date |
octoflow | 0.0.45 | Streamlining machine learning tracking for seamless experiment management. | 2024-04-30 15:58:29 |
pyppbox-ultralytics | 8.1.48 | Ultralytics YOLOv8 for SOTA object detection, multi-object tracking, instance segmentation, pose estimation and image classification. | 2024-04-30 11:41:27 |
mistralrs-mkl | 0.1.2 | Fast and easy LLM serving. | 2024-04-30 09:54:34 |
mistralrs-metal | 0.1.2 | Fast and easy LLM serving. | 2024-04-30 09:54:33 |
mistralrs-cuda | 0.1.2 | Fast and easy LLM serving. | 2024-04-30 09:54:23 |
mistralrs-accelerate | 0.1.2 | Fast and easy LLM serving. | 2024-04-30 09:54:21 |
mistralrs | 0.1.2 | Fast and easy LLM serving. | 2024-04-30 09:54:17 |
jaxutils-nightly | 0.0.8.dev20240430 | Utility functions for JaxGaussianProcesses | 2024-04-30 00:06:23 |
jaxkern-nightly | 0.0.5.dev20240430 | Kernels in Jax. | 2024-04-30 00:05:06 |
iog-sdk | 0.0.1 | IOG SDK - Internet of GPUs - IO.net | 2024-04-29 22:27:48 |
twinlab | 2.6.0 | twinLab - Probabilistic Machine Learning for Engineers | 2024-04-29 10:13:22 |
ultralytics | 8.2.4 | Ultralytics YOLOv8 for SOTA object detection, multi-object tracking, instance segmentation, pose estimation and image classification. | 2024-04-27 19:48:23 |
stable-baselines3 | 2.3.2 | PyTorch version of Stable Baselines, implementations of reinforcement learning algorithms. | 2024-04-27 13:09:14 |
tdw | 1.12.25.0 | 3D simulation environment | 2024-04-26 21:02:46 |
supervision | 0.20.0 | A set of easy-to-use utils that will come in handy in any Computer Vision project | 2024-04-24 17:38:40 |
topsearch | 0.0.3 | A Python package for topographical analysis of machine learning models and physical systems | 2024-04-24 10:34:46 |
danila-lib | 1.3.9 | This is the module for detecting and classifying text on rama pictures | 2024-04-23 11:47:58 |
spox | 0.11.0 | A framework for constructing ONNX computational graphs. | 2024-04-23 11:19:47 |
phenocv | 0.1.4 | Rice High Throughput Phenotyping Computer Vision Toolkit | 2024-04-23 05:27:13 |
intel-ai-safety | 0.0.0 | Explainable AI Tooling (XAI). XAI is used to discover and explain a model's prediction in a way that is interpretable to the user. Relevant information in the dataset, featureset, and model's algorithms are exposed. | 2024-04-23 02:52:49 |