| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| nm-magic-wand-nightly | 0.2.2.20240426 | SparseLinear layers | 2024-04-26 00:46:39 |
| nm-magic-wand | 0.2.1 | SparseLinear layers | 2024-04-09 18:31:52 |
| deepsparse | 1.7.1 | An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application | 2024-03-19 17:49:49 |
| sparsify-nightly | 1.7.0.20240304 | Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint | 2024-03-05 13:38:57 |
| sparsify | 1.6.1 | Easy-to-use UI for automatically sparsifying neural networks and creating sparsification recipes for better inference performance and a smaller footprint | 2023-12-20 14:28:37 |
| sparseml | 1.6.1 | Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models | 2023-12-20 14:24:12 |
| sparsezoo | 1.6.1 | Neural network model repository for highly sparse and sparse-quantized models with matching sparsification recipes | 2023-12-20 14:23:57 |
| deepsparse-ent | 1.6.0 | An inference runtime offering GPU-class performance on CPUs and APIs to integrate ML into your application | 2023-12-04 16:24:07 |
| optimum-deepsparse | 0.1.0.dev1 | Optimum DeepSparse is an extension of the Hugging Face Transformers library that integrates the DeepSparse inference runtime. DeepSparse offers GPU-class performance on CPUs, making it possible to run Transformers and other deep learning models on commodity hardware with sparsity. Optimum DeepSparse provides a framework for developers to easily integrate DeepSparse into their applications, regardless of the hardware platform. | 2023-10-26 02:02:45 |
| deepstruct | 0.10.0 | | 2023-01-18 11:46:06 |