Name | Version | Summary | Date |
--- | --- | --- | --- |
ai-edge-quantizer-nightly | 0.4.0.dev20250823 | A quantizer for advanced developers to quantize converted AI Edge models. | 2025-08-23 00:12:49 |
llmcompressor | 0.7.1 | A library for compressing large language models using the latest research in both training-aware and post-training techniques. Designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation. | 2025-08-21 21:36:37 |
ai-edge-quantizer | 0.3.0 | A quantizer for advanced developers to quantize converted AI Edge models. | 2025-08-21 20:40:00 |
openvino-easy | 1.0.0 | Framework-agnostic Python wrapper for OpenVINO 2025. | 2025-08-21 04:53:21 |
metis-agent | 0.18.1 | Advanced AI agent framework with custom instructions, slash commands, a Claude Code-style CLI, multi-agent orchestration, 36+ tools, and enterprise security. | 2025-08-20 20:33:33 |
torch-floating-point | 0.0.8 | A PyTorch library for custom floating-point quantization with autograd support. | 2025-08-19 23:00:20 |
optimum-benchmark | 0.6.0 | Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers, and Sentence-Transformers with full support for Optimum's hardware optimizations and quantization schemes. | 2025-08-19 22:40:14 |
vector-quantize-pytorch | 1.23.0 | Vector Quantization - Pytorch | 2025-08-19 14:23:01 |
bitsandbytes | 0.47.0 | k-bit optimizers and matrix multiplication routines. | 2025-08-11 18:51:20 |
optimum-intel | 1.25.0 | Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality. | 2025-08-04 16:41:28 |
tensorflores | 0.1.11 | TensorFlores is a Python-based framework for optimizing machine learning deployment in resource-constrained environments, with support for TinyML, EdgeAI, and quantization. | 2025-08-04 00:29:03 |
optimum | 1.27.0 | Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality. | 2025-07-30 16:40:44 |
unfake | 1.0.1 | High-performance tool for improving AI-generated pixel art. | 2025-07-26 21:35:36 |
fedcore | 0.0.5.3 | Federated learning core library. | 2025-07-09 21:11:50 |
llmcompressor-nightly | 0.4.1.20250314 | A library for compressing large language models using the latest research in both training-aware and post-training techniques. Designed to be flexible and easy to use on top of PyTorch and HuggingFace Transformers, allowing for quick experimentation. | 2025-03-14 03:23:13 |
kvquant | 0.0.1 | More for Keys, Less for Values: Adaptive KV Cache Quantization 🐍🚀🎉🦕 | 2025-02-27 20:12:37 |
gptqmodel | 1.8.1 | An LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. | 2025-02-08 20:20:50 |
nncf | 2.15.0 | Neural Networks Compression Framework | 2025-02-06 10:09:14 |
autoawq | 0.2.8 | AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. | 2025-01-20 11:03:42 |
friendli-model-optimizer | 0.10.0 | Model Optimizer CLI for Friendli Engine. | 2025-01-08 09:57:35 |
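
As a minimal illustration of the kind of workflow the packages above target, the sketch below uses vector-quantize-pytorch from the list; the call pattern follows the project's documented `VectorQuantize` interface, while the dimensions and hyperparameter values are purely illustrative assumptions.

```python
import torch
from vector_quantize_pytorch import VectorQuantize

# Build a vector quantizer: 256-dim feature vectors mapped onto a 512-entry codebook.
# The specific values here are illustrative, not recommendations.
vq = VectorQuantize(
    dim=256,
    codebook_size=512,       # number of codebook vectors
    decay=0.8,               # EMA decay for codebook updates
    commitment_weight=1.0,   # weight of the commitment loss term
)

# Quantize a batch of 1024 feature vectors.
x = torch.randn(1, 1024, 256)
quantized, indices, commit_loss = vq(x)  # shapes: (1, 1024, 256), (1, 1024), (1,)
```

During training, `commit_loss` would typically be added to the task loss so the encoder outputs stay close to their assigned codebook entries.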