PyDigger - unearthing stuff about Python


Name | Version | Summary | Date
ai-edge-quantizer-nightly | 0.0.1.dev20241005 | A quantizer for advanced developers to quantize converted AI Edge models. | 2024-10-05 00:11:31
owlite | 0.0.8 | A fake package to warn the user they are not installing the correct package. | 2024-10-02 07:36:45
optimum-quanto | 0.2.5 | A PyTorch quantization backend for Optimum. | 2024-10-01 11:52:23
vector-quantize-pytorch | 1.17.8 | Vector Quantization - Pytorch | 2024-09-26 17:07:40
gptqmodel | 1.0.6 | An LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. | 2024-09-26 16:01:07
llmcompressor-nightly | 0.2.0.20240926 | A library for compressing large language models using the latest training-aware and post-training techniques. Designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation. | 2024-09-26 03:25:01
friendli-model-optimizer | 0.7.0 | Model Optimizer CLI for Friendli Engine. | 2024-09-25 07:00:45
fmo-core | 0.7 | Model Optimizer for Friendli Engine. | 2024-09-25 06:33:25
llmcompressor | 0.2.0 | A library for compressing large language models using the latest training-aware and post-training techniques. Designed to be flexible and easy to use on top of PyTorch and Hugging Face Transformers, allowing for quick experimentation. | 2024-09-23 21:34:25
hquant | 0.1.0 | High-quality quantization for Pillow images. | 2024-09-19 23:02:03
nncf | 2.13.0 | Neural Network Compression Framework | 2024-09-19 10:25:02
autoawq-kernels | 0.0.8 | AutoAWQ Kernels implements the AWQ kernels. | 2024-09-12 11:46:36
bitlinear-pytorch | 0.5.0 | Implementation of the BitLinear layer from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits". | 2024-09-11 15:26:31
optimum-intel | 1.19.0 | Optimum is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from hardware partners and interface with their specific functionality. | 2024-09-10 21:51:54
optimum | 1.22.0 | Optimum is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from hardware partners and interface with their specific functionality. | 2024-09-10 16:18:56
awq-cli | 0.2.0 | Command-line interface for AutoAWQ. | 2024-08-09 08:02:27
optimum-benchmark | 0.4.0 | Optimum-Benchmark is a unified multi-backend utility for benchmarking Transformers, Timm, Diffusers, and Sentence-Transformers with full support for Optimum's hardware optimizations and quantization schemes. | 2024-07-31 08:35:53
autoawq | 0.2.6 | AutoAWQ implements the AWQ algorithm for 4-bit quantization, with a 2x speedup during inference. | 2024-07-23 18:20:27
sparsezoo | 1.8.1 | Neural network model repository for highly sparse and sparse-quantized models, with matching sparsification recipes. | 2024-07-19 16:25:22
sparseml-nightly | 1.8.0.20240630 | Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models. | 2024-06-30 20:38:38