Name | Version | Summary | Date |
--- | --- | --- | --- |
unhallucinated-faster-whisper | 0.0.3 | Faster Whisper transcription with CTranslate2 | 2025-02-11 18:05:58 |
auto-round | 0.4.5 | Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs | 2025-01-27 08:29:47 |
auto-round-lib | 0.4.5 | Repository of AutoRound: Advanced Weight-Only Quantization Algorithm for LLMs | 2025-01-27 08:25:33 |
bitsandbytes-npu-beta | 0.45.2 | k-bit optimizers and matrix multiplication routines. | 2025-01-07 09:27:26 |
faster-whisper | 1.1.1 | Faster Whisper transcription with CTranslate2 | 2025-01-01 14:47:21 |
neural-compressor-tf | 3.2 | Repository of Intel® Neural Compressor | 2024-12-28 12:06:22 |
neural-compressor-pt | 3.2 | Repository of Intel® Neural Compressor | 2024-12-28 12:06:02 |
neural-compressor | 3.2 | Repository of Intel® Neural Compressor | 2024-12-28 12:05:14 |
quantizers | 1.1.0 | None | 2024-12-09 23:12:02 |
mindspore-gs | 0.6.0 | A MindSpore model optimization algorithm set. | 2024-11-27 10:31:42 |
mobius-faster-whisper | 1.1.1 | Mobius Version of Faster Whisper transcription with CTranslate2 | 2024-10-24 14:02:53 |
ctranslate2 | 4.5.0 | Fast inference engine for Transformer models | 2024-10-22 13:32:16 |
topai-faster-whisper | 1.0.4.post4 | Faster Whisper transcription with CTranslate2 | 2024-10-17 09:28:10 |
pngquant-cli | 3.0.3 | Precompiled binaries for pngquant, the lossy PNG compressor based on libimagequant. | 2024-10-04 08:58:36 |
neural-compressor-3x-tf | 3.0 | Repository of Intel® Neural Compressor | 2024-08-11 13:26:43 |
neural-compressor-3x-pt | 3.0 | Repository of Intel® Neural Compressor | 2024-08-11 13:24:08 |
onnx-neural-compressor | 1.0 | Repository of Neural Compressor ORT | 2024-07-31 16:36:14 |
neural-solution | 2.6.1 | Repository of Intel® Neural Compressor | 2024-07-02 03:30:03 |
qattn | 0.1.1 | Efficient GPU Kernels in Triton for Quantized Vision Transformers | 2024-06-21 17:27:15 |
neural-insights | 2.6 | Repository of Intel® Neural Compressor | 2024-06-14 14:50:33 |