| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| ALLM | 1.0.6 | A simple and efficient Python library for fast inference of LLMs. | 2024-05-12 06:07:35 |
| ALLMDEV | 1.3.2 | A simple and efficient Python library for fast inference of GGUF large language models. | 2024-05-11 18:08:11 |
| glai | 0.1.3 | Easy deployment of quantized Llama models on CPU. | 2024-01-13 19:04:27 |
| gguf-llama | 0.0.18 | Wrapper for simplified use of Llama 2 GGUF quantized models. | 2024-01-13 18:59:58 |
| gguf-modeldb | 0.0.3 | A Llama 2 quantized GGUF model DB with over 80 preconfigured models downloadable in one line; easily add your own models or adjust settings. No more manual downloads. | 2024-01-13 18:57:53 |
| llama-memory | 0.0.1a1 | Easy deployment of quantized Llama models on CPU. | 2024-01-09 01:59:55 |
| LibreAssist | 0.0.1 | An open-source local assistant that helps with tasks on your personal computer. | 2023-12-23 13:59:09 |
| gguf | 0.6.0 | Read and write ML models in the GGUF format for GGML. | 2023-12-12 16:49:53 |
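Several of the packages above work with GGUF files (the serialization format used by GGML and llama.cpp). As a minimal sketch of what that format looks like on disk, the snippet below parses just the fixed-size header prefix defined by the GGUF specification: the 4-byte magic `b"GGUF"`, a little-endian `uint32` version, and two `uint64` counts (tensors and metadata key-value pairs). This is a stdlib-only illustration, not the API of the `gguf` package listed in the table; the function name `read_gguf_header` is hypothetical.

```python
import struct

def read_gguf_header(path):
    """Parse the fixed-size header prefix of a GGUF file (hypothetical helper).

    Per the GGUF spec, a file begins with the 4-byte magic b"GGUF",
    followed by a little-endian uint32 version, a uint64 tensor count,
    and a uint64 metadata key-value count.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic was {magic!r})")
        # "<IQQ" = little-endian uint32 + two uint64s, 20 bytes total
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```

For real workloads, the `gguf` package in the table handles the full format, including the metadata key-value section and tensor data that follow this header.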