| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| kimchima | 0.4.8 | A collection of tools for ML model development. | 2024-04-19 05:41:13 |
| optimum | 1.19.0 | Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality. | 2024-04-16 13:44:57 |
| zetascale | 2.4.2 | Rapidly Build, Optimize, and Deploy SOTA AI Models | 2024-04-15 23:45:54 |
| optimum-nvidia | 0.1.0b6 | Optimum Nvidia is the interface between the Hugging Face Transformers library and NVIDIA GPUs. | 2024-04-11 21:13:38 |
| optimum-neuron | 0.0.21 | Optimum Neuron is the interface between the Hugging Face Transformers and Diffusers libraries and AWS Trainium and Inferentia accelerators. It provides a set of tools enabling easy model loading, training, and inference on single- and multiple-neuron-core settings for different downstream tasks. | 2024-04-08 22:45:46 |
| optimum-habana | 1.11.0 | Optimum Habana is the interface between the Hugging Face Transformers and Diffusers libraries and Habana's Gaudi processor (HPU). It provides a set of tools enabling easy model loading, training, and inference on single- and multi-HPU settings for different downstream tasks. | 2024-04-04 13:34:06 |
| minicons | 0.2.42 | A package of useful functions to analyze transformer-based language models. | 2024-04-03 04:50:13 |
| torchscale-gml | 0.2.1 | Transformers at any scale | 2024-04-02 21:56:10 |
| wordslab-llms | 0.0.3 | Python library for easy and efficient inference and fine-tuning of popular open-source LLMs | 2024-04-01 17:06:34 |
| nano-askllm | 0.2.3 | Unofficial implementation of the Ask-LLM paper "How to Train Data-Efficient LLMs", arXiv:2402.09668. | 2024-03-22 14:50:06 |
| autotransformers | 0.0.7 | A Python package for automatic training and benchmarking of language models. | 2024-03-18 13:26:56 |
| sibila | 0.3.6 | Structured queries from local or online LLM models | 2024-03-16 19:36:26 |
| nendo-plugin-textgen | 0.1.1 | A text generation plugin using local LLMs or other text generation methods. Builds on top of `transformers` by Hugging Face. | 2024-03-14 09:17:28 |
| meshgpt-pytorch | 1.1.1 | MeshGPT Pytorch | 2024-03-14 02:22:54 |
| local-llm-function-calling | 0.1.23 | A tool for generating function arguments and choosing which function to call with local LLMs | 2024-03-12 12:29:43 |
| nanodl | 1.2.1.dev1 | A Jax-based library for designing and training transformer models from scratch. | 2024-03-12 09:51:11 |
| transformers-stream-generator | 0.0.5 | A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/Transformers. | 2024-03-11 14:18:02 |
| cprex | 0.3.0 | Chemical Properties Relation Extraction | 2024-03-08 16:47:08 |
| nncf | 2.9.0 | Neural Networks Compression Framework | 2024-03-06 11:39:35 |
| savis | 0.2 | A Sentence-Level Visualization of Attention in Transformers | 2024-03-03 09:17:00 |