| Name | Version | Summary | Date |
|------|---------|---------|------|
| audiolm-superfeel | 2.1.7 | AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch | 2024-05-04 12:58:19 |
| soundstorm-superfeel | 0.4.5 | SoundStorm - Efficient Parallel Audio Generation from Google Deepmind, in Pytorch | 2024-05-04 10:33:55 |
| MEGABYTE-pytorch | 0.3.0 | MEGABYTE - Pytorch | 2024-05-03 02:14:25 |
| autoawq | 0.2.5 | AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. | 2024-05-02 18:32:41 |
| datadreamer.dev | 0.35.0 | Prompt. Generate Synthetic Data. Train & Align Models. | 2024-05-02 05:36:30 |
| REaLTabFormer | 0.1.7 | A novel method for generating tabular and relational data using language models. | 2024-04-28 18:00:11 |
| s5-pytorch | 0.2.1 | S5 - Simplified State Space Layers for Sequence Modeling - Pytorch | 2024-04-26 09:39:13 |
| nncf | 2.10.0 | Neural Networks Compression Framework | 2024-04-25 12:01:53 |
| deformable-attention | 0.0.19 | Deformable Attention - from the paper "Vision Transformer with Deformable Attention" | 2024-04-23 23:45:52 |
| BS-RoFormer | 0.4.1 | BS-RoFormer - Band-Split Rotary Transformer for SOTA Music Source Separation | 2024-04-21 16:44:28 |
| graph-transformer | 0.2.1 | Implementation of Graph Transformer (https://www.ijcai.org/proceedings/2021/0214.pdf) | 2024-04-20 11:09:13 |
| inseq | 0.6.0 | Interpretability for Sequence Generation Models 🔍 | 2024-04-13 13:37:37 |
| imagen-pytorch | 1.26.3 | Imagen - unprecedented photorealism × deep level of language understanding | 2024-04-03 18:18:00 |
| mltb2 | 0.12.3 | Machine Learning Toolbox 2 | 2024-03-31 13:05:57 |
| andromeda-torch | 0.0.9 | Andromeda - Pytorch | 2024-03-21 22:02:59 |