| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| self-reasoning-tokens-pytorch | 0.0.4 | Self Reasoning Tokens | 2024-05-05 18:01:34 |
| audiolm-superfeel | 2.1.7 | AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch | 2024-05-04 12:58:19 |
| soundstorm-superfeel | 0.4.5 | SoundStorm - Efficient Parallel Audio Generation from Google Deepmind, in Pytorch | 2024-05-04 10:33:55 |
| REaLTabFormer | 0.1.7 | A novel method for generating tabular and relational data using language models. | 2024-04-28 18:00:11 |
| s5-pytorch | 0.2.1 | S5 - Simplified State Space Layers for Sequence Modeling - Pytorch | 2024-04-26 09:39:13 |
| deformable-attention | 0.0.19 | Deformable Attention - from the paper "Vision Transformer with Deformable Attention" | 2024-04-23 23:45:52 |
| graph-transformer | 0.2.1 | This is the implementation of Graph Transformer (https://www.ijcai.org/proceedings/2021/0214.pdf) | 2024-04-20 11:09:13 |
| inseq | 0.6.0 | Interpretability for Sequence Generation Models 🔍 | 2024-04-13 13:37:37 |
| andromeda-torch | 0.0.9 | Andromeda - Pytorch | 2024-03-21 22:02:59 |