| Name | Version | Summary | Date |
| --- | --- | --- | --- |
| transfusion-pytorch | 0.0.45 | Transfusion in Pytorch | 2024-09-28 14:23:09 |
| x-transformers | 1.37.4 | X-Transformers - Pytorch | 2024-09-25 17:18:14 |
| tabular-transformer | 0.3.0 | Transformer adapted for the tabular data domain | 2024-09-24 03:22:23 |
| MEGABYTE-pytorch | 0.3.5 | MEGABYTE - Pytorch | 2024-09-16 12:34:04 |
| soundstorm-pytorch | 0.4.11 | SoundStorm - Efficient Parallel Audio Generation from Google Deepmind, in Pytorch | 2024-09-15 01:36:07 |
| e2-tts-pytorch | 1.0.5 | E2-TTS in Pytorch | 2024-09-11 12:25:44 |
| CoLT5-attention | 0.11.1 | Conditionally Routed Attention | 2024-09-06 14:56:13 |
| robotic-transformer-pytorch | 0.2.2 | Robotic Transformer - Pytorch | 2024-09-01 17:47:36 |
| vit-pytorch | 1.7.12 | Vision Transformer (ViT) - Pytorch | 2024-08-28 19:22:12 |
| mmdit | 0.1.4 | MMDiT | 2024-08-24 22:59:28 |
| taylor-series-linear-attention | 0.1.12 | Taylor Series Linear Attention | 2024-08-18 16:59:01 |
| iTransformer | 0.6.0 | iTransformer - Inverted Transformers Are Effective for Time Series Forecasting | 2024-05-10 14:33:23 |
| infini-transformer-pytorch | 0.1.1 | Infini-Transformer in Pytorch | 2024-05-09 14:30:15 |
| self-reasoning-tokens-pytorch | 0.0.4 | Self Reasoning Tokens | 2024-05-05 18:01:34 |
| audiolm-superfeel | 2.1.7 | AudioLM - Language Modeling Approach to Audio Generation from Google Research - Pytorch | 2024-05-04 12:58:19 |
| soundstorm-superfeel | 0.4.5 | SoundStorm - Efficient Parallel Audio Generation from Google Deepmind, in Pytorch | 2024-05-04 10:33:55 |
| make-a-video-pytorch | 0.4.0 | Make-A-Video - Pytorch | 2024-05-03 17:34:51 |
| s5-pytorch | 0.2.1 | S5 - Simplified State Space Layers for Sequence Modeling - Pytorch | 2024-04-26 09:39:13 |
| deformable-attention | 0.0.19 | Deformable Attention - from the paper "Vision Transformer with Deformable Attention" | 2024-04-23 23:45:52 |
| BS-RoFormer | 0.4.1 | BS-RoFormer - Band-Split Rotary Transformer for SOTA Music Source Separation | 2024-04-21 16:44:28 |