Name | Version | Summary | Date |
---|---|---|---|
local-attention | 1.9.15 | Local attention, window with lookback, for language modeling | 2024-09-06 14:56:56 |
CoLT5-attention | 0.11.1 | Conditionally Routed Attention | 2024-09-06 14:56:13 |
stylegan2-pytorch | 1.8.11 | StyleGan2 in Pytorch | 2024-08-23 17:16:04 |
slot-attention | 1.4.0 | Implementation of Slot Attention in Pytorch | 2024-08-20 18:35:29 |
taylor-series-linear-attention | 0.1.12 | Taylor Series Linear Attention | 2024-08-18 16:59:01 |
phenaki-pytorch | 0.5.0 | Phenaki - Pytorch | 2024-07-29 20:32:56 |
pytorch-custom-utils | 0.0.21 | Pytorch Custom Utils | 2024-07-26 23:29:14 |
lumiere-pytorch | 0.0.24 | Lumiere | 2024-07-26 12:22:44 |
magvit2-pytorch | 0.4.8 | MagViT2 - Pytorch | 2024-07-22 15:47:55 |
byol-pytorch | 0.8.2 | Self-supervised contrastive learning made simple | 2024-07-15 18:28:57 |
gateloop-transformer | 0.2.5 | GateLoop Transformer | 2024-06-18 21:08:18 |
lion-pytorch | 0.2.2 | Lion Optimizer - Pytorch | 2024-06-15 16:28:21 |
mogrifier | 0.0.5 | Implementation of Mogrifier circuit from DeepMind | 2024-06-09 21:54:32 |
En-transformer | 1.6.6 | E(n)-Equivariant Transformer | 2024-06-02 15:20:23 |
iTransformer | 0.6.0 | iTransformer - Inverted Transformers Are Effective for Time Series Forecasting | 2024-05-10 14:33:23 |
self-reasoning-tokens-pytorch | 0.0.4 | Self Reasoning Tokens | 2024-05-05 18:01:34 |
make-a-video-pytorch | 0.4.0 | Make-A-Video - Pytorch | 2024-05-03 17:34:51 |
x-unet | 0.4.0 | X-Unet | 2024-05-03 17:22:28 |
soft-moe-pytorch | 0.1.8 | Soft MoE - Pytorch | 2024-04-24 15:24:31 |
deformable-attention | 0.0.19 | Deformable Attention - from the paper "Vision Transformer with Deformable Attention" | 2024-04-23 23:45:52 |

Hour | Day | Week | Total |
---|---|---|---|
36 | 2398 | 10756 | 265607 |