mambapy

- Name: mambapy
- Version: 1.0.0
- Summary: A simple and efficient Mamba implementation in pure PyTorch.
- Home page: https://github.com/votre_nom/mamba.py
- Author: Alexandre TL
- Requires Python: >=3.6
- License: None
- Upload time: 2024-06-27 07:24:24
# mamba.py 🐍: a simple and efficient Mamba implementation
A straightforward implementation of [Mamba](https://arxiv.org/abs/2312.00752) in PyTorch, with a simple parallel scan implementation that offers a major speedup over a sequential implementation by parallelizing over the time dimension.
It combines ease of reading with good training performance. [Jamba](https://www.ai21.com/blog/announcing-jamba) is also supported.

## Updates
- <b>21/04/2024</b>: Added the `jamba.py` file, which implements the [Jamba](https://www.ai21.com/blog/announcing-jamba) architecture (a mix of Mamba and attention layers). The official CUDA implementation was also added as a possible backend.

- <b>30/03/2024</b>: Updated the inference function; it now supports sampling temperature and batch_size.

- <b>09/02/2024</b>: First part of the performance update. For small sequences (<128), it can speed up training by more than 20% compared to the first version. For setups closer to what is found in practice (like in NLP), it can speed up training by 10%. See [this PR](https://github.com/alxndrTL/mamba.py/pull/12).

- <b>22/01/2024</b>: Added an MLX version of `mamba.py`, which supports inference as well as training. This version is similar to the PyTorch one and allows Mac users to play around with Mamba models. It was tested on the largest Mamba trained to date (2.8B).

- <b>17/01/2024</b>: Added a step function for inference. It uses the "RNN formulation" of Mamba to greatly speed up inference.
___
## Overview

![speed comparison](assets/speed_comparison.png)

This graph shows the training time (forward and backward pass) of a single Mamba layer (`d_model=16, d_state=16`) using 3 different methods: `CUDA`, the official [Mamba implementation](https://github.com/state-spaces/mamba); `mamba.py`, this repo; and `sequential`, a sequential (RNN-like) implementation of the selective scan.

This repo contains simple and readable code implementing the [Mamba](https://arxiv.org/abs/2312.00752) architecture in pure PyTorch as well as in MLX. You can also play around with the Jamba model, which combines Mamba and attention layers. The primary goal of this repo is educational.

<p align="center">
    <img src="assets/logo.png" alt="a python and a mamba" width="300" height="300" alt="python mamba"/>
</p>

<u>The repo is organized as follows:</u>
- `pscan.py`: a PyTorch implementation of Blelloch's parallel scan (see the sketch after this list)
- `mamba.py`: the Mamba model, as described in the [paper](https://arxiv.org/abs/2312.00752). It is numerically equivalent to the official implementation (initialization, forward and backward pass).
- `mamba_lm.py`: encapsulates a Mamba model in order to use it as a language model
- `jamba.py`: a clean implementation of the Jamba model in PyTorch
- `vim.py`: an implementation of [Vision Mamba](https://arxiv.org/abs/2401.09417).
- `📁 mlx`: basically the same code as above, but in MLX.
- `📁 onnx`: exports a trained Mamba model to ONNX for inference.
- `📁 docs`: a folder containing annotated explanations of the code, focusing on the parallel scan
- `📁 examples`: two examples of how to use the Mamba model in PyTorch, as well as a training file.
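
At its core, `pscan.py` relies on the fact that the recurrence computed by the selective scan is associative, so it can be evaluated in $O(\log L)$ parallel steps over the time dimension instead of a sequential loop. Below is a minimal, illustrative sketch of that idea: it uses a simple Hillis-Steele style scan rather than the Blelloch up-sweep/down-sweep actually used in `pscan.py`, and the function names are made up.

```python
import torch

def sequential_scan(A, X):
    # Reference: h_t = A_t * h_{t-1} + X_t, with h_0 = 0. A, X: (B, L, D).
    B, L, D = X.shape
    h = torch.zeros(B, D)
    out = []
    for t in range(L):
        h = A[:, t] * h + X[:, t]
        out.append(h)
    return torch.stack(out, dim=1)

def parallel_scan(A, X):
    # Log-depth inclusive scan. The pairs (A_t, X_t) combine associatively:
    # (A1, X1) followed by (A2, X2)  ->  (A1 * A2, A2 * X1 + X2)
    A, X = A.clone(), X.clone()
    L = X.shape[1]
    step = 1
    while step < L:
        # Combine every position t with the accumulated value at position t - step.
        X[:, step:] = A[:, step:] * X[:, :-step] + X[:, step:]
        A[:, step:] = A[:, step:] * A[:, :-step]
        step *= 2
    return X  # X[:, t] now holds h_t

A = torch.rand(2, 64, 16)
X = torch.randn(2, 64, 16)
assert torch.allclose(sequential_scan(A, X), parallel_scan(A, X), atol=1e-4)
```

The real `pscan.py` operates on the 4D tensors of the selective scan and also implements the backward pass; this sketch only shows why the time dimension parallelizes.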

## Usage

The most basic way to use this repo is through the `Mamba` object ([mamba.py](mamba.py)), which implements a simple Mamba model given a configuration.
No embedding, no head: the input is `(B, L, D)` and the output is `(B, L, D)` as well.

```python
import torch
from mamba import Mamba, MambaConfig

config = MambaConfig(d_model=16, n_layers=2)
model = Mamba(config)

B, L, D = 2, 64, 16
x = torch.randn(B, L, D)
y = model(x)

assert y.shape == x.shape
```

The `MambaLM` class ([mamba_lm.py](mamba_lm.py)) builds on the `Mamba` object and offers a classic language-model API. It can be used as follows:

```python
import torch
from mamba_lm import MambaLM, MambaLMConfig

config = MambaLMConfig(d_model=16, n_layers=4, vocab_size=32000)
model = MambaLM(config)

x = torch.randint(high=32000, size=(16, 64))
logits = model(x) # (B, L, vocab_size)
```

It simply encapsulates a `Mamba` object with an embedding layer, a final normalization and a language modeling head.
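
Concretely, the wrapping looks roughly like the sketch below. It is a hypothetical illustration, not the actual `mamba_lm.py` code: the attribute names are made up, and `nn.LayerNorm` stands in for whatever normalization the repo actually uses.

```python
import torch.nn as nn

class TinyMambaLM(nn.Module):
    """Illustrative only: embedding -> Mamba -> norm -> LM head."""
    def __init__(self, mamba, d_model, vocab_size):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, d_model)          # tokens -> (B, L, D)
        self.mamba = mamba                                          # (B, L, D) -> (B, L, D)
        self.norm = nn.LayerNorm(d_model)                           # final normalization (stand-in)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)   # (B, L, D) -> (B, L, vocab_size)

    def forward(self, tokens):
        x = self.embedding(tokens)
        x = self.mamba(x)
        x = self.norm(x)
        return self.lm_head(x)
```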

You can use it off the shelf with a pretrained Mamba model:
```python
from mamba_lm import from_pretrained
from transformers import AutoTokenizer

model = from_pretrained('state-spaces/mamba-130m').to("cuda")
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')

output = model.generate(tokenizer, "Mamba is a type of")
```

This is the structure of the `mamba.py` modules:

<p align="center">
    <img src="assets/mamba_structure.jpg" width="737" height="429" alt="mamba structure"/>
</p>

## Jamba
You can also train and run inference on Jamba models. Take a look at the `jamba.py` file, which defines a `Jamba` object that interleaves Mamba layers (from `mamba.py`) with attention layers.
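
The interleaving itself is simple to picture; here is a minimal sketch of the idea, with a hypothetical helper and a made-up attention-to-Mamba ratio (the actual ratio and layer classes live in `jamba.py`):

```python
import torch.nn as nn

def build_interleaved_layers(n_layers, make_mamba_layer, make_attention_layer, attn_every=4):
    # Mostly Mamba layers, with an attention layer every `attn_every` layers.
    layers = []
    for i in range(n_layers):
        if (i + 1) % attn_every == 0:
            layers.append(make_attention_layer())
        else:
            layers.append(make_mamba_layer())
    return nn.ModuleList(layers)

# Toy usage with placeholder modules (the real layer classes come from jamba.py):
layers = build_interleaved_layers(8, make_mamba_layer=nn.Identity, make_attention_layer=nn.Identity)
```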

This is the structure of the modules found in `jamba.py`:

<p align="center">
    <img src="assets/jamba_structure.jpg" width="737" height="429'" alt="mamba structure"/>
</p>

<p align="center">
    <img src="assets/jamba_modules.jpg" width="602" height="343" alt="mamba structure"/>
</p>

The API is the same as for the `Mamba` and `MambaLM` models.
You can load a pretrained Jamba model like so:

```python
from jamba_lm import from_pretrained
from transformers import AutoTokenizer

model = from_pretrained('TechxGenus/Mini-Jamba').to("cuda")
tokenizer = AutoTokenizer.from_pretrained('TechxGenus/Mini-Jamba')

output = model.generate(tokenizer, "def min(arr):")
```

## Examples
There are two basic examples available:
- `example_llm.ipynb`: load a Mamba model with pretrained weights (from 130M to 2.8B, from HuggingFace)
- `example_e2e_training.ipynb`: an end-to-end training example where a Mamba model is employed as a world model for a simple 3x3 grid game (training is not complete; the model should be larger).

If you want a full training example (like in llama2.c), you can check the [othello_mamba repo](https://github.com/alxndrTL/othello_mamba) I've made. With that repo, you can train a Mamba or a Jamba model from scratch, use `bfloat16`, easily swap it for a Transformer, bring your own data, etc.

___
## Performance
This section provides a more comprehensive performance comparison between `mamba.py` and the official Mamba implementation.
Overall, as the first graph of this file shows, both have approximately the same asymptotic performance with respect to the sequence length. You can think of `mamba.py` as a regular Transformer implementation, while the official Mamba implementation is more like FlashAttention v1. Both have their own advantages.

That being said, do the two implementations have the same asymptotic performance with respect to the other parameters?

##### `d_model` asymptotic performance
<p align="center">
    <img src="assets/training_vs_d_model.png" alt="a python and a mamba" 
    width="800" height="413" alt="python mamba"/>
</p>

We can see that both implementations behave the same as we increase `d_model`: the gap between the two stays roughly the same (`mamba.py` is roughly 2x slower overall).

##### `d_state` asymptotic performance
<p align="center">
    <img src="assets/training_vs_d_state.png" alt="a python and a mamba" 
    width="800" height="413" alt="python mamba"/>
</p>

This graph is important. We see that here, the asymptotic performance is not the same as we increase `d_state`. As a reminder, `d_state`, or $N$ in the paper, is the state expansion factor: each channel of the input is expanded into $N$ channels of the hidden state.
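
As a small illustration of what that expansion means for the tensors involved (shapes follow the usual convention where the block's inner width is `ED = expand * d_model`; this is a sketch, not code from the repo):

```python
import torch

B, L, d_model, expand, d_state = 1, 8, 16, 2, 16
ED = expand * d_model                 # inner width of the Mamba block
x = torch.randn(B, L, ED)             # input to the selective scan: one value per channel
h = torch.zeros(B, ED, d_state)       # hidden state: N = d_state values per channel
print(x.shape, h.shape)               # torch.Size([1, 8, 32]) torch.Size([1, 32, 16])
```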

<i>Note: the CUDA version doesn't seem to be affected by increasing `d_state`. This is because the benchmark was done with a batch size of 1: the GPU was not at full capacity, so the impact of an increased `d_state` isn't visible. The same happens if you have a small model or a small input length. See [this issue](https://github.com/alxndrTL/mamba.py/issues/8).</i>

Does it matter in practice? As of now, all the pretrained Mamba models (up to 2.8B parameters) use `d_state=16`, so this difference in scaling with `d_state` doesn't matter in that case. And since `d_state` is not something that is supposed to grow (contrary to the sequence length or `d_model`), this isn't a catastrophic result, but it is something to consider.

However, it is interesting to relate this observation to the claim made by Albert Gu and Tri Dao in the [Mamba paper](https://arxiv.org/abs/2312.00752): <i>The main idea is to leverage properties of modern accelerators (GPUs) to <b>materialize the state $h$ only in more efficient levels of the memory hierarchy.</b></i>
They also describe (Appendix D) the main data movements of their selective scan: by working mainly in SRAM, they can reduce the memory reads/writes by a factor of $O(N)$. This explains the different asymptotic behaviors we see here.

With `d_state=16` (as in `state-spaces/mamba-2.8b-slimpj`), the gap between the two is relatively small, but with `d_state=64` (currently not used in any model), the gap widens (note the OOM on the second graph).

<p align="center">
    <img src="assets/training_vs_seqlen_d_state_var.png" alt="a python and a mamba" 
    width="1152" height="240" alt="python mamba"/>
</p>

All the previous graphs were computed with a batch size of 1, on an A100 80GB.
They measure both the forward and backward pass of a single Mamba block.

The previous analysis showed the importance of kernel fusion, which reduces memory accesses by a factor of $O(N)$ and makes the whole process faster.

But memory requirements should also be considered: the official Mamba implementation uses <b>recomputation</b> in the backward pass. Rather than keeping in memory the activations computed during the forward pass, it simply recomputes them in the backward pass, when needed. This greatly reduces the memory requirements of the Mamba model during training. This is not implemented in this repo.
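
To get a feel for the numbers, here is a rough back-of-the-envelope sketch of the scan activations that `mamba.py` has to keep around. The assumptions are mine (fp32 activations, the parallel scan materializing the full `(B, L, ED, N)` state, `ED = expand * d_model` with `expand=2`); it is an estimate, not a measurement.

```python
def scan_state_bytes(B, L, d_model, d_state, expand=2, bytes_per_el=4):
    # Memory for the materialized hidden states of one Mamba block (fp32 assumed).
    ED = expand * d_model
    return B * L * ED * d_state * bytes_per_el

# Hypothetical example: batch 1, sequence length 2048, d_model=768.
print(scan_state_bytes(1, 2048, 768, d_state=16) / 2**20, "MiB")  # 192.0 MiB
print(scan_state_bytes(1, 2048, 768, d_state=64) / 2**20, "MiB")  # 768.0 MiB, 4x more
```

Under these assumptions, recomputation lets the official kernel avoid storing that tensor altogether, which is the memory saving described above.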

Hence, this repo implements one of the three techniques mentioned in the Mamba paper that form the so-called "hardware-aware selective scan": the parallel scan.
We saw how kernel fusion impacts speed, while recomputation impacts memory requirements.

___
## Sources and where to learn more
- the [Mamba paper](https://arxiv.org/abs/2312.00752): describes the Mamba architecture as implemented in this repo, which allows modeling sequences in linear time.
- the [Mamba implementation](https://github.com/state-spaces/mamba), which is written in PyTorch but uses a parallel scan written in CUDA. This is the fastest version.
- [a minimal PyTorch implementation of Mamba](https://github.com/johnma2006/mamba-minimal), which implements the scan operation as a sequential loop (its performance is a bit worse than the 'sequential' line in the first graph). That code closely follows [this file](https://github.com/state-spaces/mamba/blob/da2626b5a5f347a8e844ac5e96a2cbcde3c34abb/mamba_ssm/modules/mamba_simple.py) from the official Mamba implementation, but replaces the CUDA convolution with `torch.nn.Conv1d` and the selective scan written in CUDA with a sequential loop. The code of this repo follows the structure of these two files.
- [Prefix Sums and Their Applications](https://www.cs.cmu.edu/~guyb/papers/Ble93.pdf), by Guy E. Blelloch (1993).
- [Parallelizing Linear Recurrent Neural Nets Over Sequence Length](https://arxiv.org/abs/1709.04057): applies a parallel scan over the sequence in order to get rid of the sequential for-loop.
- x.com/fchollet: original pscan implementation.

## TODOs
- pscan implementation using [ThunderKittens](https://hazyresearch.stanford.edu/blog/2024-05-12-quick-tk)?
- following the performance update, update the perf graphs
- plot the training memory consumption of the three different Mamba implementations (official, naive, mamba.py)
- ~~Jamba ? inference and/or fine-tuning ?~~
- docs
- ~~more tests with an increased `d_model` (add a Performances section)~~
- ~~a step function, used for (auto-regressive) inference.~~
- ~~a training function, similar to [llama2.c](https://github.com/karpathy/llama2.c)~~

perfs related:
- ~~unfold the for-loops in `pscan.py` to achieve better performance (see [François Fleuret's pscan](https://fleuret.org/cgi-bin/gitweb/gitweb.cgi?p=mygptrnn.git;a=blob;f=pscan.py;h=0bb0d145bf9c6c82115956c8ce1e6a063e56e747;hb=HEAD)) (although this sacrifices a bit of readability)~~
- ~~write a reverse parallel scan specifically for the backward pass (for now, we have to flip the array before and after the scan)~~
- reduce the memory usage somehow (at the cost of speed if needed)
- use `torch.compile()`. As far as I have tested, it doesn't work for now. It seems it isn't happy with the custom PScan autograd function. Needs investigation. <b>(see [PR#1](https://github.com/alxndrTL/mamba.py/pull/1))</b>

            
