mblm

Name: mblm
Version: 0.3.0
Summary: Multiscale Byte Language Model
Upload time: 2025-02-11 13:30:37
Requires Python: >=3.10
Keywords: byte language models, hierarchical architectures, language models, long context window, machine learning

# Multiscale Byte Language Model

The Multiscale Byte Language Model is a model-agnostic, hierarchical architecture for causal byte-level language modeling that scales to million-length sequences.

<p align="center">
    <img src="https://raw.githubusercontent.com/ai4sd/multiscale-byte-lm/refs/heads/main/assets/mblm.png" alt="mblm-architecture" width="600"/>
</p>

## Install

MBLM is tested against Python versions 3.10, 3.11, 3.12 and 3.13.

Install from PyPI:

```
pip install mblm
```

For `uv`:

```
uv add mblm
```

### Using Torch and Mamba

You will need to **install a recent PyTorch version manually**. We use `>=2.6.0`. It is best to do this after installing the package since some sub-dependencies might install their own (CPU) PyTorch version.

```
pip install 'torch>=2.6.0' --index-url https://download.pytorch.org/whl/cu124
```

For `uv`:

```
uv pip install 'torch>=2.6.0' --index-url https://download.pytorch.org/whl/cu124
```

Finally, in order to use the efficient [Mamba-SSM](https://github.com/state-spaces/mamba), follow their instructions on the homepage. You'll need Linux and a GPU available during installation.

```
pip install "mamba-ssm>=2.2.2" "causal-conv1d>=1.4.0" --no-build-isolation
```

For `uv`:

```
uv pip install "mamba-ssm>=2.2.2" "causal-conv1d>=1.4.0" --no-build-isolation
```

If `mamba-ssm` is not available, we fall back to using `mambapy`, which is written in pure PyTorch.
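
Conceptually, this fallback is just a guarded import. Below is a minimal sketch of the idea (not the package's actual code), assuming the usual `mamba_ssm` and `mambapy` entry points:

```py
# Sketch only: prefer the CUDA-optimized mamba-ssm kernels and fall back to
# the pure-PyTorch mambapy implementation when they are unavailable.
try:
    from mamba_ssm import Mamba2  # requires Linux and a GPU at install time

    HAS_MAMBA_SSM = True
except ImportError:
    from mambapy.mamba import Mamba  # pure PyTorch, runs anywhere

    HAS_MAMBA_SSM = False
```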

## Quickstart

### Using a built-in stage block

MBLM can be used with the built-in Transformer decoder or Mamba blocks. The model below is a 2D MBLM with a global Mamba block and a local Transformer block.

```py
import torch

from mblm import (
    MBLM,
    MambaBlock,
    MBLMModelConfig,
    MBLMReturnType,
    TransformerBlock,
)

mblm = MBLM(
    MBLMModelConfig(
        num_tokens=257,
        hidden_dims=[1024, 1024],
        seq_lens=[1024, 8],
        num_layers=[5, 5],
        pad_token_id=256,
        train_checkpoint_chunks=None,
        block=[
            MambaBlock(
                d_state=128,
                d_conv=4,
                expand=2,
                headdim=64,
                pos_emb_type=None,
            ),
            TransformerBlock(
                attn_head_dims=64,
                attn_num_heads=16,
                attn_use_rot_embs=True,
                use_flash_attn=True,
                pos_emb_type="fixed",
            ),
        ],
    )
)

x = torch.randint(0, 257, (1, 12)).long()  # random token ids in [0, num_tokens)

# Choose between any of the return types
logits = mblm.forward(x, return_type=MBLMReturnType.LOGITS)
loss = mblm.forward(x, return_type=MBLMReturnType.LOSS)
loss, logits = mblm.forward(x, return_type=MBLMReturnType.LOSS_LOGITS)

assert logits.shape == (1, 12, 257)
assert loss.ndim == 0
```

Alternatively, you can read configuration from a YAML string (or file):

```py
import torch
import yaml

from mblm import MBLM, MBLMModelConfig, MBLMReturnType

yml_model_config = """
num_tokens: 257
hidden_dims: [1024, 1024]
seq_lens: [1024, 8]
num_layers: [5, 5]
pad_token_id: 256
train_checkpoint_chunks: null
block:
    - d_state: 128
      d_conv: 4
      expand: 2
      headdim: 64
      pos_emb_type: null
    - attn_head_dims: 64
      attn_num_heads: 16
      attn_use_rot_embs: true
      use_flash_attn: true
      pos_emb_type: fixed
"""

parsed_config = yaml.safe_load(yml_model_config)
mblm = MBLM(MBLMModelConfig.model_validate(parsed_config))
x = torch.randint(0, 257, (1, 12)).long()
mblm.forward(x, return_type=MBLMReturnType.LOSS)
```

### Custom stage blocks

You can define custom stage blocks for MBLM as follows. A stage block must provide a `block_type` field as well as a `to_model` method with the signature below that returns a `torch.nn.Module`. Beyond that, specify whatever other parameters you need. Note that the default blocks (Transformer and Mamba) are already registered.

```py
import torch

from mblm import MBLM, MBLMModelConfig, MBLMReturnType, TransformerBlock
from mblm.model.block import StageBlock

# Define any custom model
class LSTM(torch.nn.Module):
    def __init__(self, lstm: torch.nn.LSTM):
        super().__init__()
        self.lstm = lstm

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Wrap the LSTM forward to extract the output
        out, _ = self.lstm(input_ids)
        return out

# Add a block config and inherit from StageBlock
class LSTMBlock(StageBlock):
    block_type: str = "lstm"

    # Add whatever is needed
    dropout: float

    def to_model(self, model_dim: int, num_layers: int) -> torch.nn.Module:
        return LSTM(
            torch.nn.LSTM(
                input_size=model_dim,
                hidden_size=model_dim,
                batch_first=True,
                dropout=self.dropout,
                num_layers=num_layers,
            )
        )

mblm = MBLM(
    MBLMModelConfig(
        num_tokens=257,
        hidden_dims=[1024, 1024],
        seq_lens=[1024, 8],
        num_layers=[5, 5],
        pad_token_id=256,
        train_checkpoint_chunks=None,
        block=[
            LSTMBlock(
                dropout=0.1,
                pos_emb_type=None,
            ),
            TransformerBlock(
                attn_head_dims=64,
                attn_num_heads=16,
                attn_use_rot_embs=True,
                use_flash_attn=True,
                pos_emb_type="fixed",
            ),
        ],
    )
)

x = torch.randint(0, 257, (1, 12)).long()
mblm.forward(x, return_type=MBLMReturnType.LOSS)
```

If you want to parse a YAML config to a custom block, **register the block** before creating the model:

```py
import torch
import yaml

from mblm import MBLM, MBLMModelConfig, MBLMReturnType
from mblm.model.block import StageBlock
from mblm.model.config import block_registry  # Add this!

# Define any custom model
class MyLSTM(torch.nn.Module):
    def __init__(self, lstm: torch.nn.LSTM):
        super().__init__()
        self.lstm = lstm

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Wrap the LSTM forward to extract the output
        out, _ = self.lstm(input_ids)
        return out

# Add a block config and inherit from StageBlock
@block_registry.register()
class LSTMBlockConfig(StageBlock):
    block_type: str = "lstm"

    # Add whatever is needed
    dropout: float
    my_property: int

    def to_model(self, model_dim: int, num_layers: int) -> torch.nn.Module:
        return MyLSTM(
            torch.nn.LSTM(
                input_size=model_dim,
                hidden_size=model_dim,
                batch_first=True,
                dropout=self.dropout,
                num_layers=num_layers,
            )
        )

yml_model_config = """
num_tokens: 257
hidden_dims: [1024, 1024]
seq_lens: [1024, 8]
num_layers: [5, 5]
pad_token_id: 256
train_checkpoint_chunks: null
block:
    - dropout: 0.1
      my_property: 1
      pos_emb_type: null
    - attn_head_dims: 64
      attn_num_heads: 16
      attn_use_rot_embs: true
      use_flash_attn: true
      pos_emb_type: fixed
"""

block_registry.register(LSTMBlockConfig)  # Add this!

parsed_config = yaml.safe_load(yml_model_config)
mblm = MBLM(MBLMModelConfig.model_validate(parsed_config))
x = torch.randint(0, 257, (1, 12)).long()
mblm.forward(x, return_type=MBLMReturnType.LOSS)
```

### Custom datasets

If you want to use the MBLM trainer with [torchrun](https://pytorch.org/docs/stable/elastic/run.html) and a custom dataset, you will need to implement a few special methods. Here is an end-to-end example in which you launch training yourself:

```py
# Filename: train_my_mblm.py

import torch
from typing_extensions import Unpack

from mblm import MambaBlock, TransformerBlock
from mblm.data.datasets import DistributedDataset, DistributedDatasetConfig
from mblm.data.types import BatchWithLossMask, ModelMode
from mblm.train.core.config import CoreTrainConfig
from mblm.train.mblm import (
    TrainEntryConfig,
    TrainMBLMIoConfig,
    TrainMBLMParams,
    dataset_registry,
    train_mblm,
)

# Register dataset with a unique ID
@dataset_registry.register("mydataset")
class MyDataset(DistributedDataset[BatchWithLossMask]):
    def __init__(
        self,
        mode: ModelMode,
        dataset_dir: str,
        **args: Unpack[DistributedDatasetConfig],
    ):
        # Dummy example - Get data from anywhere, e.g., the disk
        print(f"Reading dataset from {dataset_dir}")
        if mode == ModelMode.TRAIN:
            data = list(range(10_000))
        else:
            data = list(range(2_000))
        self._data = data
        super().__init__(
            data_size=len(data),
            is_sequential=True,  # We have a sequential dataset
            **args,
        )

    def get_sample(self, from_idx: int):
        """
        Tell the superclass how to get a single sample - here, a sequence of
        the specified length.
        """
        data = torch.tensor(self._data[from_idx : from_idx + self.seq_len])
        return torch.ones_like(data), data

    @staticmethod
    def from_train_entry_config(
        config: TrainEntryConfig,
        mode: ModelMode,
        worker_id: int,
        num_workers: int,
    ) -> DistributedDataset[BatchWithLossMask]:
        """
        How to parse a training config to a dataset.
        """
        return MyDataset(
            dataset_dir=config.io.dataset_dir,
            mode=mode,
            seq_len=config.params.input_seq_len,
            num_workers=num_workers,
            worker_id=worker_id,
        )

    @staticmethod
    def supports_test_mode() -> bool:
        """
        Whether or not this dataset supports a test mode. Some datasets might not
        expose the answers in their test set so we cannot evaluate a model on it.
        Override if necessary
        """
        return True


config = TrainEntryConfig(
    io=TrainMBLMIoConfig(
        dataset_dir="data/datasets/my-dataset",
        dataset_id="mydataset",  # Must match the ID above
        name_model="my-model",
        output_dir="data/outputs",
        num_models_to_save=3,
        validate_amount=20,
        log_train_loss_amount=100,
    ),
    train=CoreTrainConfig(
        batch_size=1,
        target_elements=1000,
        target_elements_strategy="sequence",
        learning_rate=0.001,
        gradient_accumulate_every=4,
        gradient_clipping=1,
        shuffle_train=True,
        shuffle_eval=False,
    ),
    params=TrainMBLMParams(
        input_seq_len=128,
        num_tokens=257,
        hidden_dims=[512, 512],
        seq_lens=[16, 8],
        num_layers=[5, 5],
        pad_token_id=256,
        train_checkpoint_chunks=None,
        block=[
            MambaBlock(
                d_state=128,
                d_conv=4,
                expand=2,
                headdim=64,
                pos_emb_type=None,
            ),
            TransformerBlock(
                attn_head_dims=64,
                attn_num_heads=16,
                attn_use_rot_embs=True,
                use_flash_attn=True,
                pos_emb_type="fixed",
            ),
        ],
    ),
)

if __name__ == "__main__":
    train_mblm(config)

```

Then, run the above file with:

```sh
OMP_NUM_THREADS=1 uv run torchrun --standalone \
    --nproc_per_node=gpu train_my_mblm.py
```

Generally, training is started from a config file in YAML format. The above is just to give an idea of how everything works together.
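
As an illustration, a YAML training config mirroring the Python `TrainEntryConfig` above might look as follows (field names are taken from the example above; treat this as a sketch rather than a verified config):

```yaml
io:
  dataset_dir: data/datasets/my-dataset
  dataset_id: mydataset # must match the registered dataset ID
  name_model: my-model
  output_dir: data/outputs
  num_models_to_save: 3
  validate_amount: 20
  log_train_loss_amount: 100
train:
  batch_size: 1
  target_elements: 1000
  target_elements_strategy: sequence
  learning_rate: 0.001
  gradient_accumulate_every: 4
  gradient_clipping: 1
  shuffle_train: true
  shuffle_eval: false
params:
  input_seq_len: 128
  num_tokens: 257
  hidden_dims: [512, 512]
  seq_lens: [16, 8]
  num_layers: [5, 5]
  pad_token_id: 256
  train_checkpoint_chunks: null
  block:
    - d_state: 128
      d_conv: 4
      expand: 2
      headdim: 64
      pos_emb_type: null
    - attn_head_dims: 64
      attn_num_heads: 16
      attn_use_rot_embs: true
      use_flash_attn: true
      pos_emb_type: fixed
```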

Check the [example configs](config) - they should look very similar to the config above - and how we launch training (with `scripts/train_mblm.py`). With any config, simply run:

```bash
bash scripts/train_mblm.py -c <your-config>
```

This will launch [torchrun](https://pytorch.org/docs/stable/elastic/run.html) with all the necessary configuration.

Alternatively, you can always subclass the core trainer and do things your way. There are many examples in the source directory and the end-to-end tests.

## Streaming responses

As a byte language model, MBLM generates integer representations of bytes. We can hook into the generation process and stream all generated bytes directly to a [file object](https://docs.python.org/3/glossary.html#term-file-object), such as `sys.stdout` (for debugging or interactive sessions) or any `io.TextIO`/`io.BinaryIO` stream.

Let's assume our model is conditioned to generate the following text string:

```
👉🏽 bytes generated by a 🤖
```

In UTF-8 bytes, this corresponds to:

```sh
# hex representation
f0 9f 91 89 f0 9f 8f bd 20 62 79 74 65 73 20 67 65 6e 65 72 61 74 65 64 20 62 79 20 61 20 f0 9f a4 96

# integer representation
240 159 145 137 240 159 143 189 32 98 121 116 101 115 32 103 101 110 101 114 97 116 101 100 32 98 121 32 97 32 240 159 164 150
```
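
Both representations can be reproduced with plain Python:

```py
text = "👉🏽 bytes generated by a 🤖"
raw = text.encode("utf8")

print(raw.hex(" "))  # f0 9f 91 89 f0 9f 8f bd 20 62 79 ...
print(*raw)          # 240 159 145 137 240 159 143 189 32 98 ...
```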

Internally, these integers are what the model generates. However, maybe you have trained the model to output a different modality, such as a PNG file or an MP4 video - the possibilities are endless.

For simplicity, let's assume we have some `root_dir` and a function `create_mblm` to create an MBLM module.
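
For instance, `create_mblm` could simply wrap the Quickstart construction. This is a hypothetical helper, shown only so the snippets below are self-contained:

```py
from mblm import MBLM, MBLMModelConfig, MambaBlock, TransformerBlock


def create_mblm() -> MBLM:
    # Same 2D configuration as in the Quickstart
    return MBLM(
        MBLMModelConfig(
            num_tokens=257,
            hidden_dims=[1024, 1024],
            seq_lens=[1024, 8],
            num_layers=[5, 5],
            pad_token_id=256,
            train_checkpoint_chunks=None,
            block=[
                MambaBlock(d_state=128, d_conv=4, expand=2, headdim=64, pos_emb_type=None),
                TransformerBlock(
                    attn_head_dims=64,
                    attn_num_heads=16,
                    attn_use_rot_embs=True,
                    use_flash_attn=True,
                    pos_emb_type="fixed",
                ),
            ],
        )
    )
```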

### Streaming to a file

We can **stream the response directly to a file** - no need to specify the encoding. All we need to do is open a file in binary mode. In this example, the output corresponds to UTF-8.

```py
from pathlib import Path

from mblm.utils.stream import ByteStreamer

mblm = create_mblm(...)

# any modality that the model learns to output - .png, .txt, .bin, etc.
file_path = Path(root_dir) / "output.txt"

# open in binary mode and write raw bytes
with Path(file_path).open("wb") as file:
    with ByteStreamer(stream=file) as streamer:
        mblm.generate(streamer)

# we can open the file and interpret its content as UTF-8
with Path(file_path).open("r", encoding="utf8") as file:
    assert file.read() == "👉🏽 bytes generated by a 🤖"
```

### Streaming to stdout

For development and interactive sessions, we can **stream the response directly to the terminal**. We can either decode the bytes from UTF-8 on the fly or, when the bytes represent something other than text, stream the raw integer bytes to the terminal.

```py
import sys

from mblm.utils.stream import ByteStreamer

mblm = create_mblm(...)

# approach 1: stream to stdout and decode on the fly
with ByteStreamer(stream=sys.stdout, decode_utf8=True) as streamer:
    mblm.generate(streamer)

# streams the decoded bytes to the terminal:
# 👉🏽 bytes generated by a 🤖

# approach 2: stream raw output to stdout
with ByteStreamer(stream=sys.stdout) as streamer:
    mblm.generate(streamer)

# streams the bytes as integers to the terminal:
# 240 159 145 ... 159 164 150
```

Our approach of decoding from UTF-8 uses the [`replace` strategy](https://docs.python.org/3/library/codecs.html#error-handlers) for dealing with malformed data, which enables continuous decoding even for partially corrupted sequences. Whenever `decode_utf8` is `False`, raw bytes are streamed and you'll need to deal with corrupted UTF-8 sequences on your own.
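
The effect of the `replace` error handler can be seen with the standard library alone:

```py
# The first three of the four bytes encoding 👉 - an incomplete UTF-8 sequence
partial = bytes([0xF0, 0x9F, 0x91])

print(partial.decode("utf8", errors="replace"))  # prints replacement character(s) (U+FFFD)
# partial.decode("utf8")  # the default "strict" handler would raise UnicodeDecodeError
```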

## Local development setup

We use `uv` for packaging and dependency management. Before proceeding, install a recent version (>= `0.5`) via the instructions on [the homepage](https://docs.astral.sh/uv/getting-started/installation/).

### Install dependencies

- With CUDA: `make install_cuda`
- CPU only (e.g., macOS): `make install_cpu`

As you may have noticed, there are two SSM/Mamba dependencies:

- `mambapy`, defined in `pyproject.toml`
- `mamba-ssm` (with `causal-conv1d`), defined in `Makefile`

Because the official Mamba implementation `mamba-ssm` requires a Linux machine and a GPU available during installation, we shim the dependencies: `mambapy` is used as a fallback on all unsupported platforms or when `mamba-ssm` is not installed. Because `mamba-ssm` is fragile to install, it needs to be installed manually:

```sh
make install_mamba
```

For experiments, we want to use the new Mamba 2 block from `mamba-ssm`. If importing this module fails, we fall back to a Mamba 1 block from `mambapy`, which is written in pure PyTorch.

## Running scripts

- Project-related tasks (e.g., installing dependencies, running tests) are defined in the [Makefile](Makefile)

## Pre-Commit Hooks

Before every commit, we lint the _staged_ Python and Jupyter Notebook files and check whether they are formatted correctly. Doing this locally speeds up development because one does not have to wait for the CI to catch issues. Errors from these checks are not fixed automatically; instead, you will have to fix the files yourself before committing. You may bypass the hooks with `git commit -m <message> --no-verify`. However, the CI will likely fail in this case.

All Pre-commit hooks can be run manually as well:

- `pre-commit run lint`
- `pre-commit run check-format`

Note that:

- The `lint` command is similar to the `make lint` command, but the `make` command operates on _all_ files in the project and not just the staged files
- While `check-format` simply _checks_ the format, `make format` will _actually_ format the files

## Citation

TBD.

            
