# dictionary-learning

* **Version**: 0.1.0
* **Summary**: Dictionary learning via sparse autoencoders on neural network activations
* **Author**: Samuel Marks
* **License**: MIT
* **Requires Python**: >=3.10, <4.0
* **Keywords**: deep-learning, sparse-autoencoders, mechanistic-interpretability, pytorch
* **Repository**: https://github.com/saprmarks/dictionary_learning
* **Uploaded**: 2025-02-12 06:55:32
This is a repository for doing dictionary learning via sparse autoencoders on neural network activations. It was developed by Samuel Marks, Adam Karvonen, and Aaron Mueller.

For accessing, saving, and intervening on NN activations, we use the [`nnsight`](http://nnsight.net/) package; as of March 2024, `nnsight` is under active development and may undergo breaking changes. That said, `nnsight` is easy to use and quick to learn; if you plan to modify this repo, then we recommend going through the main `nnsight` demo [here](https://nnsight.net/notebooks/tutorials/walkthrough/).

Some dictionaries trained using this repository (and associated training checkpoints) can be accessed at [https://baulab.us/u/smarks/autoencoders/](https://baulab.us/u/smarks/autoencoders/). See below for more information about these dictionaries. SAEs trained with `dictionary_learning` can be evaluated with [SAE Bench](https://www.neuronpedia.org/sae-bench/info) using a convenient [evaluation script](https://github.com/adamkarvonen/SAEBench/tree/main/sae_bench/custom_saes).

# Set-up

To get started, install the package from PyPI (or clone the [repository](https://github.com/saprmarks/dictionary_learning) and install its requirements):
```bash
pip install dictionary-learning
```

We also provide a [demonstration](https://github.com/adamkarvonen/dictionary_learning_demo), which trains and evaluates 2 SAEs in ~30 minutes before plotting the results.

# Using trained dictionaries

You can load and use a pretrained dictionary as follows:
```python
import torch

from dictionary_learning import AutoEncoder

activation_dim = 512  # dimension of the activations the dictionary was trained on

# load autoencoder
ae = AutoEncoder.from_pretrained("path/to/dictionary/weights")

# get NN activations using your preferred method: hooks, transformer_lens, nnsight, etc. ...
# for now we'll just use random activations
activations = torch.randn(64, activation_dim)
features = ae.encode(activations) # get features from activations
reconstructed_activations = ae.decode(features)

# you can also just get the reconstruction ...
reconstructed_activations = ae(activations)
# ... or get the features and reconstruction at the same time
reconstructed_activations, features = ae(activations, output_features=True)
```
Dictionaries have `encode`, `decode`, and `forward` methods -- see `dictionary.py`.
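
For reference, here is a minimal sketch of what `encode`, `decode`, and `forward` compute for a standard sparse autoencoder in the style of Bricken et al., 2023. The class name and exact parameterization below are illustrative assumptions; `dictionary.py` is the source of truth for the architectures in this repo.
```python
import torch
import torch.nn as nn

class StandardSAESketch(nn.Module):
    """Illustrative standard SAE: ReLU encoder, linear decoder, shared decoder bias."""

    def __init__(self, activation_dim: int, dict_size: int):
        super().__init__()
        self.encoder = nn.Linear(activation_dim, dict_size)
        self.decoder = nn.Linear(dict_size, activation_dim, bias=False)
        self.b_dec = nn.Parameter(torch.zeros(activation_dim))

    def encode(self, x):
        # sparse, non-negative feature activations
        return torch.relu(self.encoder(x - self.b_dec))

    def decode(self, f):
        # reconstruction = sparse combination of dictionary vectors plus a bias
        return self.decoder(f) + self.b_dec

    def forward(self, x, output_features=False):
        f = self.encode(x)
        x_hat = self.decode(f)
        return (x_hat, f) if output_features else x_hat
```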

## Loading JumpReLU SAEs from `sae_lens`
We have limited support for automatically converting SAEs from `sae_lens`; currently this is only supported for JumpReLU SAEs, but we may expand support if users are interested.
```python
from dictionary_learning import JumpReluAutoEncoder

ae = JumpReluAutoEncoder.from_pretrained(
    load_from_sae_lens=True,
    release="your_release_name",
    sae_id="your_sae_id"
)
```
The arguments should match those used in the `SAE.from_pretrained` call you would use to load an SAE in `sae_lens`. For this to work, `sae_lens` must be installed in your environment.


# Training your own dictionaries

To train your own dictionaries, you'll need to understand a bit about our infrastructure. (See below for downloading our dictionaries.)

This repository supports different sparse autoencoder architectures, including standard `AutoEncoder` ([Bricken et al., 2023](https://transformer-circuits.pub/2023/monosemantic-features/index.html)), `GatedAutoEncoder` ([Rajamanoharan et al., 2024](https://arxiv.org/abs/2404.16014)), and `AutoEncoderTopK` ([Gao et al., 2024](https://arxiv.org/abs/2406.04093)).
Each sparse autoencoder architecture is implemented with a corresponding trainer that implements the training protocol described by the authors.
This allows us to implement different training protocols (e.g. p-annealing) for different architectures without a lot of overhead.
Specifically, this repository supports the following trainers:
- [`StandardTrainer`](trainers/standard.py): Implements a training scheme similar to that of [Bricken et al., 2023](https://transformer-circuits.pub/2023/monosemantic-features/index.html#appendix-autoencoder).
- [`GatedSAETrainer`](trainers/gdm.py): Implements the training scheme for Gated SAEs described in [Rajamanoharan et al., 2024](https://arxiv.org/abs/2404.16014).
- [`TopKSAETrainer`](trainers/top_k.py): Implements the training scheme for Top-K SAEs described in [Gao et al., 2024](https://arxiv.org/abs/2406.04093).
- [`BatchTopKSAETrainer`](trainers/batch_top_k.py): Implements the training scheme for Batch Top-K SAEs described in [Bussmann et al., 2024](https://arxiv.org/abs/2412.06410).
- [`JumpReluTrainer`](trainers/jumprelu.py): Implements the training scheme for JumpReLU SAEs described in [Rajamanoharan et al., 2024](https://arxiv.org/abs/2407.14435).
- [`PAnnealTrainer`](trainers/p_anneal.py): Extends the `StandardTrainer` by providing the option to anneal the sparsity parameter p.
- [`GatedAnnealTrainer`](trainers/gated_anneal.py): Extends the `GatedSAETrainer` by providing the option for p-annealing, similar to `PAnnealTrainer`.

Another key object is the `ActivationBuffer`, defined in `buffer.py`. Following [Neel Nanda's approach](https://www.lesswrong.com/posts/fKuugaxt2XLTkASkk/open-source-replication-and-commentary-on-anthropic-s), an `ActivationBuffer` maintains a buffer of NN activations, which it outputs in batches.

An `ActivationBuffer` is initialized from an `nnsight` `LanguageModel` object, a submodule (e.g. an MLP), and a generator which yields strings (the text data). It processes a large number of strings, up to some capacity, and saves the submodule's activations. You sample batches from it, and when it is half-depleted, it refreshes itself with new text data.

Here's an example of training a dictionary: we load a language model as an `nnsight` `LanguageModel` (this will work for any Huggingface model), specify a submodule, create an `ActivationBuffer`, and then train an autoencoder with `trainSAE`.
```python
from nnsight import LanguageModel
from dictionary_learning import ActivationBuffer, AutoEncoder
from dictionary_learning.trainers import StandardTrainer
from dictionary_learning.training import trainSAE

device = "cuda:0"
model_name = "EleutherAI/pythia-70m-deduped" # can be any Huggingface model

model = LanguageModel(
    model_name,
    device_map=device,
)
submodule = model.gpt_neox.layers[1].mlp # layer 1 MLP
activation_dim = 512 # output dimension of the MLP
dictionary_size = 16 * activation_dim

# data must be an iterator that outputs strings
data = iter(
    [
        "This is some example data",
        "In real life, for training a dictionary",
        "you would need much more data than this",
    ]
)
buffer = ActivationBuffer(
    data=data,
    model=model,
    submodule=submodule,
    d_submodule=activation_dim, # output dimension of the model component
    n_ctxs=3e4,  # you can set this higher or lower depending on your available memory
    device=device,
)  # buffer will yield batches of tensors of dimension = submodule's output dimension

trainer_cfg = {
    "trainer": StandardTrainer,
    "dict_class": AutoEncoder,
    "activation_dim": activation_dim,
    "dict_size": dictionary_size,
    "lr": 1e-3,
    "device": device,
}

# train the sparse autoencoder (SAE)
ae = trainSAE(
    data=buffer,  # you could also use another iterable of activations (e.g. a PyTorch dataloader) here instead of the buffer
    trainer_configs=[trainer_cfg],
)
```
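
Since `trainer_configs` is a list, you can also train several dictionaries in a single pass over the activation stream, e.g. to sweep a hyperparameter. A minimal sketch continuing from the example above (only the config keys shown there are used; how `trainSAE` returns or saves multiple results is not shown here):
```python
# sweep the learning rate by passing several configs at once;
# each config gets its own dictionary, trained on the same activations
trainer_cfgs = [
    {
        "trainer": StandardTrainer,
        "dict_class": AutoEncoder,
        "activation_dim": activation_dim,
        "dict_size": dictionary_size,
        "lr": lr,
        "device": device,
    }
    for lr in (1e-4, 3e-4, 1e-3)
]

trainSAE(
    data=buffer,
    trainer_configs=trainer_cfgs,
)
```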
Some technical notes on our training infrastructure and supported features:
* Training uses the `ConstrainedAdam` optimizer defined in `training.py`. This is a variant of Adam which supports constraining the `AutoEncoder`'s decoder weights to be norm 1 (a sketch of the idea appears after this list).
* Neuron resampling: if a `resample_steps` argument is passed to the Trainer, then dead neurons will periodically be resampled according to the procedure specified [here](https://transformer-circuits.pub/2023/monosemantic-features/index.html#appendix-autoencoder-resampling).
* Learning rate warmup: if a `warmup_steps` argument is passed to the Trainer, then a linear LR warmup is used at the start of training and, if doing neuron resampling, also after every time neurons are resampled.
* Sparsity penalty warmup: if a `sparsity_warmup_steps` is passed to the Trainer, then a linear warmup is applied to the sparsity penalty at the start of training.
* Learning rate decay: if a `decay_start` is passed to the Trainer, then a linear LR decay is used from `decay_start` to the end of training.
* If `normalize_activations` is True and passed to `trainSAE`, then the activations will be normalized to have unit mean squared norm. The autoencoder's weights will be scaled before saving, so the activations don't need to be scaled during inference. This is very helpful for hyperparameter transfer between different layers and models.
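
As noted in the first bullet above, the decoder-norm constraint can be sketched as follows (an illustration of the idea rather than the exact `ConstrainedAdam` in `training.py`): before each update, remove the gradient component parallel to each dictionary vector; after the update, renormalize the decoder columns to unit norm.
```python
import torch

class ConstrainedAdamSketch(torch.optim.Adam):
    """Adam variant keeping the columns of constrained parameters at unit norm (illustrative sketch)."""

    def __init__(self, params, constrained_params, lr):
        # constrained_params must also appear in `params` so that Adam updates them
        super().__init__(params, lr=lr)
        self.constrained_params = list(constrained_params)

    def step(self, closure=None):
        with torch.no_grad():
            for p in self.constrained_params:
                if p.grad is None:
                    continue
                normed = p / p.norm(dim=0, keepdim=True)
                # drop the gradient component parallel to each unit-norm column,
                # so the update moves tangentially to the norm constraint
                p.grad -= (p.grad * normed).sum(dim=0, keepdim=True) * normed
        loss = super().step(closure)
        with torch.no_grad():
            for p in self.constrained_params:
                # project back onto the constraint: renormalize columns to unit norm
                p /= p.norm(dim=0, keepdim=True)
        return loss
```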

If `submodule` is a model component where the activations are tuples (e.g. this is common when working with residual stream activations), then the buffer yields the first coordinate of the tuple.
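
For example, continuing from the training example above, you could pass a whole transformer block as the submodule to get residual stream activations; for Hugging Face GPT-NeoX models (like the Pythia models), a block's output is a tuple whose first element is the hidden state, which is what the buffer saves (this snippet is an illustrative sketch under that assumption):
```python
# a full transformer block outputs a tuple; the buffer keeps its first element,
# i.e. the residual-stream hidden states after this layer
submodule = model.gpt_neox.layers[1]   # residual stream after layer 1
buffer = ActivationBuffer(
    data=data,
    model=model,
    submodule=submodule,
    d_submodule=512,  # residual stream width of pythia-70m-deduped
    device=device,
)
```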

# Downloading our open-source dictionaries

To download our pretrained dictionaries automatically, run:

```bash
./pretrained_dictionary_downloader.sh
```
This will download dictionaries of all submodules (~2.5 GB) hosted on huggingface. Currently, we provide dictionaries from the `10_32768` training run. This set has dictionaries for MLP outputs, attention outputs, and residual streams (including embeddings) in all layers of EleutherAI's Pythia-70m-deduped model. These dictionaries were trained on 2B tokens from The Pile.

Let's explain the directory structure by example. After using the script above, you'll have a `dictionaries/pythia-70m-deduped/mlp_out_layer1/10_32768` directory corresponding to the layer 1 MLP dictionary from the `10_32768` set. This directory contains:
* `ae.pt`: the `state_dict` of the fully trained dictionary
* `config.json`: a json file which specifies the hyperparameters used to train the dictionary
* `checkpoints/`: a directory containing training checkpoints of the form `ae_step.pt` (only if you used the `--checkpoints` flag)
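
The downloaded weights can then be loaded with `from_pretrained`, as in the usage section above (a sketch; the path below assumes you ran the script from the repo root):
```python
from dictionary_learning import AutoEncoder

# load the layer 1 MLP dictionary from the downloaded `10_32768` set
ae = AutoEncoder.from_pretrained(
    "dictionaries/pythia-70m-deduped/mlp_out_layer1/10_32768/ae.pt"
)
```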

We've also previously released other dictionaries which can be found and downloaded [here](https://baulab.us/u/smarks/autoencoders/). 

## Statistics for our dictionaries

We report the following statistics for our `10_32768` dictionaries. These were measured using the code in `evaluation.py`; an illustrative sketch follows the list below.
* **MSE loss**: average squared L2 distance between an activation and the autoencoder's reconstruction of it
* **L1 loss**: a measure of the autoencoder's sparsity
* **L0**: average number of features active on a random token
* **Percentage of neurons alive**: fraction of the dictionary features which are active on at least one token out of 8192 random tokens
* **CE diff**: difference between the usual cross-entropy loss of the model for next token prediction and the cross entropy when replacing activations with our dictionary's reconstruction
* **Percentage of CE loss recovered**: when replacing the activation with the dictionary's reconstruction, the percentage of the model's cross-entropy loss on next token prediction that is recovered (relative to the baseline of zero ablating the activation)
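
Here is a minimal sketch of how a few of these statistics can be computed from a batch of activations. It is illustrative only; `evaluation.py` is the authoritative implementation (note that the tables below report variance explained rather than raw MSE).
```python
import torch

@torch.no_grad()
def eval_sketch(ae, activations):
    """Reconstruction/sparsity statistics for a batch of activations of shape (batch, activation_dim)."""
    features = ae.encode(activations)
    reconstructions = ae.decode(features)

    mse_loss = (activations - reconstructions).pow(2).sum(dim=-1).mean()
    l1_loss = features.abs().sum(dim=-1).mean()
    l0 = (features != 0).float().sum(dim=-1).mean()          # avg. features active per token
    frac_alive = (features != 0).any(dim=0).float().mean()   # fraction of features active at least once
    frac_variance_explained = 1 - (activations - reconstructions).pow(2).sum() / (
        activations - activations.mean(dim=0)
    ).pow(2).sum()
    return mse_loss, l1_loss, l0, frac_alive, frac_variance_explained
```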

### Attention output dictionaries

| Layer | Variance Explained (%) | L1 | L0  | % Alive | CE Diff | % CE Recovered |
|-------|------------------------|----|-----|---------|---------|----------------|
| 0     | 92                     | 8  | 128 | 17      | 0.02    | 99             |
| 1     | 87                     | 9  | 127 | 17      | 0.03    | 94             |
| 2     | 90                     | 19 | 215 | 12      | 0.05    | 93             |
| 3     | 89                     | 12 | 169 | 13      | 0.03    | 93             |
| 4     | 83                     | 8  | 132 | 14      | 0.01    | 95             |
| 5     | 89                     | 11 | 144 | 20      | 0.02    | 93             |


### MLP output dictionaries

| Layer  | Variance Explained (%) | L1 | L0  | % Alive | CE Diff | % CE Recovered |
|--------|------------------------|----|-----|---------|---------|----------------|
|     0  | 97                     | 5  | 5   | 40      | 0.10    | 99             |
|     1  | 85                     | 8  | 69  | 44      | 0.06    | 95             |
|     2  | 99                     | 12 | 88  | 31      | 0.11    | 88             |
|     3  | 88                     | 20 | 160 | 25      | 0.12    | 94             |
|     4  | 92                     | 20 | 100 | 29      | 0.14    | 90             |
|     5  | 96                     | 31 | 102 | 35      | 0.15    | 97             |


### Residual stream dictionaries
NOTE: these are indexed so that the resid_i dictionary is the *output* of the ith layer. Thus embeddings go first, then layer 0, etc.

| Layer   | Variance Explained (%) | L1 | L0  | % Alive | CE Diff | % CE Recovered |
|---------|------------------------|----|-----|---------|---------|----------------|
|    embed| 96                     |  1 |  3  | 36      | 0.17    | 98             |
|       0 | 92                     | 11 | 59  | 41      | 0.24    | 97             |
|       1 | 85                     | 13 | 54  | 38      | 0.45    | 95             |
|       2 | 96                     | 24 | 108 | 27      | 0.55    | 94             |
|       3 | 96                     | 23 | 68  | 22      | 0.58    | 95             |
|       4 | 88                     | 23 | 61  | 27      | 0.48    | 95             |
|       5 | 90                     | 35 | 72  | 45      | 0.55    | 92             |




# Extra functionality supported by this repo

**Note:** these features are likely to be deprecated in future releases.

We've included support for some experimental features, which we briefly investigated as alternative approaches to training dictionaries. A sketch of the entropy-based penalty appears after the list below.

* **MLP stretchers.** Based on the perspective that one may be able to identify features with "[neurons in a sufficiently large model](https://transformer-circuits.pub/2022/toy_model/index.html)," we experimented with training "autoencoders" to, given as input an MLP *input* activation $x$, output not $x$ but $MLP(x)$ (the same output as the MLP). For instance, given an MLP which maps a 512-dimensional input $x$ to a 1024-dimensional hidden state $h$ and then a 512-dimensional output $y$, we train a dictionary $A$ with hidden dimension 16384 = 16 x 1024 so that $A(x)$ is close to $y$ (and, as usual, so that the hidden state of the dictionary is sparse).
    * The resulting dictionaries seemed decent, but we decided not to pursue the idea further.
    * To use this functionality, set the `io` parameter of an activation buffer to `'in_to_out'` (default is `'out'`).
    * h/t to Max Li for this suggestion.
* **Replacing L1 loss with entropy**. Based on the ideas in this [post](https://transformer-circuits.pub/2023/may-update/index.html#simple-factorization), we experimented with using entropy to regularize a dictionary's hidden state instead of L1 loss. This seemed to cause the features to split into dead features (which never fired) and very high-frequency features which fired on nearly every input, which was not the desired behavior. But plausibly there is a way to make this work better.
* **Ghost grads**, as described [here](https://transformer-circuits.pub/2024/jan-update/index.html). 
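
As referenced above, here is a minimal, generic sketch of an entropy-based sparsity penalty of the kind described in the second bullet (an illustration of the idea, not this repository's implementation): the feature magnitudes are normalized into a per-example distribution, and the entropy of that distribution replaces the L1 term in the loss.
```python
import torch

def entropy_penalty(features: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Entropy of the normalized feature magnitudes, averaged over the batch.

    features: (batch, dict_size) non-negative codes (e.g. post-ReLU).
    Intended as a drop-in replacement for the L1 sparsity term.
    """
    probs = features / (features.sum(dim=-1, keepdim=True) + eps)
    entropy = -(probs * (probs + eps).log()).sum(dim=-1)
    return entropy.mean()

# e.g.: loss = mse_loss + sparsity_coefficient * entropy_penalty(features)
```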

# Citation

Please cite the package as follows:

```
@misc{marks2024dictionary_learning,
   title = {dictionary_learning},
   author = {Samuel Marks and Adam Karvonen and Aaron Mueller},
   year = {2024},
   howpublished = {\url{https://github.com/saprmarks/dictionary_learning}},
}
```


            
