codebook-features

- Name: codebook-features
- Version: 0.1.2
- Home page: https://huggingface.co/spaces/taufeeque/codebook-features
- Summary: Sparse and discrete interpretability tool for neural networks
- Upload time: 2024-02-05 22:09:52
- Author: Mohammad Taufeeque
- Requires Python: >3.9.7,<3.12
- License: MIT
- Keywords: codebook, features, transformers, language-models, interpretability
# Codebook Features
<a target="_blank" href="https://colab.research.google.com/github/taufeeque9/codebook-features/blob/main/tutorials/code_intervention.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
<a target="_blank" href="https://huggingface.co/spaces/taufeeque/codebook-features">
<img src="https://huggingface.co/datasets/huggingface/badges/resolve/main/open-in-hf-spaces-sm-dark.svg" alt="Open in Spaces">
</a>

Alex Tamkin, Mohammad Taufeeque and Noah D. Goodman: "Codebook Features: Sparse and Discrete Interpretability for Neural Networks", 2023. [[arXiv]](https://arxiv.org/abs/2310.17230)

<img alt="header-old" src="https://github.com/taufeeque9/codebook-features/assets/46495671/ba0c31e5-4983-4504-ad02-9f8208d9396d">


Codebook Features is a method for training neural networks with a set of learned sparse and discrete hidden states, enabling interpretability and control of the resulting model.

Codebook features work by inserting vector quantization bottlenecks called _codebooks_ into each layer of a neural network. The library provides a range of tools to train and interpret codebook models, including analyzing the activations of codes, searching for codes that activate on a given pattern, and performing code interventions to verify the causal effect of a code on the model's output. Many of these tools are also available through an easy-to-use webapp for analyzing and experimenting with codebook models.
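
The core mechanism can be sketched in a few lines of PyTorch. The toy layer below illustrates the idea only and is not the library's implementation; the name `ToyCodebookLayer` and the straight-through gradient trick are assumptions made for this sketch, though the argument names mirror those described in the codebase guide further down:

```python
import torch
import torch.nn as nn


class ToyCodebookLayer(nn.Module):
    """Sketch of a codebook bottleneck: each activation is replaced by the
    sum of its `kcodes` most similar learned code vectors."""

    def __init__(self, num_codes: int, dim: int, kcodes: int = 1):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))
        self.kcodes = kcodes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cosine similarity of every activation against every code vector.
        sims = nn.functional.normalize(x, dim=-1) @ \
            nn.functional.normalize(self.codebook, dim=-1).T
        top_codes = sims.topk(self.kcodes, dim=-1).indices
        snapped = self.codebook[top_codes].sum(dim=-2)
        # Straight-through estimator: the forward pass emits the snapped
        # value, while gradients flow to x as if this were the identity.
        return x + (snapped - x).detach()


layer = ToyCodebookLayer(num_codes=1024, dim=64, kcodes=8)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```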


## Installation

### PyPI

Install from PyPI to directly use the library:

```
pip install codebook-features
```

### Source Code

Install from source code if you plan to modify part of the code or contribute to the library:

```
git clone https://github.com/taufeeque9/codebook-features
cd codebook-features
pip install -e .
```

For development mode, we recommend using Poetry:

```
poetry install
```

## Usage

### Training a codebook model

We adapt the [run_clm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_clm.py) script from HuggingFace to train or finetune conventional models and codebook models. We also use the [Hydra](https://hydra.cc/) library for configuration management of the training scripts. The default config for training codebooks is available in `codebook_features/config/main.yaml`. Hydra syntax can be used to override any of the default config values, including arguments for the codebook model and arguments inherited from HuggingFace's [TrainingArguments](https://huggingface.co/docs/transformers/en/main_classes/trainer#transformers.TrainingArguments). For example, to train a codebook model based on TinyStories-1M on the TinyStories dataset, run:
```
python -m codebook_features.train_codebook model_args.model_name_or_path=roneneldan/TinyStories-1M 'data_args.dataset_name=roneneldan/TinyStories'
```

### Interpretability WebApp for Codebook Models

Once a codebook model has been trained and saved to disk, we can use the interpretability webapp to visualize the codebook. First, we need to generate the cache files for the codebook model that are required by the webapp. This can be done by running the script `codebook_features/code_search_cache.py`:
```
python -m codebook_features.code_search_cache --orig_model_name <orig name/path of model> --pretrained_path <path to codebook model> --dataset_name <dataset name> --dataset_config_name <dataset config name> --output_base_dir <path to output directory>
```

Once the cache files have been generated, we can run the webapp with the following command, passing the base output directory used above:
```
python -m streamlit run codebook_features/webapp/Code_Browser.py -- --cache_dir <path to the base cache directory>
```

### Code Intervention

To control a network, one can _intervene_ on codes by causing them to always be activated during the forward pass. This can be useful to influence the sampled generations, e.g., to cause the network to discuss certain topics. For a general tutorial on using codebook models and seeing how you can perform code intervention, please see the [Code Intervention Tutorial](https://github.com/taufeeque9/codebook-features/blob/main/tutorials/code_intervention.ipynb).
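
Mechanically, a crude version of such an intervention can be written as a forward hook that adds a chosen code's vector to a layer's output at every position. The snippet below is a sketch with made-up names (`always_activate_code`, and a plain `nn.Linear` standing in for a codebook-wrapped layer), not the library's intervention API:

```python
import torch
import torch.nn as nn

codebook = torch.randn(1024, 64)   # (num_codes, dim); stand-in code vectors
layer = nn.Linear(64, 64)          # stand-in for a codebook-wrapped layer
code_id = 123                      # code we want to force on

def always_activate_code(module, inputs, output):
    # Returning a tensor from a forward hook replaces the module's output;
    # here we add the chosen code vector at every position.
    return output + codebook[code_id]

handle = layer.register_forward_hook(always_activate_code)
out = layer(torch.randn(2, 10, 64))   # code 123 is now always "active"
handle.remove()
```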


<details>
<summary>
<h2>Guide to the codebase [click to expand] </h2>
</summary>

### Codebook Model

`codebook_features/models` is the main module used to define codebooks. It has the following classes:
- `CodebookLayer`: defines a `torch.nn.Module` that implements the codebook layer. It takes in arguments like `num_codes`, `dim`, `snap_fn`, and `kcodes` that define the codebook. It provides various functionalities, including logging methods, a hook function that can disable specific codes during inference, etc.
  - `GroupCodebookLayer`: defines a `torch.nn.Module` that implements a group of codebook layers, each of which is applied to a different part of the input vector. This is useful for applying a group of codebooks to the attention head outputs of a transformer model.
- `CodebookWrapper`: an abstract class that wraps a codebook around any `torch.nn.Module`. It takes in the `module_layer`, `codebook_cls`, and arguments for the codebook class to instantiate the codebook layer. The wrapper provides a `snap` boolean field that can be used to enable or disable the codebook layer (see the sketch after this list).
  - `TransformerLayerWrapper`: subclasses `CodebookWrapper` to wrap a codebook around a transformer layer, i.e., a codebook is applied to the output of a whole transformer block.
  - `MLPWrapper`: subclasses `CodebookWrapper` to wrap a codebook around an MLP layer, i.e., a codebook is applied to the output of the MLP block.
- `CodebookModelConfig`: defines the config to be used by a codebook model. It contains important parameters like `codebook_type`, `num_codes`, `num_codebooks`, `layers_to_snap`, `similarity_metric`, `codebook_at`, etc.
- `CodebookModel`: defines the abstract base class for a codebook model. It takes in a neural network through the `model` argument and the config through the `config` argument, and returns a codebook model.
  - `GPT2CodebookModel`: subclasses `CodebookModel` to define a codebook model specifically for GPT2.
  - `GPTNeoCodebookModel`: subclasses `CodebookModel` to define a codebook model specifically for GPTNeo.
  - `GPTNeoXCodebookModel`: subclasses `CodebookModel` to define a codebook model specifically for GPTNeoX.
  - `HookedTransformerCodebookModel`: subclasses `CodebookModel` to define a codebook model for any transformer model defined using the `HookedTransformer` class of `transformer_lens`. This class is mostly used while interpreting the codebooks, while the other classes are used for training the codebook models. The `convert_to_hooked_model()` function can be used to convert a trained codebook model to a `HookedTransformerCodebookModel`.
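
As a rough illustration of the wrapping pattern described above (the name `ToyWrapper` and its exact signature are assumptions for this sketch, not the library's API):

```python
import torch
import torch.nn as nn


class ToyWrapper(nn.Module):
    """Sketch of the CodebookWrapper idea: run the wrapped module, then
    optionally snap its output through a codebook layer."""

    def __init__(self, module_layer: nn.Module, codebook_cls, **codebook_kwargs):
        super().__init__()
        self.module_layer = module_layer
        self.codebook_layer = codebook_cls(**codebook_kwargs)
        self.snap = True  # set to False to bypass the codebook

    def forward(self, *args, **kwargs):
        out = self.module_layer(*args, **kwargs)
        return self.codebook_layer(out) if self.snap else out


# e.g. wrap an MLP-like block with the ToyCodebookLayer sketched earlier:
wrapped = ToyWrapper(nn.Linear(64, 64), ToyCodebookLayer, num_codes=1024, dim=64)
print(wrapped(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```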

### Codebook Training
The `codebook_features/train_codebook.py` script is used to train a codebook model based on a causal language model. We use the `run_clm.py` script provided by the transformers library for training. It can take in a dataset name available in the [datasets](https://huggingface.co/datasets) library or a custom dataset. The default arguments for the training script are available in `codebook_features/config/main.yaml`. Hydra syntax can be used to override any of the default config values.

### TokFSM Experiment
The `codebook_features/train_fsm_model.py` script provides an algorithmic sequence modeling task for analyzing the codebook models. The task is to predict the next element in a sequence of numbers generated by a Finite State Machine (FSM). The `train_fsm_model/FSM` class defines the FSM by taking in the number of states through `N`, the number of outbound edges from each state through `edges`, and the base in which to represent the states through `representation_base`. The `train_fsm_model/TokFSMDataset` class defines an iterable torch dataset that generates sequences from the FSM on the fly. The `train_fsm_model/TokFSMModelTrainer` provides additional logging features specific to the FSM models, such as logging the transition accuracy of a model.
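
For intuition, the data-generating process can be sketched as follows (a hypothetical `toy_fsm_sequence` helper written for this README under the `N`/`edges`/`representation_base` semantics described above, not the library's implementation):

```python
import random


def toy_fsm_sequence(N=100, edges=10, representation_base=10, steps=8, seed=0):
    """Sketch of a TokFSM-style task: a random walk on an FSM with N states
    and `edges` outbound edges per state, where each visited state is
    emitted as fixed-width digits in `representation_base`."""
    rng = random.Random(seed)
    # Each state gets `edges` randomly chosen successor states.
    transitions = [rng.sample(range(N), edges) for _ in range(N)]
    # Width needed to represent any state in the chosen base.
    width = 1
    while representation_base ** width < N:
        width += 1
    tokens, state = [], rng.randrange(N)
    for _ in range(steps):
        tokens.extend((state // representation_base ** i) % representation_base
                      for i in reversed(range(width)))
        state = rng.choice(transitions[state])  # take a random outbound edge
    return tokens


print(toy_fsm_sequence())  # 8 visited states -> 16 digit tokens
```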

The `codebook_features/train_fsm_model.py` script can be used to train a codebook model on the TokFSM dataset. The syntax for the arguments and the training procedure is similar to the `train_codebook.py` script. The default arguments for the training script are available in `codebook_features/config/fsm_main.yaml`.


For tutorials on how to use the library, please see the [Codebook Features Tutorials](https://github.com/taufeeque9/codebook-features/tree/main/tutorials).

</details>


## Citation

```bibtex
@misc{tamkin2023codebook,
      title={Codebook Features: Sparse and Discrete Interpretability for Neural Networks},
      author={Alex Tamkin and Mohammad Taufeeque and Noah D. Goodman},
      year={2023},
      eprint={2310.17230},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

            
