# eleuther-elk


Name: eleuther-elk
Version: 0.1.1
Summary: Keeping language models honest by directly eliciting knowledge encoded in their activations
Upload time: 2023-07-20 23:32:21
Requires Python: >=3.10
License: MIT License
Keywords: nlp, interpretability, language-models, explainable-ai

## Introduction

**WIP: This codebase is under active development**

Because language models are trained to predict the next token in naturally occurring text, they often reproduce common
human errors and misconceptions, even when they "know better" in some sense. More worryingly, when models are trained to
generate text that's rated highly by humans, they may learn to output false statements that human evaluators can't
detect. We aim to circumvent this issue by directly [**eliciting latent knowledge**](https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit) (ELK) inside the activations
of a language model.

Specifically, we're building on the **Contrastive Representation Clustering** (CRC) method described in the
paper [Discovering Latent Knowledge in Language Models Without Supervision](https://arxiv.org/abs/2212.03827) by Burns
et al. (2022). In CRC, we search for features in the hidden states of a language model which satisfy certain logical
consistency requirements. It turns out that these features are often useful for question-answering and text
classification tasks, even though the features are trained without labels.
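
As a concrete example of what such a consistency requirement can look like, the CCS objective from Burns et al. (2022)
trains a probe on the hidden states of a statement and its negation so that the two predicted probabilities behave like
probabilities of contradictory claims. A rough sketch of that objective, in the paper's notation rather than anything
specific to this codebase:

```latex
% CCS objective (Burns et al., 2022), sketched here for illustration only.
% p(x^+) and p(x^-) are the probe's outputs on a statement and its negation;
% they should be consistent (sum to roughly 1) and confident (not both near 0.5).
L_{\mathrm{CCS}}(x) = \big(p(x^+) - (1 - p(x^-))\big)^2 + \min\big(p(x^+),\, p(x^-)\big)^2
```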

### Quick Start

Our code is based on [PyTorch](http://pytorch.org)
and [Huggingface Transformers](https://huggingface.co/docs/transformers/index). We test the code on Python 3.10 and
3.11.

First install the package with `pip install -e .` in the root directory, or `pip install -e .[dev]` if you'd like to
contribute to the project (see **Development** section below). This should install all the necessary dependencies.
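
For reference, a typical setup might look like the following; the clone step assumes you want a local checkout of the
[EleutherAI/elk](https://github.com/EleutherAI/elk) repository:

```bash
# Clone the repository and install it in editable mode.
git clone https://github.com/EleutherAI/elk.git
cd elk
pip install -e .           # library only
# pip install -e ".[dev]"  # with development dependencies (quotes keep zsh happy)
```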

To fit reporters for a HuggingFace model and dataset of your choice (here `microsoft/deberta-v2-xxlarge-mnli` on `imdb`), run:

```bash
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb
```

This will automatically download the model and dataset, run the model and extract the relevant representations if they
aren't cached on disk, fit reporters on them, and save the reporter checkpoints to the `elk-reporters` folder in your
home directory. It will also evaluate the reporters' classification performance on a held-out test set and save the
results to a CSV file in the same folder.
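
If you want to inspect those outputs afterwards, something like the following should work; the run name is
auto-generated, so `<run_name>` below is a placeholder, and the exact file names may vary between versions:

```bash
# List the runs saved by `elk elicit`, then look inside one of them.
ls ~/elk-reporters/
ls ~/elk-reporters/<run_name>/
```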

The following generates a CCS (Contrast-Consistent Search) reporter instead of the default CRC-based reporter:

```bash
elk elicit microsoft/deberta-v2-xxlarge-mnli imdb --net ccs
```

The following command evaluates the probe from the run `naughty-northcutt` on the hidden states extracted from the
model `deberta-v2-xxlarge-mnli` for the `imdb` dataset. It produces an `eval.csv` and a `cfg.yaml` file, which are
stored under a subfolder in `elk-reporters/naughty-northcutt/transfer_eval`:

```bash
elk eval naughty-northcutt microsoft/deberta-v2-xxlarge-mnli imdb
```

The following runs `elicit` on the Cartesian product of the listed models and datasets, storing the results in
`ELK_DIR/sweeps/<memorable_name>`. Moreover, `--add_pooled` adds an additional dataset that pools all of the listed
datasets together. You can also pass a `--visualize` flag to visualize the results of the sweep:

```bash
elk sweep --models gpt2-{medium,large,xl} --datasets imdb amazon_polarity --add_pooled
```
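
To see which sweeps you have run so far, you can list the sweeps directory; `ELK_DIR` below is just the placeholder
used above, so point it at whatever output directory your setup uses:

```bash
# Each sweep gets its own memorable name under the sweeps directory.
# ELK_DIR is the placeholder from the text above; set it to your elk output directory.
ls "$ELK_DIR/sweeps/"
```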

If you just do `elk plot`, it will plot the results from the most recent sweep.
If you want to plot a specific sweep, you can do so with:

```bash
elk plot {sweep_name}
```

## Caching

The hidden states resulting from `elk elicit` are cached as a HuggingFace dataset to avoid having to recompute them
every time we want to train a probe. The cache is stored in the same place as all other HuggingFace datasets, which is
usually `~/.cache/huggingface/datasets`.
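
If the cache grows large, you can check its size by hand. The path below is the usual default and may differ if you
have pointed `HF_DATASETS_CACHE` (or a custom cache directory) somewhere else:

```bash
# See how much disk the cached datasets (including extracted hidden states) are using.
du -sh ~/.cache/huggingface/datasets
```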

## Development

Use `pip install pre-commit && pre-commit install` in the root folder before your first commit.
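
For copy-pasting, that one-time setup is just:

```bash
# Install pre-commit and register this repo's git hooks (run once per clone).
pip install pre-commit
pre-commit install
```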

### Devcontainer

[![Open in Remote - Containers](https://img.shields.io/static/v1?label=Remote%20-%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://github.com/EleutherAI/elk)

### Run tests

```bash
pytest
```

### Run type checking

We use [pyright](https://github.com/microsoft/pyright), which is built into the VSCode editor. If you'd like to run it
as a standalone tool, it requires a [Node.js installation](https://nodejs.org/en/download/).

```bash
pyright
```
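
If you don't yet have a standalone `pyright` on your PATH, one common way to get it (an assumption about your setup,
not something this repo requires) is via npm:

```bash
# Install the standalone pyright CLI globally; requires Node.js.
npm install -g pyright
```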

### Run the linter

We use [ruff](https://beta.ruff.rs/docs/). It is installed as a pre-commit hook, so you don't have to run it manually.
If you want to run it manually, you can do so with:

```bash
ruff . --fix
```

### Contributing to this repository

If you work on a new feature, fix, or other code task, make sure to create an issue and assign it to yourself (and
consider sharing it in the elk channel of EleutherAI's Discord with a short note). That way, others know you are
working on the issue, the same work won't be done twice 👍, and people can contact you easily.

            
