minicons

Name: minicons
Version: 0.2.41
Home page: https://github.com/kanishkamisra/minicons
Summary: A package of useful functions to analyze transformer based language models.
Upload time: 2024-03-21 18:55:19
Author: Kanishka Misra
Requires Python: <4,>=3.8.0
License: MIT
Keywords: transformers, language models, nlp, interpretability
# minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models

[![Downloads](https://static.pepy.tech/personalized-badge/minicons?period=total&units=international_system&left_color=black&right_color=brightgreen&left_text=Downloads)](https://pepy.tech/project/minicons)

This repo is a wrapper around the `transformers` [library](https://huggingface.co/transformers) from Hugging Face :hugs:

<!-- TODO: Description-->



## Installation

Install from PyPI using:

```pip install minicons```

## Supported Functionality

- Extract word representations from Contextualized Word Embeddings
- Score sequences using language model scoring techniques, including masked language models following [Salazar et al. (2020)](https://www.aclweb.org/anthology/2020.acl-main.240.pdf).


## Examples

1. Extract word representations from contextualized word embeddings:

```py
from minicons import cwe

model = cwe.CWE('bert-base-uncased')

context_words = [("I went to the bank to withdraw money.", "bank"), 
                 ("i was at the bank of the river ganga!", "bank")]

print(model.extract_representation(context_words, layer = 12))

''' 
tensor([[ 0.5399, -0.2461, -0.0968,  ..., -0.4670, -0.5312, -0.0549],
        [-0.8258, -0.4308,  0.2744,  ..., -0.5987, -0.6984,  0.2087]],
       grad_fn=<MeanBackward1>)
'''

# if model is seq2seq:
model = cwe.EncDecCWE('t5-small')

print(model.extract_representation(context_words))

'''(last layer, by default)
tensor([[-0.0895,  0.0758,  0.0753,  ...,  0.0130, -0.1093, -0.2354],
        [-0.0695,  0.1142,  0.0803,  ...,  0.0807, -0.1139, -0.2888]])
'''
```
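One common follow-up is to compare the two occurrences of "bank" above directly. As a minimal sketch (using only `torch`, which `minicons` already depends on; the variable names are illustrative), the cosine similarity between the two extracted rows looks like this:

```py
import torch.nn.functional as F
from minicons import cwe

model = cwe.CWE('bert-base-uncased')

context_words = [("I went to the bank to withdraw money.", "bank"),
                 ("i was at the bank of the river ganga!", "bank")]

# one row per (sentence, word) pair
reps = model.extract_representation(context_words, layer = 12)

# similarity between the financial and the river sense of "bank"
print(F.cosine_similarity(reps[0], reps[1], dim = 0).item())
```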

2. Compute sentence acceptability measures (surprisals) using Word Prediction Models:

```py
from minicons import scorer

mlm_model = scorer.MaskedLMScorer('bert-base-uncased', 'cpu')
ilm_model = scorer.IncrementalLMScorer('distilgpt2', 'cpu')
s2s_model = scorer.Seq2SeqScorer('t5-base', 'cpu')

stimuli = ["The keys to the cabinet are on the table.",
           "The keys to the cabinet is on the table."]

# use sequence_score with different reduction options: 
# Sequence Surprisal - lambda x: -x.sum(0).item()
# Sequence Log-probability - lambda x: x.sum(0).item()
# Sequence Surprisal, normalized by number of tokens - lambda x: -x.mean(0).item()
# Sequence Log-probability, normalized by number of tokens - lambda x: x.mean(0).item()
# and so on...

print(ilm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item()))

'''
[39.879737854003906, 42.75846481323242]
'''

# MLM scoring, inspired by Salazar et al., 2020
print(mlm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item()))
'''
[13.962685585021973, 23.415111541748047]
'''

# Seq2seq scoring
## Blank source sequence, target sequence specified in `stimuli`
print(s2s_model.sequence_score(stimuli, source_format = 'blank'))
## Source sequence is the same as the target sequence in `stimuli`
print(s2s_model.sequence_score(stimuli, source_format = 'copy'))
'''
[-7.910910129547119, -7.835635185241699]
[-10.555519104003906, -9.532546997070312]
'''
```
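The reductions listed in the comments above compose freely with any of the scorers; for example, a per-token (length-normalized) surprisal just swaps `sum` for `mean`. A minimal sketch, reusing the incremental scorer and stimuli from the block above:

```py
from minicons import scorer

ilm_model = scorer.IncrementalLMScorer('distilgpt2', 'cpu')

stimuli = ["The keys to the cabinet are on the table.",
           "The keys to the cabinet is on the table."]

# Sequence Surprisal, normalized by number of tokens
print(ilm_model.sequence_score(stimuli, reduction = lambda x: -x.mean(0).item()))

# Sequence Log-probability, normalized by number of tokens
print(ilm_model.sequence_score(stimuli, reduction = lambda x: x.mean(0).item()))
```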

## A better version of MLM Scoring by Kauf and Ivanova

This version leverages a locally autoregressive scoring strategy to avoid overestimating the probabilities of tokens in multi-token words (e.g., "ostrich" -> "ostr" + "##ich"). In particular, token probabilities are estimated using the bidirectional context, excluding any future tokens that belong to the same word as the current target token.

For more details, refer to [Kauf and Ivanova, 2023](https://arxiv.org/abs/2305.10588).

```py
from minicons import scorer
mlm_model = scorer.MaskedLMScorer('bert-base-uncased', 'cpu')

stimuli = ['The traveler lost the souvenir.']

# un-normalized sequence score
print(mlm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item(), PLL_metric='within_word_l2r'))
'''
[32.77983617782593]
'''

# original metric, for comparison:
print(mlm_model.sequence_score(stimuli, reduction = lambda x: -x.sum(0).item(), PLL_metric='original'))
'''
[18.014726161956787]
'''

print(mlm_model.token_score(stimuli, PLL_metric='within_word_l2r'))
'''
[[('the', -0.07324600219726562), ('traveler', -9.668401718139648), ('lost', -6.955361366271973),
('the', -1.1923179626464844), ('so', -7.776356220245361), ('##uven', -6.989711761474609),
('##ir', -0.037807464599609375), ('.', -0.08663368225097656)]]
'''

# original values, for comparison (notice the 'souvenir' tokens):

print(mlm_model.token_score(stimuli, PLL_metric='original'))
'''
[[('the', -0.07324600219726562), ('traveler', -9.668402671813965), ('lost', -6.955359935760498), ('the', -1.192317008972168), ('so', -3.0517578125e-05), ('##uven', -0.0009250640869140625), ('##ir', -0.03780937194824219), ('.', -0.08663558959960938)]]
'''
```
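The `PLL_metric` argument composes with the same reduction options as before, so a token-normalized version of the Kauf and Ivanova score is just one more keyword argument away (a sketch, reusing the scorer and stimulus above):

```py
from minicons import scorer

mlm_model = scorer.MaskedLMScorer('bert-base-uncased', 'cpu')

stimuli = ['The traveler lost the souvenir.']

# sequence surprisal normalized by number of tokens, under within_word_l2r
print(mlm_model.sequence_score(stimuli, reduction = lambda x: -x.mean(0).item(), PLL_metric='within_word_l2r'))
```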

## OpenAI API
Some models on the OpenAI API also allow querying of log-probabilities (for now), and minicons (as of Sept 29) supports this as well! Here's how:

First, make sure you save your OpenAI API Key in some file (say `~/.openaikey`). Register the key using:
```py
from minicons import openai as mo

PATH = "/path/to/apikey"
mo.register_api_key(PATH)
```
Then,

```py
from minicons import openai as mo

stimuli = ["the keys to the cabinet are", "the keys to the cabinet is"]

# we want to test if p(are | prefix) > p(is | prefix)
model = "gpt-3.5-turbo-instruct"
query = mo.OpenAIQuery(model, stimuli)

# run query using the above batch
query.query()

# get conditional log-probs for are and is given prior context:
query.conditional_score(["are", "is"])

#> [-2.5472614765167236, -5.633198261260986] SUCCESS!

# NOTE: this will not be 100% reproducible since it seems OpenAI adds a little noise to its outputs.
# see https://twitter.com/xuanalogue/status/1653280462935146496
```
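Since `conditional_score` returns log-probabilities in the same order as the stimuli, the minimal-pair test above reduces to checking which score is larger. A sketch of that last step, assuming the `query` object from the previous block has already been run:

```py
# log-probs of "are" and "is" given their shared prefix
scores = query.conditional_score(["are", "is"])

# does the grammatical continuation win? (subject to OpenAI's output noise)
print(scores[0] > scores[1])
```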

## Tutorials

- [Introduction to using LM-scoring methods using minicons](https://kanishka.website/post/minicons-running-large-scale-behavioral-analyses-on-transformer-lms/)
- [Computing sentence and token surprisals using minicons](examples/surprisals.md)
- [Extracting word/phrase representations using minicons](examples/word_representations.md)

## Recent Updates
- **November 6, 2021:** MLM scoring has been fixed! You can now use `model.token_score()` and `model.sequence_score()` with `MaskedLMScorers` as well!
- **June 4, 2022:** Added support for Seq2seq models. Thanks to [Aaron Mueller](https://github.com/aaronmueller) 🥳
- **June 13, 2023:** Added support for `within_word_l2r`, a better way to do MLM scoring, thanks to [Carina Kauf](https://github.com/carina-kauf) 🥳
- **January 2024:** minicons now supports mamba!

## Citation

If you use `minicons`, please cite the following paper:

```tex
@article{misra2022minicons,
    title={minicons: Enabling Flexible Behavioral and Representational Analyses of Transformer Language Models},
    author={Kanishka Misra},
    journal={arXiv preprint arXiv:2203.13112},
    year={2022}
}
```

If you use Kauf and Ivanova's PLL scoring technique, please also cite the following paper:

```tex
@inproceedings{kauf2023better,
  title={A Better Way to Do Masked Language Model Scoring},
  author={Kauf, Carina and Ivanova, Anna},
  booktitle={Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  year={2023}
}
```

            
