transformers-visualizer

Name: transformers-visualizer
Version: 0.2.2
Home page: https://github.com/VDuchauffour/transformers-visualizer
Summary: Explain your 🤗 transformers without effort! Display the internal behavior of your model.
Author: VDuchauffour (vincent.duchauffour@proton.me)
Upload time: 2022-12-29 16:12:43
Requires Python: >=3.8,<4.0
License: Apache-2.0
Keywords: machine learning, natural language processing, NLP, explainability, transformers, model interpretability

<h1 align="center">Transformers visualizer</h1>
<p align="center">Explain your 🤗 transformers without effort!</p>
<h1 align="center"></h1>

<p align="center">
    <a href="https://opensource.org/licenses/Apache-2.0">
        <img alt="Apache" src="https://img.shields.io/badge/License-Apache%202.0-blue.svg">
    </a>
    <a href="https://github.com/VDuchauffour/transformers-visualizer/blob/main/.github/workflows/unit_tests.yml">
        <img alt="Unit tests" src="https://github.com/VDuchauffour/transformers-visualizer/actions/workflows/unit_tests.yml/badge.svg">
    </a>
    <a href="">
        <img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/transformers-visualizer?color=red">
    </a>
    <a href="https://github.com/VDuchauffour/transformers-visualizer">
        <img alt="PyPI - Package Version" src="https://img.shields.io/pypi/v/transformers-visualizer?label=version">
    </a>
</p>

Transformers visualizer is a Python package designed to work with the [🤗 transformers](https://huggingface.co/docs/transformers/index) package. Given a `model` and a `tokenizer`, it offers multiple ways to explain your model by plotting its internal behavior.

This package is mostly based on the [Captum][Captum] tutorials [[1]][captum_part1] [[2]][Captum_part2].

## Installation

```shell
pip install transformers-visualizer
```

## Quickstart

Let's define a model, a tokenizer and a text input for the following examples.

```python
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder."
```

### Visualizers

<details><summary>Attention matrices of a specific layer</summary>

<p>

```python
from transformers_visualizer import TokenToTokenAttentions

visualizer = TokenToTokenAttentions(model, tokenizer)
# calling the visualizer computes and stores the attention matrices in place
visualizer(text)
```

Instead of using the `__call__` function, you can use the `compute` method. Both work in place; in addition, `compute` returns the visualizer, which allows method chaining.
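For example, computing and plotting can be chained in a single expression (the same pattern appears in the normalized-attention example below):

```python
# `compute` returns the visualizer itself, so `plot` can be chained
visualizer.compute(text).plot()
```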

The `plot` method accepts a layer index as a parameter to specify which layer of your model you want to plot. By default, the last layer is plotted.

```python
import matplotlib.pyplot as plt

# plot the attention matrices of the layer at index 6
visualizer.plot(layer_index=6)
plt.savefig("token_to_token.jpg")
```

<p align="center">
    <img alt="token to token" src="https://raw.githubusercontent.com/VDuchauffour/transformers-visualizer/main/images/token_to_token.jpg" />
</p>

</p>

</details>

<details><summary>Attention matrices normalized across head axis</summary>

<p>

You can specify the `order` used by `torch.linalg.norm` in the `__call__` and `compute` methods. By default, an L2 norm is applied.

```python
from transformers_visualizer import TokenToTokenNormalizedAttentions

visualizer = TokenToTokenNormalizedAttentions(model, tokenizer)
visualizer.compute(text).plot()
```
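As a minimal sketch, assuming `order` is passed as a keyword argument to `compute` (the exact signature isn't documented here), an L1 norm across the head axis could be requested like this:

```python
# assumption: `order` is forwarded to torch.linalg.norm; 1 selects an L1 norm
visualizer.compute(text, order=1).plot()
```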

<p align="center">
    <img alt="normalized token to token"src="https://raw.githubusercontent.com/VDuchauffour/transformers-visualizer/main/images/token_to_token_normalized.jpg" />
</p>

</p>

</details>

## Plotting

The `plot` method can also skip special tokens via the `skip_special_tokens` parameter, which defaults to `False`.

You can also import the plotting functions directly:

```python
from transformers_visualizer.plotting import plot_token_to_token, plot_token_to_token_specific_dimension
```

These functions, as well as the `plot` method of a visualizer, accept the following parameters (a combined usage sketch follows the list).

- `figsize (Tuple[int, int])`: Figure size of the plot. Defaults to `(20, 20)`.
- `ticks_fontsize (int)`: Tick label font size. Defaults to `7`.
- `title_fontsize (int)`: Title font size. Defaults to `9`.
- `cmap (str)`: Colormap. Defaults to `"viridis"`.
- `colorbar (bool)`: Whether to display colorbars. Defaults to `True`.
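
As a minimal usage sketch combining the listed parameters with `skip_special_tokens` (assuming they are all accepted as keyword arguments of `plot`):

```python
import matplotlib.pyplot as plt

# assumption: every option below is a keyword argument of `plot`
visualizer.plot(
    figsize=(10, 10),
    ticks_fontsize=9,
    title_fontsize=12,
    cmap="plasma",
    colorbar=False,
    skip_special_tokens=True,  # hide special tokens such as [CLS] and [SEP]
)
plt.savefig("token_to_token_custom.jpg")
```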

## Upcoming features

- [x] Add an option to mask special tokens.
- [ ] Add an option to specify head/layer indices to plot.
- [ ] Add other plotting backends such as Plotly, Bokeh, Altair.
- [ ] Implement other visualizers such as [vector norm](https://arxiv.org/pdf/2004.10102.pdf).

## References

- [[1]][captum_part1] Captum's BERT tutorial (part 1)
- [[2]][captum_part2] Captum's BERT tutorial (part 2)

## Acknowledgements

- [Transformers Interpret](https://github.com/cdpierse/transformers-interpret) for the idea of this project.

[Captum]: https://captum.ai/
[captum_part1]: https://captum.ai/tutorials/Bert_SQUAD_Interpret
[Captum_part2]: https://captum.ai/tutorials/Bert_SQUAD_Interpret2
            
