attention

Name: attention
Version: 5.0.0
Summary: Keras Attention Layer
Author: Philippe Remy
License: Apache 2.0
Upload time: 2023-03-19 03:19:06
# Keras Attention Mechanism

[![Downloads](https://pepy.tech/badge/attention)](https://pepy.tech/project/attention)
[![Downloads](https://pepy.tech/badge/attention/month)](https://pepy.tech/project/attention)
[![license](https://img.shields.io/badge/License-Apache_2.0-brightgreen.svg)](https://github.com/philipperemy/keras-attention-mechanism/blob/master/LICENSE) [![dep1](https://img.shields.io/badge/Tensorflow-2.0+-brightgreen.svg)](https://www.tensorflow.org/)
![Simple Keras Attention CI](https://github.com/philipperemy/keras-attention-mechanism/workflows/Simple%20Keras%20Attention%20CI/badge.svg)

Many-to-one attention mechanism for Keras.

<p align="center">
  <img src="examples/equations.png" width="600">
</p>
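Conceptually, a many-to-one attention layer scores each time step of the RNN output, normalizes the scores with a softmax, and returns the weighted sum of the outputs as a single context vector. Here is a minimal NumPy sketch of one common formulation (a dot-product score against a learned vector `w` — illustrative only, not necessarily the exact variant this package implements):

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax along the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def many_to_one_attention(h, w):
    # h: (time_steps, hidden_dim) RNN outputs; w: (hidden_dim,) scoring vector.
    scores = h @ w           # one scalar score per time step
    alpha = softmax(scores)  # attention weights, summing to 1
    context = alpha @ h      # weighted sum: a single (hidden_dim,) context vector
    return context, alpha
```

The many-to-one aspect is visible in the shapes: a `(time_steps, hidden_dim)` input collapses to one `(hidden_dim,)` vector, which a downstream `Dense` layer can consume.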


## Installation

*PyPI*

```bash
pip install attention
```

## Example

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.models import load_model, Model

from attention import Attention


def main():
    # Dummy data. There is nothing to learn in this example.
    num_samples, time_steps, input_dim, output_dim = 100, 10, 1, 1
    data_x = np.random.uniform(size=(num_samples, time_steps, input_dim))
    data_y = np.random.uniform(size=(num_samples, output_dim))

    # Define/compile the model.
    model_input = Input(shape=(time_steps, input_dim))
    x = LSTM(64, return_sequences=True)(model_input)
    x = Attention(units=32)(x)
    x = Dense(1)(x)
    model = Model(model_input, x)
    model.compile(loss='mae', optimizer='adam')
    model.summary()

    # Train.
    model.fit(data_x, data_y, epochs=10)

    # Test the save/reload round-trip.
    pred1 = model.predict(data_x)
    model.save('test_model.h5')
    model_h5 = load_model('test_model.h5', custom_objects={'Attention': Attention})
    pred2 = model_h5.predict(data_x)
    np.testing.assert_almost_equal(pred1, pred2)
    print('Success.')


if __name__ == '__main__':
    main()
```

## Other Examples

Browse [examples](examples).

Install the requirements before running the examples: `pip install -r examples/examples-requirements.txt`.


### IMDB Dataset

In this experiment, we demonstrate that using attention yields a higher accuracy on the IMDB dataset. We consider two
LSTM networks: one with this attention layer and one with a fully connected layer in its place. Both have the same
number of parameters (250K) for a fair comparison.

Here are the results over 10 runs. For each run, we record the maximum test-set accuracy over 10 epochs.


| Measure  | No Attention (250K params) | Attention (250K params) |
| ------------- | ------------- | ------------- |
| MAX Accuracy | 88.22 | 88.76 |
| AVG Accuracy | 87.02 | 87.62 |
| STDDEV Accuracy | 0.18 | 0.14 |

As expected, the model with attention achieves a higher accuracy. It also reduces the variability across runs, which is a welcome side effect.
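The parameter budgets above can be sanity-checked against the standard LSTM parameter formula (4 gates, each with an input kernel, a recurrent kernel, and a bias). The helper below is illustrative, not part of this package:

```python
def lstm_param_count(units, input_dim):
    # A standard LSTM has 4 gates; each gate has a kernel over the input
    # (units * input_dim), a recurrent kernel (units * units), and a bias (units).
    return 4 * (units * (input_dim + units) + units)

# The LSTM(64) layer from the example above, fed 1-dimensional inputs:
print(lstm_param_count(64, 1))  # 16896 trainable parameters
```

This matches what `model.summary()` reports for the layer, and it is how the two IMDB models can be sized to the same 250K total.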


### Adding two numbers

Let's consider the task of adding the two numbers that come right after each delimiter (0 in this case):

`x = [1, 2, 3, 0, 4, 5, 6, 0, 7, 8]`. The expected result is `y = 4 + 7 = 11`.

The attention is expected to be highest right after the delimiters. An overview of the training is shown below, where
the top row is the attention map and the bottom row the ground truth. As training progresses, the model learns the
task and the attention map converges to the ground truth.

<p align="center">
  <img src="examples/attention.gif" width="320">
</p>
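Synthetic data for this task can be generated in a few lines. This is a hedged sketch (the actual generator in the examples directory may differ in details such as sequence length and digit range):

```python
import numpy as np

def make_addition_example(seq_len=10, rng=None):
    """One (x, y) pair: y is the sum of the two values right after the 0 delimiters."""
    rng = rng or np.random.default_rng()
    x = rng.integers(1, 10, size=seq_len)  # digits 1..9; 0 is reserved as the delimiter
    while True:
        # Pick two delimiter positions that each have a following element.
        i, j = sorted(rng.choice(seq_len - 1, size=2, replace=False))
        if j - i > 1:  # keep them apart so each delimiter is followed by a real digit
            break
    x[i] = x[j] = 0
    y = x[i + 1] + x[j + 1]
    return x, y
```

The ground-truth attention map for such a pair puts its mass on positions `i + 1` and `j + 1`, which is what the animation above shows the model converging to.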

### Finding max of a sequence

We consider many 1D sequences of the same length. The task is to find the maximum of each sequence.

The full sequence, processed by the RNN layer, is passed to the attention layer. We expect the attention to focus on the maximum of each sequence.

After a few epochs, the attention layer converges to exactly what we expected.

<p align="center">
  <img src="examples/readme/example.png" width="320">
</p>
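A dataset for this task is easy to build, along with the one-hot attention ground truth shown in the plot above. Again a hedged sketch (function and variable names are illustrative, not from the repository):

```python
import numpy as np

def make_max_dataset(num_samples=256, seq_len=20, rng=None):
    rng = rng or np.random.default_rng()
    x = rng.uniform(size=(num_samples, seq_len, 1))  # many 1D sequences
    y = x.max(axis=1)                                # target: the max of each sequence
    # Ground truth the attention map should converge to: one-hot at the argmax.
    attn_truth = np.zeros((num_samples, seq_len))
    attn_truth[np.arange(num_samples), x[:, :, 0].argmax(axis=1)] = 1.0
    return x, y, attn_truth
```

Comparing the learned attention weights against `attn_truth` is what produces the agreement visible in the figure.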

## References

- [Hierarchical Attention Networks for Document Classification](https://www.cs.cmu.edu/~./hovy/papers/16HLT-hierarchical-attention-networks.pdf) (Yang et al., 2016)
- [Effective Approaches to Attention-based Neural Machine Translation](https://arxiv.org/abs/1508.04025) (Luong et al., 2015)
- [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473) (Bahdanau et al., 2015)

            
