cond-rnn

- Name: cond-rnn
- Version: 3.2.2
- Summary: Conditional RNN
- Author: Philippe Remy
- License: MIT
- Upload time: 2023-06-19 08:33:04
- Requirements: none recorded
# Conditional Recurrent for Tensorflow/Keras

[![Downloads](https://pepy.tech/badge/cond-rnn)](https://pepy.tech/project/cond-rnn)
[![Downloads](https://pepy.tech/badge/cond-rnn/month)](https://pepy.tech/project/cond-rnn/month)
![CI](https://github.com/philipperemy/cond_rnn/workflows/Cond%20RNN%20CI/badge.svg)

- Conditions time series predictions on time-invariant data. 
- `ConditionalRecurrent` is a fully compatible Keras wrapper which supports any recurrent layer. 
- Tested with all TensorFlow versions up to 2.10 (Sep 2022).

## Installation / PyPI

ConditionalRecurrent is available on PyPI. You can also install it from source.

```
pip install cond-rnn
```

## What is Conditional RNN?

The `ConditionalRecurrent` layer is useful if you have time series data with external inputs that do not depend on time. Let's consider some weather data for two different cities: Paris and San Francisco. The aim is to predict the next temperature data point. Based on our knowledge, the weather behaves differently depending on the city. You can either:
- Combine the auxiliary features with the time series data (ugly!).
- Concatenate the auxiliary features with the output of the RNN layer. It's some kind of post-RNN adjustment since the RNN layer won't see this auxiliary info.
- Or just use this library! Long story short, we initialize the RNN states with a learned representation of the conditions (e.g. Paris or San Francisco). This way, you model *elegantly* `P(x_{t+1}|x_{0:t}, cond)`.

<p align="center">
  <img src="misc/arch.png" width="500">
</p>

## API

The Keras wrapper `ConditionalRecurrent` initializes the internal states of recurrent layers with conditions given as separate inputs. It can be used with any recurrent layer supported by Keras and supports wrappers like `Bidirectional`.

### Arguments

- **layer**: a `tf.keras.layers.Layer` instance (`LSTM`, `GRU` or `SimpleRNN`...).

### Call arguments

- **inputs**: `3-D` Tensor with shape `[batch_size, timesteps, input_dim]`.
- **inputs_cond**: `2-D` Tensor or list of tensors with shape `[batch_size, cond_dim]`. In the case of a list, the tensors can have a different `cond_dim`.
- **training**: Python boolean indicating whether the layer should behave in training mode or in inference mode. This argument is passed to the wrapped layer.

### Raises

*AssertionError*: If not initialized with a `tf.keras.layers.Layer` instance.

## Example

```python
from tensorflow.keras import Input
from tensorflow.keras.layers import LSTM

from cond_rnn import ConditionalRecurrent

time_steps, input_dim, output_dim, batch_size, cond_size = 128, 6, 12, 32, 5
inputs = Input(batch_input_shape=(batch_size, time_steps, input_dim))
cond_inputs = Input(batch_input_shape=(batch_size, cond_size))

outputs = ConditionalRecurrent(LSTM(units=output_dim))([inputs, cond_inputs])
print(outputs.shape)  # (batch_size, output_dim)
```
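
For completeness, here is a minimal sketch of how the snippet above could be turned into a trainable model; the data is purely synthetic and only illustrates the expected shapes.

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

from cond_rnn import ConditionalRecurrent

time_steps, input_dim, output_dim, batch_size, cond_size = 128, 6, 12, 32, 5

inputs = Input(batch_input_shape=(batch_size, time_steps, input_dim))
cond_inputs = Input(batch_input_shape=(batch_size, cond_size))
rnn_out = ConditionalRecurrent(LSTM(units=output_dim))([inputs, cond_inputs])
predictions = Dense(1)(rnn_out)  # e.g. regress the next temperature value

model = Model(inputs=[inputs, cond_inputs], outputs=predictions)
model.compile(optimizer='adam', loss='mae')

# Synthetic data, shaped like the real thing (cond could be a one-hot city).
x = np.random.uniform(size=(batch_size, time_steps, input_dim))
cond = np.random.uniform(size=(batch_size, cond_size))
y = np.random.uniform(size=(batch_size, 1))
model.fit([x, cond], y, batch_size=batch_size, epochs=1)
```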

You can also have a look at a real-world example to see how `ConditionalRecurrent` performs: [here](examples/temperature).

## A bit more background...

This implementation was inspired by the excellent answer [Adding Features To Time Series Model LSTM](https://datascience.stackexchange.com/a/17139), which I quote below. Option 3 is the one implemented in this library (with a slight modification: we do not add 𝑣⃗ to the hidden state but rather overwrite the hidden state with 𝑣⃗. One can argue that this is almost exactly the same thing, since 𝑣⃗ is obtained as 𝑣⃗ = 𝐖𝑥⃗ + 𝑏⃗, where 𝑏⃗ could play the role of the hidden state).

For RNNs (e.g., LSTMs and GRUs), the layer input is a list of timesteps, and each timestep is a feature tensor. That means that you could have an input tensor like this (in Pythonic notation):

```
# Input tensor to RNN
[
    # Timestep 1
    [ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
    # Timestep 2
    [ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
    # Timestep 3
    [ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
    ...
]
```

So absolutely, you can have multiple features at each timestep. In my mind, weather is a time series feature: where I live, it happens to be a function of time. So it would be quite reasonable to encode weather information as one of your features in each timestep (with an appropriate encoding, like cloudy=0, sunny=1, etc.).
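
As a small illustration of that idea, here is a sketch (with made-up numbers) of weather appended as one more per-timestep feature:

```python
import numpy as np

# Time-series features: (batch, timesteps, 2) -> e.g. temperature and nasdaq value.
batch_size, timesteps = 4, 10
series = np.random.uniform(size=(batch_size, timesteps, 2))

# Weather per timestep, encoded as cloudy=0 / sunny=1: it is also a function of time.
weather = np.random.randint(0, 2, size=(batch_size, timesteps, 1)).astype(float)

# One feature vector per timestep, weather included: shape (4, 10, 3).
x = np.concatenate([series, weather], axis=-1)
print(x.shape)
```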

If you have non-time-series data, then it doesn't really make sense to pass it through the LSTM, though. Maybe the LSTM will work anyway, but even if it does, it will probably come at the cost of higher loss / lower accuracy per training time.

Alternatively, you can introduce this sort of "extra" information into your model outside of the LSTM by means of additional layers. You might have a data flow like this:

```
TIME_SERIES_INPUT ------> LSTM -------\
                                       *---> MERGE ---> [more processing]
AUXILIARY_INPUTS --> [do something] --/
```

So you would merge your auxiliary inputs into the LSTM outputs, and continue your network from there. Now your model is simply multi-input.

For example, let's say that in your particular application, you only keep the last output of the LSTM output sequence. Let's say that it is a vector of length 10. Your auxiliary input might be your encoded weather (so a scalar). Your merge layer could simply append the auxiliary weather information onto the end of the LSTM output vector to produce a single vector of length 11. But you don't need to just keep the last LSTM output timestep: if the LSTM outputs 100 timesteps, each with a 10-vector of features, you could still tack on your auxiliary weather information, resulting in 100 timesteps, each consisting of a vector of 11 datapoints.

The Keras documentation on its functional API has a good overview of this.
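
In Keras functional-API terms, that merge-after-the-RNN pattern might look like the following sketch (the sizes are arbitrary and match the example above: a 10-dimensional LSTM output plus a scalar auxiliary input):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense, Concatenate

ts_input = Input(shape=(100, 10))   # TIME_SERIES_INPUT: 100 timesteps, 10 features
aux_input = Input(shape=(1,))       # AUXILIARY_INPUTS: e.g. encoded weather

lstm_out = LSTM(10)(ts_input)                  # keep only the last output: (batch, 10)
merged = Concatenate()([lstm_out, aux_input])  # MERGE: (batch, 11)
output = Dense(1)(merged)                      # [more processing]

model = Model(inputs=[ts_input, aux_input], outputs=output)
```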

In other cases, you may want to condition the LSTM on non-temporal data. For example, predict the weather tomorrow, given location. In this case, here are three suggestions, each with positives and negatives:

1. Have the first timestep contain your conditioning data, since it will effectively "set" the internal/hidden state of your RNN. Frankly, I would not do this, for a bunch of reasons: your conditioning data needs to be the same shape as the rest of your features, it makes it harder to create stateful RNNs (in terms of being really careful to track how you feed data into the network), the network may "forget" the conditioning data with enough time (e.g., long training sequences, or long prediction sequences), etc.

2. Include the data as part of the temporal data itself. So each feature vector at a particular timestep includes "mostly" time-series data, but then has the conditioning data appended to the end of each feature vector. Will the network learn to recognize this? Probably, but even then, you are creating a harder learning task by polluting the sequence data with non-sequential information. So I would also discourage this.

3. Probably the best approach would be to directly affect the hidden state of the RNN at time zero. This is the approach taken by Karpathy and Fei-Fei and by Vinyals et al. This is how it works:

    * For each training sample, take your condition variables 𝑥⃗ .
    * Transform/reshape your condition variables with an affine transformation to get them into the same shape as the internal state of the RNN: 𝑣⃗ = 𝐖𝑥⃗ + 𝑏⃗ (these 𝐖 and 𝑏⃗ are trainable weights). You can obtain it with a Dense layer in Keras.
    * For the very first timestep, add 𝑣⃗ to the hidden state of the RNN when calculating its value.

This approach is the most "theoretically" correct, since it properly conditions the RNN on your non-temporal inputs, naturally solves the shape problem, and also avoids polluting your input timesteps with additional, non-temporal information. The downside is that this approach often requires graph-level control of your architecture, so if you are using a higher-level abstraction like Keras, you will find it hard to implement unless you add your own layer type.
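
For reference, here is a plain-Keras sketch of this third option (this is not the code of this library, which packages the idea in `ConditionalRecurrent`): the condition is passed through trainable affine maps (`Dense` layers) whose outputs are used as the LSTM's initial hidden and cell states.

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

units = 16
ts_input = Input(shape=(100, 10))   # temporal input
cond_input = Input(shape=(5,))      # non-temporal condition, e.g. one-hot location

# v = W x + b, one affine map per state tensor (an LSTM has a hidden and a cell state).
init_h = Dense(units)(cond_input)
init_c = Dense(units)(cond_input)

lstm_out = LSTM(units)(ts_input, initial_state=[init_h, init_c])
output = Dense(1)(lstm_out)

model = Model(inputs=[ts_input, cond_input], outputs=output)
```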

## Citation

```
@misc{CondRNN,
  author = {Philippe Remy},
  title = {Conditional RNN for Keras},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/philipperemy/cond_rnn}},
}
```

## FAQ

- [Why not merge conditions in only one vector?](https://github.com/philipperemy/cond_rnn/issues/3)

## References

- https://adventuresinmachinelearning.com/recurrent-neural-networks-lstm-tutorial-tensorflow
- https://datascience.stackexchange.com/a/17139

            
