imbrium

Name: imbrium
Version: 2.1.0
Home page: https://github.com/maxmekiska/imbrium
Summary: Standard and Hybrid Deep Learning Multivariate-Multi-Step & Univariate-Multi-Step Time Series Forecasting.
Upload time: 2023-10-25 05:36:13
Author: Maximilian Mekiska
Keywords: machinelearning, keras, deeplearning, timeseries, forecasting
# imbrium [![Downloads](https://pepy.tech/badge/imbrium)](https://pepy.tech/project/imbrium) [![PyPi](https://img.shields.io/pypi/v/imbrium.svg?color=blue)](https://pypi.org/project/imbrium/) [![GitHub license](https://img.shields.io/github/license/maxmekiska/Imbrium?color=black)](https://github.com/maxmekiska/Imbrium/blob/main/LICENSE) [![PyPI pyversions](https://img.shields.io/pypi/pyversions/imbrium.svg)](https://pypi.python.org/project/imbrium/)

## Status

| Build | Status|
|---|---|
| `MAIN BUILD`  |  ![master](https://github.com/maxmekiska/imbrium/actions/workflows/main.yml/badge.svg?branch=main) |
|  `DEV BUILD`   |  ![development](https://github.com/maxmekiska/imbrium/actions/workflows/main.yml/badge.svg?branch=development) |

## Pip install

```shell
pip install imbrium
```

Standard and Hybrid Deep Learning Multivariate-Multi-Step & Univariate-Multi-Step
Time Series Forecasting.


                          ██╗███╗░░░███╗██████╗░██████╗░██╗██╗░░░██╗███╗░░░███╗
                          ██║████╗░████║██╔══██╗██╔══██╗██║██║░░░██║████╗░████║
                          ██║██╔████╔██║██████╦╝██████╔╝██║██║░░░██║██╔████╔██║
                          ██║██║╚██╔╝██║██╔══██╗██╔══██╗██║██║░░░██║██║╚██╔╝██║
                          ██║██║░╚═╝░██║██████╦╝██║░░██║██║╚██████╔╝██║░╚═╝░██║
                          ╚═╝╚═╝░░░░░╚═╝╚═════╝░╚═╝░░╚═╝╚═╝░╚═════╝░╚═╝░░░░░╚═╝


## Introduction to imbrium

imbrium is a deep learning library that specializes in time series forecasting. Its primary objective is to provide a user-friendly repository of deep learning architectures for this purpose. The focus lies on simplifying how these architectures are created and applied: instead of building complex models from scratch, users configure pre-built architectures at a high level.


## imbrium Summary

imbrium is designed to simplify the application of deep learning models to time series forecasting. The library offers a variety of pre-built architectures, and the user retains full control over the configuration of each layer, including the number of neurons, the activation function, the loss function, the optimizer, and the metrics applied. This makes it possible to adapt an architecture to the specific needs of the forecasting task at hand. imbrium also offers a user-friendly interface for training and evaluating these models, making it easy to quickly iterate over and test different configurations.

imbrium uses the sliding window approach to generate forecasts. The sliding window approach in time series forecasting involves moving a fixed-size window (steps_past) through historical data, using the data within the window as input features. The data points immediately following the window serve as the target variables (steps_future). This lets the model learn sequential patterns and trends in the data and use them to predict future points of the time series.
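
To make the mechanics concrete, here is a minimal sketch that builds sliding-window input/output pairs from a toy univariate series with plain NumPy. It illustrates the concept only; imbrium performs this windowing for you internally.

```python
import numpy as np


def sliding_window(series: np.ndarray, steps_past: int, steps_future: int):
    """Split a 1-D series into (steps_past -> steps_future) training pairs."""
    X, y = [], []
    for start in range(len(series) - steps_past - steps_future + 1):
        X.append(series[start : start + steps_past])  # input window
        y.append(series[start + steps_past : start + steps_past + steps_future])  # targets
    return np.array(X), np.array(y)


series = np.arange(10, dtype=float)  # toy series: 0.0, 1.0, ..., 9.0
X, y = sliding_window(series, steps_past=3, steps_future=2)
print(X[0], y[0])  # [0. 1. 2.] [3. 4.]
```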

## imbrium 2.0.0

- adopts `keras_core`
- removes internal hyperparameter tuning
- removes encoder-decoder architectures
- improves layer configuration
- splits input data into separate target and feature NumPy arrays (see the sketch below)
- overall, lightens the library
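
For example, a minimal sketch of preparing the two arrays from a pandas DataFrame (the DataFrame `df` and its columns are hypothetical demo data; in practice you supply your own time series):

```python
import pandas as pd

df = pd.DataFrame({
    "target": [1.0, 2.0, 3.0, 4.0],      # series to forecast
    "feature_a": [0.1, 0.2, 0.3, 0.4],
    "feature_b": [10.0, 20.0, 30.0, 40.0],
})

target_numpy_array = df["target"].to_numpy()                      # shape: (n,)
features_numpy_array = df[["feature_a", "feature_b"]].to_numpy()  # shape: (n, n_features)
```

These arrays are what the predictor constructors below expect, e.g. `PureMulti(target=target_numpy_array, features=features_numpy_array)`.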

### Get started with imbrium

<details>
  <summary>Expand</summary>
  <br>

<details>
  <summary>Univariate Pure Predictors</summary>
  <br>


```python
from imbrium import PureUni

# create a PureUni object (numpy array expected)
predictor = PureUni(target = target_numpy_array) 

# the following models are available for a PureUni object:

# create and fit a multi-layer perceptron model
predictor.create_fit_mlp( 
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        dense_block_one = 1,
        dense_block_two = 1,
        dense_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a recurrent neural network model
predictor.create_fit_rnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        rnn_block_one = 1,
        rnn_block_two = 1,
        rnn_block_three = 1,
        metrics = "mean_squared_error",
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a long short-term memory neural network model
predictor.create_fit_lstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        lstm_block_one = 1,
        lstm_block_two = 1,
        lstm_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional neural network
predictor.create_fit_cnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        dense_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a gated recurrent unit neural network  
predictor.create_fit_gru(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        gru_block_one = 1,
        gru_block_two = 1,
        gru_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional recurrent neural network
predictor.create_fit_birnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional long short-term memory neural network
predictor.create_fit_bilstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional gated recurrent unit neural network
predictor.create_fit_bigru(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        bigru_block_one = 1,
        gru_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# you can add additional layers to the default layers by increasing the layer block count and adding the configuration for each added layer to the layer_config dictionary. Please note that the last layer should not have a dropout key (see the sketch below).
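
# illustrative sketch (values and block-to-layer mapping chosen for demonstration):
# one extra dense layer added by raising a block count and supplying a matching
# "layer3" entry; note the final layer omits the dropout key
predictor.create_fit_mlp(
    steps_past,
    steps_future,
    dense_block_one=1,
    dense_block_two=1,
    dense_block_three=2,
    layer_config={
        "layer0": {"config": {"neurons": 50, "activation": "relu", "regularization": 0.0, "dropout": 0.0}},
        "layer1": {"config": {"neurons": 50, "activation": "relu", "regularization": 0.0, "dropout": 0.0}},
        "layer2": {"config": {"neurons": 25, "activation": "relu", "regularization": 0.0, "dropout": 0.0}},
        "layer3": {"config": {"neurons": 25, "activation": "relu", "regularization": 0.0}},
    },
)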

# in addition, you can pass the following keras EarlyStopping arguments to all create_fit_* methods:
#
#     monitor='val_loss',
#     min_delta=0,
#     patience=0,
#     verbose=0,
#     mode='auto',
#     baseline=None,
#     restore_best_weights=False,
#     start_from_epoch=0
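
# for example (illustrative values), appended to any create_fit_* call above:
predictor.create_fit_lstm(
    steps_past,
    steps_future,
    monitor='val_loss',
    patience=5,
    restore_best_weights=True,
)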


# inspect model structure
predictor.model_blueprint()

# inspect keras model performance (access the underlying dictionary via the history key):
predictor.show_performance()

# make predictions via (numpy array expected):
predictor.predict(data)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
```  
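
Putting it together, a minimal end-to-end sketch (synthetic data and illustrative hyperparameters; all other arguments keep the defaults listed above):

```python
import numpy as np

from imbrium import PureUni

target = np.sin(np.linspace(0, 20, 500))  # synthetic univariate series

predictor = PureUni(target=target)
predictor.create_fit_lstm(
    steps_past=10,
    steps_future=5,
    epochs=10,
)

predictor.model_blueprint()
predictor.show_performance()

# illustrative: pass the most recent window as a numpy array
predictions = predictor.predict(target[-10:])
```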

</details>

<details>
  <summary>Multivariate Pure Predictors</summary>
  <br>


```python
from imbrium import PureMulti

# create a PureMulti object (numpy array expected)
predictor = PureMulti(target = target_numpy_array, features = features_numpy_array) 

# the following models are available for a PureMulti object:

# create and fit a multi-layer perceptron model
predictor.create_fit_mlp( 
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        dense_block_one = 1,
        dense_block_two = 1,
        dense_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a recurrent neural network model
predictor.create_fit_rnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        rnn_block_one = 1,
        rnn_block_two = 1,
        rnn_block_three = 1,
        metrics = "mean_squared_error",
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a long short-term memory neural network model
predictor.create_fit_lstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        lstm_block_one = 1,
        lstm_block_two = 1,
        lstm_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional neural network
predictor.create_fit_cnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        dense_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a gated recurrent unit neural network  
predictor.create_fit_gru(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        gru_block_one = 1,
        gru_block_two = 1,
        gru_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional recurrent neural network
predictor.create_fit_birnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional long short-term memory neural network
predictor.create_fit_bilstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional gated recurrent unit neural network
predictor.create_fit_bigru(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        bigru_block_one = 1,
        gru_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# you can add additional layers to the default layers by increasing the layer block count and adding the configuration for each added layer to the layer_config dictionary. Please note that the last layer should not have a dropout key.

# in addition, you can pass the following keras EarlyStopping arguments to all create_fit_* methods:
#
#     monitor='val_loss',
#     min_delta=0,
#     patience=0,
#     verbose=0,
#     mode='auto',
#     baseline=None,
#     restore_best_weights=False,
#     start_from_epoch=0


# inspect model structure
predictor.model_blueprint()

# inspect keras model performance (access the underlying dictionary via the history key):
predictor.show_performance()

# make predictions via (numpy array expected):
predictor.predict(data)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
```  
</details>

<details>
  <summary>Univariate Hybrid Predictors</summary>
  <br>

```python
from imbrium import HybridUni

# create a HybridUni object (numpy array expected)
predictor = HybridUni(target = target_numpy_array) 

# the following models are available for a HybridUni object:
# create and fit a convolutional recurrent neural network
predictor.create_fit_cnnrnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        rnn_block_one = 1,
        rnn_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional long short-term memory neural network
predictor.create_fit_cnnlstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        lstm_block_one = 1,
        lstm_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional gated recurrent unit neural network  
predictor.create_fit_cnngru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        gru_block_one = 1,
        gru_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional recurrent neural network
predictor.create_fit_cnnbirnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional long short-term memory neural network
predictor.create_fit_cnnbilstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional gated recurrent unit neural network
predictor.create_fit_cnnbigru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bigru_block_one = 1,
        gru_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# you can add additional layers to the default layers by increasing the layer block count and adding the configuration for each added layer to the layer_config dictionary. Please note that the last layer should not have a dropout key.

# in addition, you can pass the following keras EarlyStopping arguments to all create_fit_* methods:
#
#     monitor='val_loss',
#     min_delta=0,
#     patience=0,
#     verbose=0,
#     mode='auto',
#     baseline=None,
#     restore_best_weights=False,
#     start_from_epoch=0


# inspect model structure
predictor.model_blueprint()

# inspect keras model performance (access the underlying dictionary via the history key):
predictor.show_performance()

# make predictions via (numpy array expected):
# - when loading/retrieving a saved model, provide sub_seq, steps_past, steps_future in the predict method!
predictor.predict(data, sub_seq=None, steps_past=None, steps_future=None)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
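
# e.g. after retrieving a saved hybrid predictor, pass explicit shape
# information in the predict call (illustrative values):
predictions = predictor.predict(data, sub_seq=2, steps_past=10, steps_future=5)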
```  

</details>

<details>
  <summary>Multivariate Hybrid Predictors</summary>
  <br>


```python
from imbrium import HybridMulti

# create a HybridMulti object (numpy array expected)
predictor = HybridMulti(target = target_numpy_array, features = features_numpy_array) 

# the following models are available for a HybridMulti object:
# create and fit a convolutional recurrent neural network
predictor.create_fit_cnnrnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        rnn_block_one = 1,
        rnn_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional long short-term memory neural network
predictor.create_fit_cnnlstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        lstm_block_one = 1,
        lstm_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional gated recurrent unit neural network  
predictor.create_fit_cnngru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        gru_block_one = 1,
        gru_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional recurrent neural network
predictor.create_fit_cnnbirnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional long short-term memory neural network
predictor.create_fit_cnnbilstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional gated recurrent unit neural network
predictor.create_fit_cnnbigru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bigru_block_one = 1,
        gru_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# you can add additional layers to the default layers by increasing the layer block count and adding the configuration for each added layer to the layer_config dictionary. Please note that the last layer should not have a dropout key.

# in addition, you can pass the following keras EarlyStopping arguments to all create_fit_* methods:
#
#     monitor='val_loss',
#     min_delta=0,
#     patience=0,
#     verbose=0,
#     mode='auto',
#     baseline=None,
#     restore_best_weights=False,
#     start_from_epoch=0


# inspect model structure
predictor.model_blueprint()

# inspect keras model performance (access the underlying dictionary via the history key):
predictor.show_performance()

# make predictions via (numpy array expected):
# - when loading/retrieving a saved model, provide sub_seq, steps_past, steps_future in the predict method!
predictor.predict(data, sub_seq=None, steps_past=None, steps_future=None)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
```  
</details>

</details>

### Use Case: scaling + hyperparameter optimization

https://github.com/maxmekiska/ImbriumTesting-Demo/blob/main/use-case-1.ipynb

### Integration tests

https://github.com/maxmekiska/ImbriumTesting-Demo/blob/main/IntegrationTest.ipynb


## LEGACY: imbrium versions <= v1.3.0
<details>
  <summary>Expand</summary>
  <br>

The library differentiates between two
modes:

1. Univariate-Multistep forecasting
2. Multivariate-Multistep forecasting

These two main modes are further divided based on the complexity of the underlying model architectures:

1. Pure
2. Hybrid

Pure supports the following architectures:

- Multilayer perceptron (MLP)
- Recurrent neural network (RNN)
- Long short-term memory (LSTM)
- Gated recurrent unit (GRU)
- Convolutional neural network (CNN)
- Bidirectional recurrent neural network (BI-RNN)
- Bidirectional long short-term memory (BI-LSTM)
- Bidirectional gated recurrent unit (BI-GRU)
- Encoder-Decoder recurrent neural network
- Encoder-Decoder long short-term memory
- Encoder-Decoder convolutional neural network (Encoding via CNN, Decoding via GRU)
- Encoder-Decoder gated recurrent unit

Hybrid supports:

- Convolutional neural network + recurrent neural network (CNN-RNN)
- Convolutional neural network + Long short-term memory (CNN-LSTM)
- Convolutional neural network + Gated recurrent unit (CNN-GRU)
- Convolutional neural network + Bidirectional recurrent neural network (CNN-BI-RNN)
- Convolutional neural network + Bidirectional long short-term memory (CNN-BI-LSTM)
- Convolutional neural network + Bidirectional gated recurrent unit (CNN-BI-GRU)

Please note that each model is supported by a prior input data pre-processing procedure that allows setting a look-back period, a look-forward period, a sub-sequence division (hybrid architectures only), and a data scaling method.

The following scikit-learn scaling procedures are supported:

- StandardScaler
- MinMaxScaler
- MaxAbsScaler
- Normalizing ([0, 1])
- None (raw data input)

During training/fitting, callback conditions can be defined to guard against
overfitting.

Trained models can furthermore be saved or loaded if the user wishes to do so.

## How to use imbrium?

<details>
  <summary>Expand</summary>
  <br>

Attention: type annotations have been kept in the examples below to make the configurations easier to read.

#### Version updates:

##### Version >= 1.2.0

Version 1.2.0 started supporting TensorBoard dashboards: https://www.tensorflow.org/tensorboard/get_started

##### Version >= 1.3.0

Version 1.3.0 started supporting adjustable layer-depth configurations for all architectures. If you wish to adjust the layer depth, make sure to include a custom layer_config accounting for the correct number of layers. The last layer cannot contain a dropout parameter, so its tuple needs to be of length 3: (neurons, activation, regularization).

### `Univariate Models`:

1. Univariate-Multistep forecasting - Pure architectures

```python
from imbrium.predictors.univarpure import PureUni

predictor = PureUni(
                    steps_past: int,
                    steps_future: int,
                    data = pd.DataFrame(),
                    scale: str = ''
                   )

# Choose between one of the architectures:

predictor.create_mlp(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     dense_block_one: int = 1,
                     dense_block_two: int = 1,
                     dense_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (25,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (25, 'relu', 0.0) # (neurons, activation, regularization)
                      }
                    )

predictor.create_rnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     rnn_block_one: int = 1,
                     rnn_block_two: int = 1,
                     rnn_block_three: int = 1,
                     layer_config: dict = 
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_lstm(
                      optimizer: str = 'adam',
                      optimizer_args: dict = None,
                      loss: str = 'mean_squared_error',
                      metrics: str = 'mean_squared_error',
                      lstm_block_one: int = 1,
                      lstm_block_two: int = 1,
                      lstm_block_three: int = 1,
                      layer_config: dict =
                      {
                        'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                      }
                     )

predictor.create_gru(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     gru_block_one: int = 1,
                     gru_block_two: int = 1,
                     gru_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_cnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     conv_block_one: int = 1,
                     conv_block_two: int = 1,
                     dense_block_one: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer2': (2), # (pool_size)
                      'layer3': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_birnn(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       birnn_block_one: int = 1,
                       rnn_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_bilstm(
                        optimizer: str = 'adam', 
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        bilstm_block_one: int = 1,
                        lstm_block_one: int = 1,
                        layer_config: dict = 
                        {
                          'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                       )

predictor.create_bigru(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       bigru_block_one: int = 1,
                       gru_block_one: int = 1,
                       layer_config: dict = 
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_encdec_rnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_rnn_block_one: int = 1,
                            enc_rnn_block_two: int = 1,
                            dec_rnn_block_one: int = 1,
                            dec_rnn_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0),  # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                           )

predictor.create_encdec_lstm(
                             optimizer: str = 'adam',
                             optimizer_args: dict = None,
                             loss: str = 'mean_squared_error',
                             metrics: str = 'mean_squared_error',
                             enc_lstm_block_one: int = 1,
                             enc_lstm_block_two: int = 1,
                             dec_lstm_block_one: int = 1,
                             dec_lstm_block_two: int = 1,
                             layer_config: dict = 
                             {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                             }
                            )

predictor.create_encdec_cnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_conv_block_one: int = 1,
                            enc_conv_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict = 
                            {
                              'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer2': (2), # (pool_size)
                              'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer4': (100, 'relu', 0.0)  # (neurons, activation, regularization)
                            }
                          )

predictor.create_encdec_gru(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_gru_block_one: int = 1,
                            enc_gru_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict = 
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping

predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in TensorBoard
                    monitor='loss', 
                    patience=3
                   )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot the chosen metric against the loss

# Make a prediction based on new unseen data
predictor.predict(data)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# optional Step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = PureUni(steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
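
To make the reference above concrete, here is a minimal end-to-end sketch for a pure univariate predictor. It uses a synthetic sine-wave series as a stand-in for real data, relies on the default `layer_config` documented above, and keeps all hyperparameter choices illustrative rather than prescriptive:

```python
import numpy as np
import pandas as pd

from imbrium.predictors.univarpure import PureUni

# synthetic univariate series (hypothetical stand-in for real data)
data = pd.DataFrame({'target': np.sin(np.linspace(0, 50, 500))})

# learn from 24 past steps, forecast 5 future steps
predictor = PureUni(steps_past=24, steps_future=5, data=data, scale='standard')

predictor.create_lstm()  # defaults documented above
predictor.fit_model(epochs=10, show_progress=0, board=False)
predictor.show_performance()
```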

2. Univariate-Multistep forecasting - Hybrid architectures

```python
from imbrium.predictors.univarhybrid import HybridUni

predictor = HybridUni(
                      sub_seq: int,
                      steps_past: int,
                      steps_future: int,
                      data = pd.DataFrame(),
                      scale: str = ''
                     )

# Choose between one of the architectures:

predictor.create_cnnrnn(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        rnn_block_one: int = 1,
                        rnn_block_two: int = 1,
                        layer_config = 
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0),  # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnlstm(
                         optimizer: str = 'adam', 
                         optimizer_args: dict = None,
                         loss: str = 'mean_squared_error',
                         metrics: str = 'mean_squared_error',
                         conv_block_one: int = 1,
                         conv_block_two: int = 1,
                         lstm_block_one: int = 1,
                         lstm_block_two: int = 1,
                         layer_config = 
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0),  # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnngru(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        gru_block_one: int = 1,
                        gru_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnbirnn(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          birnn_block_one: int = 1,
                          rnn_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                        )

predictor.create_cnnbilstm(
                           optimizer: str = 'adam',
                           optimizer_args: dict = None,
                           loss: str = 'mean_squared_error',
                           metrics: str = 'mean_squared_error',
                           conv_block_one: int = 1,
                           conv_block_two: int = 1,
                           bilstm_block_one: int = 1,
                           lstm_block_one: int = 1,
                           layer_config =
                           {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

predictor.create_cnnbigru(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          bigru_block_one: int = 1,
                          gru_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                        )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping

predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in TensorBoard
                    monitor='loss',
                    patience=3
                    )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot the chosen metric against the loss

# Make a prediction based on new unseen data
predictor.predict(data: array)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# optional Step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = HybridUni(sub_seq: int, steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
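
A matching hedged sketch for the hybrid univariate case. The key extra argument is `sub_seq`: the `steps_past` window is split into `sub_seq` sub-sequences that feed the convolutional front end, so (as an assumption based on that reshaping) `steps_past` should be divisible by `sub_seq`:

```python
import numpy as np
import pandas as pd

from imbrium.predictors.univarhybrid import HybridUni

# hypothetical univariate series standing in for real data
data = pd.DataFrame({'target': np.random.rand(500)})

# each 10-step input window is split into 2 sub-sequences of 5 steps
predictor = HybridUni(sub_seq=2, steps_past=10, steps_future=5,
                      data=data, scale='maxabs')

predictor.create_cnnlstm()  # defaults documented above
predictor.fit_model(epochs=10, show_progress=0, board=False)
predictor.show_performance()
```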

### `Multivariate Models`:

1. Multivariate-Multistep forecasting - Pure architectures

```python
from imbrium.predictors.multivarpure import PureMulti

# please make sure that the target feature is the first variable in the feature list
predictor = PureMulti(
                      steps_past: int,
                      steps_future: int,
                      data = pd.DataFrame(),
                      features: list = [],
                      scale: str = ''
                     )

# Choose between one of the architectures:

predictor.create_mlp(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     dense_block_one: int = 1,
                     dense_block_two: int = 1,
                     dense_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (25,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (25, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_rnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     rnn_block_one: int = 1,
                     rnn_block_two: int = 1,
                     rnn_block_three: int = 1,
                     layer_config: dict = 
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_lstm(
                      optimizer: str = 'adam',
                      optimizer_args: dict = None,
                      loss: str = 'mean_squared_error',
                      metrics: str = 'mean_squared_error',
                      lstm_block_one: int = 1,
                      lstm_block_two: int = 1,
                      lstm_block_three: int = 1,
                      layer_config: dict =
                      {
                        'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50,'relu', 0.0, 0.0),  # (neurons, activation, regularization, dropout)
                        'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                      }
                    )

predictor.create_gru(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     gru_block_one: int = 1,
                     gru_block_two: int = 1,
                     gru_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     } 
                    )

predictor.create_cnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     conv_block_one: int = 1,
                     conv_block_two: int = 1,
                     dense_block_one: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer2': (2), # (pool_size)
                      'layer3': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_birnn(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       birnn_block_one: int = 1,
                       rnn_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_bilstm(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        bilstm_block_one: int = 1,
                        lstm_block_one: int = 1,
                        layer_config: dict =
                        {
                          'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_bigru(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       bigru_block_one: int = 1,
                       gru_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_encdec_rnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_rnn_block_one: int = 1,
                            enc_rnn_block_two: int = 1,
                            dec_rnn_block_one: int = 1,
                            dec_rnn_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

predictor.create_encdec_lstm(
                             optimizer: str = 'adam',
                             optimizer_args: dict = None,
                             loss: str = 'mean_squared_error',
                             metrics: str = 'mean_squared_error',
                             enc_lstm_block_one: int = 1,
                             enc_lstm_block_two: int = 1,
                             dec_lstm_block_one: int = 1,
                             dec_lstm_block_two: int = 1,
                             layer_config: dict =
                             {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                             }
                            )

predictor.create_encdec_cnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_conv_block_one: int = 1,
                            enc_conv_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer2': (2), # (pool_size)
                              'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer4': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

predictor.create_encdec_gru(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_gru_block_one: int = 1,
                            enc_gru_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping

predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in TensorBoard
                    monitor='loss',
                    patience=3
                  )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot the chosen metric against the loss

# Make a prediction based on new unseen data
predictor.predict(data: array)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# optional Step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = PureMulti(steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
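
For the multivariate case, a minimal sketch with hypothetical column names. Per the note above, the target must be the first entry in the feature list:

```python
import numpy as np
import pandas as pd

from imbrium.predictors.multivarpure import PureMulti

# hypothetical multivariate frame; 'target' is the series to forecast
data = pd.DataFrame(
    np.random.rand(500, 3), columns=['target', 'feat_a', 'feat_b']
)

predictor = PureMulti(steps_past=24, steps_future=5, data=data,
                      features=['target', 'feat_a', 'feat_b'],
                      scale='normalize')

predictor.create_lstm()  # defaults documented above
predictor.fit_model(epochs=10, show_progress=0, board=False)
predictor.show_performance()
```
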
2. Multivariate-Multistep forecasting - Hybrid architectures

```python
from imbrium.predictors.multivarhybrid import HybridMulti

# please make sure that the target feature is the first variable in the feature list
predictor = HybridMulti(
                        sub_seq: int,
                        steps_past: int,
                        steps_future: int,
                        data = pd.DataFrame(),
                        features: list = [],
                        scale: str = ''
                       )

# Choose between one of the architectures:

predictor.create_cnnrnn(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        rnn_block_one: int = 1,
                        rnn_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnlstm(
                         optimizer: str = 'adam',
                         optimizer_args: dict = None,
                         loss: str = 'mean_squared_error',
                         metrics: str = 'mean_squared_error',
                         conv_block_one: int = 1,
                         conv_block_two: int = 1,
                         lstm_block_one: int = 1,
                         lstm_block_two: int = 1,
                         layer_config =
                         {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                         }
                        )

predictor.create_cnngru(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        gru_block_one: int = 1,
                        gru_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnbirnn(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          birnn_block_one: int = 1,
                          rnn_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                        )

predictor.create_cnnbilstm(
                           optimizer: str = 'adam',
                           optimizer_args: dict = None,
                           loss: str = 'mean_squared_error',
                           metrics: str = 'mean_squared_error',
                           conv_block_one: int = 1,
                           conv_block_two: int = 1,
                           bilstm_block_one: int = 1,
                           lstm_block_one: int = 1,
                           layer_config =
                           {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                           }
                          )

predictor.create_cnnbigru(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          bigru_block_one: int = 1,
                          gru_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                        )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping

predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in TensorBoard
                    monitor='loss',
                    patience=3
                  )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot the chosen metric against the loss

# Make a prediction based on new unseen data
predictor.predict(data: array)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# optional Step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = HybridMulti(sub_seq: int, steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
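
Finally, the hybrid multivariate counterpart, again as a hedged sketch with hypothetical column names and illustrative window sizes:

```python
import numpy as np
import pandas as pd

from imbrium.predictors.multivarhybrid import HybridMulti

# hypothetical multivariate frame; 'target' is the series to forecast
data = pd.DataFrame(
    np.random.rand(500, 3), columns=['target', 'feat_a', 'feat_b']
)

# 10 past steps split into 2 sub-sequences; forecast 5 steps ahead
predictor = HybridMulti(sub_seq=2, steps_past=10, steps_future=5,
                        data=data,
                        features=['target', 'feat_a', 'feat_b'],
                        scale='normalize')

predictor.create_cnnlstm()  # defaults documented above
predictor.fit_model(epochs=10, show_progress=0, board=False)
predictor.show_performance()
```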
</details>

## Hyperparameter Optimization (imbrium 1.1.0)
<details>
  <summary>Expand</summary>
  <br>

Starting from version 1.1.0, imbrium supports experimental hyperparameter optimization for the model layer configuration and optimizer arguments. The optimization process uses the Optuna library (https://optuna.org/).

### Optimization via the seeker decorator

To leverage optimization, use the new classes `OptimizePureUni`, `OptimizeHybridUni`, `OptimizePureMulti` and `OptimizeHybridMulti`. These classes implement optimizable model architecture methods:

`OptimizePureUni` & `OptimizePureMulti`:

  - create_fit_mlp
  - create_fit_rnn
  - create_fit_lstm
  - create_fit_cnn
  - create_fit_gru
  - create_fit_birnn
  - create_fit_bilstm
  - create_fit_bigru
  - create_fit_encdec_rnn
  - create_fit_encdec_lstm
  - create_fit_encdec_gru
  - create_fit_encdec_cnn

`OptimizeHybridUni` & `OptimizeHybridMulti`:

  - create_fit_cnnrnn
  - create_fit_cnnlstm
  - create_fit_cnngru
  - create_fit_cnnbirnn
  - create_fit_cnnbilstm
  - create_fit_cnnbigru

#### Example `OptimizePureUni`

```python
from imbrium.predictors.univarpure import OptimizePureUni
from imbrium.utils.optimization import seeker

# initialize optimizable predictor object
predictor = OptimizePureUni(steps_past=5, steps_future=10, data=data, scale='standard')


# use seeker decorator on optimization harness
@seeker(optimizer_range=["adam", "sgd"], 
        layer_config_range= [
            {
              'layer0': (5, 'relu'),
              'layer1': (10,'relu'),
              'layer2': (5, 'relu')
            },
            {
              'layer0': (2, 'relu'),
              'layer1': (5, 'relu'),
              'layer2': (2, 'relu')
            }
        ], 
        optimizer_args_range = [
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials = 2)
def create_fit_model(predictor: object, *args, **kwargs):
    # use optimizable create_fit_xxx method
    return predictor.create_fit_lstm(*args, **kwargs)


create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=0,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )

predictor.show_performance()
predictor.predict(data.tail(5))
predictor.model_blueprint()
```

#### Example `OptimizeHybridUni`

```python
from imbrium.predictors.univarhybrid import OptimizeHybridUni
from imbrium.utils.optimization import seeker

predictor = OptimizeHybridUni(sub_seq = 2, steps_past = 10, steps_future = 5, data = data, scale = 'maxabs')

@seeker(optimizer_range=["adam", "sgd"], 
        layer_config_range= [
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (25, 'relu'),
              'layer4': (10, 'relu')
            },
            {
              'layer0': (16, 1, 'relu'),
              'layer1': (8, 1, 'relu'),
              'layer2': (2),
              'layer3': (55, 'relu'),
              'layer4': (10, 'relu')
            },
            {
              'layer0': (32, 1, 'relu'),
              'layer1': (16, 1, 'relu'),
              'layer2': (2),
              'layer3': (25, 'relu'),
              'layer4': (10, 'relu')
            }
        ], 
        optimizer_args_range = [
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials = 2)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_cnnlstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=0,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )

predictor.show_performance()
predictor.predict(data.tail(10))
predictor.model_blueprint()
```

#### Example `OptimizePureMulti`

```python
from imbrium.predictors.multivarpure import OptimizePureMulti  # import path assumed from the module layout above
from imbrium.utils.optimization import seeker

predictor = OptimizePureMulti(
                              steps_past = 5,
                              steps_future = 10,
                              data = data,
                              features = ['target', 'target', 'HouseAge', 'AveRooms', 'AveBedrms'],
                              scale = 'normalize'
                             )


@seeker(optimizer_range=["adam", "sgd"], 
        layer_config_range= [
            {
              'layer0': (5, 'relu'),
              'layer1': (10,'relu'),
              'layer2': (5, 'relu')
            },
            {
              'layer0': (2, 'relu'),
              'layer1': (5, 'relu'),
              'layer2': (2, 'relu')
            },
            {
              'layer0': (20, 'relu'),
              'layer1': (50, 'relu'),
              'layer2': (20, 'sigmoid')
            }
        ], 
        optimizer_args_range = [
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials = 3)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_lstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=1, 
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )


predictor.show_performance()
predictor.predict(data[['target', 'HouseAge', 'AveRooms', 'AveBedrms']].tail(5))
predictor.model_blueprint()
```


#### Example `OptimizeHybridMulti`

```python
from imbrium.predictors.multivarhybrid import OptimizeHybridMulti  # import path assumed from the module layout above
from imbrium.utils.optimization import seeker

predictor = OptimizeHybridMulti(
                                sub_seq = 2,
                                steps_past = 10,
                                steps_future = 5,
                                data = data,
                                features = ['target', 'target', 'HouseAge', 'AveRooms', 'AveBedrms'],
                                scale = 'normalize'
                               )


@seeker(optimizer_range=["adam", "sgd"], 
        layer_config_range= [
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            },
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            },
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            }
        ], 
        optimizer_args_range = [
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials = 3)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_cnnlstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=1,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )


predictor.show_performance()
predictor.predict(data[['target', 'HouseAge', 'AveRooms', 'AveBedrms']].tail(10))
predictor.model_blueprint()
```
#### The shell of the seeker harness

```python
predictor = OptimizePureMulti(...)

@seeker(optimizer_range=[...], 
        layer_config_range= [
            {...},
            {...},
            {...}
        ], 
        optimizer_args_range = [
            {...},
            {...},
        ],
        optimization_target = '...', n_trials = x)
def create_fit_model(predictor: object, *args, **kwargs): # seeker harness
    return predictor.create_fit_xxx(*args, **kwargs)

create_fit_model(...) # execute seeker harness


predictor.show_performance()
predictor.predict(...)
predictor.model_blueprint()
```


</details>

## References

<details>
  <summary>Expand</summary>
  <br>

Brownlee, J., 2016. Display deep learning model training history in Keras [Online]. Available from: https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/.

Brownlee, J., 2018a. How to develop convolutional neural network models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-convolutional-neural-network-models-for-time-series-forecasting/.

Brownlee, J., 2018b. How to develop LSTM models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/.

Brownlee, J., 2018c. How to develop multilayer perceptron models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-multilayer-perceptron-models-for-time-series-forecasting/.

</details>


</details>


            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/maxmekiska/imbrium",
    "name": "imbrium",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "machinelearning,keras,deeplearning,timeseries,forecasting",
    "author": "Maximilian Mekiska",
    "author_email": "maxmekiska@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/08/73/c4f6c18db3cfe9d442b600c68d93807f7c963fda9522be36361e67e1eaa4/imbrium-2.1.0.tar.gz",
    "platform": null,
    "description": "# imbrium [![Downloads](https://pepy.tech/badge/imbrium)](https://pepy.tech/project/imbrium) [![PyPi](https://img.shields.io/pypi/v/imbrium.svg?color=blue)](https://pypi.org/project/imbrium/) [![GitHub license](https://img.shields.io/github/license/maxmekiska/Imbrium?color=black)](https://github.com/maxmekiska/Imbrium/blob/main/LICENSE) [![PyPI pyversions](https://img.shields.io/pypi/pyversions/imbrium.svg)](https://pypi.python.org/project/imbrium/)\r\n\r\n## Status\r\n\r\n| Build | Status|\r\n|---|---|\r\n| `MAIN BUILD`  |  ![master](https://github.com/maxmekiska/imbrium/actions/workflows/main.yml/badge.svg?branch=main) |\r\n|  `DEV BUILD`   |  ![development](https://github.com/maxmekiska/imbrium/actions/workflows/main.yml/badge.svg?branch=development) |\r\n\r\n## Pip install\r\n\r\n```shell\r\npip install imbrium\r\n```\r\n\r\nStandard and Hybrid Deep Learning Multivariate-Multi-Step & Univariate-Multi-Step\r\nTime Series Forecasting.\r\n\r\n\r\n                          \u2588\u2588\u2557\u2588\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2588\u2557\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2557\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2557\u2588\u2588\u2588\u2557\u2591\u2591\u2591\u2588\u2588\u2588\u2557\r\n                            \u2551\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2557\u2591\u2588\u2588\u2588\u2588\u2551\r\n                          \u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2588\u2588\u2554\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2566\u255d\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2554\u2588\u2588\u2588\u2588\u2554\u2588\u2588\u2551\r\n                          \u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2554\u2550\u2550\u2588\u2588\u2557\u2588\u2588\u2551\u2588\u2588\u2551\u2591\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2554\u255d\u2588\u2588\u2551\r\n                          \u2588\u2588\u2551\u2588\u2588\u2551\u2591\u255a\u2550\u255d\u2591\u2588\u2588\u2551\u2588\u2588\u2588\u2588\u2588\u2588\u2566\u255d\u2588\u2588\u2551\u2591\u2591\u2588\u2588\u2551\u2588\u2588\u2551\u255a\u2588\u2588\u2588\u2588\u2588\u2588\u2554\u255d\u2588\u2588\u2551\u2591\u255a\u2550\u255d\u2591\u2588\u2588\u2551\r\n                          \u255a\u2550\u255d\u255a\u2550\u255d\u2591\u2591\u2591\u2591\u2591\u255a\u2550\u255d\u255a\u2550\u2550\u2550\u2550\u2550\u255d\u2591\u255a\u2550\u255d\u2591\u2591\u255a\u2550\u255d\u255a\u2550\u255d\u2591\u255a\u2550\u2550\u2550\u2550\u2550\u255d\u2591\u255a\u2550\u255d\u2591\u2591\u2591\u2591\u2591\u255a\u2550\u255d\r\n\r\n\r\n## Introduction to imbrium\r\n\r\nimbrium is a deep learning library that specializes in time series forecasting. Its primary objective is to provide a user-friendly repository of deep learning architectures for this purpose. The focus is on simplifying the process of creating and applying these architectures, with the goal of allowing users to create complex architectures without having to build them from scratch. 
Instead, the emphasis shifts to high-level configuration of the architectures.\r\n\r\n\r\n## imbrium Summary\r\n\r\nimbrium is designed to simplify the application of deep learning models for time series forecasting. The library offers a variety of pre-built architectures. The user retains full control over the configuration of each layer, including the number of neurons, the type of activation function, loss function, optimizer, and metrics applied. This allows for the flexibility to adapt the architecture to the specific needs of the forecast task at hand. Imbrium also offers a user-friendly interface for training and evaluating these models, making it easy to quickly iterate and test different configurations.\r\n\r\nimbrium uses the sliding window approach to generate forecasts. The sliding window approach in time series forecasting involves moving a fixed-size window (steps_past) through historical data, using the data within the window as input features. The next data points outside the window are used as the target variables (steps_future). This method allows the model to learn sequential patterns and trends in the data, enabling accurate predictions for future points in the time series. \r\n\r\n## imbrium 2.0.0\r\n\r\n- adapting `keras_core`\r\n- removing internal hyperparameter tuning\r\n- removing encoder-decoder architectures\r\n- improve layer configuration\r\n- split input data into target and feature numpy arrays\r\n- overall lighten the library\r\n\r\n### Get started with imbrium\r\n\r\n<details>\r\n  <summary>Expand</summary>\r\n  <br>\r\n\r\n<details>\r\n  <summary>Univariate Pure Predictors</summary>\r\n  <br>\r\n\r\n\r\n```python\r\nfrom imbrium import PureUni\r\n\r\n# create a PureUni object (numpy array expected)\r\npredictor = PureUni(target = target_numpy_array) \r\n\r\n# the following models are available for a PureUni objects;\r\n\r\n# create and fit a muti-layer perceptron model\r\npredictor.create_fit_mlp( \r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        dense_block_one = 1,\r\n        dense_block_two = 1,\r\n        dense_block_three = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\"neurons\": 50, \"activation\": \"relu\", \"regularization\": 0.0}\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a recurrent neural network model\r\npredictor.create_fit_rnn(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        rnn_block_one = 1,\r\n        rnn_block_two = 1,\r\n        rnn_block_three = 1,\r\n        metrics = \"mean_squared_error\",\r\n        layer_config = {\r\n        
    \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\"neurons\": 50, \"activation\": \"relu\", \"regularization\": 0.0}\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a long short-term neural network model\r\npredictor.create_fit_lstm(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        lstm_block_one = 1,\r\n        lstm_block_two = 1,\r\n        lstm_block_three = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\"neurons\": 50, \"activation\": \"relu\", \"regularization\": 0.0}\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional neural network\r\npredictor = create_fit_cnn(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        dense_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs 
= 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a gated recurrent unit neural network  \r\npredictor.create_fit_gru(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        gru_block_one = 1,\r\n        gru_block_two = 1,\r\n        gru_block_three = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a bidirectional recurrent neural network\r\npredictor.create_fit_birnn(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        birnn_block_one = 1,\r\n        rnn_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a bidirectional long short-term memory neural network\r\npredictor.create_fit_bilstm(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        bilstm_block_one = 1,\r\n        lstm_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = 
False,\r\n)\r\n\r\n# create and fit a bidirectional gated recurrent neural network\r\npredictor.create_fit_bigru(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        bigru_block_one = 1,\r\n        gru_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# you can add additional layer to the defualt layers by increasing the layer block count and adding the configuration for the layer in the layer_config dictionary. Please note that the last layer should not have a dropout key.\r\n\r\n# in addition, you can add to all models early stopping arguments:\r\n\r\nmonitor='val_loss',\r\nmin_delta=0,\r\npatience=0,\r\nverbose=0,\r\nmode='auto',\r\nbaseline=None,\r\nrestore_best_weights=False,\r\nstart_from_epoch=0\r\n\r\n\r\n# instpect model structure\r\npredictor.model_blueprint()\r\n\r\n# insptect keras model performances via (access dictionary via history key):\r\npredictor.show_performance()\r\n\r\n# make predictions via (numpy array expected):\r\npredictor.predict(data)\r\n\r\n# save predictor via:\r\npredictor.freeze(absolute_path)\r\n\r\n# load saved predictor via:\r\npredictor.retrieve(location)\r\n```  \r\n\r\n</details>\r\n\r\n<details>\r\n  <summary>Multivariate Pure Predictors</summary>\r\n  <br>\r\n\r\n\r\n```python\r\nfrom imbrium import PureMulti\r\n\r\n# create a PureMulti object (numpy array expected)\r\npredictor = PureMulti(target = target_numpy_array, features = features_numpy_array) \r\n\r\n# the following models are available for a PureMulti objects;\r\n\r\n# create and fit a muti-layer perceptron model\r\npredictor.create_fit_mlp( \r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        dense_block_one = 1,\r\n        dense_block_two = 1,\r\n        dense_block_three = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\"neurons\": 50, \"activation\": \"relu\", \"regularization\": 0.0}\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = 
# in addition, you can pass the following early stopping arguments to all models:

monitor='val_loss',
min_delta=0,
patience=0,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False,
start_from_epoch=0

# inspect model structure
predictor.model_blueprint()

# inspect the keras model performance (access the dictionary via the history key):
predictor.show_performance()

# make predictions (numpy array expected):
predictor.predict(data)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
```

</details>

<details>
  <summary>Multivariate Pure Predictors</summary>
  <br>

```python
from imbrium import PureMulti

# create a PureMulti object (numpy array expected)
predictor = PureMulti(target = target_numpy_array, features = features_numpy_array)

# the following models are available for PureMulti objects:

# create and fit a multi-layer perceptron model
predictor.create_fit_mlp(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        dense_block_one = 1,
        dense_block_two = 1,
        dense_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a recurrent neural network model
predictor.create_fit_rnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        rnn_block_one = 1,
        rnn_block_two = 1,
        rnn_block_three = 1,
        metrics = "mean_squared_error",
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a long short-term memory neural network model
predictor.create_fit_lstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        lstm_block_one = 1,
        lstm_block_two = 1,
        lstm_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {"neurons": 50, "activation": "relu", "regularization": 0.0}
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional neural network
predictor.create_fit_cnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        dense_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a gated recurrent unit neural network
predictor.create_fit_gru(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        gru_block_one = 1,
        gru_block_two = 1,
        gru_block_three = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional recurrent neural network
predictor.create_fit_birnn(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a bidirectional long short-term memory neural network
predictor.create_fit_bilstm(
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "neurons": 50,
                    "activation": "relu",
                    "regularization": 0.0,
\"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a bidirectional gated recurrent neural network\r\npredictor.create_fit_bigru(\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        bigru_block_one = 1,\r\n        gru_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"neurons\": 50,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# you can add additional layer to the defualt layers by increasing the layer block count and adding the configuration for the layer in the layer_config dictionary. Please note that the last layer should not have a dropout key.\r\n\r\n# in addition, you can add to all models early stopping arguments:\r\n\r\nmonitor='val_loss',\r\nmin_delta=0,\r\npatience=0,\r\nverbose=0,\r\nmode='auto',\r\nbaseline=None,\r\nrestore_best_weights=False,\r\nstart_from_epoch=0\r\n\r\n\r\n# instpect model structure\r\npredictor.model_blueprint()\r\n\r\n# insptect keras model performances via (access dictionary via history key):\r\npredictor.show_performance()\r\n\r\n# make predictions via (numpy array expected):\r\npredictor.predict(data)\r\n\r\n# save predictor via:\r\npredictor.freeze(absolute_path)\r\n\r\n# load saved predictor via:\r\npredictor.retrieve(location)\r\n```  \r\n</details>\r\n\r\n<details>\r\n  <summary>Univariate Hybrid Predictors</summary>\r\n  <br>\r\n\r\n```python\r\nfrom imbrium import HybridUni\r\n\r\n# create a HybridUni object (numpy array expected)\r\npredictor = HybridUni(target = target_numpy_array) \r\n\r\n# the following models are available for a HybridUni objects:\r\n# create and fit a convolutional recurrent neural network\r\npredictor.create_fit_cnnrnn(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        rnn_block_one = 1,\r\n        rnn_block_two = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    
\"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional long short-term memory neural network\r\npredictor.create_fit_cnnlstm(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        lstm_block_one = 1,\r\n        lstm_block_two = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional gated recurrent unit neural network  \r\npredictor.create_fit_cnngru(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        gru_block_one = 1,\r\n        gru_block_two = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n            
        \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional bidirectional recurrent neural network\r\npredictor.create_fit_cnnbirnn(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        birnn_block_one = 1,\r\n        rnn_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional bidirectional long short-term neural network\r\npredictor.create_fit_cnnbilstm(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n       
 metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        bilstm_block_one = 1,\r\n        lstm_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = False,\r\n)\r\n\r\n# create and fit a convolutional bidirectional gated recurrent neural network\r\npredictor.create_fit_cnnbigru(\r\n        sub_seq,\r\n        steps_past,\r\n        steps_future,\r\n        optimizer = \"adam\",\r\n        optimizer_args = None,\r\n        loss = \"mean_squared_error\",\r\n        metrics = \"mean_squared_error\",\r\n        conv_block_one = 1,\r\n        conv_block_two = 1,\r\n        bigru_block_one = 1,\r\n        gru_block_one = 1,\r\n        layer_config = {\r\n            \"layer0\": {\r\n                \"config\": {\r\n                    \"filters\": 64,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer1\": {\r\n                \"config\": {\r\n                    \"filters\": 32,\r\n                    \"kernel_size\": 1,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer2\": {\r\n                \"config\": {\r\n                    \"pool_size\": 2,\r\n                }\r\n            },\r\n            \"layer3\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                    \"dropout\": 0.0,\r\n                }\r\n            },\r\n            \"layer4\": {\r\n                \"config\": {\r\n                    \"neurons\": 32,\r\n                    \"activation\": \"relu\",\r\n                    \"regularization\": 0.0,\r\n                }\r\n            },\r\n        },\r\n        epochs = 100,\r\n        show_progress = 1,\r\n        validation_split = 0.20,\r\n        board = 
# you can add additional layers to the default layers by increasing the layer
# block count and adding a configuration for each new layer to the
# layer_config dictionary. Please note that the last layer should not have a
# dropout key.

# in addition, you can pass the following early stopping arguments to all models:

monitor='val_loss',
min_delta=0,
patience=0,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False,
start_from_epoch=0

# inspect model structure
predictor.model_blueprint()

# inspect the keras model performance (access the dictionary via the history key):
predictor.show_performance()

# make predictions (numpy array expected):
# - when loading/retrieving a saved model, provide sub_seq, steps_past and steps_future to the predict method!
predictor.predict(data, sub_seq=None, steps_past=None, steps_future=None)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
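
# hypothetical sketch: predicting with a retrieved model - sub_seq, steps_past
# and steps_future must match the values the model was trained with (the
# concrete numbers below are made up for illustration)
predictor.retrieve(location)
predictor.predict(data, sub_seq = 2, steps_past = 10, steps_future = 5)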
```

</details>

<details>
  <summary>Multivariate Hybrid Predictors</summary>
  <br>

```python
from imbrium import HybridMulti

# create a HybridMulti object (numpy array expected)
predictor = HybridMulti(target = target_numpy_array, features = features_numpy_array)

# the following models are available for HybridMulti objects:
# create and fit a convolutional recurrent neural network
predictor.create_fit_cnnrnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        rnn_block_one = 1,
        rnn_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional long short-term memory neural network
predictor.create_fit_cnnlstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        lstm_block_one = 1,
        lstm_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional gated recurrent unit neural network
predictor.create_fit_cnngru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        gru_block_one = 1,
        gru_block_two = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional recurrent neural network
predictor.create_fit_cnnbirnn(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        birnn_block_one = 1,
        rnn_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional long short-term memory neural network
predictor.create_fit_cnnbilstm(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bilstm_block_one = 1,
        lstm_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# create and fit a convolutional bidirectional gated recurrent unit neural network
predictor.create_fit_cnnbigru(
        sub_seq,
        steps_past,
        steps_future,
        optimizer = "adam",
        optimizer_args = None,
        loss = "mean_squared_error",
        metrics = "mean_squared_error",
        conv_block_one = 1,
        conv_block_two = 1,
        bigru_block_one = 1,
        gru_block_one = 1,
        layer_config = {
            "layer0": {
                "config": {
                    "filters": 64,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer1": {
                "config": {
                    "filters": 32,
                    "kernel_size": 1,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer2": {
                "config": {
                    "pool_size": 2,
                }
            },
            "layer3": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                    "dropout": 0.0,
                }
            },
            "layer4": {
                "config": {
                    "neurons": 32,
                    "activation": "relu",
                    "regularization": 0.0,
                }
            },
        },
        epochs = 100,
        show_progress = 1,
        validation_split = 0.20,
        board = False,
)

# you can add additional layers to the default layers by increasing the layer
# block count and adding a configuration for each new layer to the
# layer_config dictionary. Please note that the last layer should not have a
# dropout key.

# in addition, you can pass the following early stopping arguments to all models:

monitor='val_loss',
min_delta=0,
patience=0,
verbose=0,
mode='auto',
baseline=None,
restore_best_weights=False,
start_from_epoch=0

# inspect model structure
predictor.model_blueprint()

# inspect the keras model performance (access the dictionary via the history key):
predictor.show_performance()

# make predictions (numpy array expected):
# - when loading/retrieving a saved model, provide sub_seq, steps_past and steps_future to the predict method!
predictor.predict(data, sub_seq=None, steps_past=None, steps_future=None)

# save predictor via:
predictor.freeze(absolute_path)

# load saved predictor via:
predictor.retrieve(location)
```

</details>

</details>
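
Putting the pieces together, a minimal end-to-end run might look as follows. This is a sketch under assumptions: the synthetic numpy data, the epoch count, and the shape passed to `predict` (the last `steps_past` rows of the feature array) are made up for illustration, and all arguments not given are assumed to fall back to the defaults shown above.

```python
import numpy as np

from imbrium import PureMulti

# toy data: 500 time steps, one target series and two explanatory features
target_numpy_array = np.random.rand(500)
features_numpy_array = np.random.rand(500, 2)

predictor = PureMulti(target = target_numpy_array, features = features_numpy_array)

# fit a small mlp: look back 24 steps, forecast 5 steps ahead
predictor.create_fit_mlp(
        steps_past = 24,
        steps_future = 5,
        epochs = 10,
)

predictor.show_performance()

# assumed input shape: the most recent steps_past rows of the features
predictor.predict(features_numpy_array[-24:])
```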
### Use Case: scaling + hyper parameter optimization

https://github.com/maxmekiska/ImbriumTesting-Demo/blob/main/use-case-1.ipynb

### Integration tests

https://github.com/maxmekiska/ImbriumTesting-Demo/blob/main/IntegrationTest.ipynb


## LEGACY: imbrium versions <= v.1.3.0
<details>
  <summary>Expand</summary>
  <br>

The library differentiates between two modes:

1. Univariate-Multistep forecasting
2. Multivariate-Multistep forecasting

These two main modes are further divided based on the complexity of the underlying model architectures:

1. Pure
2. Hybrid

Pure supports the following architectures:

- Multilayer perceptron (MLP)
- Recurrent neural network (RNN)
- Long short-term memory (LSTM)
- Gated recurrent unit (GRU)
- Convolutional neural network (CNN)
- Bidirectional recurrent neural network (BI-RNN)
- Bidirectional long short-term memory (BI-LSTM)
- Bidirectional gated recurrent unit (BI-GRU)
- Encoder-Decoder recurrent neural network
- Encoder-Decoder long short-term memory
- Encoder-Decoder convolutional neural network (Encoding via CNN, Decoding via GRU)
- Encoder-Decoder gated recurrent unit

Hybrid supports:

- Convolutional neural network + recurrent neural network (CNN-RNN)
- Convolutional neural network + Long short-term memory (CNN-LSTM)
- Convolutional neural network + Gated recurrent unit (CNN-GRU)
- Convolutional neural network + Bidirectional recurrent neural network (CNN-BI-RNN)
- Convolutional neural network + Bidirectional long short-term memory (CNN-BI-LSTM)
- Convolutional neural network + Bidirectional gated recurrent unit (CNN-BI-GRU)

Please note that each model is supported by a prior input data pre-processing procedure which allows you to set a look-back period, a look-forward period, a sub-sequence division (hybrid architectures only) and a data scaling method.

The following scikit-learn scaling procedures are supported:

- StandardScaler
- MinMaxScaler
- MaxAbsScaler
- Normalizing ([0, 1])
- None (raw data input)

During training/fitting, callback conditions can be defined to guard against overfitting.

Trained models can furthermore be saved or loaded if the user wishes to do so.

## How to use imbrium?

<details>
  <summary>Expand</summary>
  <br>

Attention: type hints have been left in the examples below to make the configurations easier to read.

#### Version updates:

##### Version >= 1.2.0

Version 1.2.0 started supporting TensorBoard dashboards: https://www.tensorflow.org/tensorboard/get_started

##### Version >= 1.3.0

Version 1.3.0 started supporting adjustable layer depth configurations for all architectures. If you wish to adjust the layer depth, please make sure to include a custom layer_config accounting for the correct number of layers. The last layer cannot contain a dropout parameter -> the tuple needs to be of length 3: (neurons, activation, regularization).
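
For illustration, a deepened custom layer_config under these rules might look like the following sketch (the depth must match the configured block counts; the values are made up):

```python
# hypothetical four-layer legacy layer_config: every layer except the last is a
# 4-tuple (neurons, activation, regularization, dropout); the final layer is a
# 3-tuple (neurons, activation, regularization) without a dropout entry
layer_config = {
    'layer0': (50, 'relu', 0.0, 0.0),
    'layer1': (50, 'relu', 0.0, 0.0),
    'layer2': (25, 'relu', 0.0, 0.0),
    'layer3': (25, 'relu', 0.0),
}
```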
### `Univariate Models`:

1. Univariate-Multistep forecasting - Pure architectures

```python
from imbrium.predictors.univarpure import PureUni

predictor = PureUni(
                    steps_past: int,
                    steps_future: int,
                    data = pd.DataFrame(),
                    scale: str = ''
                   )

# Choose between one of the architectures:

predictor.create_mlp(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     dense_block_one: int = 1,
                     dense_block_two: int = 1,
                     dense_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (25, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (25, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_rnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     rnn_block_one: int = 1,
                     rnn_block_two: int = 1,
                     rnn_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_lstm(
                      optimizer: str = 'adam',
                      optimizer_args: dict = None,
                      loss: str = 'mean_squared_error',
                      metrics: str = 'mean_squared_error',
                      lstm_block_one: int = 1,
                      lstm_block_two: int = 1,
                      lstm_block_three: int = 1,
                      layer_config: dict =
                      {
                        'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                      }
                     )

predictor.create_gru(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     gru_block_one: int = 1,
                     gru_block_two: int = 1,
                     gru_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_cnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     conv_block_one: int = 1,
                     conv_block_two: int = 1,
                     dense_block_one: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer2': (2), # (pool_size)
                      'layer3': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_birnn(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       birnn_block_one: int = 1,
                       rnn_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_bilstm(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        bilstm_block_one: int = 1,
                        lstm_block_one: int = 1,
                        layer_config: dict =
                        {
                          'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                       )

predictor.create_bigru(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       bigru_block_one: int = 1,
                       gru_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_encdec_rnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_rnn_block_one: int = 1,
                            enc_rnn_block_two: int = 1,
                            dec_rnn_block_one: int = 1,
                            dec_rnn_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                           )

predictor.create_encdec_lstm(
                             optimizer: str = 'adam',
                             optimizer_args: dict = None,
                             loss: str = 'mean_squared_error',
                             metrics: str = 'mean_squared_error',
                             enc_lstm_block_one: int = 1,
                             enc_lstm_block_two: int = 1,
                             dec_lstm_block_one: int = 1,
                             dec_lstm_block_two: int = 1,
                             layer_config: dict =
                             {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                             }
                            )

predictor.create_encdec_cnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_conv_block_one: int = 1,
                            enc_conv_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer2': (2), # (pool_size)
                              'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer4': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

predictor.create_encdec_gru(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_gru_block_one: int = 1,
                            enc_gru_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                          )

# Fit the predictor object - more callback settings at:

# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping

predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in tensorboard
                    monitor='loss',
                    patience=3
                   )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot metric name against loss

# Make a prediction based on new unseen data
predictor.predict(data)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with same characteristics as model to load
# Step 2: Do not pass in any data
# Step 3: Invoke the method load_model()
# optional Step 4: Use the setter method set_model_id(name: str) to give model a name

loading_predictor = PureUni(steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```

2. Univariate-Multistep forecasting - Hybrid architectures
```python
from imbrium.predictors.univarhybrid import HybridUni

predictor = HybridUni(
                      sub_seq: int,
                      steps_past: int,
                      steps_future: int,
                      data = pd.DataFrame(),
                      scale: str = ''
                     )

# Choose between one of the architectures:

predictor.create_cnnrnn(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        rnn_block_one: int = 1,
                        rnn_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnlstm(
                         optimizer: str = 'adam',
                         optimizer_args: dict = None,
                         loss: str = 'mean_squared_error',
                         metrics: str = 'mean_squared_error',
                         conv_block_one: int = 1,
                         conv_block_two: int = 1,
                         lstm_block_one: int = 1,
                         lstm_block_two: int = 1,
                         layer_config =
                         {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                         }
                        )

predictor.create_cnngru(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        gru_block_one: int = 1,
                        gru_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                      )

predictor.create_cnnbirnn(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          birnn_block_one: int = 1,
                          rnn_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                        )

predictor.create_cnnbilstm(
                           optimizer: str = 'adam',
                           optimizer_args: dict = None,
                           loss: str = 'mean_squared_error',
                           metrics: str = 'mean_squared_error',
                           conv_block_one: int = 1,
                           conv_block_two: int = 1,
                           bilstm_block_one: int = 1,
                           lstm_block_one: int = 1,
                           layer_config =
                           {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                           }
                          )

predictor.create_cnnbigru(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          bigru_block_one: int = 1,
                          gru_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
               'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)\r\n                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)\r\n                          }\r\n                        )\r\n\r\n# Fit the predictor object - more callback settings at:\r\n\r\n# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping\r\n\r\npredictor.fit_model(\r\n                    epochs: int,\r\n                    show_progress: int = 1,\r\n                    validation_split: float = 0.20,\r\n                    board: bool = True, # record training progress in tensorboard\r\n                    monitor='loss',\r\n                    patience=3\r\n                    )\r\n\r\n# Have a look at the model performance\r\npredictor.show_performance(metric_name: str = None) # optionally plot metric name against loss\r\n\r\n# Make a prediction based on new unseen data\r\npredictor.predict(data: array)\r\n\r\n# Safe your model:\r\npredictor.save_model()\r\n\r\n# Load a model:\r\n# Step 1: initialize a new predictor object with same characteristics as model to load\r\n# Step 2: Do not pass in any data\r\n# Step 3: Invoke the method load_model()\r\n# optional Step 4: Use the setter method set_model_id(name: str) to give model a name\r\n\r\nloading_predictor =  HybridUni(sub_seq: int, steps_past: int, steps_future: int)\r\nloading_predictor.load_model(location: str)\r\nloading_predictor.set_model_id(name: str)\r\n```\r\n\r\n### `Multivariate Models`:\r\n\r\n1. Multivariate-Multistep forecasting - Pure architectures\r\n\r\n```python\r\nfrom imbrium.predictors.multivarpure import PureMulti\r\n\r\n# please make sure that the target feature is the first variable in the feature list\r\npredictor = PureMulti(steps_past: int, steps_future: int, data = DataFrame(), features = [], scale: str = '')\r\n\r\n# Choose between one of the architectures:\r\n\r\npredictor.create_mlp(\r\n                     optimizer: str = 'adam',\r\n                     optimizer_args: dict = None,\r\n                     loss: str = 'mean_squared_error',\r\n                     metrics: str = 'mean_squared_error',\r\n                     dense_block_one: int = 1,\r\n                     dense_block_two: int = 1,\r\n                     dense_block_three: int = 1,\r\n                     layer_config: dict =\r\n                     {\r\n                      'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)\r\n                      'layer1': (25,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)\r\n                      'layer2': (25, 'relu', 0.0) # (neurons, activation, regularization)\r\n                     }\r\n                    )\r\n\r\npredictor.create_rnn(\r\n                     optimizer: str = 'adam',\r\n                     optimizer_args: dict = None,\r\n                     loss: str = 'mean_squared_error',\r\n                     metrics: str = 'mean_squared_error',\r\n                     rnn_block_one: int = 1,\r\n                     rnn_block_two: int = 1,\r\n                     rnn_block_three: int = 1,\r\n                     layer_config: dict = \r\n                     {\r\n                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)\r\n                      'layer1': (50,'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)\r\n                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)\r\n         
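
For orientation, here is a minimal end-to-end sketch of the univariate hybrid workflow documented above. It is illustrative only: the single-column sine-wave DataFrame, the epoch count, and the import path (mirroring the `OptimizeHybridUni` example further below) are assumptions, and `create_cnnlstm` is called with the defaults shown above.

```python
# Minimal illustrative sketch of the univariate hybrid workflow.
# The synthetic sine series and hyperparameters are placeholders.
import numpy as np
import pandas as pd

from imbrium.predictors.univarhybrid import HybridUni

# hypothetical single-column stand-in for real univariate data
data = pd.DataFrame({'target': np.sin(np.linspace(0, 50, 500))})

predictor = HybridUni(sub_seq=2, steps_past=10, steps_future=5,
                      data=data, scale='standard')

predictor.create_cnnlstm()  # default optimizer, loss and layer_config
predictor.fit_model(epochs=10, show_progress=0, board=False)

predictor.show_performance()
prediction = predictor.predict(data.tail(10))  # last steps_past observations
predictor.save_model()
```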

### `Multivariate Models`:

1. Multivariate-Multistep forecasting - Pure architectures

```python
from imbrium.predictors.multivarpure import PureMulti

# please make sure that the target feature is the first variable in the feature list
predictor = PureMulti(steps_past: int, steps_future: int, data = DataFrame(), features = [], scale: str = '')

# Choose one of the architectures:

predictor.create_mlp(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     dense_block_one: int = 1,
                     dense_block_two: int = 1,
                     dense_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (25, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (25, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_rnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     rnn_block_one: int = 1,
                     rnn_block_two: int = 1,
                     rnn_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_lstm(
                      optimizer: str = 'adam',
                      optimizer_args: dict = None,
                      loss: str = 'mean_squared_error',
                      metrics: str = 'mean_squared_error',
                      lstm_block_one: int = 1,
                      lstm_block_two: int = 1,
                      lstm_block_three: int = 1,
                      layer_config: dict =
                      {
                       'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                       'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                       'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                      }
                     )

predictor.create_gru(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     gru_block_one: int = 1,
                     gru_block_two: int = 1,
                     gru_block_three: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (40, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                      'layer2': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_cnn(
                     optimizer: str = 'adam',
                     optimizer_args: dict = None,
                     loss: str = 'mean_squared_error',
                     metrics: str = 'mean_squared_error',
                     conv_block_one: int = 1,
                     conv_block_two: int = 1,
                     dense_block_one: int = 1,
                     layer_config: dict =
                     {
                      'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                      'layer2': (2), # (pool_size)
                      'layer3': (50, 'relu', 0.0) # (neurons, activation, regularization)
                     }
                    )

predictor.create_birnn(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       birnn_block_one: int = 1,
                       rnn_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_bilstm(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        bilstm_block_one: int = 1,
                        lstm_block_one: int = 1,
                        layer_config: dict =
                        {
                         'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                         'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                       )

predictor.create_bigru(
                       optimizer: str = 'adam',
                       optimizer_args: dict = None,
                       loss: str = 'mean_squared_error',
                       metrics: str = 'mean_squared_error',
                       bigru_block_one: int = 1,
                       gru_block_one: int = 1,
                       layer_config: dict =
                       {
                        'layer0': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                        'layer1': (50, 'relu', 0.0) # (neurons, activation, regularization)
                       }
                      )

predictor.create_encdec_rnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_rnn_block_one: int = 1,
                            enc_rnn_block_two: int = 1,
                            dec_rnn_block_one: int = 1,
                            dec_rnn_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                           )

predictor.create_encdec_lstm(
                             optimizer: str = 'adam',
                             optimizer_args: dict = None,
                             loss: str = 'mean_squared_error',
                             metrics: str = 'mean_squared_error',
                             enc_lstm_block_one: int = 1,
                             enc_lstm_block_two: int = 1,
                             dec_lstm_block_one: int = 1,
                             dec_lstm_block_two: int = 1,
                             layer_config: dict =
                             {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                             }
                            )

predictor.create_encdec_cnn(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_conv_block_one: int = 1,
                            enc_conv_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                              'layer2': (2), # (pool_size)
                              'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer4': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                           )

predictor.create_encdec_gru(
                            optimizer: str = 'adam',
                            optimizer_args: dict = None,
                            loss: str = 'mean_squared_error',
                            metrics: str = 'mean_squared_error',
                            enc_gru_block_one: int = 1,
                            enc_gru_block_two: int = 1,
                            dec_gru_block_one: int = 1,
                            dec_gru_block_two: int = 1,
                            layer_config: dict =
                            {
                              'layer0': (100, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer1': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer2': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                              'layer3': (100, 'relu', 0.0) # (neurons, activation, regularization)
                            }
                           )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping
predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in tensorboard
                    monitor='loss',
                    patience=3
                   )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot metric name against loss

# Make a prediction based on new unseen data
predictor.predict(data: array)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# Optional step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = PureMulti(steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
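
Before moving on to the hybrid variant, here is a minimal end-to-end sketch of the pure multivariate workflow. It is illustrative only: the synthetic DataFrame and column names are stand-ins, and the duplicated `'target'` entry in the feature list mirrors the convention used in the `OptimizePureMulti` example below (target first, then the input features).

```python
# Minimal illustrative sketch of the pure multivariate workflow.
# Synthetic data; real features and hyperparameters will differ.
import numpy as np
import pandas as pd

from imbrium.predictors.multivarpure import PureMulti

rng = np.random.default_rng(42)
data = pd.DataFrame({
    'target': rng.normal(size=400),
    'feature_a': rng.normal(size=400),
    'feature_b': rng.normal(size=400),
})

# target first, then the input features (which may include the target itself)
predictor = PureMulti(steps_past=5, steps_future=3, data=data,
                      features=['target', 'target', 'feature_a', 'feature_b'],
                      scale='standard')

predictor.create_lstm()  # defaults as documented above
predictor.fit_model(epochs=10, show_progress=0, board=False)

predictor.show_performance()
prediction = predictor.predict(data[['target', 'feature_a', 'feature_b']].tail(5))
predictor.save_model()
```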

2. Multivariate-Multistep forecasting - Hybrid architectures

```python
from imbrium.predictors.multivarhybrid import HybridMulti

# please make sure that the target feature is the first variable in the feature list
predictor = HybridMulti(sub_seq: int, steps_past: int, steps_future: int, data = DataFrame(), features: list = [], scale: str = '')

# Choose one of the architectures:

predictor.create_cnnrnn(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        rnn_block_one: int = 1,
                        rnn_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                       )

predictor.create_cnnlstm(
                         optimizer: str = 'adam',
                         optimizer_args: dict = None,
                         loss: str = 'mean_squared_error',
                         metrics: str = 'mean_squared_error',
                         conv_block_one: int = 1,
                         conv_block_two: int = 1,
                         lstm_block_one: int = 1,
                         lstm_block_two: int = 1,
                         layer_config =
                         {
                           'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                           'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                           'layer2': (2), # (pool_size)
                           'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                           'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                         }
                        )

predictor.create_cnngru(
                        optimizer: str = 'adam',
                        optimizer_args: dict = None,
                        loss: str = 'mean_squared_error',
                        metrics: str = 'mean_squared_error',
                        conv_block_one: int = 1,
                        conv_block_two: int = 1,
                        gru_block_one: int = 1,
                        gru_block_two: int = 1,
                        layer_config =
                        {
                          'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                          'layer2': (2), # (pool_size)
                          'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                          'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                        }
                       )

predictor.create_cnnbirnn(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          birnn_block_one: int = 1,
                          rnn_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                         )

predictor.create_cnnbilstm(
                           optimizer: str = 'adam',
                           optimizer_args: dict = None,
                           loss: str = 'mean_squared_error',
                           metrics: str = 'mean_squared_error',
                           conv_block_one: int = 1,
                           conv_block_two: int = 1,
                           bilstm_block_one: int = 1,
                           lstm_block_one: int = 1,
                           layer_config =
                           {
                             'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                             'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                             'layer2': (2), # (pool_size)
                             'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                             'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                           }
                          )

predictor.create_cnnbigru(
                          optimizer: str = 'adam',
                          optimizer_args: dict = None,
                          loss: str = 'mean_squared_error',
                          metrics: str = 'mean_squared_error',
                          conv_block_one: int = 1,
                          conv_block_two: int = 1,
                          bigru_block_one: int = 1,
                          gru_block_one: int = 1,
                          layer_config =
                          {
                            'layer0': (64, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer1': (32, 1, 'relu', 0.0, 0.0), # (filter_size, kernel_size, activation, regularization, dropout)
                            'layer2': (2), # (pool_size)
                            'layer3': (50, 'relu', 0.0, 0.0), # (neurons, activation, regularization, dropout)
                            'layer4': (25, 'relu', 0.0) # (neurons, activation, regularization)
                          }
                         )

# Fit the predictor object - more callback settings at:
# https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping
predictor.fit_model(
                    epochs: int,
                    show_progress: int = 1,
                    validation_split: float = 0.20,
                    board: bool = True, # record training progress in tensorboard
                    monitor='loss',
                    patience=3
                   )

# Have a look at the model performance
predictor.show_performance(metric_name: str = None) # optionally plot metric name against loss

# Make a prediction based on new unseen data
predictor.predict(data: array)

# Save your model:
predictor.save_model()

# Load a model:
# Step 1: initialize a new predictor object with the same characteristics as the model to load
# Step 2: do not pass in any data
# Step 3: invoke the method load_model()
# Optional step 4: use the setter method set_model_id(name: str) to give the model a name

loading_predictor = HybridMulti(sub_seq: int, steps_past: int, steps_future: int)
loading_predictor.load_model(location: str)
loading_predictor.set_model_id(name: str)
```
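
To make the loading procedure above concrete, here is a minimal sketch of the save/load round trip; the model location and model id below are hypothetical placeholders.

```python
# Illustrative save/load round trip for HybridMulti; the path and
# model id below are hypothetical placeholders.
from imbrium.predictors.multivarhybrid import HybridMulti

# Step 1 + 2: same shape parameters as the saved model, no data passed
loading_predictor = HybridMulti(sub_seq=2, steps_past=10, steps_future=5)

# Step 3: load the persisted model from disk
loading_predictor.load_model(location='./saved_model')  # hypothetical path

# Optional step 4: give the reloaded model a name
loading_predictor.set_model_id(name='cnnlstm_reloaded')
```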
</details>

## Hyperparameter Optimization imbrium 1.1.0
<details>
  <summary>Expand</summary>
  <br>

Starting from version 1.1.0, imbrium supports experimental hyperparameter optimization for the model layer configuration and the optimizer arguments. The optimization process uses the Optuna library (https://optuna.org/).

### Optimization via the seeker decorator

To leverage optimization, use the classes `OptimizePureUni`, `OptimizeHybridUni`, `OptimizePureMulti` and `OptimizeHybridMulti`. These classes implement optimizable model architecture methods:

`OptimizePureUni` & `OptimizePureMulti`:

  - create_fit_mlp
  - create_fit_rnn
  - create_fit_lstm
  - create_fit_cnn
  - create_fit_gru
  - create_fit_birnn
  - create_fit_bilstm
  - create_fit_bigru
  - create_fit_encdec_rnn
  - create_fit_encdec_lstm
  - create_fit_encdec_gru
  - create_fit_encdec_cnn

`OptimizeHybridUni` & `OptimizeHybridMulti`:

  - create_fit_cnnrnn
  - create_fit_cnnlstm
  - create_fit_cnngru
  - create_fit_cnnbirnn
  - create_fit_cnnbilstm
  - create_fit_cnnbigru

#### Example `OptimizePureUni`

```python
from imbrium.predictors.univarpure import OptimizePureUni
from imbrium.utils.optimization import seeker

# initialize optimizable predictor object
predictor = OptimizePureUni(steps_past=5, steps_future=10, data=data, scale='standard')


# use seeker decorator on optimization harness
@seeker(optimizer_range=["adam", "sgd"],
        layer_config_range=[
            {
              'layer0': (5, 'relu'),
              'layer1': (10, 'relu'),
              'layer2': (5, 'relu')
            },
            {
              'layer0': (2, 'relu'),
              'layer1': (5, 'relu'),
              'layer2': (2, 'relu')
            }
        ],
        optimizer_args_range=[
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials=2)
def create_fit_model(predictor: object, *args, **kwargs):
    # use optimizable create_fit_xxx method
    return predictor.create_fit_lstm(*args, **kwargs)


create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=0,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )

predictor.show_performance()
predictor.predict(data.tail(5))
predictor.model_blueprint()
```

#### Example `OptimizeHybridUni`

```python
from imbrium.predictors.univarhybrid import OptimizeHybridUni
from imbrium.utils.optimization import seeker

predictor = OptimizeHybridUni(sub_seq=2, steps_past=10, steps_future=5, data=data, scale='maxabs')

@seeker(optimizer_range=["adam", "sgd"],
        layer_config_range=[
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (25, 'relu'),
              'layer4': (10, 'relu')
            },
            {
              'layer0': (16, 1, 'relu'),
              'layer1': (8, 1, 'relu'),
              'layer2': (2),
              'layer3': (55, 'relu'),
              'layer4': (10, 'relu')
            },
            {
              'layer0': (32, 1, 'relu'),
              'layer1': (16, 1, 'relu'),
              'layer2': (2),
              'layer3': (25, 'relu'),
              'layer4': (10, 'relu')
            }
        ],
        optimizer_args_range=[
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials=2)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_cnnlstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=0,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )

predictor.show_performance()
predictor.predict(data.tail(10))
predictor.model_blueprint()
```

#### Example `OptimizePureMulti`

```python
# import paths assumed to follow the PureMulti module shown above
from imbrium.predictors.multivarpure import OptimizePureMulti
from imbrium.utils.optimization import seeker

predictor = OptimizePureMulti(
                              steps_past=5,
                              steps_future=10,
                              data=data,
                              features=['target', 'target', 'HouseAge', 'AveRooms', 'AveBedrms'],
                              scale='normalize'
                             )


@seeker(optimizer_range=["adam", "sgd"],
        layer_config_range=[
            {
              'layer0': (5, 'relu'),
              'layer1': (10, 'relu'),
              'layer2': (5, 'relu')
            },
            {
              'layer0': (2, 'relu'),
              'layer1': (5, 'relu'),
              'layer2': (2, 'relu')
            },
            {
              'layer0': (20, 'relu'),
              'layer1': (50, 'relu'),
              'layer2': (20, 'sigmoid')
            }
        ],
        optimizer_args_range=[
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials=3)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_lstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=1,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )


predictor.show_performance()
predictor.predict(data[['target', 'HouseAge', 'AveRooms', 'AveBedrms']].tail(5))
predictor.model_blueprint()
```

#### Example `OptimizeHybridMulti`

```python
# import paths assumed to follow the HybridMulti module shown above
from imbrium.predictors.multivarhybrid import OptimizeHybridMulti
from imbrium.utils.optimization import seeker

predictor = OptimizeHybridMulti(
                                sub_seq=2,
                                steps_past=10,
                                steps_future=5,
                                data=data,
                                features=['target', 'target', 'HouseAge', 'AveRooms', 'AveBedrms'],
                                scale='normalize'
                               )


@seeker(optimizer_range=["adam", "sgd"],
        layer_config_range=[
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            },
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            },
            {
              'layer0': (8, 1, 'relu'),
              'layer1': (4, 1, 'relu'),
              'layer2': (2),
              'layer3': (5, 'relu'),
              'layer4': (5, 'relu')
            }
        ],
        optimizer_args_range=[
            {
              'learning_rate': 0.02,
            },
            {
              'learning_rate': 0.0001,
            }
        ],
        optimization_target='minimize', n_trials=3)
def create_fit_model(predictor: object, *args, **kwargs):
    return predictor.create_fit_cnnlstm(*args, **kwargs)

create_fit_model(
                 predictor,
                 loss='mean_squared_error',
                 metrics='mean_squared_error',
                 epochs=2,
                 show_progress=1,
                 validation_split=0.20,
                 board=True,
                 monitor='val_loss',
                 patience=2,
                 min_delta=0,
                 verbose=1
                )


predictor.show_performance()
predictor.predict(data[['target', 'HouseAge', 'AveRooms', 'AveBedrms']].tail(10))
predictor.model_blueprint()
```

#### The shell of the seeker harness

```python
predictor = OptimizePureMulti(...)

@seeker(optimizer_range=[...],
        layer_config_range=[
            {...},
            {...},
            {...}
        ],
        optimizer_args_range=[
            {...},
            {...},
        ],
        optimization_target='...', n_trials=x)
def create_fit_model(predictor: object, *args, **kwargs): # seeker harness
    return predictor.create_fit_xxx(*args, **kwargs)

create_fit_model(...) # execute seeker harness


predictor.show_performance()
predictor.predict(...)
predictor.model_blueprint()
```

</details>

## References

<details>
  <summary>Expand</summary>
  <br>

Brownlee, J., 2016. Display deep learning model training history in Keras [Online]. Available from: https://machinelearningmastery.com/display-deep-learning-model-training-history-in-keras/.

Brownlee, J., 2018a. How to develop convolutional neural network models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-convolutional-neural-network-models-for-time-series-forecasting/.

Brownlee, J., 2018b. How to develop LSTM models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/.

Brownlee, J., 2018c. How to develop multilayer perceptron models for time series forecasting [Online]. Available from: https://machinelearningmastery.com/how-to-develop-multilayer-perceptron-models-for-time-series-forecasting/.

</details>

</details>
    "bugtrack_url": null,
    "license": "",
    "summary": "Standard and Hybrid Deep Learning Multivariate-Multi-Step & Univariate-Multi-Step Time Series Forecasting.",
    "version": "2.1.0",
    "project_urls": {
        "Homepage": "https://github.com/maxmekiska/imbrium"
    },
    "split_keywords": [
        "machinelearning",
        "keras",
        "deeplearning",
        "timeseries",
        "forecasting"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "bf842dc6d28df381f539b9fc75d50f3a5e709f4966666ad5d3284c7ab120d72c",
                "md5": "fb7a93331eb91202d2c50d4cf9b72491",
                "sha256": "f65e8768fd55c8d53216ce611aa0dca78e30bdc5404f3316d8dadba078c3275c"
            },
            "downloads": -1,
            "filename": "imbrium-2.1.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "fb7a93331eb91202d2c50d4cf9b72491",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 30377,
            "upload_time": "2023-10-25T05:36:11",
            "upload_time_iso_8601": "2023-10-25T05:36:11.186792Z",
            "url": "https://files.pythonhosted.org/packages/bf/84/2dc6d28df381f539b9fc75d50f3a5e709f4966666ad5d3284c7ab120d72c/imbrium-2.1.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0873c4f6c18db3cfe9d442b600c68d93807f7c963fda9522be36361e67e1eaa4",
                "md5": "5f9345e66d05b329ce8f52fd326d50f8",
                "sha256": "6fab3a0954ad363f4aac3a2cfb86015a2792ce0ee4a02498b6a09a223b4346bf"
            },
            "downloads": -1,
            "filename": "imbrium-2.1.0.tar.gz",
            "has_sig": false,
            "md5_digest": "5f9345e66d05b329ce8f52fd326d50f8",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 41315,
            "upload_time": "2023-10-25T05:36:13",
            "upload_time_iso_8601": "2023-10-25T05:36:13.594652Z",
            "url": "https://files.pythonhosted.org/packages/08/73/c4f6c18db3cfe9d442b600c68d93807f7c963fda9522be36361e67e1eaa4/imbrium-2.1.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-10-25 05:36:13",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "maxmekiska",
    "github_project": "imbrium",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [],
    "tox": true,
    "lcname": "imbrium"
}
        