monotonic-nn
================

- Name: monotonic-nn
- Version: 0.3.4
- Home page: https://github.com/airtai/monotonic-nn
- Summary: Monotonic Neural Networks
- Upload time: 2023-06-09 13:12:31
- Author: AIRT Technologies d.o.o.
- Requires Python: >=3.8
- License: Creative Commons License
- Keywords: tensorflow, keras, monotone, "monotonic neural networks", "dense layer"

Constrained Monotonic Neural Networks
================

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

## Running in Google Colab

You can execute this interactive tutorial in Google Colab by clicking
the button below:

<a href="https://colab.research.google.com/github/airtai/monotonic-nn/blob/main/nbs/index.ipynb" target=”_blank”>
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" />
</a>

## Summary

This Python library implements Constrained Monotonic Neural Networks as
described in:

Davor Runje, Sharath M. Shankaranarayana, “Constrained Monotonic Neural
Networks”, in Proceedings of the 40th International Conference on
Machine Learning, 2023. \[[PDF](https://arxiv.org/pdf/2205.11775.pdf)\].

#### Abstract

Wider adoption of neural networks in many critical domains such as
finance and healthcare is being hindered by the need to explain their
predictions and to impose additional constraints on them. Monotonicity
constraint is one of the most requested properties in real-world
scenarios and is the focus of this paper. One of the oldest ways to
construct a monotonic fully connected neural network is to constrain
signs on its weights. Unfortunately, this construction does not work
with popular non-saturated activation functions as it can only
approximate convex functions. We show this shortcoming can be fixed by
constructing two additional activation functions from a typical
unsaturated monotonic activation function and employing each of them on
the part of neurons. Our experiments show this approach of building
monotonic neural networks has better accuracy when compared to other
state-of-the-art methods, while being the simplest one in the sense of
having the least number of parameters, and not requiring any
modifications to the learning procedure or post-learning steps. Finally,
we prove it can approximate any continuous monotone function on a
compact subset of $\mathbb{R}^n$.

#### Citation

If you use this library, please cite:

``` bibtex
@inproceedings{runje2023,
  title={Constrained Monotonic Neural Networks},
  author={Davor Runje and Sharath M. Shankaranarayana},
  booktitle={Proceedings of the 40th {International Conference on Machine Learning}},
  year={2023}
}
```

## Python package

This package contains an implementation of our Monotonic Dense Layer
[`MonoDense`](https://monotonic.airt.ai/latest/api/airt/keras/layers/MonoDense/#airt.keras.layers.MonoDense)
(Constrained Monotonic Fully Connected Layer). Below is the figure from
the paper for reference.

In the code, the variable `monotonicity_indicator` corresponds to **t**
in the figure, and the parameters `is_convex`, `is_concave` and
`activation_weights` are used to calculate the activation selector **s**
as follows:

- if `is_convex` or `is_concave` is **True**, then the activation
  selector **s** will be (`units`, 0, 0) or (0, `units`, 0),
  respectively.

- if both `is_convex` and `is_concave` are **False**, then
  `activation_weights` represents the ratios between $\breve{s}$,
  $\hat{s}$ and $\tilde{s}$, respectively. E.g., if
  `activation_weights = (2, 2, 1)` and `units = 10`, then

$$
(\breve{s}, \hat{s}, \tilde{s}) = (4, 4, 2)
$$

![mono-dense-layer-diagram](https://github.com/airtai/monotonic-nn/raw/main/nbs/images/mono-dense-layer-diagram.png)
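
For illustration, here is a minimal sketch of how such a ratio-based
split of `units` could be computed. The helper `split_units` is
hypothetical; the exact rounding used inside the library may differ:

``` python
# Hypothetical helper: resolve activation_weights ratios into unit counts
# (s_breve, s_hat, s_tilde). The actual rounding inside MonoDense may differ.
def split_units(units: int, activation_weights: tuple) -> tuple:
    total = sum(activation_weights)
    # scale each ratio to the number of units, rounding down
    s = [int(units * w / total) for w in activation_weights]
    # assign any units lost to rounding to the last group
    s[-1] += units - sum(s)
    return tuple(s)


print(split_units(10, (2, 2, 1)))  # (4, 4, 2)
```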

### Install

``` sh
pip install monotonic-nn
```

### How to use

In this example, we'll assume we have a simple dataset with three input
values $x_1$, $x_2$ and $x_3$ sampled from the normal distribution,
while the output value $y$ is calculated according to the following
formula before Gaussian noise is added to it:

$y = x_1^3 + \sin\left(\frac{x_2}{2 \pi}\right) + e^{-x_3}$

| $x_1$     | $x_2$     | $x_3$     | $y$       |
|----------:|----------:|----------:|----------:|
|  0.304717 | -1.039984 |  0.750451 |  0.234541 |
|  0.940565 | -1.951035 | -1.302180 |  4.199094 |
|  0.127840 | -0.316243 | -0.016801 |  0.834086 |
| -0.853044 |  0.879398 |  0.777792 | -0.093359 |
|  0.066031 |  1.127241 |  0.467509 |  0.780875 |
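
The snippets below use `x_train`, `y_train`, `x_val` and `y_val`, which
are not defined in this README. Here is a minimal sketch of how such a
dataset could be generated; the sample sizes and the noise scale are
assumptions, not the exact values from the original notebook:

``` python
import numpy as np

rng = np.random.default_rng(42)


def generate_data(n: int):
    # three input values sampled from a normal distribution
    # (standard normal assumed)
    x = rng.normal(size=(n, 3)).astype("float32")
    # y = x1^3 + sin(x2 / (2*pi)) + exp(-x3), plus Gaussian noise
    y = x[:, 0] ** 3 + np.sin(x[:, 1] / (2 * np.pi)) + np.exp(-x[:, 2])
    y += rng.normal(scale=0.1, size=n)
    return x, y.astype("float32")


x_train, y_train = generate_data(10_000)
x_val, y_val = generate_data(1_000)
```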

Now, we'll use the
[`MonoDense`](https://monotonic.airt.ai/latest/api/airt/keras/layers/MonoDense/#airt.keras.layers.MonoDense)
layer instead of the `Dense` layer to build a simple monotonic network.
By default, the
[`MonoDense`](https://monotonic.airt.ai/latest/api/airt/keras/layers/MonoDense/#airt.keras.layers.MonoDense)
layer assumes its output is monotonically increasing with respect to
all inputs. This assumption holds for every layer except possibly the
first one. For the first layer, we use `monotonicity_indicator` to
specify which input parameters are monotonic and whether they are
increasingly or decreasingly monotonic:

- set 1 for an increasingly monotonic parameter,

- set -1 for a decreasingly monotonic parameter, and

- set 0 otherwise.

In our case, the `monotonicity_indicator` is `[1, 0, -1]` because $y$
is:

- monotonically increasing w.r.t. $x_1$
  $\left(\frac{\partial y}{\partial x_1} = 3 x_1^2 \geq 0\right)$, and

- monotonically decreasing w.r.t. $x_3$
  $\left(\frac{\partial y}{\partial x_3} = -e^{-x_3} \leq 0\right)$.

``` python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Input

from airt.keras.layers import MonoDense

model = Sequential()

model.add(Input(shape=(3,)))

# y is increasing in x1, unconstrained in x2, and decreasing in x3
monotonicity_indicator = [1, 0, -1]
model.add(
    MonoDense(128, activation="elu", monotonicity_indicator=monotonicity_indicator)
)
model.add(MonoDense(128, activation="elu"))
model.add(MonoDense(1))

model.summary()
```

    Model: "sequential"
    _________________________________________________________________
     Layer (type)                Output Shape              Param #   
    =================================================================
     mono_dense (MonoDense)      (None, 128)               512       
                                                                     
     mono_dense_1 (MonoDense)    (None, 128)               16512     
                                                                     
     mono_dense_2 (MonoDense)    (None, 1)                 129       
                                                                     
    =================================================================
    Total params: 17,153
    Trainable params: 17,153
    Non-trainable params: 0
    _________________________________________________________________
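
Note that the parameter counts match ordinary `Dense` layers:
$(3 + 1) \times 128 = 512$, $(128 + 1) \times 128 = 16{,}512$ and
$128 + 1 = 129$, so the monotonicity constraint adds no extra
parameters.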

Now we can train the model as usual using `Model.fit`:

``` python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.optimizers.schedules import ExponentialDecay

# decay the learning rate by a factor of 0.9 once per epoch
# (10,000 training samples / batch size 32 = 313 steps per epoch)
lr_schedule = ExponentialDecay(
    initial_learning_rate=0.01,
    decay_steps=10_000 // 32,
    decay_rate=0.9,
)
optimizer = Adam(learning_rate=lr_schedule)
model.compile(optimizer=optimizer, loss="mse")

model.fit(
    x=x_train, y=y_train, batch_size=32, validation_data=(x_val, y_val), epochs=10
)
```

    Epoch 1/10
    313/313 [==============================] - 3s 5ms/step - loss: 9.4221 - val_loss: 6.1277
    Epoch 2/10
    313/313 [==============================] - 1s 4ms/step - loss: 4.6001 - val_loss: 2.7813
    Epoch 3/10
    313/313 [==============================] - 1s 4ms/step - loss: 1.6221 - val_loss: 2.1111
    Epoch 4/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.9479 - val_loss: 0.2976
    Epoch 5/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.9008 - val_loss: 0.3240
    Epoch 6/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.5027 - val_loss: 0.1455
    Epoch 7/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.4360 - val_loss: 0.1144
    Epoch 8/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.4993 - val_loss: 0.1211
    Epoch 9/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.3162 - val_loss: 1.0021
    Epoch 10/10
    313/313 [==============================] - 1s 4ms/step - loss: 0.2640 - val_loss: 0.2522

    <keras.callbacks.History>
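
Because the constraint is enforced by the construction of the layer
itself, the trained model is guaranteed to be monotonic, but it is easy
to verify numerically. The following sketch (not part of the library)
sweeps $x_1$ while holding the other inputs fixed and checks that the
predictions never decrease:

``` python
import numpy as np

# sweep x1 upward while holding x2 and x3 fixed at zero; the predictions
# must be non-decreasing because the model is increasing in x1
x1 = np.linspace(-2, 2, 100, dtype="float32")
x = np.stack([x1, np.zeros_like(x1), np.zeros_like(x1)], axis=1)

y_pred = model.predict(x).ravel()
assert np.all(np.diff(y_pred) >= 0), "model is not increasing in x1"
```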

## License

<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/"><img alt="Creative Commons Licence" style="border-width:0" src="https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png" /></a><br />This
work is licensed under a
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/4.0/">Creative
Commons Attribution-NonCommercial-ShareAlike 4.0 International
License</a>.

You are free to:

- Share — copy and redistribute the material in any medium or format

- Adapt — remix, transform, and build upon the material

The licensor cannot revoke these freedoms as long as you follow the
license terms.

Under the following terms:

- Attribution — You must give appropriate credit, provide a link to the
  license, and indicate if changes were made. You may do so in any
  reasonable manner, but not in any way that suggests the licensor
  endorses you or your use.

- NonCommercial — You may not use the material for commercial purposes.

- ShareAlike — If you remix, transform, or build upon the material, you
  must distribute your contributions under the same license as the
  original.

- No additional restrictions — You may not apply legal terms or
  technological measures that legally restrict others from doing
  anything the license permits.

            
