nndp


Name: nndp
Version: 0.1.0
Summary: Dynamic Programming using Neural Networks
Author email: Marc de la Barrera <marc.delabarrera@gmail.com>, Tim de Silva <tdesilva@mit.edu>
Requires Python: >=3.9
License: The MIT License (MIT)
Keywords: economics
Upload time: 2024-05-03 14:45:25
# Dynamic Programming with Neural Networks (`nndp`)

By: Marc de la Barrera i Bardalet, Tim de Silva

## Overview

`nndp` provides a framework for solving finite-horizon dynamic programming problems with neural networks, implemented in the [JAX](https://github.com/google/jax) functional programming paradigm using [Haiku](https://github.com/deepmind/dm-haiku). This solution technique, introduced and described in detail by [Duarte, Fonseca, Goodman, and Parker (2021)](https://0f2486b1-f568-477b-8307-dd98a6c77afd.filesusr.com/ugd/f9db9d_972da014adb2453b8a4dab0239909062.pdf), applies to problems of the following form:

$$V(s_0)=\max_{a_t\in\Gamma(s_t)} E_0\left[\sum_{t=0}^T u(s_t,a_t)\right],$$

$$s_{t+1}=m(s_{t},a_{t},\epsilon_t), $$

$$s_0 \sim F(\cdot).$$

The state vector is denoted by $s_t=(k_t, x_t)$, where $k_t$ are exogenous states and $x_t$ are endogenous states. We adopt the convention that the first exogenous state in $k_t$ is $t$. The goal is to find a policy function $\pi(s_t)$ that satisfies:

$$\hat V(s_0,\pi)=E_0\left[\sum_{t=0}^T u(s_t,\pi(s_t))\right],$$

$$s_{t+1}=m(s_{t},\pi(s_{t}),\epsilon_t),$$

$$V(s_0)=\hat V(s_0,\pi)\quad \forall s_0.$$

We parametrize $\pi(s_t)=\tilde\pi(s_t,\theta)$ as a fully connected feedforward neural network and update the network's parameters, $\theta$, using stochastic gradient descent. To use this framework, the user only needs to write the following functions, which are defined by the dynamic programming problem of interest:

1. `u(state, action)`: reward function for $s_t$ = `state` and $a_t$ = `action`.
2. `m(key, state, action)`: state evolution equation giving $s_{t+1}$ when $s_t$ = `state` and $a_t$ = `action`; `key` is a JAX RNG key used to simulate any shocks present in the model.
3. `Gamma(state)`: defines the set of feasible actions $a_t$ at $s_t$ = `state`.
4. `F(key, N)`: samples `N` observations from the distribution of $s_0$; `key` is a JAX RNG key used to simulate any shocks present in the model.
5. `nn_to_action(state, params, nn)`: defines how the output of a Haiku neural network `nn`, with parameters `params`, is mapped into an action at $s_t$ = `state`; a toy sketch of all five functions follows this list.
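
For concreteness, here is a minimal sketch of these five functions for a toy consumption-savings problem with state $s_t = (t, \text{wealth}_t)$ and action $a_t$ = consumption. The dynamics, feasibility bounds, and the `nn.apply` call are illustrative assumptions, not the package's documented contract:

```python
import jax
import jax.numpy as jnp


def u(state, action):
    # Toy reward: log consumption.
    return jnp.log(action[..., 0])


def m(key, state, action):
    # Toy transition: age advances; wealth earns 2% interest net of
    # consumption, plus a lognormal income shock.
    t, wealth = state[..., 0], state[..., 1]
    shock = jax.random.normal(key, t.shape)
    wealth_next = 1.02 * (wealth - action[..., 0]) + jnp.exp(0.1 * shock)
    return jnp.stack([t + 1.0, wealth_next], axis=-1)


def Gamma(state):
    # Feasible actions: consume between 0 and current wealth.
    return jnp.zeros_like(state[..., 1:2]), state[..., 1:2]


def F(key, N):
    # Initial states: t = 0, lognormal initial wealth.
    wealth0 = jnp.exp(jax.random.normal(key, (N,)))
    return jnp.stack([jnp.zeros(N), wealth0], axis=-1)


def nn_to_action(state, params, nn):
    # Squash the raw network output into the feasible set with a sigmoid.
    lower, upper = Gamma(state)
    share = jax.nn.sigmoid(nn.apply(params, None, state))
    return lower + share * (upper - lower)
```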

We provide an example application to the income fluctuations problem in `docs/source/notebooks/income_fluctuations/main.ipynb` to illustrate how this framework can be used.
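
Putting the pieces together, a hedged end-to-end sketch: parametrize the policy with a Haiku MLP, estimate $\hat V(s_0,\pi)$ by Monte Carlo rollout using the toy functions above, and take one stochastic gradient step on its negative. The network width, horizon, batch size, and learning rate are arbitrary illustrative choices, not defaults of `nndp`:

```python
import jax
import jax.numpy as jnp
import haiku as hk

# Policy network: maps a batch of states to a raw action signal.
nn = hk.transform(lambda s: hk.nets.MLP([32, 32, 1])(s))


def V_hat(params, key, T=10, N=1024):
    # Monte Carlo estimate of E_0[sum_t u(s_t, pi(s_t))]: draw initial
    # states from F, roll forward with m, and accumulate rewards u.
    key, subkey = jax.random.split(key)
    states = F(subkey, N)
    total = jnp.zeros(N)
    for _ in range(T + 1):
        actions = nn_to_action(states, params, nn)
        total = total + u(states, actions)
        key, subkey = jax.random.split(key)
        states = m(subkey, states, actions)
    return jnp.mean(total)


key, subkey = jax.random.split(jax.random.PRNGKey(0))
params = nn.init(key, F(subkey, 1))  # initialize on a sampled state

# One SGD step on the negative estimated lifetime reward.
grads = jax.grad(lambda p, k: -V_hat(p, k))(params, jax.random.PRNGKey(1))
params = jax.tree_util.tree_map(lambda p, g: p - 1e-3 * g, params, grads)
```

In practice the update would be `jax.jit`-compiled and iterated to convergence; the income fluctuations notebook referenced above shows the package's actual workflow.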

## Installation

`nndp` requires [JAX](https://github.com/google/jax) and [Haiku](https://github.com/deepmind/dm-haiku) to be installed. To install with `pip`, run `pip install nndp`.

## References
Duarte, Victor, Julia Fonseca, Aaron Goodman, and Jonathan A. Parker (2021), "Simple Allocation Rules and Optimal Portfolio Choice Over the Lifecycle," Working Paper.


            
