dlkoopman

Name: dlkoopman
Version: 1.2.0
Home page: https://github.com/GaloisInc/dlkoopman
Summary: A general-purpose Python package for Koopman theory using deep learning.
Upload time: 2023-10-14 22:35:14
Maintainer: Galois dlkoopman team
Author: Sourya Dey
Requires Python: >=3.9,<4.0
License: MIT
Keywords: koopman theory, koopman operator, deep learning, autoencoder
            <figure>
<img src="https://raw.githubusercontent.com/GaloisInc/dlkoopman/36108ffcfd9608a393985ac9af431d3910fe2fc5/logo.png" height=150/>
</figure>

**DLKoopman: A general-purpose Python package for Koopman theory using deep learning**.

Koopman theory is a technique to encode sampled data (i.e. states) of a nonlinear dynamical system into a linear domain. This is powerful because a linear model can:
- Give insight into the dynamics via eigenvalues and eigenvectors.
- Leverage linear algebra techniques to easily analyze the system and predict its behavior under unknown conditions.


## Why DLKoopman?
*DLKoopman uses deep learning to learn an encoding of a nonlinear dynamical system into a linear domain, while simultaneously learning the dynamics of the linear model*. DLKoopman bridges the gap between:
- Software packages that restrict the learning of a good encoding (e.g. [`pykoopman`](https://github.com/dynamicslab/pykoopman)), and
- Efforts that learn encodings for specific applications instead of being a general-purpose tool (e.g. [`DeepKoopman`](https://github.com/BethanyL/DeepKoopman)).

### Key DLKoopman features
- State prediction (`StatePred`) - Train on individual states of a system, then predict unknown states.
    - E.g: What is the pressure vector on this aircraft for $23.5^{\circ}$ angle of attack?
- Trajectory prediction (`TrajPred`) - Train on generated trajectories of a system, then predict unknown trajectories for new initial states.
    - E.g: What is the behavior of this pendulum if I start from the point $[1,-1]$?
- General-purpose and reusable - supports data from any dynamical system.
- Novel error function Average Normalized Absolute Error (ANAE) for visualizing performance.
- Extensive options and a ready-to-use hyperparameter search module to improve performance.
- Built using [PyTorch](https://pytorch.org/); supports both CPU and GPU platforms.

Read more about DLKoopman in this [blog article](https://galois.com/blog/2023/01/dl-koopman/).


## Installation

### With pip (for regular users)
`pip install dlkoopman`

### From source (for development)
```
git clone https://github.com/GaloisInc/dlkoopman.git
cd dlkoopman
pip install .
```

### Running as a Docker container
DLKoopman can also be run as a Docker container by pulling the image from `galoisinc/dlkoopman:<version>`, e.g. `docker pull galoisinc/dlkoopman:v1.2.0`.


## Tutorials and examples
Available in the [`examples`](https://github.com/GaloisInc/dlkoopman/tree/ed11bef92b90112d9ca90722942a6789e6af7d5a/examples) folder.


## Documentation and API Reference
Available at https://galoisinc.github.io/dlkoopman/.

## Changelog
See [Releases](https://github.com/GaloisInc/dlkoopman/releases) and their notes.


## Description 

### Koopman theory
Assume a dynamical system $x_{i+1} = F(x_i)$, where $x$ is the (generally multi-dimensional) state of the system at index $i$, and $F$ is the (generally nonlinear) evolution rule describing the dynamics of the system. Koopman theory attempts to *encode* $x$ into a different space $y = g(x)$ where the dynamics are linear, i.e. $y_{i+1} = Ky_i$, where $K$ is the Koopman matrix. This is incredibly powerful since the state $y_i$ at any index $i$ can be predicted from the initial state $y_0$ as $y_i = K^iy_0$. This is then *decoded* back into the original space as $x = g^{-1}(y)$.
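As a minimal numerical illustration of this linear-evolution property (independent of DLKoopman itself, using an arbitrarily chosen toy $2 \times 2$ matrix), stepping $y_{i+1} = Ky_i$ forward agrees with the closed form $y_i = K^iy_0$:

```
import numpy as np

# Toy 2x2 Koopman matrix K acting on encoded states y (values chosen arbitrarily).
K = np.array([[0.9, 0.2],
              [0.0, 0.8]])
y0 = np.array([1.0, -1.0])   # initial encoded state

# Step-by-step evolution: y_{i+1} = K y_i
y = y0.copy()
for _ in range(5):
    y = K @ y

# Closed form: y_5 = K^5 y_0
y5 = np.linalg.matrix_power(K, 5) @ y0

assert np.allclose(y, y5)
```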

For a thorough mathematical treatment, see [this technical report](https://arxiv.org/abs/2211.07561).

### dlkoopman training
<figure>
<img src="https://raw.githubusercontent.com/GaloisInc/dlkoopman/ed11bef92b90112d9ca90722942a6789e6af7d5a/training_architecture.png" width=750/>
</figure>

This is a small example with three input states $\left[x_0, x_1, x_2\right]$. These are passed through an encoder neural network to get encoded states $\left[y_0, y_1, y_2\right]$, which are passed through a decoder neural network to get $\left[\hat{x}_0, \hat{x}_1, \hat{x}_2\right]$ and are also used to learn $K$. $K$ is then used to derive predicted encoded states $\left[\mathsf{y}_1, \mathsf{y}_2\right]$, which are passed through the same decoder to get predicted approximations $\left[\hat{\mathsf{x}}_1, \hat{\mathsf{x}}_2\right]$ to the original input states.

Errors minimized during training (see the sketch after this list):
- Train the autoencoder - Reconstruction `recon` between $x$ and $\hat{x}$.
- Train the Koopman matrix - Linearity `lin` between $y$ and $\mathsf{y}$.
- Combine the above - Prediction `pred` between $x$ and $\hat{\mathsf{x}}$.
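As a rough illustration of how these three errors interact, here is a minimal one-step PyTorch sketch with an assumed toy encoder, decoder, and learnable $K$; it is not DLKoopman's actual implementation, which offers many more options:

```
import torch
import torch.nn as nn

# Toy setup: state dimension 4, encoded dimension 2, three input states [x0, x1, x2].
encoder = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 4))
K = nn.Parameter(torch.eye(2))        # learnable Koopman matrix

X = torch.randn(3, 4)                 # rows are x0, x1, x2
params = list(encoder.parameters()) + list(decoder.parameters()) + [K]
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(100):
    Y = encoder(X)                    # encoded states y0, y1, y2
    X_recon = decoder(Y)              # reconstructions of x0, x1, x2
    Y_pred = Y[:-1] @ K.T             # one-step predictions of y1, y2
    X_pred = decoder(Y_pred)          # decoded predictions of x1, x2

    recon = nn.functional.mse_loss(X_recon, X)      # autoencoder reconstruction error
    lin   = nn.functional.mse_loss(Y_pred, Y[1:])   # linearity error in encoded space
    pred  = nn.functional.mse_loss(X_pred, X[1:])   # end-to-end prediction error
    loss = recon + lin + pred         # equal weighting here; weighting is a design choice

    opt.zero_grad()
    loss.backward()
    opt.step()
```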

### dlkoopman prediction
<figure>
<img src="https://raw.githubusercontent.com/GaloisInc/dlkoopman/ed11bef92b90112d9ca90722942a6789e6af7d5a/prediction_architecture.png" width=750/>
</figure>

Prediction happens after training.

(a) State prediction - Compute predicted states for new indices such as $i'$. This uses the eigendecomposition of $K$, so $i'$ can be any real number - positive (forward extrapolation), negative (backward extrapolation), or fractional (interpolation).
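A minimal sketch of the fractional-power idea (not DLKoopman's code), assuming a toy diagonalizable $K$: the eigendecomposition $K = W \, \mathrm{diag}(\lambda) \, W^{-1}$ allows computing $K^{i'} = W \, \mathrm{diag}(\lambda^{i'}) \, W^{-1}$ for any real $i'$.

```
import numpy as np

K = np.array([[0.9, 0.2],
              [0.0, 0.8]])
y0 = np.array([1.0, -1.0])

lam, W = np.linalg.eig(K)             # K = W diag(lam) W^{-1}
W_inv = np.linalg.inv(W)

def predict(i_prime):
    """Predict the encoded state at any real index i' as K^{i'} y0."""
    K_pow = W @ np.diag(lam ** i_prime) @ W_inv
    return (K_pow @ y0).real          # discard negligible imaginary parts

print(predict(2.0))    # forward extrapolation
print(predict(-1.0))   # backward extrapolation
print(predict(0.5))    # interpolation between indices 0 and 1
```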

(b) Trajectory prediction - Generate predicted trajectories $j'$ for new starting states such as $x^{j'}_0$. This uses a linear neural net layer to evolve the initial state.
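A minimal sketch of this rollout idea (again with assumed toy networks, not DLKoopman's code): a bias-free linear layer stands in for the learned linear evolution, and a new initial state is encoded, evolved repeatedly, and decoded at each step.

```
import torch
import torch.nn as nn

# Assumed toy pieces: encoder/decoder as in the training sketch, plus a
# bias-free linear layer playing the role of the learned linear evolution.
encoder = nn.Linear(4, 2)
decoder = nn.Linear(2, 4)
evolve = nn.Linear(2, 2, bias=False)

def predict_trajectory(x0, num_steps):
    """Roll out a predicted trajectory from a new initial state x0."""
    y = encoder(x0)
    traj = [decoder(y)]
    for _ in range(num_steps):
        y = evolve(y)                 # linear evolution in the encoded space
        traj.append(decoder(y))
    return torch.stack(traj)

x0_new = torch.tensor([1.0, -1.0, 0.5, 0.0])
print(predict_trajectory(x0_new, num_steps=3).shape)   # torch.Size([4, 4])
```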


## Known issues
Some common issues and ways to overcome them are described in the issues labeled [known-issue](https://github.com/GaloisInc/dlkoopman/issues?q=is%3Aissue+is%3Aclosed+label%3Aknown-issue).


## How to cite
Please cite the [accompanying paper](https://proceedings.mlr.press/v211/dey23a.html):
```
@inproceedings{Dey2023_L4DC,
    author = {Sourya Dey and Eric William Davis},
    title = {{DLKoopman: A deep learning software package for Koopman theory}},
    booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
    pages = {1467--1479},
    volume = {211},
    series = {Proceedings of Machine Learning Research},
    publisher = {PMLR},
    year = {2023},
    month = {Jun}
}
```


## References
- B. O. Koopman - Hamiltonian systems and transformation in Hilbert space
- J. Nathan Kutz, Steven L. Brunton, Bingni Brunton, Joshua L. Proctor - Dynamic Mode Decomposition
- Bethany Lusch, J. Nathan Kutz & Steven L. Brunton - Deep learning for universal linear embeddings of nonlinear dynamics


## Distribution Statement
This material is based upon work supported by the United States Air Force and DARPA under Contract No. FA8750-20-C-0534. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force and DARPA. Distribution Statement A, "Approved for Public Release, Distribution Unlimited."

            
