# Reinforcement Learning with Model Predictive Control

**mpcrl** is a library for training model-based Reinforcement Learning (RL) agents that use Model Predictive Control (MPC) as the function approximator. This framework, also known as MPC-based RL, was first proposed in [[1]](#1) and has since been shown to be effective in various applications and with different learning algorithms, e.g., [[2](#2),[3](#3)].

[![PyPI version](https://badge.fury.io/py/mpcrl.svg)](https://badge.fury.io/py/mpcrl)
[![Source Code License](https://img.shields.io/badge/license-MIT-blueviolet)](https://github.com/FilippoAiraldi/casadi-nlp/blob/release/LICENSE)
![Python 3.9](https://img.shields.io/badge/python->=3.9-green.svg)

[![Tests](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/actions/workflows/test-main.yml/badge.svg)](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/actions/workflows/test-main.yml)
[![Downloads](https://static.pepy.tech/badge/mpcrl)](https://www.pepy.tech/projects/mpcrl)
[![Maintainability](https://api.codeclimate.com/v1/badges/9a46f52603d29c684c48/maintainability)](https://codeclimate.com/github/FilippoAiraldi/mpc-reinforcement-learning/maintainability)
[![Test Coverage](https://api.codeclimate.com/v1/badges/9a46f52603d29c684c48/test_coverage)](https://codeclimate.com/github/FilippoAiraldi/mpc-reinforcement-learning/test_coverage)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

---

## Introduction

This framework merges two powerful control techniques into a single data-driven one:

- MPC, a well-known control methodology that exploits a model of the environment to predict its future behaviour and compute the optimal action

- and RL, a Machine Learning paradigm that has achieved many successes in recent years (e.g., in games such as chess and Go) and is highly adaptable to unknown and complex-to-model environments.

<div align="center">
  <img src="https://raw.githubusercontent.com/FilippoAiraldi/mpc-reinforcement-learning/main/resources/mpcrl-diagram.png" alt="mpcrl-diagram" height="300">
</div>

The figure shows the main idea behind this learning-based control approach. The MPC controller, parametrized in $\vartheta$, acts both as a policy provider (computing an action for the environment, given its current state) and as a function approximator for the state and action value functions. Concurrently, an RL agent is employed to tune the parameters of the MPC so as to improve the controller's performance and achieve an optimal (or suboptimal) policy.
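
To make this interplay concrete, here is a minimal, self-contained sketch in plain NumPy. Note that it does **not** use the **mpcrl** API: the MPC is replaced by a toy cost model that is linear in the learnable parameters $\vartheta$, and the dynamics, stage cost, and hyperparameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def q_value(s, a, theta):
    # Stand-in for the MPC optimal value: a model linear in the learnable
    # parameters theta, so its gradient w.r.t. theta is the feature vector.
    feats = np.array([s * s, s * a, a * a, s, a, 1.0])
    return feats @ theta, feats

def policy(s, theta, candidates):
    # The "MPC" as policy provider: return the candidate action with the
    # lowest predicted cost-to-go (MPC minimizes cost).
    values = [q_value(s, a, theta)[0] for a in candidates]
    return candidates[int(np.argmin(values))]

theta = 0.1 * rng.standard_normal(6)     # learnable parameters (vartheta)
candidates = np.linspace(-1.0, 1.0, 21)  # coarse action grid
s, lr, gamma = 0.5, 1e-2, 0.9

for _ in range(1000):
    a = policy(s, theta, candidates)
    # True dynamics and stage cost are unknown to the controller and are
    # only observed through interaction with the environment.
    s_next = 0.9 * s + 0.5 * a + 0.01 * rng.standard_normal()
    cost = s_next**2 + 0.1 * a**2
    # Q-learning (semi-gradient) update of the learnable parameters.
    q, grad = q_value(s, a, theta)
    v_next = min(q_value(s_next, a2, theta)[0] for a2 in candidates)
    theta += lr * (cost + gamma * v_next - q) * grad
    s = s_next
```

In **mpcrl**, the toy quadratic model above is instead an actual parametric MPC optimization problem, and the parameter update is carried out by the learning agent.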

---

## Installation

To install the package, run:

```bash
pip install mpcrl
```
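
To quickly verify the installation, you can query the installed version via the Python standard library:

```python
from importlib.metadata import version

print(version("mpcrl"))  # e.g., "1.2.0.post1"
```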

**mpcrl** has the following dependencies:

- [csnlp](https://pypi.org/project/csnlp/)
- [SciPy](https://scipy.org/)
- [Gymnasium](https://gymnasium.farama.org/)
- [Numba](https://numba.pydata.org/)
- [typing_extensions](https://pypi.org/project/typing-extensions/)

To play around with the source code instead, clone the repository:

```bash
git clone https://github.com/FilippoAiraldi/mpc-reinforcement-learning.git
```

---

## Examples

Our [examples](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/tree/main/examples) subdirectory contains an example application on a small linear time-invariant (LTI) system, tackled with both Q-learning and Deterministic Policy Gradient (DPG).
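
As a rough, hypothetical illustration of the environment side of such an example, a small LTI system can be wrapped in the Gymnasium API as follows (the class name, matrices, and cost are illustrative and not taken from the actual example):

```python
import gymnasium as gym
import numpy as np

class LtiSystem(gym.Env):
    """Hypothetical LTI environment: x+ = A x + B u, with quadratic cost."""

    def __init__(self):
        self.A = np.array([[1.0, 0.25], [0.0, 1.0]])  # illustrative dynamics
        self.B = np.array([[0.03], [0.25]])
        self.observation_space = gym.spaces.Box(
            -np.inf, np.inf, (2,), dtype=np.float64
        )
        self.action_space = gym.spaces.Box(-1.0, 1.0, (1,), dtype=np.float64)
        self.x = np.zeros(2)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)  # seeds self.np_random
        self.x = self.np_random.uniform(-1.0, 1.0, size=2)
        return self.x.copy(), {}

    def step(self, action):
        u = np.asarray(action).reshape(1)
        self.x = self.A @ self.x + self.B @ u
        cost = float(self.x @ self.x + 0.5 * u @ u)
        # Gymnasium rewards are maximized, so the cost enters negated.
        return self.x.copy(), -cost, False, False, {}
```

A learning agent then interacts with such an environment through the usual `reset`/`step` loop, with the parametric MPC supplying the actions.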

---

## License

The repository is provided under the MIT License. See the LICENSE file included with this repository.

---

## Author

[Filippo Airaldi](https://www.tudelft.nl/staff/f.airaldi/), PhD Candidate [f.airaldi@tudelft.nl | filippoairaldi@gmail.com]

> [Delft Center for Systems and Control](https://www.tudelft.nl/en/3me/about/departments/delft-center-for-systems-and-control/) at [Delft University of Technology](https://www.tudelft.nl/en/)

Copyright (c) 2023 Filippo Airaldi.

Copyright notice: Technische Universiteit Delft hereby disclaims all copyright interest in the program “mpcrl” (Reinforcement Learning with Model Predictive Control) written by the Author(s). Prof. Dr. Ir. Fred van Keulen, Dean of 3mE.

---

## References

<a id="1">[1]</a>
S. Gros and M. Zanon, "Data-Driven Economic NMPC Using Reinforcement Learning," in _IEEE Transactions on Automatic Control_, vol. 65, no. 2, pp. 636-648, Feb. 2020, doi: 10.1109/TAC.2019.2913768.

<a id="2">[2]</a>
H. N. Esfahani, A. B. Kordabad and S. Gros, "Approximate Robust NMPC using Reinforcement Learning," _2021 European Control Conference (ECC)_, 2021, pp. 132-137, doi: 10.23919/ECC54610.2021.9655129.

<a id="3">[3]</a>
W. Cai, A. B. Kordabad, H. N. Esfahani, A. M. Lekkas and S. Gros, "MPC-based Reinforcement Learning for a Simplified Freight Mission of Autonomous Surface Vehicles," _2021 60th IEEE Conference on Decision and Control (CDC)_, 2021, pp. 2990-2995, doi: 10.1109/CDC45484.2021.9683750.

            
