# Reinforcement Learning with Model Predictive Control
**M**odel **P**redictive **C**ontrol-based **R**einforcement **L**earning (**mpcrl**,
for short) is a library for training model-based Reinforcement Learning (RL) [[1]](#1)
agents with Model Predictive Control (MPC) [[2]](#2) as function approximation.
> | | |
> |---|---|
> | **Documentation** | <https://mpc-reinforcement-learning.readthedocs.io/en/latest/> |
> | **Download** | <https://pypi.python.org/pypi/mpcrl/> |
> | **Source code** | <https://github.com/FilippoAiraldi/mpc-reinforcement-learning/> |
> | **Report issues** | <https://github.com/FilippoAiraldi/mpc-reinforcement-learning/issues/> |
[![PyPI version](https://badge.fury.io/py/mpcrl.svg)](https://badge.fury.io/py/mpcrl)
[![Source Code License](https://img.shields.io/badge/license-MIT-blueviolet)](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/blob/experimental/LICENSE)
![Python 3.9](https://img.shields.io/badge/python->=3.9-green.svg)
[![Tests](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/actions/workflows/tests.yml/badge.svg)](https://github.com/FilippoAiraldi/mpc-reinforcement-learning/actions/workflows/tests.yml)
[![Docs](https://readthedocs.org/projects/mpc-reinforcement-learning/badge/?version=stable)](https://mpc-reinforcement-learning.readthedocs.io/en/stable/?badge=stable)
[![Downloads](https://static.pepy.tech/badge/mpcrl)](https://www.pepy.tech/projects/mpcrl)
[![Maintainability](https://api.codeclimate.com/v1/badges/9a46f52603d29c684c48/maintainability)](https://codeclimate.com/github/FilippoAiraldi/mpc-reinforcement-learning/maintainability)
[![Test Coverage](https://api.codeclimate.com/v1/badges/9a46f52603d29c684c48/test_coverage)](https://codeclimate.com/github/FilippoAiraldi/mpc-reinforcement-learning/test_coverage)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
---
## Introduction
This framework, also referred to as _RL with/using MPC_, was first proposed in [[3]](#3)
and has since proven effective in various applications, with different learning
algorithms and increasingly sound theory, e.g., [[4](#4), [5](#5), [7](#7), [8](#8)]. It
merges two powerful control techniques into a single data-driven methodology:
- MPC, a well-known control methodology that leverages a prediction model to forecast
  the future behaviour of the environment and compute the optimal action
- and RL, a Machine Learning paradigm that has achieved many successes in recent years
  (e.g., in games such as chess and Go) and is highly adaptable to unknown and
  complex-to-model environments.
The figure below shows the main idea behind this learning-based control approach. The
MPC controller, parametrized in its objective, predictive model and constraints (or a
subset of these), acts both as a policy provider (i.e., providing an action to the
environment, given the current state) and as a function approximator for the state and
action value functions (i.e., predicting the expected return following the current
control policy from the given state or state-action pair). Concurrently, an RL
algorithm is employed to tune this parametrization of the MPC so as to improve the
controller's performance and achieve a (sub)optimal policy. For this purpose,
different algorithms can be employed, two of the most successful being Q-learning
[[4]](#4) and Deterministic Policy Gradient (DPG) [[5]](#5).
<div align="center">
<img src="https://raw.githubusercontent.com/FilippoAiraldi/mpc-reinforcement-learning/experimental/docs/_static/mpcrl.diagram.light.png" alt="mpcrl-diagram" height="300">
</div>
---
## Installation
### Using `pip`
You can use `pip` to install **mpcrl** with the command
```bash
pip install mpcrl
```
**mpcrl** has the following dependencies
- Python 3.9 or higher
- [csnlp](https://casadi-nlp.readthedocs.io/en/stable/)
- [SciPy](https://scipy.org/)
- [Gymnasium](https://gymnasium.farama.org/)
- [Numba](https://numba.pydata.org/)
- [typing_extensions](https://pypi.org/project/typing-extensions/) (only for Python 3.9)
If you'd like to play around with the source code instead, run
```bash
git clone https://github.com/FilippoAiraldi/mpc-reinforcement-learning.git
```
The `main` branch contains the main releases of the package (and the occasional post
release). The `experimental` branch is reserved for the implementation and testing of
new features and hosts the release candidates. You can then install the package in
editable mode with
```bash
pip install -e /path/to/mpc-reinforcement-learning
```
---
## Getting started
Here we provide the skeleton of a simple application of the library. The aim of the code
below is to let an MPC control strategy learn how to optimally control a simple Linear
Time Invariant (LTI) system. The cost (i.e., the opposite of the reward) of controlling
this system in state $s \in \mathbb{R}^{n_s}$ with action
$a \in \mathbb{R}^{n_a}$ is given by
$$
L(s,a) = s^\top Q s + a^\top R a,
$$
where $Q \in \mathbb{R}^{n_s \times n_s}$ and $R \in \mathbb{R}^{n_a \times n_a}$ are
suitable positive definite matrices. This is a very well-known problem in optimal
control theory. However, here, in the context of RL, these matrices are not known, and
we can only observe realizations of the cost for each state-action pair our controller
visits. The underlying system dynamics are described by the usual state-space model
$$
s_{k+1} = A s_k + B a_k,
$$
whose matrices $A \in \mathbb{R}^{n_s \times n_s}$ and
$B \in \mathbb{R}^{n_s \times n_a}$ could, in general, also be unknown. The control
action $a_k$ is assumed to be bounded in the interval $[-1,1]$. In what follows, we will
go through the usual steps in setting up and solving such a task.
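To make the setup concrete before diving into the library, here is a short NumPy sketch of the stage cost and dynamics above. All numerical values (the matrices and the initial state) are illustrative, not taken from the library or its examples.

```python
import numpy as np

# illustrative (hypothetical) matrices for a system with n_s = 2 states, n_a = 1 action
A = np.array([[1.0, 0.25], [0.0, 1.0]])
B = np.array([[0.0], [0.25]])
Q = np.eye(2)            # state-cost matrix (positive definite)
R = np.array([[0.1]])    # action-cost matrix (positive definite)

s = np.array([1.0, 0.0])  # some initial state
a = np.array([-0.5])      # an action bounded in [-1, 1]

cost = s @ Q @ s + a @ R @ a  # stage cost L(s, a) = s'Qs + a'Ra
s_next = A @ s + B @ a        # dynamics s_{k+1} = A s_k + B a_k
```

In the RL setting described above, only `cost` and `s_next` would be observed; `Q`, `R`, `A`, `B` themselves remain unknown to the learner.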
### Environment
The first ingredient to implement is the LTI system in the form of a `gymnasium.Env`
class. Feel free to fill in the missing parts based on your needs. The
`gymnasium.Env.reset` method should initialize the state of the system, while the
`gymnasium.Env.step` method should update the state of the system based on the provided
action and return, among other information, the new state and the incurred cost.
```python
from gymnasium import Env
from gymnasium.spaces import Box
from gymnasium.wrappers import TimeLimit
import numpy as np
class LtiSystem(Env):
ns = ... # number of states (must be continuous)
na = ... # number of actions (must be continuous)
A = ... # state-space matrix A
B = ... # state-space matrix B
Q = ... # state-cost matrix Q
R = ... # action-cost matrix R
action_space = Box(-1.0, 1.0, (na,), np.float64) # action space
def reset(self, *, seed=None, options=None):
super().reset(seed=seed, options=options)
self.s = ... # set initial state
return self.s, {}
def step(self, action):
a = np.reshape(action, self.action_space.shape)
assert self.action_space.contains(a)
c = self.s.T @ self.Q @ self.s + a.T @ self.R @ a
self.s = self.A @ self.s + self.B @ a
return self.s, c, False, False, {}
# lastly, instantiate the environment with a wrapper to ensure the simulation finishes
env = TimeLimit(LtiSystem(), max_episode_steps=5000)
```
### Controller
As mentioned above, we'd like to control this system via an MPC controller, so the next
step is to craft one. To do so, we leverage the `csnlp` package, in particular its
`csnlp.wrappers.Mpc` class (under the hood, we also exploit this package to compute the
sensitivities of the MPC controller w.r.t. its parametrization, which are crucial in
calculating the RL updates). In mathematical terms, the MPC looks like this:
$$
\begin{aligned}
\min_{x_{0:N}, u_{0:N-1}} \quad & \sum_{i=0}^{N-1}{ x_i^\top \tilde{Q} x_i + u_i^\top \tilde{R} u_i } & \\
\textrm{s.t.} \quad & x_0 = s_k \\
& x_{i+1} = \tilde{A} x_i + \tilde{B} u_i, \quad & i=0,\dots,N-1 \\
  & -1 \le u_i \le 1, \quad & i=0,\dots,N-1
\end{aligned}
$$
where $\tilde{Q}, \tilde{R}, \tilde{A}, \tilde{B}$ do not necessarily have to match the
environment's $Q, R, A, B$, as they represent possibly approximate a priori knowledge of
the system. In code, we can implement this as follows.
```python
import casadi as cs
from csnlp import Nlp
from csnlp.wrappers import Mpc
N = ... # prediction horizon
mpc = Mpc[cs.SX](Nlp(), N)
# create the parametrization of the controller
nx, nu = LtiSystem.ns, LtiSystem.na
Atilde = mpc.parameter("Atilde", (nx, nx))
Btilde = mpc.parameter("Btilde", (nx, nu))
Qtilde = mpc.parameter("Qtilde", (nx, nx))
Rtilde = mpc.parameter("Rtilde", (nu, nu))
# create the variables of the controller
x, _ = mpc.state("x", nx)
u, _ = mpc.action("u", nu, lb=-1.0, ub=1.0)
# set the dynamics
mpc.set_linear_dynamics(Atilde, Btilde)
# set the objective
mpc.minimize(
sum(cs.bilin(Qtilde, x[:, i]) + cs.bilin(Rtilde, u[:, i]) for i in range(N))
)
# initialize the solver with some options
opts = {
"print_time": False,
"bound_consistency": True,
"calc_lam_x": True,
"calc_lam_p": False,
"ipopt": {"max_iter": 500, "sb": "yes", "print_level": 0},
}
mpc.init_solver(opts, solver="ipopt")
```
### Learning
The last step is to train the controller using an RL algorithm. For instance, here we
use Q-Learning. The idea is to let the controller interact with the environment, observe
the cost, and update the MPC parameters accordingly. This can be achieved by computing
the temporal difference error
$$
\delta_k = L(s_k, a_k) + \gamma V_\theta(s_{k+1}) - Q_\theta(s_k, a_k),
$$
where $\gamma$ is the discount factor, and $V_\theta$ and $Q_\theta$ are the state and
state-action value functions, both provided by the parametrized MPC controller with
$\theta = \{\tilde{A}, \tilde{B}, \tilde{Q}, \tilde{R}\}$. The update rule for the
parameters is then given by
$$
\theta \gets \theta + \alpha \delta_k \nabla_\theta Q_\theta(s_k, a_k),
$$
where $\alpha$ is the learning rate, and $\nabla_\theta Q_\theta(s_k, a_k)$ is the
sensitivity of the state-action value function w.r.t. the parameters. All of this can be
implemented as follows.
```python
from mpcrl import LearnableParameter, LearnableParametersDict, LstdQLearningAgent
from mpcrl.optim import GradientDescent
# give some initial values to the learnable parameters (shapes must match!)
learnable_pars_init = {"Atilde": ..., "Btilde": ..., "Qtilde": ..., "Rtilde": ...}
# create the set of parameters that should be learnt
learnable_pars = LearnableParametersDict[cs.SX](
(
LearnableParameter(name, val.shape, val, sym=mpc.parameters[name])
for name, val in learnable_pars_init.items()
)
)
# instantiate the learning agent
agent = LstdQLearningAgent(
mpc=mpc,
learnable_parameters=learnable_pars,
discount_factor=..., # a number in (0,1], e.g., 1.0
update_strategy=..., # an integer, e.g., 1
optimizer=GradientDescent(learning_rate=...),
record_td_errors=True,
)
# finally, launch the training for 5000 timesteps. The method will return an array of
# (hopefully) decreasing costs
costs = agent.train(env=env, episodes=1, seed=69)
```
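For intuition, the TD error and update rule above can also be illustrated numerically with a tiny, self-contained toy example, using a hypothetical scalar parametrization of the value function instead of mpcrl's MPC-based one:

```python
# toy approximator (hypothetical, for illustration only): Q_theta(s, a) = theta * (s^2 + a^2)
theta = 1.0          # single learnable parameter
gamma, alpha = 0.9, 0.1  # discount factor and learning rate


def Q(theta, s, a):
    return theta * (s**2 + a**2)


def V(theta, s):
    # greedy value: for this toy Q and theta > 0, the minimizing action is a = 0
    return Q(theta, s, 0.0)


s, a, s_next = 2.0, 0.5, 1.5
stage_cost = s**2 + a**2                                   # observed cost L(s, a)
delta = stage_cost + gamma * V(theta, s_next) - Q(theta, s, a)  # TD error
grad = s**2 + a**2                                         # dQ/dtheta
theta += alpha * delta * grad                              # parameter update
```

The same mechanics apply in mpcrl, except that $Q_\theta$ and $V_\theta$ come from solving the parametrized MPC, and the gradient is obtained from its sensitivities.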
---
## Examples
Our
[examples](https://mpc-reinforcement-learning.readthedocs.io/en/latest/auto_examples/index.html)
subdirectory contains examples of how to use the library on an academic, small-scale
application (a small linear time-invariant (LTI) system), tackled with
[on-policy Q-learning](https://mpc-reinforcement-learning.readthedocs.io/en/latest/auto_examples/gradient-based-onpolicy/q_learning.html#sphx-glr-auto-examples-gradient-based-onpolicy-q-learning-py),
[off-policy Q-learning](https://mpc-reinforcement-learning.readthedocs.io/en/latest/auto_examples/gradient-based-offpolicy/q_learning_offpolicy.html#sphx-glr-auto-examples-gradient-based-offpolicy-q-learning-offpolicy-py)
and
[DPG](https://mpc-reinforcement-learning.readthedocs.io/en/latest/auto_examples/gradient-based-onpolicy/dpg.html#sphx-glr-auto-examples-gradient-based-onpolicy-dpg-py).
While the aforementioned algorithms are all gradient-based, we also provide an
[example on how to use Bayesian Optimization (BO)](https://mpc-reinforcement-learning.readthedocs.io/en/latest/auto_examples/gradient-free/bayesopt.html#sphx-glr-auto-examples-gradient-free-bayesopt-py)
[[6]](#6) to tune the MPC parameters in a gradient-free way.
---
## License
The repository is provided under the MIT License. See the LICENSE file included with
this repository.
---
## Author
[Filippo Airaldi](https://www.tudelft.nl/staff/f.airaldi/), PhD Candidate
[f.airaldi@tudelft.nl | filippoairaldi@gmail.com]
> [Delft Center for Systems and Control](https://www.tudelft.nl/en/me/about/departments/delft-center-for-systems-and-control/)
in [Delft University of Technology](https://www.tudelft.nl/en/)
Copyright (c) 2024 Filippo Airaldi.
Copyright notice: Technische Universiteit Delft hereby disclaims all copyright interest
in the program “mpcrl” (Reinforcement Learning with Model Predictive Control) written by
the Author(s). Prof. Dr. Ir. Fred van Keulen, Dean of ME.
---
## References
<a id="1">[1]</a>
Sutton, R.S. and Barto, A.G. (2018).
[Reinforcement learning: An introduction](https://mitpress-mit-edu.tudelft.idm.oclc.org/9780262039246/reinforcement-learning/).
Cambridge, MIT press.
<a id="2">[2]</a>
Rawlings, J.B., Mayne, D.Q. and Diehl, M. (2017).
[Model Predictive Control: theory, computation, and design (Vol. 2)](https://sites.engineering.ucsb.edu/~jbraw/mpc/).
Madison, WI: Nob Hill Publishing.
<a id="3">[3]</a>
Gros, S. and Zanon, M. (2020).
[Data-Driven Economic NMPC Using Reinforcement Learning](https://ieeexplore-ieee-org.tudelft.idm.oclc.org/document/8701462).
IEEE Transactions on Automatic Control, 65(2), 636-648.
<a id="4">[4]</a>
Esfahani, H. N. and Kordabad, A. B. and Gros, S. (2021).
[Approximate Robust NMPC using Reinforcement Learning](https://ieeexplore-ieee-org.tudelft.idm.oclc.org/document/9655129).
European Control Conference (ECC), 132-137.
<a id="5">[5]</a>
Cai, W. and Kordabad, A. B. and Esfahani, H. N. and Lekkas, A. M. and Gros, S. (2021).
[MPC-based Reinforcement Learning for a Simplified Freight Mission of Autonomous Surface Vehicles](https://ieeexplore-ieee-org.tudelft.idm.oclc.org/document/9683750).
60th IEEE Conference on Decision and Control (CDC), 2990-2995.
<a id="6">[6]</a>
Garnett, R. (2023).
[Bayesian Optimization](https://bayesoptbook.com/).
Cambridge University Press.
<a id="7">[7]</a>
Gros, S. and Zanon, M. (2022).
[Learning for MPC with stability & safety guarantees](https://www.sciencedirect.com/science/article/pii/S0005109822004605).
Automatica, 164, 110598.
<a id="8">[8]</a>
Zanon, M. and Gros, S. (2021).
[Safe Reinforcement Learning Using Robust MPC](https://ieeexplore.ieee.org/abstract/document/9198135/).
IEEE Transactions on Automatic Control, 66(8), 3638-3652.