| Field | Value |
| --- | --- |
| Name | invopt |
| Version | 0.0.7 |
| Home page | |
| Summary | Inverse Optimization with Python |
| Upload time | 2024-02-14 14:25:13 |
| Maintainer | |
| Docs URL | None |
| Author | |
| Requires Python | >=3.7 |
| License | |
| Keywords | inverse-optimization |
| VCS | |
| Bugtrack URL | |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# InvOpt: Inverse Optimization with Python
InvOpt is an open-source Python package for solving Inverse Optimization (IO) problems. In IO problems, the goal is to model the behavior of an expert agent which, given an exogenous signal, returns a response action. The underlying assumption of IO is that, to compute its response, the expert agent solves an optimization problem parametric in the exogenous signal. We assume the constraints imposed on the expert are known, but not its cost function. Therefore, using examples of exogenous signals and the corresponding expert response actions, our goal is to model the cost function being optimized by the expert. More concretely, given a dataset $\mathcal{D} = \{(\hat{s}_i, \hat{x}_i)\}_{i=1}^N$ of exogenous signals $\hat{s}_i$ and respective expert responses $\hat{x}_i$, and a feature mapping $\phi$, our goal is to find a cost vector $\theta \in \mathbb{R}^p$ such that a minimizer $x_i$ of the **Forward Optimization Problem (FOP)**
$$
x_i \in \arg\min_{x \in \mathbb{X}(\hat{s}_i)} \ \langle \theta, \phi(\hat{s}_i, x) \rangle
$$
reproduces (or in some sense approximates) the expert's action $\hat{x}_i$. For a more detailed description of IO problems and their modeling, please refer to [Zattoni Scroccaro et al. (2023)](https://arxiv.org/abs/2305.07730) and the references therein.
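To make the setup concrete, here is a minimal sketch of the FOP, assuming a binary decision space $\mathbb{X}(\hat{s}) = \{0,1\}^n$ and the feature map $\phi(\hat{s}, x) = \hat{s} \odot x$ (both illustrative choices, not part of InvOpt): the expert's response is the enumerated minimizer of $\langle \theta, \phi(\hat{s}, x) \rangle$.

```python
# Toy illustration of the FOP above (illustrative assumptions, not the invopt API):
# binary decision space X(s) = {0,1}^n and feature map phi(s, x) = s * x.
import itertools
import numpy as np

n = 3
theta = np.array([1.0, -2.0, 0.5])          # unknown in practice; fixed here for illustration
s_hat = np.array([0.8, 1.2, 1.0])           # one exogenous signal

decision_space = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
costs = decision_space @ (theta * s_hat)    # <theta, phi(s_hat, x)> for every x in X(s_hat)
x_hat = decision_space[np.argmin(costs)]    # the expert's (optimal) response

print("expert response:", x_hat)            # components where theta_j * s_hat_j < 0 are set to 1
```

In IO we observe many such pairs $(\hat{s}_i, \hat{x}_i)$ and try to recover a $\theta$ that explains them.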
## Installation
```bash
pip install invopt
```
InvOpt depends on `numpy`. Moreover, some of its functions also depend on `gurobipy` or `cvxpy`. You can get a free academic license for Gurobi [here](https://www.gurobi.com/academia/academic-program-and-licenses/).
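A quick way to verify the installation and see which optional solver backends are available (a minimal sketch; only the package names below are assumed):

```python
# Sanity check after installation. Only numpy is strictly required;
# gurobipy and cvxpy are optional and only needed by some functions.
import importlib.util

for pkg in ("invopt", "numpy", "gurobipy", "cvxpy"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'available' if found else 'not installed'}")
```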
## Usage and examples
The following functions are available in the InvOpt package:
- [`discrete_consistent`](https://github.com/pedroszattoni/invopt/tree/main/examples/discrete_consistent): for FOPs with discrete decision spaces (e.g., binary), and when the dataset is consistent with some cost vector. Can be used to check if the data is consistent.
- [`discrete`](https://github.com/pedroszattoni/invopt/tree/main/examples/discrete): for FOPs with discrete decision spaces (e.g., binary).
- [`continuous_linear`](https://github.com/pedroszattoni/invopt/tree/main/examples/continuous_linear): for continuous, linear FOPs.
- [`continuous_quadratic`](https://github.com/pedroszattoni/invopt/tree/main/examples/continuous_quadratic): for continuous, quadratic FOPs.
- [`mixed_integer_linear`](https://github.com/pedroszattoni/invopt/tree/main/examples/mixed_integer_linear): for FOPs with mixed-integer decision spaces and cost functions linear w.r.t. the continuous part of the decision variable.
- [`mixed_integer_quadratic`](https://github.com/pedroszattoni/invopt/tree/main/examples/mixed_integer_quadratic): for FOPs with mixed-integer decision spaces and cost functions quadratic w.r.t. the continuous part of the decision variable.
- [`FOM`](https://github.com/pedroszattoni/invopt/tree/main/examples/FOM): for general FOPs. Solves the IO problem approximately using first-order methods; see the sketch after this list.
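For intuition, the following is a minimal, self-contained numpy sketch of the first-order idea behind `FOM`: a projected subgradient method on an Augmented Suboptimality Loss, in the spirit of the paper above. It does not call the InvOpt API; the feature map, distance term, and toy data are illustrative assumptions.

```python
# Illustrative first-order sketch (NOT the invopt API): recover a cost vector
# from expert demonstrations over a binary decision space via projected
# subgradient descent on an augmented suboptimality loss.
import itertools
import numpy as np

rng = np.random.default_rng(0)

def phi(s, x):
    """Illustrative feature mapping: the signal-weighted decision."""
    return s * x

n = 4
decision_space = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

# Toy dataset: the "expert" minimizes <theta_true, phi(s, x)> by enumeration.
theta_true = rng.normal(size=n)
signals = rng.uniform(0.5, 1.5, size=(20, n))
dataset = []
for s in signals:
    costs = decision_space @ (theta_true * s)
    dataset.append((s, decision_space[np.argmin(costs)]))

# Projected subgradient method on the augmented suboptimality loss
#   loss_i(theta) = <theta, phi(s_i, x_hat_i)> - min_x [ <theta, phi(s_i, x)> - d(x_hat_i, x) ],
# with d the l1 distance and theta kept inside the unit ball.
theta = np.zeros(n)
for t in range(1, 201):
    grad = np.zeros(n)
    for s, x_hat in dataset:
        aug_costs = decision_space @ (theta * s) - np.abs(decision_space - x_hat).sum(axis=1)
        x_star = decision_space[np.argmin(aug_costs)]
        grad += phi(s, x_hat) - phi(s, x_star)      # subgradient of loss_i at theta
    theta -= (1.0 / np.sqrt(t)) * grad / len(dataset)
    theta /= max(1.0, np.linalg.norm(theta))        # projection onto the unit ball

cos = theta @ theta_true / (np.linalg.norm(theta) * np.linalg.norm(theta_true) + 1e-12)
print("cosine similarity between learned and true cost vector:", round(float(cos), 3))
```

The $\ell_1$ margin term keeps the trivial solution $\theta = 0$ from minimizing the loss, which is the role the augmentation plays in the suboptimality loss; the InvOpt functions listed above handle this, and the various FOP classes, for you.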
## Contributing
Contributions, pull requests, and suggestions are very much welcome. The [TODO](https://github.com/pedroszattoni/invopt/blob/main/TODO.txt) file lists some ideas for possible improvements to the InvOpt package.
## Citing
If you use InvOpt for research, please cite our accompanying paper:
```bibtex
@article{zattoniscroccaro2023learning,
title={Learning in Inverse Optimization: Incenter Cost, Augmented Suboptimality Loss, and Algorithms},
author={Zattoni Scroccaro, Pedro and Atasoy, Bilge and Mohajerin Esfahani, Peyman},
journal={https://arxiv.org/abs/2305.07730},
year={2023}
}
```