RL-OptS
================
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
<p align="center">
<img width="350" src="https://github.com/gorkamunoz/rl_opts/blob/master/nbs/figs/index_fig.png">
</p>
<h4 align="center">
Reinforcement Learning of Optimal Search strategies
</h4>
<p align="center">
<a href="https://zenodo.org/badge/latestdoi/424986383"><img src="https://zenodo.org/badge/424986383.svg" alt="DOI"></a>
<a href="https://badge.fury.io/py/a"><img src="https://badge.fury.io/py/xxx.svg" alt="PyPI version"></a>
<a href="https://badge.fury.io/py/b"><img src="https://img.shields.io/badge/python-3.9-red" alt="Python version"></a>
</p>
This library provides the tools needed to study, replicate and extend
the results of the paper [“Optimal foraging strategies can be
learned and outperform Lévy walks”](https://arxiv.org/abs/2303.06050) by
*G. Muñoz-Gil, A. López-Incera, L. J. Fiderer* and *H. J. Briegel*.
### Installation
You can install all of these tools via the Python package `rl_opts`,
available on PyPI:
``` bash
pip install rl-opts
```
Alternatively, you can clone the [source
repository](https://github.com/gorkamunoz/rl_opts) and run the
following from the parent folder of the cloned repo:
``` bash
pip install -e rl_opts
```
This will install both the library and its dependencies.
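Either way, a quick import serves as a sanity check (note that the PyPI
name uses a dash, `rl-opts`, while the importable module uses an
underscore, `rl_opts`):
``` python
import rl_opts  # should import without errors after either install route
```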
### Tutorials
We have prepared a series of tutorials to guide you through the most
important functionalities of the package. You can find them in the
[Tutorials
folder](https://github.com/gorkamunoz/rl_opts/tree/master/nbs/tutorials)
of the GitHub repository or in the Tutorials tab of our
[webpage](https://gorkamunoz.github.io/rl_opts/), with notebooks that
will help you navigate the package as well as reproduce the results of
our paper via minimal examples. In particular, we have three tutorials:
- <a href="tutorials/tutorial_learning.ipynb" style="text-decoration:none">Reinforcement
learning </a> : shows how to train a RL agent based on Projective
Simulation agents to search targets in randomly distributed
environments as the ones considered in our paper.
- <a href="tutorials/tutorial_imitation.ipynb" style="text-decoration:none">Imitation
learning </a> : shows how to train a RL agent to imitate the policy of
an expert equipped with a pre-trained policy. The latter is based on
the benchmark strategies common in the literature.
- <a href="tutorials/tutorial_benchmarks.ipynb" style="text-decoration:none">Benchmarks
</a> : shows how launch various benchmark strategies with which to
compare the trained RL agents.
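For orientation, here is a minimal, self-contained sketch of tabular
Projective Simulation (PS), the learning model the paper builds on.
Everything in it (the toy counter environment, the hyperparameters, the
variable names) is an illustrative assumption and is *not* the
`rl_opts` API; see the tutorials above for the actual interface.
``` python
# Minimal sketch of tabular Projective Simulation, NOT the rl_opts API.
import numpy as np

rng = np.random.default_rng(0)

# Percepts: steps taken since the last turn; actions: 0 = continue, 1 = turn.
n_states, n_actions = 10, 2
h = np.ones((n_states, n_actions))   # h-values of the PS episodic memory
g = np.zeros_like(h)                 # glow matrix (eligibility traces)
gamma, eta = 1e-4, 0.1               # forgetting rate, glow damping

for episode in range(100):
    state = 0
    for t in range(50):
        p = h[state] / h[state].sum()         # PS policy: normalised h-values
        action = rng.choice(n_actions, p=p)
        g *= 1.0 - eta                        # damp all glow values
        g[state, action] += 1.0               # tag the percept-action pair used
        # Toy stand-in for the foraging environment: reward long walks.
        reward = 1.0 if (action == 0 and state == n_states - 1) else 0.0
        state = min(state + 1, n_states - 1) if action == 0 else 0
        h += -gamma * (h - 1.0) + g * reward  # standard PS update rule
```
The update rule h ← h − γ(h − 1) + gλ and the glow damping follow the
standard PS literature; the actual agents and foraging environments are
provided by `rl_opts.rl_framework`, described below.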
### Package structure
The package contains a set of modules for:
- <a href="lib_nbs/01_rl_framework.ipynb" style="text-decoration:none">Reinforcement
learning framework (`rl_opts.rl_framework`)</a> : building foraging
environments as well as the RL agents moving on them.
- <a href="lib_nbs/02_learning_and_benchmark.ipynb" style="text-decoration:none">Learning
and benchmarking (`rl_opts.learn_and_bench`)</a> : training RL agents
as well as benchmarking them w.r.t. to known foraging strategies.
- <a href="lib_nbs/04_imitation_learning.ipynb" style="text-decoration:none">Imitation
learning (`rl_opts.imitation`)</a>: training RL agents in imitation
schemes via foraging experts.
- <a href="lib_nbs/03_analytics.ipynb" style="text-decoration:none">Analytical
functions (`rl_opts.analytics)`</a>: builiding analytical functions
for step length distributions as well as tranforming these to foraging
policies.
- <a href="lib_nbs/00_utils.ipynb" style="text-decoration:none">Utils
(`rl_opts.utils)`</a>: helpers used throughout the package.
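As an illustration of the kind of transformation `rl_opts.analytics`
deals with, the sketch below converts a truncated Lévy step-length
distribution into a continue/turn policy. The discretisation, the
counter convention and the truncation are assumptions made for this
example, not the library’s exact definitions.
``` python
import numpy as np

# Truncated Levy step-length pmf p(L) ~ L^(-1 - alpha), L = 1..L_max.
# alpha and L_max are illustrative choices, not rl_opts defaults.
alpha, L_max = 1.0, 1000
L = np.arange(1, L_max + 1)
pmf = L ** (-1.0 - alpha)
pmf /= pmf.sum()

# ccdf[n] = P(L > n). The probability of continuing after n steps
# without turning is P(L > n) / P(L >= n) = ccdf[n] / ccdf[n - 1].
ccdf = np.concatenate(([1.0], 1.0 - np.cumsum(pmf)))
pi_continue = ccdf[1:] / np.maximum(ccdf[:-1], 1e-300)

print(pi_continue[:5])  # continue-probabilities for counters n = 1..5
```
For a heavy-tailed (Lévy) distribution this continue-probability grows
towards 1 with the counter, whereas for a memoryless exponential-like
distribution it would stay constant.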
### Cite
We kindly ask you to cite our paper if any of this material was
useful for your work. Here is the BibTeX entry:
``` latex
@article{munoz2023optimal,
doi = {10.48550/ARXIV.2303.06050},
url = {https://arxiv.org/abs/2303.06050},
author = {Muñoz-Gil, Gorka and López-Incera, Andrea and Fiderer, Lukas J. and Briegel, Hans J.},
title = {Optimal foraging strategies can be learned and outperform Lévy walks},
publisher = {arXiv},
archivePrefix = {arXiv},
eprint = {2303.06050},
primaryClass = {cond-mat.stat-mech},
year = {2023},
}
```