commonroad-rl

Name: commonroad-rl
Version: 2023.1.3
Home page: https://commonroad.in.tum.de/
Summary: Tools for applying reinforcement learning on CommonRoad scenarios.
Upload time: 2023-01-03 09:07:00
Keywords: autonomous, automated, vehicles, driving, motion planning
# CommonRoad-RL

This repository contains a software package for solving motion planning problems on [CommonRoad](https://commonroad.in.tum.de) using reinforcement learning algorithms. We currently use the RL algorithm implementations from [OpenAI Stable Baselines](https://stable-baselines.readthedocs.io/en/master/), but the package can be used with any standard (OpenAI Gym compatible) RL implementation.

## CommonRoad-RL in a nutshell
```python
import gym
import commonroad_rl.gym_commonroad

# kwargs overwrite the configs defined in commonroad_rl/gym_commonroad/configs.yaml
env = gym.make("commonroad-v1",
               action_configs={"action_type": "continuous"},
               goal_configs={"observe_distance_goal_long": True, "observe_distance_goal_lat": True},
               surrounding_configs={"observe_lane_circ_surrounding": True, "lane_circ_sensor_range_radius": 100.},
               reward_type="sparse_reward",
               reward_configs_sparse={"reward_goal_reached": 50., "reward_collision": -100})

observation = env.reset()
for _ in range(500):
    # env.render() # rendered images will be saved under ./img
    action = env.action_space.sample() # your agent here (this takes random actions)
    observation, reward, done, info = env.step(action)

    if done:
        observation = env.reset()
env.close()
```
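The keyword arguments passed to `gym.make` override the defaults from `configs.yaml` section by section. The sketch below illustrates this kind of per-section override with a plain dictionary merge; the `defaults` values and the `merge_configs` helper are purely illustrative (not the shipped defaults or the package's actual merge code), and whether the real environment merges nested sections key by key should be checked in `commonroad_env.py`:

```python
# Illustrative sketch of per-section config overriding.
# The default values below are NOT the shipped configs.yaml defaults.
defaults = {
    "action_configs": {"action_type": "discrete"},
    "reward_type": "dense_reward",
    "reward_configs_sparse": {"reward_goal_reached": 100., "reward_collision": -50.},
}

def merge_configs(defaults, overrides):
    """Hypothetical helper: overrides replace defaults key by key;
    nested dict sections are updated, not replaced wholesale."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = {**merged[key], **value}
        else:
            merged[key] = value
    return merged

configs = merge_configs(defaults, {
    "reward_type": "sparse_reward",
    "reward_configs_sparse": {"reward_goal_reached": 50.},
})
print(configs["reward_type"])                                   # sparse_reward
print(configs["reward_configs_sparse"]["reward_goal_reached"])  # 50.0
print(configs["reward_configs_sparse"]["reward_collision"])     # -50.0 (kept from defaults)
```

Keys you do not override keep their default values, so a kwargs call only needs to list the settings you want to change.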
## Folder structure
```
commonroad-rl                                           
├─ commonroad_rl
│  ├─ doc                               # Folder for documentation         
│  ├─ gym_commonroad                    # Gym environment for CommonRoad scenarios
│     ├─ action                         # Action and Vehicle modules
│     ├─ observation                    # Observation modules
│     ├─ reward                         # Reward and Termination modules
│     ├─ utils                          # Utility functions for gym_commonroad
│     ├─ configs.yaml                   # Default config file for actions, observations, rewards, and termination conditions,
│     │                                 # as well as for observation space optimization and reward coefficient optimization
│     ├─ commonroad_env.py              # CommonRoadEnv(gym.Env) class
│     └─ constants.py                   # Script to define path, vehicle, and draw parameters
│  ├─ hyperparams                       # Config files for default hyperparameters for various RL algorithms                                       
│  ├─ tests                             # Test system of commonroad-rl
│  ├─ tools                             # Tools to validate, visualize and analyze CommonRoad .xml files, as well as preprocess and convert to .pickle files.                                         
│  ├─ utils_run                         # Utility functions to run training, tuning and evaluating files                                      
│  ├─ README.md                                                      
│  ├─ evaluate_model.py                 # Script to evaluate a trained RL model on specific scenarios and visualize the scenario                
│  ├─ generate_solution.py              # Script to generate CommonRoad solution files from trained RL models
│  ├─ train_model.py                    # Script to train RL model or optimize hyperparameters or environment configurations           
│  ├─ sensitivity_analysis.py           # Script to run sensitivity analysis for a trained model
│  └─ plot_learning_curves.py           # Plot learning curves with provided training log files.                
├─ scripts                              # Bash scripts to install all dependencies, train and evaluate RL models, as well as generate CommonRoad solution files from trained RL models.
├─ README.md                                            
├─ commonroad_style_guide.rst           # Coding style guide for this project                
├─ environment.yml                                      
└─ setup.py                                      
```
## Installation

### Installation using Docker
Detailed instructions can be found in `./commonroad_rl/install_docker/readme_docker.md`.

### Prerequisites 
This project should be run with conda. Make sure conda is installed before proceeding with the installation.

1. [Download & install conda](https://docs.conda.io/projects/conda/en/latest/user-guide/install/linux.html) and initialize conda so it can be used from the terminal (tested on conda 4.5, 4.9, and 4.10):
```
~/anaconda3/bin/conda init
# for miniconda
~/miniconda3/bin/conda init
```
2. Clone this repository:
```
git clone https://gitlab.lrz.de/tum-cps/commonroad-rl.git
```
3. Install build packages:
```
sudo apt-get update
sudo apt-get install build-essential make cmake
```
4. Set up a new conda env (or install the packages into an existing conda env, e.g. `myenv`, with `conda env update --name myenv --file environment.yml`):
```
conda env create -n cr37 -f environment.yml
git lfs pull
```
(optional) Install [`commonroad-interactive-scenarios`](https://gitlab.lrz.de/tum-cps/commonroad-interactive-scenarios) 
if you want to evaluate a trained model with SUMO interactive scenarios.

5. (Optional) Install pip packages for the docs. If you want to use the Jupyter notebooks, also install jupyter:
```
source activate cr37
pip install -r commonroad_rl/doc/requirements_doc.txt
conda install jupyter
```

### Install mpi4py and commonroad-rl manually
```
conda install --quiet -y  -c conda-forge mpi4py==3.1.3
pip install -e .
```


### Test if the installation succeeded

For further details on our test system, refer to `./commonroad_rl/tests`.

```
source activate cr37
bash scripts/run_test.sh
```

## Usage

### Tutorials
To get to know the package, please check the tutorials in `./commonroad_rl/tutorials`.

### Python scripts
The commonroad_rl folder contains the source files. There are Python scripts for training, evaluating, and visualizing models. The most important scripts are explained in `./commonroad_rl/README.md` and can be run with your Python executable. They are especially useful if you are developing a new feature or want to debug a specific part of the training.
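As an illustration of the kind of evaluation loop these scripts wrap, the following self-contained sketch collects per-episode returns from a Gym-style environment. `StubEnv` and `evaluate` are hypothetical stand-ins written for this example; the real scripts operate on `commonroad-v1` and take the paths and options documented in `./commonroad_rl/README.md`:

```python
import random

class StubEnv:
    """Hypothetical toy stand-in for a Gym-style env such as commonroad-v1."""
    def __init__(self, episode_len=10, seed=0):
        self.episode_len = episode_len
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]  # dummy observation

    def step(self, action):
        self.t += 1
        reward = self.rng.uniform(-1.0, 1.0)   # dummy reward
        done = self.t >= self.episode_len      # fixed-length episodes
        return [0.0], reward, done, {}

def evaluate(env, n_episodes=5, policy=lambda obs: 0):
    """Run n_episodes with the given policy and return the episode returns."""
    returns = []
    for _ in range(n_episodes):
        obs, done, ep_return = env.reset(), False, 0.0
        while not done:
            obs, reward, done, info = env.step(policy(obs))
            ep_return += reward
        returns.append(ep_return)
    return returns

returns = evaluate(StubEnv(), n_episodes=3)
print(len(returns))  # 3
```

Swapping `StubEnv` for a real `gym.make("commonroad-v1", ...)` environment and `policy` for a trained model's predict function yields the core of a model evaluation run.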

### Bash scripts
Once your code runs smoothly on your computer and you want to run real experiments on a larger dataset, the bash scripts in `./scripts` help you with that. They can be used for training with PPO and TD3 and for testing an agent. Always adapt the paths in the scripts to the corresponding paths on your machine, and check the comments in each file to determine which arguments have to be provided.

## References and Suggested Guides

1. [OpenAI Stable Baselines](https://stable-baselines.readthedocs.io/en/master/): the implementation of RL algorithms used in our project.
2. [OpenAI Spinning Up](https://spinningup.openai.com/en/latest/spinningup/rl_intro.html): we do not use their implementations in our project, but they provide excellent explanations of RL concepts.
3. [OpenAI Gym](https://gym.openai.com/docs/): general interface.
4. [OpenAI Safety Gym](https://openai.com/blog/safety-gym/): a special collection of Gym environments for safe RL, configurable like our project.

## Publication

If you use CommonRoad-RL in your paper, please cite:
```
@inproceedings{Wang2021,
	author = {Xiao Wang and Hanna Krasowski and Matthias Althoff},
	title = {{CommonRoad-RL}: A Configurable Reinforcement Learning Environment for Motion Planning of Autonomous Vehicles},
	booktitle = {Proc. of the IEEE International Conference on Intelligent Transportation Systems (ITSC)},
	year = {2021},
	pages = {466--472},
}
```

Configurations and trained models used in the experiments in our paper can be downloaded [here](https://nextcloud.in.tum.de/index.php/s/n7oEr9dsyrqjgPZ).

Models trained with the current version of the code using the same configurations can be downloaded [here](https://nextcloud.in.tum.de/index.php/s/F8C9n2nWmfJy9pr).

## Contact
commonroad@lists.lrz.de



            
