| Field | Value |
|---|---|
| Name | kheperax |
| Version | 0.2.0 |
| Summary | A-maze-ing environment in jax |
| Upload time | 2024-09-25 20:11:51 |
| Requires Python | >=3.10 |
| License | MIT |
| Keywords | quality-diversity, reinforcement learning, jax |
# Kheperax
The *Kheperax* task is a re-implementation of the [`fastsim` simulator](https://github.com/sferes2/libfastsim) from _Mouret and Doncieux (2012)_.
Kheperax is fully written using [JAX](https://github.com/google/jax), to leverage hardware accelerators and massive parallelization.
## Features
- Fully implemented in JAX for hardware acceleration
- Simulates Khepera-like robots (circular robots with 2 wheels) in 2D mazes
- Configurable robot sensors (lasers and bumpers)
- Directly compatible with the [QDax library](https://github.com/adaptive-intelligent-robotics/QDax) for efficient Quality-Diversity optimization
- Customizable maze layouts and target-based tasks
- Rendering capabilities for visualizing the environment
<p align="center">
<img src="img/gif/mapelites_progress_standard.gif" width="160" height="160" />
<img src="img/gif/target_policy_standard.gif" width="160" height="160" />
<img src="img/gif/unstructured_progress_snake.gif" width="160" height="160"/>
<img src="img/gif/target_policy_snake.gif" width="160" height="160"/>
</p>
## Installation
Kheperax is available on PyPI and can be installed with:
```shell
pip install kheperax
```
Alternatively, to install Kheperax with CUDA 12 support, you can run:
```shell
pip install kheperax[cuda12]
```
## Task Properties
### Environment
Each episode runs for a fixed number of time-steps (by default `250`).
The agent is a Khepera-like robot (a circular robot with two wheels) that moves in a planar 2-dimensional maze.
By default, this robot has:
- 3 lasers that estimate its distance to walls along specific directions (by default -45, 0 and 45 degrees).
- 2 bumpers that detect contact with walls.

At each time-step, the agent receives an observation made of all laser and bumper measures:
```
# by default:
[laser 1, laser 2, laser 3, bumper left, bumper right]
```
The bumpers return `1` if there is contact with a wall and `-1` otherwise.
The actions passed to the environment should be between `-1` and `1`.
They are then scaled by the `action_scale` factor defined in the environment configuration.
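The clip-and-scale step can be sketched as follows (a minimal illustration, not the library's internal code; it assumes the default `action_scale` of `0.025`):

```python
# Illustrative sketch: actions in [-1, 1] are clipped, then scaled to
# wheel velocities by the action_scale factor (default 0.025).
ACTION_SCALE = 0.025  # default value from KheperaxConfig

def scale_action(action: float) -> float:
    """Clip a raw action to [-1, 1], then scale it to a wheel velocity."""
    clipped = max(-1.0, min(1.0, action))
    return ACTION_SCALE * clipped

# One value per wheel:
wheel_velocities = [scale_action(a) for a in (0.8, -1.5)]
```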
## Run examples
### Install Dependencies
Before running examples, we recommend creating a virtual environment and installing the required dependencies:
```shell
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
If you want to run the examples with CUDA 12 support, you need to install `jax` with the `cuda12` extra:
```shell
pip install jax[cuda12]==<version-from-requirements.txt>
```
### Launch MAP-Elites Example
To run the MAP-Elites example on the standard Kheperax task, you can use the following Colab notebook:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adaptive-intelligent-robotics/Kheperax/blob/main/examples/main/me_training.ipynb)
Other tasks can be run with the following scripts:
```shell
python -m examples.target.target_me_training # Target task with MAP-Elites
python -m examples.final_distance.final_distance_me_training # Final Distance task with MAP-Elites
```
Additional details on those tasks can be found in the [Tasks and Maze Types](#tasks-and-maze-types) section.
### Rendering images and gifs
To render images, you can run the following script:
```shell
python -m examples.rendering.maps_rendering
```
To render gifs, you can run the following script:
```shell
python -m examples.rendering.gif
```
## Tasks and Maze Types
Kheperax supports various tasks and maze types. Here's an overview of the available options and their corresponding files:
### Basic Kheperax Task
- **File**: `kheperax/tasks/main.py`
- **Class**: `KheperaxTask`
- **Configuration**: `KheperaxConfig`
- **Description**: The standard Kheperax environment without a specific target.
- **Fitness**: sum of rewards
- **Reward**: negated action norm, r_t = -||a_t|| (i.e., r_t approximates the negated energy spent at time t)
- **Descriptor**: final (x,y) location of the robot.
#### Key Features:
- **Reward Function**:
  - Negated action norm, r_t = -||a_t|| (i.e., r_t approximates the negated energy spent at time t).
  - This encourages the robot to move efficiently.
- **Episode Termination**:
  - The episode terminates when the maximum number of steps is reached.
- **Rendering**:
  - Rendering shows the robot and the maze; this task has no target.
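The per-step reward above can be written out explicitly (a hedged sketch; `basic_reward` is an illustrative name, not the library's API):

```python
import math

def basic_reward(action) -> float:
    """r_t = -||a_t||: the negated Euclidean norm of the action vector."""
    return -math.sqrt(sum(a * a for a in action))

# Driving both wheels at full speed spends the most energy:
basic_reward([1.0, 1.0])  # -> -sqrt(2), roughly -1.414
```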
#### Configuration
A `KheperaxConfig` object contains all the properties of a `KheperaxTask`:
```python
from kheperax.tasks.config import KheperaxConfig
config = KheperaxConfig.get_default()
```
Key configuration options with their default values:
- `episode_length`: `int = 250`, maximum number of timesteps per episode
- `mlp_policy_hidden_layer_sizes`: `Tuple[int, ...] = (8,)`, structure of policy network's hidden layers
- `action_scale`: `float = 0.025`, scales the actions for wheel velocities
- `robot`: `Robot`, includes:
- `posture`: `Posture(x=0.15, y=0.15, angle=pi/2)`, initial position and orientation
- `radius`: `float = 0.015`, robot size
- `laser_ranges`: `Union[float, List[float]] = 0.2`, max ranges for laser sensors
- `laser_angles`: `List[float] = [-pi/4, 0.0, pi/4]`, placement angles for laser sensors
- `std_noise_sensor_measures`: `float = 0.0`, noise in sensor readings
- `maze`: `Maze`, defines the environment layout (default is a standard maze)
- `std_noise_wheel_velocities`: `float = 0.0`, noise in wheel velocities
- `resolution`: `Tuple[int, int] = (1024, 1024)`, rendering resolution
- `limits`: `Tuple[Tuple[float, float], Tuple[float, float]] = ((0., 0.), (1., 1.))`, environment boundaries
- `action_repeat`: `int = 1`, number of times each action is repeated
Usage example:
```python
from kheperax.tasks.config import KheperaxConfig
from kheperax.simu.robot import Robot
from kheperax.simu.maze import Maze
config = KheperaxConfig.get_default()
config.episode_length = 1000
config.action_scale = 0.03
config.resolution = (800, 800)
new_robot = Robot.create_default_robot().replace(radius=0.05)
config.robot = new_robot
new_maze = Maze.create(segments_list=[...]) # Define maze segments
config.maze = new_maze
```
### Target Kheperax Task
- **File**: `kheperax/tasks/target.py`
- **Class**: `TargetKheperaxTask`
- **Configuration**: `TargetKheperaxConfig`
- **Description**: Kheperax environment with a target position for the robot to reach.
- **Fitness**: sum of rewards (detailed below)
- **Descriptor**: final (x,y) location of the robot.
#### Key Features:
- **Reward Function**:
- At each step, the reward is the negative distance to the target center.
- This encourages the robot to move towards the target.
- **Episode Termination**:
- The episode ends when the robot reaches the target (enters the target radius).
- Also terminates if the maximum number of steps is reached.
- **Rendering**:
- When rendered, the target appears as a green circle in the maze.
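Put together, the reward and termination rules can be sketched as follows (illustrative only; the function name and structure are assumptions, using the default target position and radius):

```python
import math

TARGET_POS = (0.15, 0.9)   # default target position
TARGET_RADIUS = 0.05       # default target radius

def target_step(robot_xy):
    """Return (reward, done) for the Target task at one time-step."""
    dist = math.dist(robot_xy, TARGET_POS)
    reward = -dist                 # negative distance to the target center
    done = dist < TARGET_RADIUS    # robot has entered the target radius
    return reward, done
```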
#### Configuration
`TargetKheperaxConfig` contains the same parameters as `KheperaxConfig`, plus additional target-related parameters:
- **Target Position** (`target_pos`): the point in the maze for the robot to reach. Default: `(0.15, 0.9)`
- **Target Radius** (`target_radius`): the size of the target area. Default: `0.05`
  - The episode ends when the robot enters this radius.
Usage example:
```python
from kheperax.tasks.target import TargetKheperaxConfig, TargetKheperaxTask
# Create a default target configuration
target_config = TargetKheperaxConfig.get_default()
# Customize the configuration if needed
target_config.target_pos = (0.2, 0.8) # Change target position
target_config.target_radius = 0.06 # Change target radius
# Create the target task
target_task = TargetKheperaxTask(target_config)
# Use the task in your experiment
# ... (reset, step, etc.)
```
### Final Distance Kheperax Task
- **File**: `kheperax/tasks/final_distance.py`
- **Class**: `FinalDistKheperaxTask`
- **Description**: A task that only rewards the final distance to the target.
- **Fitness**: sum of rewards (detailed below)
- **Descriptor**: final (x,y) location of the robot.
#### Key Features:
- **Reward Function**:
  - At each step, the reward is `-1`, except for the final step, where it is `100` times the negative distance to the target center.
  - This encourages the robot to reach the target as quickly as possible.
- **Episode Termination**:
- The episode ends when the robot reaches the target (enters the target radius).
- Also terminates if the maximum number of steps is reached.
`TargetKheperaxConfig` is also used to configure this kind of task (see the description above).
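The reward rule described above can be sketched as follows (a hedged illustration; `final_dist_reward` is not the library's API):

```python
import math

def final_dist_reward(robot_xy, target_xy, is_final_step: bool) -> float:
    """-1 per intermediate step; 100 * negative target distance at the end."""
    if not is_final_step:
        return -1.0
    return -100.0 * math.dist(robot_xy, target_xy)
```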
### Maze Maps
- **File**: `kheperax/envs/maze_maps.py`
- **Description**: Defines various maze layouts, including:
- Standard Kheperax maze - `standard`
- Pointmaze - `pointmaze`
- Snake maze - `snake`
To use a specific maze map:
```python
from kheperax.tasks.config import KheperaxConfig
from kheperax.tasks.target import TargetKheperaxConfig
# Get the default configuration for the desired maze map
maze_map = KheperaxConfig.get_default_for_map("standard") # or "pointmaze", "snake"
# For target-based tasks
target_maze_map = TargetKheperaxConfig.get_default_for_map("standard") # or "pointmaze", "snake"
```
| | Standard | PointMaze | Snake |
|:---------:|:---------------------------------------------------------------:|:----------------------------------------------------------------:|:------------------------------------------------------------:|
| No target | <img src="img/maps/no_target/no_quad/standard.png" width="150"> | <img src="img/maps/no_target/no_quad/pointmaze.png" width="150"> | <img src="img/maps/no_target/no_quad/snake.png" width="150"> |
| Target | <img src="img/maps/target/no_quad/standard.png" width="150"> | <img src="img/maps/target/no_quad/pointmaze.png" width="150"> | <img src="img/maps/target/no_quad/snake.png" width="150"> |
### Quad Mazes
- **File**: `kheperax/tasks/quad.py`
- **Function**: `make_quad_config`
- **Description**: Creates quad mazes, which are essentially four copies of the original maze flipped in different orientations.
To create a quad maze configuration:
```python
from kheperax.tasks.config import KheperaxConfig
from kheperax.tasks.target import TargetKheperaxConfig
from kheperax.tasks.quad import make_quad_config
# Get the default configuration for the desired maze map
maze_map = KheperaxConfig.get_default_for_map("standard") # or "pointmaze", "snake"
target_maze_map = TargetKheperaxConfig.get_default_for_map("standard") # or "pointmaze", "snake"
# Create a quad maze configuration
quad_config = make_quad_config(maze_map) # or target_maze_map for target-based tasks
```
| | Quad Standard | Quad PointMaze | Quad Snake |
|:---------:|:------------------------------------------------------------:|:-------------------------------------------------------------:|:---------------------------------------------------------:|
| No target | <img src="img/maps/no_target/quad/standard.png" width="150"> | <img src="img/maps/no_target/quad/pointmaze.png" width="150"> | <img src="img/maps/no_target/quad/snake.png" width="150"> |
| Target | <img src="img/maps/target/quad/standard.png" width="150"> | <img src="img/maps/target/quad/pointmaze.png" width="150"> | <img src="img/maps/target/quad/snake.png" width="150"> |
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Citation
If you use Kheperax in your research, please cite the following paper:
```bibtex
@inproceedings{grillotti2023kheperax,
title={Kheperax: a lightweight jax-based robot control environment for benchmarking quality-diversity algorithms},
author={Grillotti, Luca and Cully, Antoine},
booktitle={Proceedings of the Companion Conference on Genetic and Evolutionary Computation},
pages={2163--2165},
year={2023}
}
```
## Acknowledgements
- [Original `fastsim` simulator](https://github.com/sferes2/libfastsim) by Mouret and Doncieux (2012)