gym-mage

Name: gym-mage
Version: 0.1.0
Home page: https://github.com/damat-le/mage
Summary: Multi-Agent Grid Environment
Upload time: 2023-08-14 08:40:25
Author: Leo D'Amato
Author email: leo.damato.dev@gmail.com
Requires Python: >=3.7
Keywords: reinforcement learning, environment, gridworld, agent, rl, openaigym, openai-gym, gym, multi-agent
# MAGE: Multi-Agent Grid Environment

![](img/movie.gif)

## Introduction

MAGE is a grid-based environment with obstacles (walls) and agents. The agents can move in one of the four cardinal directions. If they try to move onto an obstacle or out of the grid bounds, they stay in place. Each agent has a unique color and a goal state of the same color. The environment is episodic, i.e. the episode ends when all agents reach their goals.
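
The movement rule above can be sketched in a few lines of plain Python. The grid representation, action names, and helper function here are illustrative, not MAGE's actual API:

```python
# Illustrative sketch of the movement rule: an agent that tries to step
# onto a wall cell ('1') or out of the grid bounds stays in place.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step_agent(grid, pos, action):
    """Return the agent's next (row, col) on a list-of-strings grid."""
    dr, dc = MOVES[action]
    r, c = pos[0] + dr, pos[1] + dc
    if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == "0":
        return (r, c)
    return pos  # blocked by a wall or the grid boundary: stay in place

grid = ["0000", "0101", "0001", "1000"]
step_agent(grid, (0, 1), "down")   # blocked by the wall at (1, 1)
step_agent(grid, (0, 0), "right")  # free cell: the agent moves
```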

To initialise the grid, the user must decide where to place the walls. This can be done either by selecting an existing map or by passing a custom one. To load an existing map, pass its name to the `obstacle_map` argument; the available pre-existing maps are "4x4" and "8x8". To load a custom map instead, pass a correctly formatted map to the `obstacle_map` argument: a list of strings, where each string denotes a row of the grid and consists of a sequence of 0s and 1s, with 0 denoting a free cell and 1 denoting a wall cell. An example of a 4x4 map is the following:

```python
["0000", 
 "0101", 
 "0001", 
 "1000"]
``` 
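
A small helper can validate this format before handing it to the environment. This parser is a sketch, not part of MAGE; the function name is hypothetical:

```python
def parse_obstacle_map(rows):
    """Parse the list-of-strings map format into a list of lists of ints,
    checking that rows have equal length and contain only '0'/'1'."""
    width = len(rows[0])
    grid = []
    for row in rows:
        if len(row) != width:
            raise ValueError("all rows must have the same length")
        if not set(row) <= {"0", "1"}:
            raise ValueError("cells must be '0' (free) or '1' (wall)")
        grid.append([int(c) for c in row])
    return grid

grid = parse_obstacle_map(["0000", "0101", "0001", "1000"])
```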

The user must also decide the number of agents and their starting and goal positions on the grid. This can be done by passing two lists of tuples, namely `starts_xy` and `goals_xy`, where each tuple is a pair of coordinates (x, y) representing an agent's starting/goal position. 
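
A quick sanity check that the chosen positions land on free cells can look like the following. This helper is a sketch, not MAGE code, and it assumes x indexes the column and y the row; the actual convention used by MAGE may differ, so check `example.py`:

```python
def positions_on_free_cells(obstacle_map, positions):
    """Return True if every (x, y) pair lies on a free ('0') cell
    of a list-of-strings obstacle map."""
    height, width = len(obstacle_map), len(obstacle_map[0])
    return all(
        0 <= y < height and 0 <= x < width and obstacle_map[y][x] == "0"
        for x, y in positions
    )

m = ["0000", "0101", "0001", "1000"]
positions_on_free_cells(m, [(0, 0), (3, 3)])  # both on free cells
positions_on_free_cells(m, [(1, 1)])          # (1, 1) is a wall
```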

Currently, the user must also define the color of each agent. This can be done by passing a list of strings, where each string is a color name. The available color names are: red, green, blue, purple, yellow, grey and black. This requirement will be removed in the future and the color will be assigned automatically.

The user can also decide whether agents disappear upon reaching their goal, by passing a boolean value to `disappear_on_goal`. If `disappear_on_goal` is True, an agent disappears when it reaches its goal; otherwise it remains on the grid. Note that this feature is not yet implemented and will be added in a future version.

Note that no reward mechanism is currently implemented in the environment; one will be introduced soon.

## Installation

<!---
To install SimpleGrid, you can either use pip

```bash
pip install mage
```

or you can clone the repository and run an editable installation

```bash
git clone https://github.com/damat-le/mage.git
cd mage
pip install -e .
```
--->

Currently, only editable installation is supported:

```bash
git clone https://github.com/damat-le/mage.git
cd mage
pip install -e .
```

## Getting Started

An example illustrating how to use MAGE is available in the `example.py` script.

## Citation

Please use this BibTeX entry if you want to cite this repository in your publications:

```tex
@misc{mage,
  author = {Leo D'Amato},
  title = {Multi-Agent Grid Environment},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/damat-le/mage}},
}
```
## Disclaimer

The project is under development. In future releases, the following features will be added:

- add reward mechanism for RL tasks
- add gym/PettingZoo integration
- add the random generation of maps
- add the disappear-on-goal feature
- prepare the project to be uploaded on PyPI

<!---
## Getting Started

Basic usage options:

```python
import gym 
import gym_simplegrid

# Load the default 8x8 map
env = gym.make('SimpleGrid-8x8-v0')

# Load the default 4x4 map
env = gym.make('SimpleGrid-4x4-v0')

# Load a random map
env = gym.make('SimpleGrid-v0')

# Load a custom map with multiple starting states
# At the beginning of each episode a new starting state will be sampled
my_desc = [
        "SEEEEEES",
        "EEESEEES",
        "WEEWEEEE",
        "EEEEEWEG",
    ]
env = gym.make('SimpleGrid-v0', desc=my_desc)

# Set custom rewards and introduce noise
# The agent will move in the intended direction with probability 1-p_noise
my_reward_map = {
        b'E': -1.0,
        b'S': -0.0,
        b'W': -5.0,
        b'G': 5.0,
    }
env = gym.make('SimpleGrid-8x8-v0', reward_map=my_reward_map, p_noise=.4)
```

Example with rendering:

```python
import gym 
import gym_simplegrid

env = gym.make('SimpleGrid-8x8-v0')
observation = env.reset()
T = 50
for _ in range(T):
    action = env.action_space.sample()
    env.render()
    observation, reward, done, info = env.step(action)
    if done:
        observation = env.reset()
env.close()
```
--->

            
