# MultiGrid
<br/>
<p align="center">
<img src="https://i.imgur.com/usbavAh.gif" width=400 alt="Blocked Unlock Pickup: 2 Agents">
</p>
<br/>
The **MultiGrid** library provides a collection of fast multi-agent discrete gridworld environments for reinforcement learning in [Gymnasium](https://github.com/Farama-Foundation/Gymnasium). It is a multi-agent extension of the [minigrid](https://github.com/Farama-Foundation/Minigrid) library, and its interface is designed to be as similar as possible.
The environments are designed to be fast and easily customizable. Compared to minigrid, the underlying gridworld logic is **significantly optimized**, making environment simulation 10x to 20x faster in our benchmarks.
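For a rough sense of simulation speed on your own hardware, a timing sketch along these lines can help. This is illustrative only, not the benchmark behind the 10x-20x figure; it assumes random actions, uses the `MultiGrid-Empty-8x8-v0` environment from the API example below, and leaves rendering disabled so drawing does not dominate the timing:

```python
import time

import gymnasium as gym
import multigrid.envs

env = gym.make('MultiGrid-Empty-8x8-v0', agents=2)  # no render_mode
observations, infos = env.reset()

steps = 10_000
start = time.perf_counter()
for _ in range(steps):
    # Random policy: sample an action for every agent, keyed by agent index
    actions = {agent.index: agent.action_space.sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    if env.is_done():
        observations, infos = env.reset()
elapsed = time.perf_counter() - start

print(f"~{steps / elapsed:,.0f} steps/second")
env.close()
```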
Documentation for this library can be found at [ini.io/docs/multigrid](https://ini.io/docs/multigrid).
## Installation

    git clone https://github.com/ini/multigrid
    cd multigrid
    pip install -e .

This package requires Python 3.9 or later.
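A release of this package is also published on PyPI (version 0.1.0 at the time of writing), so it can alternatively be installed with pip:

    pip install multigrid

Installing from source as shown above gives you the latest development version.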
## Environments
The `multigrid.envs` package provides implementations of several multi-agent environments. [You can find the full list here](https://ini.io/docs/multigrid/multigrid/multigrid.envs).
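Since importing `multigrid.envs` registers the environments with Gymnasium (as the `gym.make` call in the next section implies), the available environment IDs can also be listed from the Gymnasium registry. A small sketch, assuming all IDs share the `MultiGrid-` prefix used below:

```python
import gymnasium as gym
import multigrid.envs  # importing registers the MultiGrid environments

# gym.registry maps environment IDs to their specs
multigrid_ids = sorted(env_id for env_id in gym.registry if env_id.startswith('MultiGrid-'))
print('\n'.join(multigrid_ids))
```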
## API
MultiGrid follows the same pattern as RLlib's [MultiAgentEnv API](https://docs.ray.io/en/latest/rllib/rllib-env.html#multi-agent-and-hierarchical) and PettingZoo's [ParallelEnv API](https://pettingzoo.farama.org/api/parallel/).
```python
import gymnasium as gym
import multigrid.envs
env = gym.make('MultiGrid-Empty-8x8-v0', agents=2, render_mode='human')
observations, infos = env.reset()
while not env.is_done():
    # this is where you would insert your policy / policies
    actions = {agent.index: agent.action_space.sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```
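The `step()` method returns the usual Gymnasium values, but as dictionaries keyed by agent. As an alternative to `env.is_done()`, the loop can be driven by the termination/truncation dictionaries directly; a minimal sketch, assuming they are keyed by agent index like `actions`:

```python
observations, infos = env.reset()
done = False
while not done:
    actions = {agent.index: agent.action_space.sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # The episode is over once every agent has terminated or been truncated
    done = all(
        terminations[agent_id] or truncations[agent_id]
        for agent_id in terminations
    )
```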
More information about using MultiGrid directly with other APIs:
* [PettingZoo](https://ini.io/docs/multigrid/multigrid/multigrid.pettingzoo)
* [RLlib](https://ini.io/docs/multigrid/multigrid/multigrid.rllib)
## Training Agents
See the [scripts folder](./scripts) for an example of training agents with RLlib.
## Documentation
Documentation for this package can be found at [ini.io/docs/multigrid](https://ini.io/docs/multigrid).
## Citation
To cite this project, please use:
```
@software{multigrid,
author = {Oguntola, Ini},
title = {Fast Multi-Agent Gridworld Environments for Gymnasium},
url = {https://github.com/ini/multigrid},
year = {2023},
}
```