# Aquarium Environment
A comprehensive framework for exploring predator-prey dynamics with multi-agent reinforcement learning, built on the PettingZoo interface.
![Aquarium logo](./example/MARL_big_font.png)
## Install
```bash
pip install marl-aquarium
```
## Example
```python
from marl_aquarium import aquarium_v0
env = aquarium_v0.env()
env.reset(seed=42)
for agent in env.agent_iter():
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        # This is where you would insert your policy
        action = env.action_space(agent).sample()

    env.step(action)
    env.render()
env.close()
```
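The `sample()` call above is just a uniformly random placeholder. A policy is simply a function from the agent's observation to a discrete action index (`action_count` defaults to 16). A minimal hand-written stand-in might look like the following; the function name and its indifference to the observation are illustrative, not part of the library's API:

```python
import random

def random_policy(observation, n_actions=16):
    """Placeholder policy: ignores the observation and returns a
    uniformly random discrete action index in [0, n_actions)."""
    return random.randrange(n_actions)
```

Inside the loop you would then write `action = random_policy(observation)` instead of sampling from the action space; a learned policy would replace the random choice with one conditioned on the observation.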
![Screenshot of the environment with view cones enabled](./example/cone_screenshot.png)
## Customize the environment
| Parameter | Description | Default Value |
| --------------------------- | --------------------------------------------------------------------------------------- | ------------- |
| `render_mode` | The render mode: `"human"` renders on screen, `"rgb_array"` returns frames as arrays. | `"human"` |
| `observable_walls` | Number of observable walls for the agents. | `2` |
| `width` | The width of the environment window. | `800` |
| `height` | The height of the environment window. | `800` |
| `caption` | The caption of the environment window. | `"Aquarium"` |
| `fps` | Frames per second, controlling the speed of simulation. | `60` |
| `max_time_steps` | Maximum number of time steps per episode. | `3000` |
| `action_count` | Number of possible actions an agent can take. | `16` |
| `predator_count` | Number of predators in the environment. | `1` |
| `prey_count` | Number of prey in the environment. | `16` |
| `predator_observe_count` | Number of predators that can be observed by an agent. | `1` |
| `prey_observe_count` | Number of prey that can be observed by an agent. | `3` |
| `draw_force_vectors` | Whether to draw force vectors for debugging. | `False` |
| `draw_action_vectors` | Whether to draw action vectors for debugging. | `False` |
| `draw_view_cones` | Whether to draw view cones for debugging. | `False` |
| `draw_hit_boxes` | Whether to draw hit boxes for debugging. | `False` |
| `draw_death_circles` | Whether to draw death circles for debugging. | `False` |
| `fov_enabled` | Whether field of view is enabled for agents. | `True` |
| `keep_prey_count_constant` | Whether to keep the prey count constant throughout the simulation. | `True` |
| `prey_radius` | Radius of prey entities. | `20` |
| `prey_max_acceleration` | Maximum acceleration of prey entities. | `1.0` |
| `prey_max_velocity` | Maximum velocity of prey entities. | `4.0` |
| `prey_view_distance` | View distance of prey entities. | `100` |
| `prey_replication_age` | Age at which prey entities replicate. | `200` |
| `prey_max_steer_force` | Maximum steering force of prey entities. | `0.6` |
| `prey_fov` | Field of view for prey entities. | `120` |
| `prey_reward` | Reward for prey survival per time step. | `1` |
| `prey_punishment` | Punishment for prey being caught. | `1000` |
| `max_prey_count` | Maximum number of prey entities in the environment. | `20` |
| `predator_max_acceleration` | Maximum acceleration of predator entities. | `0.6` |
| `predator_radius` | Radius of predator entities. | `30` |
| `predator_max_velocity` | Maximum velocity of predator entities. | `5.0` |
| `predator_view_distance` | View distance of predator entities. | `200` |
| `predator_max_steer_force` | Maximum steering force of predator entities. | `0.6` |
| `predator_max_age` | Maximum age of predator entities. | `3000` |
| `predator_fov` | Field of view for predator entities. | `150` |
| `predator_reward` | Reward for predator catching prey. | `10` |
| `catch_radius` | Radius within which predators can catch prey. | `100` |
| `procreate` | Whether entities can procreate within the environment. | `False` |
For example, to enable all of the debug overlays:

```python
env = aquarium_v0.env(
    draw_force_vectors=True,
    draw_action_vectors=True,
    draw_view_cones=True,
    draw_hit_boxes=True,
    draw_death_circles=True,
)
```
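With `action_count = 16`, each agent chooses from a discrete set of actions. The exact action semantics are defined inside the environment and are not documented here; a plausible reading, given the steering-force parameters and the `draw_action_vectors` overlay, is that the indices map to evenly spaced headings. The sketch below illustrates that assumed mapping only, it is not the confirmed implementation:

```python
import math

def action_to_direction(action, action_count=16):
    """Map a discrete action index to a unit steering vector,
    assuming actions are evenly spaced headings around a circle."""
    angle = 2 * math.pi * action / action_count
    return (math.cos(angle), math.sin(angle))
```

Under this assumption, action `0` points along the positive x-axis and action `4` (of 16) is rotated a quarter turn from it.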