# momaland

- Name: momaland
- Version: 0.0.2
- Summary: A standard API for Multi-Objective Multi-Agent Decision making and a diverse set of reference environments.
- Author: Umut Ucak, Hicham Azmani
- Requires Python: >=3.8
- License: GNU General Public License v3.0
- Keywords: reinforcement learning, multi-objective, multi-agent, RL, AI, gymnasium, pettingzoo
- Upload time: 2023-12-20 09:26:53
![tests](https://github.com/rradules/momaland/workflows/Python%20tests/badge.svg)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)


<!-- start elevator-pitch -->

MOMAland is an open-source Python library for developing and comparing multi-objective multi-agent reinforcement learning (MOMARL) algorithms. It provides a standard API for communication between learning algorithms and environments, as well as a standard set of environments compliant with that API. The environments follow the standard [PettingZoo APIs](https://github.com/Farama-Foundation/PettingZoo), but return vectorized rewards as numpy arrays instead of scalar values.

The documentation website is at TODO, and we have a public Discord server (which we also use to coordinate development work) that you can join here: https://discord.gg/bnJ6kubTg6.

<!-- end elevator-pitch -->

## Environments

MOMAland includes environments taken from the MOMARL literature, as well as multi-objective versions of classical environments, such as SISL or Butterfly.
The full list of environments is available at TODO.

## Installation
<!-- start install -->

To install MOMAland, use:
```bash
pip install momaland
```

This does not include dependencies for all families of environments (some can be problematic to install on certain systems). You can install the dependencies for a single family with `pip install "momaland[<family>]"` (substituting the family name), or use `pip install "momaland[all]"` to install all dependencies.

<!-- end install -->

## API

<!-- start snippet-usage -->

As in PettingZoo, the MOMAland API models environments as simple Python `env` classes. Creating environment instances and interacting with them is very simple; here's an example using the "surround_v0" environment:

```python
import momaland
import numpy as np

# The environment follows the original PettingZoo Parallel API ...
env = momaland.envs.crazyrl.surround.surround_v0.parallel_env()

obs, infos = env.reset()
# ... but each agent's reward is a numpy array (one entry per objective)!
actions = {agent: env.action_spaces[agent].sample() for agent in env.agents}
next_obs, vector_rewards, terminations, truncations, infos = env.step(actions)

# Optionally, you can scalarize the reward function with the LinearReward
# wrapper to fall back to the original PettingZoo API (scalar rewards).
env = momaland.LinearReward(env, weight=np.array([0.8, 0.2, 0.2]))
```
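
For intuition, the linear scalarization above collapses each agent's reward vector into a scalar via a weighted sum. Here is a minimal sketch of that computation; the reward values below are made up for illustration, and the wrapper's internals may differ, but this is the effective reduction:

```python
import numpy as np

weight = np.array([0.8, 0.2, 0.2])           # one weight per objective
vector_reward = np.array([1.0, -0.5, 0.3])   # illustrative reward vector for one agent

# Linear scalarization reduces the reward vector to a weighted sum:
scalar_reward = np.dot(weight, vector_reward)  # 0.8*1.0 + 0.2*(-0.5) + 0.2*0.3 = 0.76
```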
For details on multi-objective multi-agent RL definitions, see [Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey](https://arxiv.org/abs/1909.02964).

You can also find more examples in this Colab notebook: TODO

<!-- end snippet-usage -->


## Environment Versioning

MOMAland keeps strict versioning for reproducibility reasons. All environment names end in a version suffix such as "-v0". When changes that might impact learning results are made to an environment, the version number is incremented by one to prevent potential confusion.

## Development Roadmap
We have a roadmap for future development available here: TODO.

## Project Maintainers

Project Managers:  TODO

Maintenance for this project is also contributed by the broader Farama team: [farama.org/team](https://farama.org/team).

## Citing

<!-- start citation -->

If you use this repository in your research, please cite:

```bibtex
@inproceedings{TODO}
```

<!-- end citation -->

## Development

### Setup pre-commit
Clone the repo and run `pre-commit install` to set up the pre-commit hooks.

### New environment steps
1. Create a new environment package in `momaland/envs/`.
2. Create a new environment class in `momaland/envs/<env_name>/<env_name>.py`; this class should extend `MOParallelEnv` or `MOAECEnv`. Override the PettingZoo methods (see their [documentation](https://pettingzoo.farama.org/api/aec/)). Additionally, define a member `self.reward_spaces`, a dictionary of spaces specifying the shape of each agent's reward vector, as well as a method `reward_space(self, agent) -> Space` that returns the reward space of a given agent (see the sketch after this list).
3. Define the factory functions to create your class: `parallel_env` returns a parallel version of the env, `env` returns an AEC version, and `raw_env` is the pure class constructor (not used in practice). (!) Use the conversions defined inside our repository, e.g. `mo_parallel_to_aec` instead of PettingZoo's `parallel_to_aec`.
4. (!) Do not use `OrderEnforcingWrapper`; it prevents accessing the `reward_space` of the env.
5. Add a versioned constructor of your env in the package directory that exports the factory functions (see `mobeach_v0.py` for an example).
6. Add your environment to the tests in `utils/all_modules.py`.
7. Run `pytest` to check that everything works.
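
To make steps 2 and 3 concrete, here is a minimal, untested sketch of a new parallel environment with its reward spaces and factory functions. The import paths, the `MyEnv` name, the space shapes, and the two-objective reward are assumptions for illustration; adapt them to the actual module layout:

```python
import numpy as np
from gymnasium.spaces import Box, Discrete

# NOTE: these import paths are assumptions; adjust them to the repo's real layout.
from momaland.utils.env import MOParallelEnv
from momaland.utils.conversions import mo_parallel_to_aec


class MyEnv(MOParallelEnv):
    """Skeleton environment with a two-objective reward per agent (illustrative only)."""

    metadata = {"name": "myenv_v0"}

    def __init__(self, num_objectives=2):
        self.possible_agents = ["agent_0", "agent_1"]
        self.agents = []
        self.observation_spaces = {a: Box(0.0, 1.0, shape=(4,)) for a in self.possible_agents}
        self.action_spaces = {a: Discrete(2) for a in self.possible_agents}
        # Step 2: a dictionary of spaces giving the shape of each agent's reward vector.
        self.reward_spaces = {
            a: Box(-np.inf, np.inf, shape=(num_objectives,)) for a in self.possible_agents
        }

    # Step 2: accessor returning the reward space of a given agent.
    def reward_space(self, agent):
        return self.reward_spaces[agent]

    def reset(self, seed=None, options=None):
        self.agents = self.possible_agents[:]
        observations = {a: self.observation_spaces[a].sample() for a in self.agents}
        infos = {a: {} for a in self.agents}
        return observations, infos

    def step(self, actions):
        # Rewards are numpy arrays with one entry per objective.
        rewards = {a: np.zeros(self.reward_spaces[a].shape, dtype=np.float32) for a in self.agents}
        observations = {a: self.observation_spaces[a].sample() for a in self.agents}
        terminations = {a: False for a in self.agents}
        truncations = {a: False for a in self.agents}
        infos = {a: {} for a in self.agents}
        return observations, rewards, terminations, truncations, infos


# Step 3: factory functions.
def raw_env(**kwargs):
    return MyEnv(**kwargs)  # pure constructor, not used in practice


def parallel_env(**kwargs):
    return MyEnv(**kwargs)


def env(**kwargs):
    # Use the repo's own conversion, not PettingZoo's parallel_to_aec.
    return mo_parallel_to_aec(parallel_env(**kwargs))
```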

            
