momaland

- Name: momaland
- Version: 0.2.0
- Summary: A standard API for Multi-Objective Multi-Agent Decision making and a diverse set of reference environments.
- Upload time: 2025-10-07 13:40:03
- Author: Umut Ucak
- Requires Python: >=3.9
- License: GNU General Public License v3.0
- Keywords: reinforcement learning, multi-objective, multi-agent, RL, AI, gymnasium, pettingzoo

[![Python](https://img.shields.io/pypi/pyversions/momaland.svg)](https://badge.fury.io/py/momaland)
[![PyPI](https://badge.fury.io/py/momaland.svg)](https://badge.fury.io/py/momaland)
![tests](https://github.com/rradules/momaland/workflows/Python%20tests/badge.svg)
[![pre-commit](https://img.shields.io/badge/pre--commit-enabled-brightgreen?logo=pre-commit&logoColor=white)](https://pre-commit.com/)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

<p align="center">
    <img src="docs/_static/img/momaland-text.png" width="500px"/>
</p>

<!-- start elevator-pitch -->
MOMAland is an open-source Python library for developing and comparing multi-objective multi-agent reinforcement learning (MOMARL) algorithms. It provides a standard API for communication between learning algorithms and environments, as well as a standard set of environments compliant with that API. Essentially, the environments follow the standard [PettingZoo APIs](https://github.com/Farama-Foundation/PettingZoo), but return vectorized rewards as numpy arrays instead of scalar values.

The documentation website is at https://momaland.farama.org/, and we have a public Discord server (which we also use to coordinate development work) that you can join [here](https://discord.gg/bnJ6kubTg6).
<!-- end elevator-pitch -->

## Environments
MOMAland includes environments taken from the MOMARL literature, as well as multi-objective versions of classical environments, such as those from the SISL and Butterfly suites.
The full list of environments is available at https://momaland.farama.org/environments/all-envs/.
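
Like PettingZoo, environment modules expose both the AEC and the parallel API. Below is a minimal sketch of the parallel interaction loop, using the multiwalker environment from the API section further down (the `parallel_env()` entry point is assumed to follow the PettingZoo standard):

```python
from momaland.envs.momultiwalker_stability import momultiwalker_stability_v0 as _env

# parallel_env() mirrors PettingZoo's parallel API (assumed here, per the PZ standard)
env = _env.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    # one action per live agent; replace the samples with your policies
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    # rewards[agent] is a numpy reward *vector*, with one entry per objective
    observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```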

## Installation
<!-- start install -->
To install MOMAland, use:
```bash
pip install momaland
```
This does not include dependencies for all components of MOMAland (not everything is required for basic usage, and some dependencies can be problematic to install on certain systems). The optional extras are:
- `pip install "momaland[testing]"` to install dependencies for API testing.
- `pip install "momaland[learning]"` to install dependencies for the supplied learning algorithms.
- `pip install "momaland[all]"` for all dependencies for all components.
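
As a quick sanity check after installing (assuming the package exposes a `__version__` attribute, which we have not verified here):

```bash
python -c "import momaland; print(momaland.__version__)"
```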
<!-- end install -->

## API
<!-- start snippet-usage -->
Similar to [PettingZoo](https://pettingzoo.farama.org), the MOMAland API models environments as simple Python `env` classes. Creating environment instances and interacting with them is straightforward; here's an example using the "momultiwalker_stability_v0" environment:

```python
from momaland.envs.momultiwalker_stability import momultiwalker_stability_v0 as _env
from momaland.utils.aec_wrappers import LinearizeReward
import numpy as np

# .env() returns an AEC environment, as per the PettingZoo standard
env = _env.env(render_mode="human")

env.reset(seed=42)
for agent in env.agent_iter():
    # vec_reward is a numpy array with one entry per objective
    observation, vec_reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample()  # this is where you would insert your policy

    env.step(action)
env.close()

# Optionally, you can scalarize the vector reward with weights, shifting back to
# single-objective multi-agent RL (i.e., plain PettingZoo). Each agent can be
# assigned different weights over its objectives.
weights = {
    "walker_0": np.array([0.7, 0.3]),
    "walker_1": np.array([0.5, 0.5]),
    "walker_2": np.array([0.2, 0.8]),
}
env = LinearizeReward(env, weights)
```
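
After wrapping, the environment behaves like a standard single-objective PettingZoo environment: each agent's reward becomes the weighted sum of its reward vector. A minimal sketch of running the wrapped environment, assuming the wrapper preserves the AEC interface:

```python
env.reset(seed=42)
for agent in env.agent_iter():
    # reward is now a scalar: the dot product weights[agent] @ vec_reward
    observation, reward, termination, truncation, info = env.last()
    action = None if termination or truncation else env.action_space(agent).sample()
    env.step(action)
env.close()
```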

For details on multi-objective multi-agent RL definitions, see [Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey](https://arxiv.org/abs/1909.02964).
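
Concretely, the `LinearizeReward` wrapper above implements the simplest utility function covered by that survey, a linear scalarization (the notation here is ours, not the survey's): agent $i$ has a weight vector $\mathbf{w}_i$ over the objectives and receives the scalar utility

$$
u_i(\mathbf{r}_i) = \mathbf{w}_i^\top \mathbf{r}_i = \sum_k w_{i,k} \, r_{i,k},
$$

where $\mathbf{r}_i$ is agent $i$'s vector reward. For non-linear utilities, the survey distinguishes between optimizing the expected utility of returns (ESR) and the utility of expected returns (SER); under a linear utility the two coincide, since expectation commutes with linear maps.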

You can also find more examples in this Colab notebook! [![MOMAland Demo in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Farama-Foundation/momaland/blob/main/momaland_demo.ipynb)
<!-- end snippet-usage -->

## Learning Algorithms
<!-- start learning-algorithms -->
We provide a set of learning algorithms that are compatible with the MOMAland environments. The algorithms are implemented in the [learning/](https://github.com/Farama-Foundation/momaland/tree/main/momaland/learning) directory. To keep everything as self-contained as possible, each algorithm is implemented as a single file (close to [CleanRL's philosophy](https://github.com/vwxyzjn/cleanrl/tree/master)).

Nevertheless, we reuse tools provided by other libraries, like multi-objective evaluations and performance indicators from [MORL-Baselines](https://github.com/LucasAlegre/morl-baselines).

Here is a list of algorithms that are currently implemented:

| **Name**                                                                                                                                                                                                                                                            | Single/Multi-policy | Reward     | Utility             | Observation space | Action space | Paper |
|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|------------|---------------------|-------------------|--------------|-------|
| MOMAPPO (OLS) [continuous](https://github.com/Farama-Foundation/momaland/blob/main/momaland/learning/continuous/cooperative_momappo.py),<br/> [discrete](https://github.com/Farama-Foundation/momaland/blob/main/momaland/learning/discrete/cooperative_momappo.py) | Multi               | Team       | Team / Linear       | Any               | Any          |       |
| [Scalarized IQL](https://github.com/Farama-Foundation/momaland/tree/main/momaland/learning/iql)                                                                                                                                                                     | Single              | Individual | Individual / Linear | Discrete          | Discrete     |       |
| [Centralization wrapper](https://github.com/Farama-Foundation/momaland/blob/main/momaland/utils/parallel_wrappers.py#L149)                                                                                                                                          | Any                 | Team       | Team / Any          | Discrete          | Discrete     |       |
| [Linearization wrapper](https://github.com/Farama-Foundation/momaland/blob/main/momaland/utils/parallel_wrappers.py#L49)                                                                                                                                            | Single              | Any        | Individual / Linear | Any               | Any          |       |
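
As a usage sketch, the linearization wrapper from the table turns a multi-objective parallel environment into a single-objective one, so existing PettingZoo-compatible MARL algorithms apply directly (the `parallel_env()` entry point is assumed per the PettingZoo standard; the import path matches the table link above):

```python
import numpy as np

from momaland.envs.momultiwalker_stability import momultiwalker_stability_v0 as _env
from momaland.utils.parallel_wrappers import LinearizeReward

# one weight vector per agent, over the environment's two objectives
weights = {f"walker_{i}": np.array([0.5, 0.5]) for i in range(3)}

env = LinearizeReward(_env.parallel_env(), weights)
observations, infos = env.reset(seed=42)
actions = {agent: env.action_space(agent).sample() for agent in env.agents}
# rewards[agent] is now a scalar, ready for standard MARL algorithms
observations, rewards, terminations, truncations, infos = env.step(actions)
env.close()
```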


<!-- end learning-algorithms -->

## Environment Versioning
MOMAland keeps strict versioning for reproducibility. All environment names end in a version suffix such as "_v0". When changes that might impact learning results are made to an environment, the number is incremented by one to prevent potential confusion.

## Development Roadmap
We have a roadmap for future development available [here](https://github.com/Farama-Foundation/momaland/issues/56).

## Project Maintainers
Project Manager: Florian Felten (@ffelten)

Maintenance for this project is also contributed by the broader Farama team: [farama.org/team](https://farama.org/team).

## Citing
<!-- start citation -->
If you use this repository in your research, please cite:
```bibtex
@misc{felten2024momaland,
      title={MOMAland: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning},
      author={Florian Felten and Umut Ucak and Hicham Azmani and Gao Peng and Willem Röpke and Hendrik Baier and Patrick Mannion and Diederik M. Roijers and Jordan K. Terry and El-Ghazali Talbi and Grégoire Danoy and Ann Nowé and Roxana Rădulescu},
      year={2024},
      eprint={2407.16312},
      archivePrefix={arXiv},
      primaryClass={cs.MA},
      url={https://arxiv.org/abs/2407.16312},
}
```
<!-- end citation -->

## Development
### Setup pre-commit
Clone the repo and run `pre-commit install` to set up the pre-commit hooks.
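
For example (assuming `pre-commit` is available, e.g. via `pip install pre-commit`):

```bash
git clone https://github.com/Farama-Foundation/momaland.git
cd momaland
pip install pre-commit
pre-commit install
```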

            
