# dm-meltingpot

| Field | Value |
| --- | --- |
| Name | dm-meltingpot |
| Version | 2.3.1 |
| Home page | https://github.com/google-deepmind/meltingpot |
| Summary | A suite of test scenarios for multi-agent reinforcement learning. |
| Upload time | 2024-07-02 09:12:19 |
| Author | DeepMind |
| Requires Python | >=3.11 |
| License | Apache 2.0 |
| Keywords | multi-agent, reinforcement-learning, python, machine-learning |
# Melting Pot

*A suite of test scenarios for multi-agent reinforcement learning.*



<div align="center">
  <img src="https://github.com/google-deepmind/meltingpot/blob/main/docs/images/meltingpot_montage.gif?raw=true"
       alt="Melting Pot substrates"
       height="250" width="250" />
</div>

[Melting Pot 2.0 Tech Report](https://arxiv.org/abs/2211.13746)
[Melting Pot Contest at NeurIPS 2023](https://www.aicrowd.com/challenges/meltingpot-challenge-2023)

## About

Melting Pot assesses generalization to novel social situations involving both
familiar and unfamiliar individuals, and has been designed to test a broad range
of social interactions such as: cooperation, competition, deception,
reciprocation, trust, stubbornness and so on. Melting Pot offers researchers a
set of over 50 multi-agent reinforcement learning _substrates_ (multi-agent
games) on which to train agents, and over 256 unique test _scenarios_ on which
to evaluate these trained agents. The performance of agents on these held-out
test scenarios quantifies whether agents:

*   perform well across a range of social situations where individuals are
    interdependent, and
*   interact effectively with unfamiliar individuals not seen during training.

The resulting score can then be used to rank different multi-agent RL algorithms
by their ability to generalize to novel social situations.
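The ranking procedure described above can be sketched in plain Python: average each algorithm's per-scenario score over the held-out test scenarios, then sort. The algorithm names, scenario names, and scores below are made-up placeholders for illustration, not real Melting Pot results.

```python
# Sketch: rank algorithms by their mean score across held-out test scenarios.
# All names and numbers are illustrative placeholders.

scores = {
    # algorithm -> {scenario -> score on that held-out scenario}
    "algo_a": {"scenario_0": 0.8, "scenario_1": 0.4, "scenario_2": 0.6},
    "algo_b": {"scenario_0": 0.5, "scenario_1": 0.9, "scenario_2": 0.7},
    "algo_c": {"scenario_0": 0.2, "scenario_1": 0.3, "scenario_2": 0.4},
}

def rank_algorithms(scores):
    """Return algorithm names sorted by mean scenario score, best first."""
    means = {
        algo: sum(per_scenario.values()) / len(per_scenario)
        for algo, per_scenario in scores.items()
    }
    return sorted(means, key=means.get, reverse=True)

print(rank_algorithms(scores))  # → ['algo_b', 'algo_a', 'algo_c']
```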

We hope Melting Pot will become a standard benchmark for multi-agent
reinforcement learning. We plan to maintain it, and will be extending it in the
coming years to cover more social interactions and generalization scenarios.

If you are interested in extending Melting Pot, please refer to the
[Extending Melting Pot](https://github.com/google-deepmind/meltingpot/blob/main/docs/extending.md) documentation.

## Installation

### `pip` install

[Melting Pot is available on PyPI](https://pypi.python.org/pypi/dm-meltingpot)
and can be installed using:

```shell
pip install dm-meltingpot
```

NOTE: Melting Pot is built on top of [DeepMind Lab2D](https://github.com/google-deepmind/lab2d)
which is distributed as pre-built wheels. If there is no appropriate wheel for
`dmlab2d`, you will need to build it from source (see
[the `dmlab2d` `README.md`](https://github.com/google-deepmind/lab2d/blob/main/README.md)
for details).

### Manual install

If you want to work on the Melting Pot source code, you can perform an editable
installation as follows:

1.  Clone Melting Pot:

    ```shell
    git clone -b main https://github.com/google-deepmind/meltingpot
    cd meltingpot
    ```

2.  (Optional) Activate a virtual environment, e.g.:

    ```shell
    python -m venv venv
    source venv/bin/activate
    ```

3.  Install Melting Pot:

    ```shell
    pip install --editable ".[dev]"
    ```

4.  (Optional) Test the installation:

    ```shell
    pytest --pyargs meltingpot
    ```

### Devcontainer (x86 only)

*NOTE: This devcontainer only works on x86 platforms. On arm64 (e.g. Apple
silicon Macs), you will have to follow the manual installation steps.*

This project includes a pre-configured development environment
([devcontainer](https://containers.dev)).

You can launch a working development environment with one click using, for
example, [GitHub Codespaces](https://github.com/features/codespaces) or the
[VSCode Containers](https://code.visualstudio.com/docs/remote/containers-tutorial)
extension.

#### CUDA support

To enable CUDA support (required for GPU training), make sure you have the
[nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html)
package installed, and then run Docker with the `--gpus all` flag. Note that
for GitHub Codespaces this isn't necessary, as it is done for you
automatically.

## Example usage

### Evaluation
The [evaluation](https://github.com/google-deepmind/meltingpot/blob/main/meltingpot/utils/evaluation/evaluation.py) library can be used
to evaluate [SavedModel](https://www.tensorflow.org/guide/saved_model)s
trained on Melting Pot substrates.

Evaluation results from the [Melting Pot 2.0 Tech Report](https://arxiv.org/abs/2211.13746)
can be viewed in the [Evaluation Notebook](https://github.com/google-deepmind/meltingpot/blob/main/notebooks/evaluation_results.ipynb).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepmind/meltingpot/blob/main/notebooks/evaluation_results.ipynb)

### Interacting with the substrates

You can try out the substrates interactively with the
[human_players](https://github.com/google-deepmind/meltingpot/blob/main/meltingpot/human_players) scripts. For example, to play
the `clean_up` substrate, you can run:

```shell
python meltingpot/human_players/play_clean_up.py
```

You can move around with the `W`, `A`, `S`, and `D` keys, turn with `Q` and
`E`, fire the zapper with `1`, and fire the cleaning beam with `2`. You can
switch between players with `TAB`. Other substrates are available in the
[human_players](https://github.com/google-deepmind/meltingpot/blob/main/meltingpot/human_players) directory. Some have multiple
variants, which you can select with the `--level_name` flag.
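Outside the interactive scripts, a substrate behaves like a standard multi-agent environment: each step consumes one action per player and returns one observation and one reward per player. The loop below illustrates that shape with a tiny stand-in environment; `ToySubstrate` is a mock written for this sketch, not the real `meltingpot` API.

```python
import random

class ToySubstrate:
    """Mock stand-in for a substrate: N players, fixed-length episode."""

    def __init__(self, num_players=2, episode_length=5):
        self.num_players = num_players
        self.episode_length = episode_length
        self._t = 0

    def reset(self):
        self._t = 0
        return [{"t": 0} for _ in range(self.num_players)]  # one obs per player

    def step(self, actions):
        assert len(actions) == self.num_players  # one action per player
        self._t += 1
        observations = [{"t": self._t} for _ in range(self.num_players)]
        rewards = [random.random() for _ in range(self.num_players)]
        done = self._t >= self.episode_length
        return observations, rewards, done

# Run one episode with a uniform-random policy for every player.
env = ToySubstrate()
obs = env.reset()
totals = [0.0] * env.num_players
done = False
while not done:
    actions = [random.randrange(8) for _ in range(env.num_players)]
    obs, rewards, done = env.step(actions)
    totals = [t + r for t, r in zip(totals, rewards)]
print(totals)  # cumulative per-player return for the episode
```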

### Training agents

We provide two example scripts: one using
[RLlib](https://github.com/ray-project/ray), and another using
[PettingZoo](https://github.com/Farama-Foundation/PettingZoo) with
[Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) (SB3). Note
that Melting Pot is agnostic to how you train your agents, and as such, these
scripts are not meant to be a suggestion on how to achieve good scores in the
task suite.

#### RLlib

This example uses RLlib to train agents in
self-play on a Melting Pot substrate.

First you will need to install the dependencies needed by the examples:

```shell
cd <meltingpot_root>
pip install -r examples/requirements.txt
```

Then you can run the training experiment using:

```shell
cd examples/rllib
python self_play_train.py
```

#### PettingZoo and Stable-Baselines3

This example uses a PettingZoo wrapper with a fully parameter shared PPO agent
from SB3.

The PettingZoo wrapper can be used separately from SB3 and
can be found [here](https://github.com/google-deepmind/meltingpot/blob/main/examples/pettingzoo/utils.py).

```shell
cd <meltingpot_root>
pip install -r examples/requirements.txt
cd examples/pettingzoo
python sb3_train.py
```
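"Fully parameter shared" means every agent acts through a single policy with one set of weights; agents behave differently only because they see different observations. The sketch below shows that idea with a trivial hand-written policy function standing in for SB3's PPO network; the agent names and observations are illustrative.

```python
def shared_policy(observation):
    """One policy (one set of 'weights'), queried once per agent."""
    # Trivial stand-in for a neural network: act on observation parity.
    return observation % 2

agents = ["player_0", "player_1", "player_2"]
observations = {"player_0": 4, "player_1": 7, "player_2": 10}

# Every agent calls the *same* function (in SB3, the same parameters);
# actions differ only because observations differ.
actions = {agent: shared_policy(observations[agent]) for agent in agents}
print(actions)  # → {'player_0': 0, 'player_1': 1, 'player_2': 0}
```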

## Documentation

Full documentation is available [here](https://github.com/google-deepmind/meltingpot/blob/main/docs/index.md).

## Citing Melting Pot

If you use Melting Pot in your work, please cite the accompanying article:

```bibtex
@inproceedings{leibo2021meltingpot,
    title={Scalable Evaluation of Multi-Agent Reinforcement Learning with
           Melting Pot},
    author={Joel Z. Leibo and Edgar Du\'e\~nez-Guzm\'an and Alexander Sasha
            Vezhnevets and John P. Agapiou and Peter Sunehag and Raphael Koster
            and Jayd Matyas and Charles Beattie and Igor Mordatch and Thore
            Graepel},
    year={2021},
    booktitle={International Conference on Machine Learning},
    organization={PMLR},
    url={https://doi.org/10.48550/arXiv.2107.06857},
    doi={10.48550/arXiv.2107.06857}
}
```

## Disclaimer

This is not an officially supported Google product.
