dm-robotics-moma


Name: dm-robotics-moma
Version: 0.8.1
Home page: https://github.com/deepmind/dm_robotics/tree/main/py/moma
Summary: Tools for authoring robotic manipulation tasks.
Upload time: 2024-06-20 10:34:25
Author: DeepMind
Requires Python: >=3.7, <3.13
License: Apache 2.0
# Modular Manipulation (MoMa)

DeepMind's library for building modular robotic manipulation environments, both
in simulation and on real robots.

## Quick Start

A quick-start introductory tutorial can be found in this Colab:
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/deepmind/dm_robotics/blob/main/py/moma/moma_tutorial.ipynb)

## Overview

MoMa builds on DeepMind's [Composer library] \(part of [`dm_control`]\).
Composer helps build simulation environments for reinforcement learning,
providing tools to define actions, observations, and rewards based on MuJoCo
entities.

MoMa wraps Composer to make it easy to build manipulation environments, and the
abstractions MoMa introduces allow these environments to work in both
simulation and the real world.

## Important Abstractions

MoMa is designed to be modular with respect to the robots in an environment,
whether running in simulation or reality, and the task-specific game logic for
a single RL environment.

MoMa does this by separating an RL environment into two components: the physical
setup and the task logic.

![Abstractions diagram](./doc/images/moma_abstractions.png "MoMa Abstractions")

### Hardware Abstraction

MoMa enforces that the only way to interact with an RL environment is via a
set of sensors and effectors, which define the input-output interface of the
environment.

[Sensors] provide an abstraction for real hardware sensors, but they can be
used in simulation as well. They read in information from the simulated or
real world and produce the observations in an RL environment. The [`sensors`]
package provides several ready-to-use sensors, for example sensors that collect
robot joint positions, object positions, and gripper state.
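
To make the interface concrete, here is a minimal sketch of a custom sensor
that reports an arm's joint positions. The method names (`initialize_episode`,
`observables`, `name`) and the use of `dm_control` observables are assumptions
about the interface in [Sensors], so treat this as an illustration of the shape
of a sensor rather than the definitive API; the `arm` argument is a
hypothetical entity with a `joints` attribute.

```python
# Illustrative sketch of a custom MoMa sensor. Method names are assumptions;
# see sensor.py for the actual abstract interface.
import numpy as np
from dm_control.composer.observation import observable
from dm_robotics.moma import sensor as moma_sensor


class JointPositionSensor(moma_sensor.Sensor):
  """Reports a robot arm's joint positions as an observation."""

  def __init__(self, arm, name: str = 'arm'):
    self._arm = arm  # Hypothetical arm entity exposing a `joints` attribute.
    self._name = name
    joint_obs = observable.Generic(self._joint_pos)
    joint_obs.enabled = True
    self._observables = {f'{name}_joint_pos': joint_obs}

  def initialize_episode(self, physics, random_state):
    pass  # This sensor has no per-episode state to reset.

  @property
  def observables(self):
    return self._observables

  @property
  def name(self) -> str:
    return self._name

  def _joint_pos(self, physics) -> np.ndarray:
    # In sim this reads MuJoCo joint positions; a real-robot sensor would
    # query the robot driver instead.
    return physics.bind(self._arm.joints).qpos
```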

[Effectors] consume the actions in an RL environment and actuate robots, again
either in simulation or the real world. The [`effectors`] package provides
several commonly-used effectors.
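
A matching sketch of a custom effector that consumes joint-velocity commands is
shown below. Again, the method names (`action_spec`, `set_control`, `prefix`)
are assumptions about the interface in [Effectors], and the `arm.actuators`
attribute is hypothetical; a real-robot effector would forward the same command
to a driver instead of writing to MuJoCo.

```python
# Illustrative sketch of a custom MoMa effector. Method names are assumptions;
# see effector.py for the actual abstract interface.
import numpy as np
from dm_env import specs
from dm_robotics.moma import effector as moma_effector


class JointVelocityEffector(moma_effector.Effector):
  """Applies joint-velocity commands to a (sim or real) robot arm."""

  def __init__(self, arm, prefix: str = 'arm_joint_vel'):
    self._arm = arm  # Hypothetical arm exposing `joints` and `actuators`.
    self._prefix = prefix

  def initialize_episode(self, physics, random_state):
    pass  # Nothing to reset between episodes.

  def action_spec(self, physics) -> specs.BoundedArray:
    num_joints = len(self._arm.joints)
    return specs.BoundedArray(
        shape=(num_joints,), dtype=np.float32,
        minimum=-1.0, maximum=1.0, name=self._prefix)

  def set_control(self, physics, command: np.ndarray) -> None:
    # In sim this writes to the MuJoCo actuators bound to the arm; a real
    # effector would send the command to the robot's control interface.
    physics.bind(self._arm.actuators).ctrl = command

  @property
  def prefix(self) -> str:
    return self._prefix
```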

At MoMa's core is [`BaseTask`], a variant of `composer.Task` which contains a
set of sensors and effectors. With this abstraction, `BaseTask` can encapsulate
a manipulation environment for any robot arm(s) and gripper(s), in either
simulation or reality.

![Hardware abstractions diagram](./doc/images/hardware_abstraction.png "Hardware Abstractions")

### Task Logic

`BaseTask` represents a "physical" environment (e.g. a single Sawyer
arm and Robotiq gripper with 2 cameras, running either in simulation or
reality), but that alone doesn't define a _complete_ RL environment. For an RL
environment, we need to define the agent's actions, the observations, and the
rewards.

We use two abstractions from DeepMind's [AgentFlow] to define these.

1. [`agentflow.ActionSpace`] maps the agent's actions to a new space or to
   relevant effectors in the `BaseTask`.

2. [`agentflow.TimestepPreprocessor`] modifies the base RL timestep before it
   is returned to the agent. Preprocessors can modify observations, add
   rewards, etc., and they can be chained together. The name "timestep
   preprocessor" comes from the fact that the timestep is preprocessed before
   being passed on to the agent. The [`agentflow.preprocessors`] package
   contains many useful, ready-to-use timestep preprocessors; a custom
   preprocessor is sketched below.

Together, the `ActionSpace` and `TimestepPreprocessor` define the "game logic"
for an RL environment, and they are housed inside an [`agentflow.SubTask`].
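
For illustration, here is a rough sketch of a custom timestep preprocessor that
adds a sparse reward computed from one observation. The base class, the
`_process_impl`/`_output_spec` method names, and the namedtuple-style
`_replace` call are assumptions about the AgentFlow interface rather than
confirmed API; check [`agentflow.TimestepPreprocessor`] and the ready-made
[`agentflow.preprocessors`] before relying on it.

```python
# Conceptual sketch only: base-class and method names are assumptions about
# the interface in agentflow/preprocessors/timestep_preprocessor.py.
import numpy as np
from dm_robotics.agentflow.preprocessors import timestep_preprocessor as tsp


class SparseLiftReward(tsp.TimestepPreprocessor):
  """Sets reward to 1.0 once a tracked object is above a target height."""

  def __init__(self, obs_key: str = 'block_height', threshold: float = 0.1):
    super().__init__()
    self._obs_key = obs_key  # Hypothetical observation produced by a sensor.
    self._threshold = threshold

  def _process_impl(self, timestep):
    # Compute a sparse reward from one observation and write it into the
    # timestep that will be passed on to the agent.
    reward = np.float32(timestep.observation[self._obs_key] > self._threshold)
    return timestep._replace(reward=reward)

  def _output_spec(self, input_spec):
    # Observations are unchanged; only the reward value is overwritten.
    return input_spec
```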

![Task logic diagram](./doc/images/actions_and_tsps.png "Task Logic")

If you have a fixed physical setup and you just want to change the task, all
you need to change is the `agentflow.SubTask`. Likewise, if you have a single
task but want to switch the hardware, or switch between sim and real, you can
keep the `agentflow.SubTask` and change only the `BaseTask`. See the AgentFlow
documentation for more information.

## Putting It All Together

### Single Task

In cases where there is only one objective for the RL agent (i.e. one instance
of the game logic), you can use MoMa's [SubtaskEnvironment], which exposes a
single `agentflow.SubTask` with DeepMind's standard RL environment interface,
[dm_env.Environment].
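
Because a `SubtaskEnvironment` is a regular `dm_env.Environment`, driving it
looks like any other dm_env loop. Below is a minimal sketch, assuming `env` has
already been built (e.g. with the builder pattern described later) and that its
action spec is a single bounded array.

```python
# Minimal dm_env interaction loop with a MoMa environment. Assumes `env` was
# already constructed (e.g. via subtask_env_builder) and uses random actions.
import numpy as np

spec = env.action_spec()
timestep = env.reset()
while not timestep.last():
  # Replace this random action with a real agent's policy.
  action = np.random.uniform(
      spec.minimum, spec.maximum, size=spec.shape).astype(spec.dtype)
  timestep = env.step(action)
  print(timestep.reward, list(timestep.observation))
```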

The diagram below presents the components of a MoMa subtask environment, the
flow of information between them, and links to the relevant code.

![SubtaskEnv diagram](./doc/images/moma_logic_flow.png "RL loop diagram")

1.  The agent sends an action to a MoMa `SubTaskEnvironment`, which serves as a
    container for the different components used in a task. The action is passed
    to an AgentFlow `ActionSpace` that projects the agent's action to a new
    action space that matches the spec of the underlying effector(s).

2.  The projected action is given to the effectors. This allows us to use
    either sim or real robots for the same task.

3.  The effectors then actuate the robots, either in simulation or in reality.

4.  The sensors then collect information from the environment (sim or real).
    Like effectors, sensors are an abstraction layer that works for both.

5.  The `BaseTask` then passes the timestep to an AgentFlow
    `TimestepPreprocessor`. The preprocessor can change the timestep's
    observations and rewards, and it can terminate an RL episode if some
    termination criteria are met.

6.  The modified timestep is then passed on to the agent.

### Multiple Tasks

Given a single `BaseTask`, which represents a collection of robots and sensors,
we can support multiple RL tasks and "flow" between them. Each RL task is an
[`agentflow.SubTask`], containing its own "game logic" specifying the agent's
action space, observations, rewards, and episode termination criteria.

AgentFlow contains utilities to specify these different subtasks and define
how the agent can move from subtask to subtask. Please see the AgentFlow docs
for more information.

## Creating a Task with MoMa

### Creating a Task in a New Environment

To build a new MoMa environment, you can use the [subtask_env_builder]
pattern. An example of this pattern can be found in our [example task] and in
the tutorial linked at the top.
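
For a rough idea of what this looks like, a hedged sketch follows. The builder
method names and the `task`, `action_space`, and `preprocessor` objects are
assumptions modeled loosely on the example task, so treat [subtask_env_builder]
and the [example task] as the source of truth.

```python
# Sketch of the subtask_env_builder pattern. Method names and the `task`,
# `action_space`, and `preprocessor` objects are assumptions; see
# subtask_env_builder.py and example_task.py for the authoritative version.
from dm_robotics.moma import subtask_env_builder

builder = subtask_env_builder.SubtaskEnvBuilder()
builder.set_task(task)                  # BaseTask: robots, sensors, effectors.
builder.build_base_env()                # Wrap the task in a composer environment.
builder.set_action_space(action_space)  # AgentFlow ActionSpace over the effectors.
builder.add_preprocessor(preprocessor)  # Timestep preprocessors, applied in order.
with builder.build() as env:            # A standard dm_env.Environment.
  timestep = env.reset()
```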


[Composer library]: https://deepmind.google/discover/blog/dm-control-software-and-tasks-for-continuous-control/
[`dm_control`]: https://github.com/deepmind/dm_control/tree/master
[Sensors]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/sensor.py
[`sensors`]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/sensors/
[Effectors]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/effector.py
[`effectors`]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/effectors/
[`BaseTask`]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/base_task.py
[SubtaskEnvironment]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/subtask_env.py
[dm_env.Environment]: https://github.com/deepmind/dm_env/tree/master
[AgentFlow]: https://github.com/deepmind/dm_robotics/tree/main/py/agentflow/README.md
[`agentflow.ActionSpace`]: https://github.com/deepmind/dm_robotics/tree/main/py/agentflow/action_spaces.py
[`agentflow.TimestepPreprocessor`]: https://github.com/deepmind/dm_robotics/tree/main/py/agentflow/preprocessors/timestep_preprocessor.py
[`agentflow.SubTask`]: https://github.com/deepmind/dm_robotics/tree/main/py/agentflow/subtask.py
[`agentflow.preprocessors`]: https://github.com/deepmind/dm_robotics/tree/main/py/agentflow/preprocessors/
[subtask_env_builder]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/subtask_env_builder.py
[example task]: https://github.com/deepmind/dm_robotics/tree/main/py/moma/tasks/example_task/example_task.py

            
