gym-so100

Name: gym-so100
Version: 0.1.2
Summary: A gym environment for the SO100 robot
Author: Xiaoxuan Liu
Requires Python: <4.0, >=3.10
License: Apache-2.0
Keywords: robotics, deep reinforcement learning, so100, environment, gym, gymnasium, dm-control, mujoco
Upload time: 2024-12-30 04:19:34
# gym-so100

A gym environment for [SO-ARM100](https://github.com/TheRobotStudio/SO-ARM100).

<img src="./example_episode_0.gif" width="50%" alt="ACT SO100EETransferCube-v0 policy on SO100 env"/>


## Installation

Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):
```bash
conda create -y -n so100 python=3.10 && conda activate so100
```

Install gym-so100 from a clone of this repository (it is also published on PyPI as `gym-so100`):
```bash
pip install -e .
```


## Quickstart

### 1. Check the environment

```python
# example.py
import imageio
import gymnasium as gym
import numpy as np
import gym_so100

env = gym.make("gym_so100/SO100Insertion-v0")
observation, info = env.reset()
frames = []

for _ in range(1000):
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    image = env.render()
    frames.append(image)

    if terminated or truncated:
        observation, info = env.reset()

env.close()
imageio.mimsave("example.mp4", np.stack(frames), fps=25)
```
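
Writing the `.mp4` requires an ffmpeg backend for `imageio`, which may not be pulled in automatically. A minimal way to run the snippet above, assuming it is saved as `example.py` as in its first comment:

```bash
# Install the ffmpeg plugin for imageio (needed for mp4 output), then run the script.
pip install "imageio[ffmpeg]"
python example.py
```
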
### 2. Run the scripted sim task example

```python
from gym_so100.policy import InsertionPolicy, PickAndTransferPolicy
from tests.test_policy import test_policy

test_policy("SO100EETransferCube-v0", PickAndTransferPolicy, True)
# test_policy("SO100EEInsertion-v0", InsertionPolicy, True)
```
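
Note that the snippet above is Python, even though it reuses the repository's test helper. One way to run it (a sketch: `scripted_example.py` is a hypothetical filename for the snippet, and it must be executed from the repository root so that the `tests/` package is importable):

```bash
# Run from the repository root so that `tests.test_policy` can be imported.
# `scripted_example.py` is a hypothetical filename for the snippet above.
python scripted_example.py
```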

## Description
The SO100 *(also known as SO-ARM100)* environment.

Two tasks are available:
- TransferCubeTask: The right arm needs to first pick up the red cube lying on the table, then place it inside the gripper of the other arm.
- InsertionTask: The left and right arms need to pick up the socket and peg respectively, and then insert in mid-air so the peg touches the “pins” inside the socket.

### Action Space
The action space consists of continuous values for each arm and gripper, resulting in a 12-dimensional vector:
- Five values for each arm's joint positions (absolute values).
- One value for each gripper's position, normalized between 0 (closed) and 1 (open).
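
A quick sanity check of these dimensions (a sketch; which indices belong to which arm or gripper is not specified above, so the code only inspects shapes and bounds):

```python
# A minimal sketch: inspect the 12-dimensional action space described above.
import gymnasium as gym
import gym_so100

env = gym.make("gym_so100/SO100Insertion-v0")
observation, info = env.reset()

print(env.action_space.shape)  # expected: (12,) -- 5 joint values + 1 gripper value per arm
print(env.action_space.low)    # per-dimension lower bounds (grippers documented as 0 = closed)
print(env.action_space.high)   # per-dimension upper bounds (grippers documented as 1 = open)

# Sampled actions always respect these bounds.
action = env.action_space.sample()
observation, reward, terminated, truncated, info = env.step(action)
env.close()
```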

### Observation Space
Observations are provided as a dictionary with the following keys:

- `qpos` and `qvel`: Position and velocity data for the arms and grippers.
- `images`: Camera feeds from different angles.
- `env_state`: Additional environment state information, such as positions of the peg and sockets.
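
Which of these keys are actually present depends on the `obs_type` argument (see Arguments below), so a small inspection sketch is often the quickest way to see what a given configuration returns:

```python
# A minimal sketch: print the structure of the observation returned by reset().
import gymnasium as gym
import numpy as np
import gym_so100

env = gym.make("gym_so100/SO100Insertion-v0")
observation, info = env.reset()

def describe(obs, indent=""):
    """Recursively print the keys and array shapes of a (possibly nested) observation."""
    if isinstance(obs, dict):
        for key, value in obs.items():
            print(f"{indent}{key}:")
            describe(value, indent + "  ")
    else:
        arr = np.asarray(obs)
        print(f"{indent}shape={arr.shape}, dtype={arr.dtype}")

describe(observation)
env.close()
```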

### Rewards
- TransferCubeTask:
    - 1 point for holding the box with the right gripper.
    - 2 points if the box is lifted with the right gripper.
    - 3 points for transferring the box to the left gripper.
    - 4 points for a successful transfer without touching the table.
- InsertionTask:
    - 1 point for touching both the peg and a socket with the grippers.
    - 2 points for grasping both without dropping them.
    - 3 points if the peg is aligned with and touching the socket.
    - 4 points for successful insertion of the peg into the socket.

### Success Criteria
Achieving the maximum reward of 4 points more than 10 times within the last 50 steps.
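
A sketch of how that criterion could be tracked from the caller's side (the environment itself decides termination; this is only an illustration of the bookkeeping):

```python
# A minimal sketch: count how often the maximum reward (4) occurs in the last 50 steps.
from collections import deque

import gymnasium as gym
import gym_so100

env = gym.make("gym_so100/SO100Insertion-v0")
observation, info = env.reset()
recent_rewards = deque(maxlen=50)  # sliding window over the last 50 rewards

for _ in range(400):
    action = env.action_space.sample()  # replace with a trained policy
    observation, reward, terminated, truncated, info = env.step(action)
    recent_rewards.append(reward)

    if sum(r == 4 for r in recent_rewards) > 10:
        print("Success criterion met")
        break
    if terminated or truncated:
        observation, info = env.reset()
        recent_rewards.clear()

env.close()
```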

### Starting State
The arms start at the home position, and the items (block, peg, socket) start at random positions and angles.

### Arguments

```python
>>> import gymnasium as gym
>>> import gym_so100
>>> env = gym.make("gym_so100/SO100Insertion-v0", obs_type="pixels", render_mode="rgb_array")
>>> env
<TimeLimit<OrderEnforcing<PassiveEnvChecker<SO100Env<gym_so100/SO100Insertion-v0>>>>>
```

* `obs_type`: (str) The observation type. Can be either `pixels` or `pixels_agent_pos`. Default is `pixels`.

* `render_mode`: (str) The rendering mode. Only `rgb_array` is supported for now.

* `observation_width`: (int) The width of the observed image. Default is `640`.

* `observation_height`: (int) The height of the observed image. Default is `480`.

* `visualization_width`: (int) The width of the visualized image. Default is `640`.

* `visualization_height`: (int) The height of the visualized image. Default is `480`.
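
For example, a lower-resolution observation stream combined with full-resolution rendered frames could be requested like this (a sketch using only the keyword arguments listed above):

```python
import gymnasium as gym
import gym_so100

# Smaller observed images for the policy, full-size rendered frames for videos.
env = gym.make(
    "gym_so100/SO100Insertion-v0",
    obs_type="pixels_agent_pos",
    render_mode="rgb_array",
    observation_width=320,
    observation_height=240,
    visualization_width=640,
    visualization_height=480,
)
observation, info = env.reset()
frame = env.render()  # rendered at the visualization resolution
env.close()
```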


# LeRobot Dataset Creation

```bash
# 1. Clone the lerobot repo and install it in editable mode.
# Note: `pip install lerobot` does not include the `LeRobotDataset` module.
git clone https://github.com/huggingface/lerobot.git --single-branch
cd lerobot
pip install -e .

# 2. Go back to this repo and run the script to create a dataset.
# Note: update the parameters to your own.
cd ..  # adjust the path if lerobot was cloned elsewhere
python record_lerobot_dataset.py --user-id xuaner233 --root dataset --num-episodes 1
```
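
The recorded episodes can then be loaded back with lerobot's `LeRobotDataset` class. A sketch, assuming lerobot's current dataset API; the repo id below is a placeholder, since the actual id is chosen by `record_lerobot_dataset.py` from the `--user-id` argument:

```python
# A sketch: load the recorded dataset with lerobot.
# "xuaner233/so100_sim" is a placeholder repo id; use whatever
# record_lerobot_dataset.py actually created under the --root directory.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("xuaner233/so100_sim", root="dataset")
print(len(dataset))       # number of frames
print(dataset[0].keys())  # observation/action keys of the first frame
```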


## Contribute

Instead of using `pip` directly, we use `poetry` for development to easily track our dependencies.
If you don't have it already, follow the [instructions](https://python-poetry.org/docs/#installation) to install it.

Install the project with dev dependencies:
```bash
poetry install --all-extras
```


### Follow our style

```bash
# install pre-commit hooks
pre-commit install

# apply style and linter checks on staged files
pre-commit
```


## Acknowledgment

gym-so100 is adapted from [gym-aloha](https://github.com/huggingface/gym-aloha).


            
