lerobot

Name: lerobot
Version: 0.3.3
Summary: 🤗 LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch
Upload time: 2025-08-06 18:41:35
Requires Python: >=3.10
License: Apache-2.0
Keywords: lerobot, huggingface, robotics, machine learning, artificial intelligence

---
            <p align="center">
  <img alt="LeRobot, Hugging Face Robotics Library" src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/lerobot-logo-thumbnail.png" width="100%">
  <br/>
  <br/>
</p>

<div align="center">

[![Tests](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml/badge.svg?branch=main)](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml?query=branch%3Amain)
[![Python versions](https://img.shields.io/pypi/pyversions/lerobot)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/huggingface/lerobot/blob/main/LICENSE)
[![Status](https://img.shields.io/pypi/status/lerobot)](https://pypi.org/project/lerobot/)
[![Version](https://img.shields.io/pypi/v/lerobot)](https://pypi.org/project/lerobot/)
[![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.1-ff69b4.svg)](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
[![Discord](https://dcbadge.vercel.app/api/server/C5P34WJ68S?style=flat)](https://discord.gg/s3KuuzsPFb)

<!-- [![Coverage](https://codecov.io/gh/huggingface/lerobot/branch/main/graph/badge.svg?token=TODO)](https://codecov.io/gh/huggingface/lerobot) -->

</div>

<h2 align="center">
    <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
        Build Your Own HopeJR Robot!</a></p>
</h2>

<div align="center">
  <img
    src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/hope_jr/hopejr.png"
    alt="HopeJR robot"
    title="HopeJR robot"
    width="60%"
  />

  <p><strong>Meet HopeJR – a humanoid robot arm and hand for dexterous manipulation!</strong></p>
  <p>Control it with exoskeletons and gloves for precise hand movements.</p>
  <p>Perfect for advanced manipulation tasks! 🤖</p>

  <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
      See the full HopeJR tutorial here.</a></p>
</div>

<br/>

<h2 align="center">
    <p><a href="https://huggingface.co/docs/lerobot/so101">
        Build Your Own SO-101 Robot!</a></p>
</h2>

<div align="center">
  <table>
    <tr>
      <td align="center"><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/so101/so101.webp" alt="SO-101 follower arm" title="SO-101 follower arm" width="90%"/></td>
      <td align="center"><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/so101/so101-leader.webp" alt="SO-101 leader arm" title="SO-101 leader arm" width="90%"/></td>
    </tr>
  </table>

  <p><strong>Meet the SO-101, the updated SO-100 – just €114 per arm!</strong></p>
  <p>Train it in minutes with a few simple moves on your laptop.</p>
  <p>Then sit back and watch your creation act autonomously! 🤯</p>

  <p><a href="https://huggingface.co/docs/lerobot/so101">
      See the full SO-101 tutorial here.</a></p>

  <p>Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!</p>
  <p>Check out the <a href="https://huggingface.co/docs/lerobot/lekiwi">LeKiwi tutorial</a> and bring your robot to life on wheels.</p>

  <img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/lekiwi/kiwi.webp" alt="LeKiwi mobile robot" title="LeKiwi mobile robot" width="50%">
</div>

<br/>

<h3 align="center">
    <p>LeRobot: State-of-the-art AI for real-world robotics</p>
</h3>

---

🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.

🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, with a focus on imitation learning and reinforcement learning.

🤗 LeRobot already provides a set of pretrained models, datasets with human-collected demonstrations, and simulation environments, so you can get started without assembling a robot. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.

🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)

#### Examples of pretrained models on simulation environments

<table>
  <tr>
    <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
    <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
    <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
  </tr>
  <tr>
    <td align="center">ACT policy on ALOHA env</td>
    <td align="center">TDMPC policy on SimXArm env</td>
    <td align="center">Diffusion policy on PushT env</td>
  </tr>
</table>

## Installation

LeRobot works with Python 3.10+ and PyTorch 2.2+.

### Environment Setup

Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniconda`](https://docs.anaconda.com/free/miniconda/index.html):

```bash
conda create -y -n lerobot python=3.10
conda activate lerobot
```

When using `miniconda`, install `ffmpeg` in your environment:

```bash
conda install ffmpeg -c conda-forge
```

> **NOTE:** This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
>
> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
>
> ```bash
> conda install ffmpeg=7.1.1 -c conda-forge
> ```
>
> - _[On Linux only]_ Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.

### Install LeRobot 🤗

#### From Source

First, clone the repository and navigate into the directory:

```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```

Then, install the library in editable mode. This is useful if you plan to contribute to the code.

```bash
pip install -e .
```

> **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run:
> `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)

For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:

- [aloha](https://github.com/huggingface/gym-aloha)
- [xarm](https://github.com/huggingface/gym-xarm)
- [pusht](https://github.com/huggingface/gym-pusht)

For instance, to install 🤗 LeRobot with aloha and pusht, use:

```bash
pip install -e ".[aloha, pusht]"
```

### Installation from PyPI

**Core Library:**
Install the base package with:

```bash
pip install lerobot
```

_This installs only the default dependencies._

**Extra Features:**
To install additional functionality, use one of the following:

```bash
pip install 'lerobot[all]'          # All available features
pip install 'lerobot[aloha,pusht]'  # Specific features (Aloha & Pusht)
pip install 'lerobot[feetech]'      # Feetech motor support
```

_Replace `[...]` with your desired features._

**Available Tags:**
For a full list of optional dependencies, see:
https://pypi.org/project/lerobot/

### Weights & Biases

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with

```bash
wandb login
```

(note: you will also need to enable WandB in the configuration. See below.)

### Visualize datasets

Check out [example 1](https://github.com/huggingface/lerobot/blob/main/examples/1_load_lerobot_dataset.py) that illustrates how to use our dataset class which automatically downloads data from the Hugging Face hub.
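
As a quick orientation, a minimal sketch of that workflow is shown below. The import path is an assumption and may differ between library versions; example 1 is the canonical reference.

```python
# Minimal sketch: load a dataset from the hub and iterate over it with PyTorch.
# The import path below is an assumption; check example 1 for the exact one.
import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Downloads the dataset from the Hugging Face hub on first use and caches it locally.
dataset = LeRobotDataset("lerobot/pusht")

# Each item is a dict of PyTorch tensors keyed by feature name
# (e.g. "observation.state", "action", "timestamp").
frame = dataset[0]
print(frame.keys())

# Since it behaves like a regular PyTorch dataset, it can be wrapped in a DataLoader.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
```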

You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:

```bash
python -m lerobot.scripts.visualize_dataset \
    --repo-id lerobot/pusht \
    --episode-index 0
```

or from a dataset in a local folder with the `--root` option and the `--local-files-only` flag (in the following case, the dataset will be looked for in `./my_local_data_dir/lerobot/pusht`):

```bash
python -m lerobot.scripts.visualize_dataset \
    --repo-id lerobot/pusht \
    --root ./my_local_data_dir \
    --local-files-only 1 \
    --episode-index 0
```

It will open `rerun.io` and display the camera streams, robot states and actions, like this:

https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144

Our script can also visualize datasets stored on a remote server. See `python -m lerobot.scripts.visualize_dataset --help` for more instructions.

### The `LeRobotDataset` format

A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or a local folder simply with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")` and can be indexed into like any Hugging Face and PyTorch dataset. For instance `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors ready to be fed to a model.

A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a dictionary mapping feature keys to lists of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames captured 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](https://github.com/huggingface/lerobot/blob/main/examples/1_load_lerobot_dataset.py) for more details on `delta_timestamps`.
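
As a concrete illustration of the example above, a sketch of frame stacking with `delta_timestamps` might look like this (the import path and the constructor keyword are assumptions; see example 1 for the exact usage):

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset  # import path is an assumption

# For every queried index, also return the frames 1 s, 0.5 s and 0.2 s earlier.
delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}
dataset = LeRobotDataset("lerobot/pusht", delta_timestamps=delta_timestamps)

item = dataset[100]
# "observation.image" now carries an extra leading temporal dimension of size 4.
print(item["observation.image"].shape)
```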

Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.

Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset but not the main aspects:

```
dataset attributes:
  ├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
  │  ├ observation.images.cam_high (VideoFrame):
  │  │   VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
  │  ├ observation.state (list of float32): positions of the arm joints (for instance)
  │  ... (more observations)
  │  ├ action (list of float32): goal positions of the arm joints (for instance)
  │  ├ episode_index (int64): index of the episode for this sample
  │  ├ frame_index (int64): index of the frame for this sample in the episode; starts at 0 for each episode
  │  ├ timestamp (float32): timestamp in the episode
  │  ├ next.done (bool): indicates the end of an episode; True for the last frame in each episode
  │  └ index (int64): general index in the whole dataset
  ├ episode_data_index: contains 2 tensors with the start and end indices of each episode
  │  ├ from (1D int64 tensor): first frame index for each episode; shape (num_episodes,), starts at 0
  │  └ to (1D int64 tensor): last frame index for each episode; shape (num_episodes,)
  ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
  │  ├ observation.images.cam_high: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
  │  ...
  ├ info: a dictionary of metadata on the dataset
  │  ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
  │  ├ fps (float): frames per second the dataset is recorded/synchronized to
  │  ├ video (bool): indicates if frames are encoded in mp4 video files to save space or stored as png files
  │  └ encoding (dict): if video, this documents the main options that were used with ffmpeg to encode the videos
  ├ videos_dir (Path): where the mp4 videos or png images are stored/accessed
  └ camera_keys (list of string): the keys to access camera features in the item returned by the dataset (e.g. `["observation.images.cam_high", ...]`)
```
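
To make the listing above concrete, here is a hedged sketch of how these attributes can be inspected in Python. The attribute names follow this document; newer library versions may expose some of them through a metadata object instead.

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset  # import path is an assumption

dataset = LeRobotDataset("lerobot/aloha_static_coffee")

print(dataset.camera_keys)              # e.g. ["observation.images.cam_high", ...]
print(dataset.info["fps"])              # recording/synchronization frequency
print(dataset.stats["action"]["mean"])  # per-feature statistics (max, mean, min, std)

# Start/end frame indices of the first episode.
print(dataset.episode_data_index["from"][0], dataset.episode_data_index["to"][0])
```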

A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:

- hf_dataset stored using Hugging Face datasets library serialization to parquet
- videos are stored in mp4 format to save space
- metadata are stored in plain json/jsonl files

Datasets can be uploaded to and downloaded from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
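
A corresponding sketch for working with a local copy, assuming the same folder layout as the visualization example above (the `root` keyword mirrors the `--root` CLI option; its exact semantics may vary between versions):

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset  # import path is an assumption

# Look for the dataset under ./my_local_data_dir instead of ~/.cache/huggingface/lerobot.
dataset = LeRobotDataset("lerobot/pusht", root="./my_local_data_dir")
```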

### Evaluate a pretrained policy

Check out [example 2](https://github.com/huggingface/lerobot/blob/main/examples/2_evaluate_pretrained_policy.py) that illustrates how to download a pretrained policy from Hugging Face hub, and run an evaluation on its corresponding environment.

We also provide a more capable script to parallelize the evaluation over multiple environments during the same rollout. Here is an example with a pretrained model hosted on [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht):

```bash
python -m lerobot.scripts.eval \
    --policy.path=lerobot/diffusion_pusht \
    --env.type=pusht \
    --eval.batch_size=10 \
    --eval.n_episodes=10 \
    --policy.use_amp=false \
    --policy.device=cuda
```

Note: After training your own policy, you can re-evaluate the checkpoints with:

```bash
python -m lerobot.scripts.eval --policy.path={OUTPUT_DIR}/checkpoints/last/pretrained_model
```

See `python -m lerobot.scripts.eval --help` for more instructions.

### Train your own policy

Check out [example 3](https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py) that illustrates how to train a model using our core library in python, and [example 4](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md) that shows how to use our training script from command line.

To use wandb for logging training and evaluation curves, make sure you've run `wandb login` as a one-time setup step. Then, when launching the training script, enable WandB in the configuration by adding `--wandb.enable=true` to the command.
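
For instance, WandB logging can be combined with the SOTA-reproduction command shown further below (both flags appear elsewhere in this README; adapt the config path to your own run):

```bash
python -m lerobot.scripts.train \
    --config_path=lerobot/diffusion_pusht \
    --wandb.enable=true
```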

A link to the wandb logs for the run will also show up in yellow in your terminal. Here is an example of what they look like in your browser. Please also check [here](https://github.com/huggingface/lerobot/blob/main/examples/4_train_policy_with_script.md#typical-logs-and-metrics) for the explanation of some commonly used metrics in logs.

<img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/wandb.png" alt="WandB logs example">

Note: For efficiency, during training every checkpoint is evaluated on a low number of episodes. You may use `--eval.n_episodes=500` to evaluate on more episodes than the default. Or, after training, you may want to re-evaluate your best checkpoints on more episodes or change the evaluation settings. See `python -m lerobot.scripts.eval --help` for more instructions.

#### Reproduce state-of-the-art (SOTA)

We provide pretrained policies on our [hub page](https://huggingface.co/lerobot) that achieve state-of-the-art performance.
You can reproduce their training by loading the config from their run. For example, running:

```bash
python -m lerobot.scripts.train --config_path=lerobot/diffusion_pusht
```

reproduces SOTA results for Diffusion Policy on the PushT task.

## Contribute

If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).

### Add a pretrained policy

Once you have trained a policy you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).

You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory which should contain:

- `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
- `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
- `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.

To upload these to the hub, run the following:

```bash
huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
```

See [eval.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/eval.py) for an example of how other people may use your policy.

### Acknowledgment

- The LeRobot team 🤗 for building SmolVLA ([Paper](https://arxiv.org/abs/2506.01844), [Blog](https://huggingface.co/blog/smolvla)).
- Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
- Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing Diffusion policy, Pusht environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
- Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing TDMPC policy, Simxarm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
- Thanks to Antonio Loquercio and Ashish Kumar for their early support.
- Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).

## Citation

If you want, you can cite this work with:

```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=huggingface/lerobot&type=Timeline)](https://star-history.com/#huggingface/lerobot&Timeline)

            
