rtmlib

Name: rtmlib
Version: 0.0.12
Home page: https://github.com/Tau-J/rtmlib
Summary: A library for real-time pose estimation.
Upload time: 2024-07-10 04:50:44
Author: Tau-J
Requires Python: >=3.7
License: Apache License 2.0
Keywords: pose estimation

# rtmlib

![demo](https://github.com/Tau-J/rtmlib/assets/13503330/b7e8ce8b-3134-43cf-bba6-d81656897289)

rtmlib is a super lightweight library for performing pose estimation with [RTMPose](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose) models, **WITHOUT** heavyweight dependencies such as mmcv, mmpose, or mmdet.

rtmlib requires only these dependencies:

- numpy
- opencv-python
- opencv-contrib-python
- onnxruntime

Optionally, you can use other common backends (opencv, onnxruntime, openvino, tensorrt) to accelerate inference.

- OpenVINO users should add the path `<your python path>\envs\<your env name>\Lib\site-packages\openvino\libs` to the `PATH` environment variable (see the sketch below).
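
A minimal sketch of doing this from Python before importing rtmlib; the path shown is hypothetical, so substitute your own environment's `openvino\libs` directory:

```python
# Hypothetical example: prepend the OpenVINO runtime libraries to PATH so the
# openvino backend can locate its shared libraries (Windows-style layout assumed).
import os

openvino_libs = r'C:\Miniconda3\envs\rtmlib\Lib\site-packages\openvino\libs'  # adjust to your env
os.environ['PATH'] = openvino_libs + os.pathsep + os.environ.get('PATH', '')
```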

## Installation

- Install from PyPI:

```shell
pip install rtmlib -i https://pypi.org/simple
```

- Install from source:

```shell
git clone https://github.com/Tau-J/rtmlib.git
cd rtmlib

pip install -r requirements.txt

pip install -e .

# [optional]
# pip install onnxruntime-gpu
# pip install openvino

```

## Quick Start

Here is a simple demo showing how to use rtmlib to run pose estimation on a single image.

```python
import cv2

from rtmlib import Wholebody, draw_skeleton

device = 'cpu'  # cpu, cuda, mps
backend = 'onnxruntime'  # opencv, onnxruntime, openvino
img = cv2.imread('./demo.jpg')

openpose_skeleton = False  # True for openpose-style, False for mmpose-style

wholebody = Wholebody(to_openpose=openpose_skeleton,
                      mode='balanced',  # 'performance', 'lightweight', 'balanced'. Default: 'balanced'
                      backend=backend, device=device)

keypoints, scores = wholebody(img)
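# keypoints: ndarray of shape (num_persons, num_keypoints, 2) in image coordinates
# scores:    ndarray of shape (num_persons, num_keypoints), per-keypoint confidence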

# visualize
img_show = img.copy()

# to draw on a black background instead of the original image:
# import numpy as np
# img_show = np.zeros(img.shape, dtype=np.uint8)

img_show = draw_skeleton(img_show, keypoints, scores, kpt_thr=0.5)


cv2.imshow('img', img_show)
cv2.waitKey()
```
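
The same Solution call works frame by frame on video. Below is a minimal webcam sketch reusing only the API shown above; the `'lightweight'` mode choice is illustrative:

```python
import cv2

from rtmlib import Wholebody, draw_skeleton

wholebody = Wholebody(mode='lightweight',  # illustrative; 'balanced' is the default
                      backend='onnxruntime', device='cpu')

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    keypoints, scores = wholebody(frame)
    frame = draw_skeleton(frame, keypoints, scores, kpt_thr=0.5)
    cv2.imshow('rtmlib webcam demo', frame)
    if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```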

## WebUI

Run `webui.py`:

```shell
# Please make sure you have installed gradio
# pip install gradio

python webui.py
```

![image](https://github.com/Tau-J/rtmlib/assets/13503330/49ef11a1-a1b5-4a20-a2e1-d49f8be6a25d)

## APIs

- Solutions (High-level APIs)
  - [Wholebody](/rtmlib/tools/solution/wholebody.py)
  - [Body](/rtmlib/tools/solution/body.py)
  - [Body_with_feet](/rtmlib/tools/solution/body_with_feet.py)
  - [Hand](/rtmlib/tools/solution/hand.py)
  - [PoseTracker](/rtmlib/tools/solution/pose_tracker.py) (see the sketch after this list)
- Models (Low-level APIs)
  - [YOLOX](/rtmlib/tools/object_detection/yolox.py)
  - [RTMDet](/rtmlib/tools/object_detection/rtmdet.py)
  - [RTMPose](/rtmlib/tools/pose_estimation/rtmpose.py)
    - RTMPose for 17 keypoints
    - RTMPose for 26 keypoints
    - RTMW for 133 keypoints
    - DWPose for 133 keypoints
    - RTMO for one-stage pose estimation (17 keypoints)
- Visualization
  - [draw_bbox](https://github.com/Tau-J/rtmlib/blob/adc69a850f59ba962d81a88cffd3f48cfc5fd1ae/rtmlib/draw.py#L9)
  - [draw_skeleton](https://github.com/Tau-J/rtmlib/blob/adc69a850f59ba962d81a88cffd3f48cfc5fd1ae/rtmlib/draw.py#L16)
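
Among the Solutions above, `PoseTracker` wraps a Solution class for video inference, re-running detection only every few frames and tracking in between. A minimal sketch; the `det_frequency` value is illustrative, and the exact argument set is in the linked source:

```python
from rtmlib import PoseTracker, Wholebody

# Wrap a Solution class; the detector is re-run every `det_frequency` frames,
# with box tracking in between (the value 7 here is illustrative).
tracker = PoseTracker(Wholebody,
                      det_frequency=7,
                      backend='onnxruntime',
                      device='cpu')

# Per-frame usage is the same as a plain Solution:
# keypoints, scores = tracker(frame)
```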

For the high-level APIs (`Solution`), you can pass either a `mode` argument or explicit `det` and `pose` arguments to specify the detector and pose estimator to use.

```Python
# By mode
wholebody = Wholebody(mode='performance',  # 'performance', 'lightweight', 'balanced'. Default: 'balanced'
                      backend=backend,
                      device=device)

# By det and pose
body = Body(det='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_x_8xb8-300e_humanart-a39d44ed.zip',
            det_input_size=(640, 640),
            pose='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7_700e-384x288-71d7b7e9_20230629.zip',
            pose_input_size=(288, 384),
            backend=backend,
            device=device)
```
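
Note that `det_input_size` and `pose_input_size` are (width, height) tuples: the example above pairs the 384x288 RTMPose-x model from the tables below with `pose_input_size=(288, 384)`.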

For the low-level APIs (`Model`), specify the model to use by passing the `onnx_model` argument, which accepts a download link or a local path.

```Python
# By onnx_model (.onnx)
pose_model = RTMPose(onnx_model='/path/to/your_model.onnx',  # download link or local path
                     backend=backend, device=device)

# By onnx_model (.zip)
pose_model = RTMPose(onnx_model='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.zip',  # download link or local path
                     backend=backend, device=device)
```
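
The low-level models can also be chained manually: run a person detector first, then pass its boxes to the pose estimator. A minimal sketch; the call signatures follow the sources linked above, and `model_input_size` is assumed to be (width, height):

```python
import cv2

from rtmlib import YOLOX, RTMPose

backend, device = 'onnxruntime', 'cpu'

# Person detector (640x640 input)
det_model = YOLOX('https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip',
                  model_input_size=(640, 640),
                  backend=backend, device=device)

# Pose estimator; (192, 256) is (width, height) for the 256x192 model
pose_model = RTMPose('https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.zip',
                     model_input_size=(192, 256),
                     backend=backend, device=device)

img = cv2.imread('./demo.jpg')
bboxes = det_model(img)                             # person boxes
keypoints, scores = pose_model(img, bboxes=bboxes)  # pose per detected person
```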

## Model Zoo

By default, rtmlib automatically downloads and uses the best-performing models.

More models can be found in [RTMPose Model Zoo](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose).

### Detectors

<details open>
<summary><b>Person</b></summary>

Notes:

- Models trained on HumanArt can detect both real humans and cartoon characters.
- Models trained on COCO can only detect real humans.

|                                                          ONNX Model                                                           | Input Size | AP (person) |       Description        |
| :---------------------------------------------------------------------------------------------------------------------------: | :--------: | :---------: | :----------------------: |
|                 [YOLOX-l](https://drive.google.com/file/d/1w9pXC8tT0p9ndMN-CArp1__b2GbzewWI/view?usp=sharing)                 |  640x640   |      -      |     trained on COCO      |
| [YOLOX-nano](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip) |  416x416   |    38.9     | trained on HumanArt+COCO |
| [YOLOX-tiny](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_tiny_8xb8-300e_humanart-6f3252f9.zip) |  416x416   |    47.7     | trained on HumanArt+COCO |
|    [YOLOX-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_s_8xb8-300e_humanart-3ef259a7.zip)    |  640x640   |    54.6     | trained on HumanArt+COCO |
|    [YOLOX-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip)    |  640x640   |    59.1     | trained on HumanArt+COCO |
|    [YOLOX-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_l_8xb8-300e_humanart-ce1d7a62.zip)    |  640x640   |    60.2     | trained on HumanArt+COCO |
|    [YOLOX-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_x_8xb8-300e_humanart-a39d44ed.zip)    |  640x640   |    61.3     | trained on HumanArt+COCO |

</details>

### Pose Estimators

<details open>
<summary><b>Body 17 Keypoints</b></summary>

|                                                                     ONNX Model                                                                      | Input Size | AP (COCO) |      Description      |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :-------: | :-------------------: |
| [RTMPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip) |  256x192   |   65.9    | trained on 7 datasets |
| [RTMPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-body7_pt-body7_420e-256x192-acd4a1ef_20230504.zip) |  256x192   |   69.7    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.zip) |  256x192   |   74.9    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7_420e-256x192-4dba18fc_20230504.zip) |  256x192   |   76.7    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7_420e-384x288-3f5a1437_20230504.zip) |  384x288   |   78.3    | trained on 7 datasets |
| [RTMPose-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7_700e-384x288-71d7b7e9_20230629.zip) |  384x288   |   78.8    | trained on 7 datasets |
|           [RTMO-s](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.zip)           |  640x640   |   68.6    | trained on 7 datasets |
|          [RTMO-m](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-m_16xb16-600e_body7-640x640-39e78cc4_20231211.zip)           |  640x640   |   72.6    | trained on 7 datasets |
|          [RTMO-l](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-l_16xb16-600e_body7-640x640-b37118ce_20231211.zip)           |  640x640   |   74.8    | trained on 7 datasets |

</details>

<details open>
<summary><b>Body 26 Keypoints</b></summary>

|                                                                     ONNX Model                                                                      | Input Size | AUC (Body8) |      Description      |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :-------: | :-------------------: |
| [RTMPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7-halpe26_700e-256x192-6020f8a6_20230605.zip) |  256x192   |   66.35    | trained on 7 datasets |
| [RTMPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-body7_pt-body7-halpe26_700e-256x192-7f134165_20230605.zip) |  256x192   |   68.62    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip) |  256x192   |   71.91    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7-halpe26_700e-256x192-2abb7558_20230605.zip) |  256x192   |   73.19    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-384x288-89e6428b_20230605.zip) |  384x288   |   73.56    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7-halpe26_700e-384x288-734182ce_20230605.zip) |  384x288   |   74.38    | trained on 7 datasets |
| [RTMPose-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7-halpe26_700e-384x288-7fb6e239_20230606.zip) |  384x288   |   74.82    | trained on 7 datasets |

</details>

<details open>
<summary><b>WholeBody 133 Keypoints</b></summary>

|                                                                     ONNX Model                                                                     | Input Size |   AP (Whole)   |           Description           |
| :------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :--: | :-----------------------------: |
| [DWPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-ucoco_dw-ucoco_270e-256x192-dcf277bf_20230728.zip) |  256x192   | 48.5 | trained on COCO-Wholebody+UBody |
| [DWPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-ucoco_dw-ucoco_270e-256x192-3fd922c8_20230728.zip) |  256x192   | 53.8 | trained on COCO-Wholebody+UBody |
| [DWPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-ucoco_dw-ucoco_270e-256x192-c8b76419_20230728.zip) |  256x192   | 60.6 | trained on COCO-Wholebody+UBody |
| [DWPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-ucoco_dw-ucoco_270e-256x192-4d6dfc62_20230728.zip) |  256x192   | 63.1 | trained on COCO-Wholebody+UBody |
| [DWPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-ucoco_dw-ucoco_270e-384x288-2438fd99_20230728.zip) |  384x288   | 66.5 | trained on COCO-Wholebody+UBody |
|          [RTMW-m](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-m-s_simcc-cocktail14_270e-256x192_20231122.zip)          |  256x192   | 58.2 |     trained on 14 datasets      |
|          [RTMW-l](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-x-l_simcc-cocktail14_270e-256x192_20231122.zip)          |  256x192   | 66.0 |     trained on 14 datasets      |
|          [RTMW-l](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-x-l_simcc-cocktail14_270e-384x288_20231122.zip)          |  384x288   | 70.1 |     trained on 14 datasets      |
|   [RTMW-x](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-x_simcc-cocktail13_pt-ucoco_270e-384x288-0949e3a9_20230925.zip)    |  384x288   | 70.2 |     trained on 14 datasets      |

</details>

### Visualization

|                                            MMPose-style                                             |                                            OpenPose-style                                             |
| :-------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------: |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/c9e6fbaa-00f0-4961-ac87-d881edca778b"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/9afc996a-59e6-4200-a655-59dae10b46c4"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/b12e5f60-fec0-42a1-b7b6-365e93894fb1"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/5acf7431-6ef0-44a8-ae52-9d8c8cb988c9"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/091b8ce3-32d5-463b-9f41-5c683afa7a11"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/4ffc7be1-50d6-44ff-8c6b-22ea8975aad4"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/6fddfc14-7519-42eb-a7a4-98bf5441f324"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/2523e568-e0c3-4c2e-8e54-d1a67100c537"> |

## Citation

```bibtex
@misc{rtmlib,
  title={rtmlib},
  author={Jiang, Tao},
  year={2023},
  howpublished = {\url{https://github.com/Tau-J/rtmlib}},
}

@misc{jiang2023,
  doi = {10.48550/ARXIV.2303.07399},
  url = {https://arxiv.org/abs/2303.07399},
  author = {Jiang, Tao and Lu, Peng and Zhang, Li and Ma, Ningsheng and Han, Rui and Lyu, Chengqi and Li, Yining and Chen, Kai},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}

@misc{lu2023rtmo,
      title={{RTMO}: Towards High-Performance One-Stage Real-Time Multi-Person Pose Estimation},
      author={Peng Lu and Tao Jiang and Yining Li and Xiangtai Li and Kai Chen and Wenming Yang},
      year={2023},
      eprint={2312.07526},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Acknowledgement

Our code is based on these repos:

- [MMPose](https://github.com/open-mmlab/mmpose)
- [RTMPose](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose)
- [DWPose](https://github.com/IDEA-Research/DWPose/tree/opencv_onnx)
