# rtmlib-flip


- Name: rtmlib-flip
- Version: 1.0.9
- Home page: https://github.com/philip-fu/rtmlib
- Summary: A library for real-time pose estimation.
- Upload time: 2025-01-14 00:32:09
- Author: Philip.Fu
- Requires Python: >=3.7
- License: Apache License 2.0
- Keywords: pose estimation
- Requirements: numpy, onnxruntime, opencv-contrib-python, opencv-python, tqdm
# rtmlib

![demo](https://github.com/Tau-J/rtmlib/assets/13503330/b7e8ce8b-3134-43cf-bba6-d81656897289)

rtmlib is a super lightweight library for pose estimation based on [RTMPose](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose) models, **WITHOUT** heavy dependencies such as mmcv, mmpose, or mmdet.

Basically, rtmlib only requires these dependencies:

- numpy
- opencv-python
- opencv-contrib-python
- onnxruntime

Optionally, you can use other common backends such as OpenCV, ONNXRuntime, OpenVINO, or TensorRT to accelerate inference.

- For OpenVINO users: add `<your python path>\envs\<your env name>\Lib\site-packages\openvino\libs` to your environment `PATH`.
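The same effect can be achieved at runtime before importing rtmlib. A minimal sketch, assuming a default pip/conda-style layout of the `openvino` package (the exact directory is an assumption; adjust it to your environment):

```python
import os
import sys

# Prepend the OpenVINO native libs directory to PATH so the runtime
# DLLs can be located on Windows. Run this before importing rtmlib.
openvino_libs = os.path.join(sys.prefix, "Lib", "site-packages", "openvino", "libs")
os.environ["PATH"] = openvino_libs + os.pathsep + os.environ.get("PATH", "")
```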

## Installation

- Install from PyPI:

```shell
pip install rtmlib -i https://pypi.org/simple
```

- Install from source:

```shell
git clone https://github.com/Tau-J/rtmlib.git
cd rtmlib

pip install -r requirements.txt

pip install -e .

# [optional]
# pip install onnxruntime-gpu
# pip install openvino

```

## Quick Start

Here is a simple demo showing how to use rtmlib to run pose estimation on a single image.

```python
import cv2

from rtmlib import Wholebody, draw_skeleton

device = 'cpu'  # cpu, cuda, mps
backend = 'onnxruntime'  # opencv, onnxruntime, openvino
img = cv2.imread('./demo.jpg')

openpose_skeleton = False  # True for openpose-style, False for mmpose-style

wholebody = Wholebody(to_openpose=openpose_skeleton,
                      mode='balanced',  # 'performance', 'lightweight', 'balanced'. Default: 'balanced'
                      backend=backend, device=device)

keypoints, scores = wholebody(img)

# visualize
img_show = img.copy()

# to draw on a black background instead of the original image:
# img_show = np.zeros(img.shape, dtype=np.uint8)  # requires `import numpy as np`

img_show = draw_skeleton(img_show, keypoints, scores, kpt_thr=0.5)


cv2.imshow('img', img_show)
cv2.waitKey()
```
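The call returns per-person keypoint coordinates and per-keypoint confidence scores (for `Wholebody`, 133 keypoints per detected person; the `(num_person, num_kpts, 2)` / `(num_person, num_kpts)` shapes are an assumption based on typical usage). As an illustrative post-processing step, not part of rtmlib's API, a small NumPy helper can mask out low-confidence keypoints the same way `draw_skeleton` does with `kpt_thr`:

```python
import numpy as np

def mask_low_conf(keypoints, scores, kpt_thr=0.5):
    """Replace keypoints whose score falls below kpt_thr with NaN.

    keypoints: float array of shape (num_person, num_kpts, 2)
    scores:    float array of shape (num_person, num_kpts)
    """
    kpts = np.asarray(keypoints, dtype=float).copy()
    mask = np.asarray(scores) < kpt_thr
    kpts[mask] = np.nan  # boolean mask broadcasts over the (x, y) axis
    return kpts

# Example with two fake keypoints for one person:
kpts = mask_low_conf([[[10.0, 20.0], [30.0, 40.0]]], [[0.9, 0.1]])
# kpts[0, 0] stays (10, 20); kpts[0, 1] becomes (nan, nan)
```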

## WebUI

Run `webui.py`:

```shell
# Please make sure you have installed gradio
# pip install gradio

python webui.py
```

![image](https://github.com/Tau-J/rtmlib/assets/13503330/49ef11a1-a1b5-4a20-a2e1-d49f8be6a25d)

## APIs

- Solutions (High-level APIs)
  - [Wholebody](/rtmlib/tools/solution/wholebody.py)
  - [Body](/rtmlib/tools/solution/body.py)
  - [Body_with_feet](/rtmlib/tools/solution/body_with_feet.py)
  - [Hand](/rtmlib/tools/solution/hand.py)
  - [PoseTracker](/rtmlib/tools/solution/pose_tracker.py)
- Models (Low-level APIs)
  - [YOLOX](/rtmlib/tools/object_detection/yolox.py)
  - [RTMDet](/rtmlib/tools/object_detection/rtmdet.py)
  - [RTMPose](/rtmlib/tools/pose_estimation/rtmpose.py)
    - RTMPose for 17 keypoints
    - RTMPose for 26 keypoints
    - RTMW for 133 keypoints
    - DWPose for 133 keypoints
    - RTMO for one-stage pose estimation (17 keypoints)
- Visualization
  - [draw_bbox](https://github.com/Tau-J/rtmlib/blob/adc69a850f59ba962d81a88cffd3f48cfc5fd1ae/rtmlib/draw.py#L9)
  - [draw_skeleton](https://github.com/Tau-J/rtmlib/blob/adc69a850f59ba962d81a88cffd3f48cfc5fd1ae/rtmlib/draw.py#L16)

For high-level APIs (`Solution`), you can pass either the `mode` argument or the `det` + `pose` arguments to specify the detector and pose estimator to use.

```python
# By mode
wholebody = Wholebody(mode='performance',  # 'performance', 'lightweight', 'balanced'. Default: 'balanced'
                      backend=backend,
                      device=device)

# By det and pose
body = Body(det='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_x_8xb8-300e_humanart-a39d44ed.zip',
            det_input_size=(640, 640),
            pose='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7_700e-384x288-71d7b7e9_20230629.zip',
            pose_input_size=(288, 384),
            backend=backend,
            device=device)
```

For low-level APIs (`Model`), you can specify the model you want to use by passing the `onnx_model` argument.

```python
# By onnx_model (.onnx)
pose_model = RTMPose(onnx_model='/path/to/your_model.onnx',  # download link or local path
                     backend=backend, device=device)

# By onnx_model (.zip)
pose_model = RTMPose(onnx_model='https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.zip',  # download link or local path
                     backend=backend, device=device)
```
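In a top-down pipeline, the detector's boxes are typically padded slightly before being cropped for the pose model. The helper below is an illustrative sketch of that step, not an rtmlib function; the `expand` factor and the clip-to-image behavior are assumptions:

```python
def pad_bbox(bbox, img_w, img_h, expand=1.25):
    """Scale an (x1, y1, x2, y2) box about its center and clip to the image."""
    x1, y1, x2, y2 = bbox
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w = (x2 - x1) * expand / 2.0
    half_h = (y2 - y1) * expand / 2.0
    return (max(0.0, cx - half_w), max(0.0, cy - half_h),
            min(float(img_w), cx + half_w), min(float(img_h), cy + half_h))

# A 100x100 box centered at (100, 100) expands to 125x125 inside a 640x480 image:
print(pad_bbox((50, 50, 150, 150), 640, 480))  # (37.5, 37.5, 162.5, 162.5)
```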

## Model Zoo

By default, rtmlib automatically downloads and uses the best-performing models.

More models can be found in [RTMPose Model Zoo](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose).

### Detectors

<details open>
<summary><b>Person</b></summary>

Notes:

- Models trained on HumanArt can detect both real humans and cartoon characters.
- Models trained on COCO can only detect real humans.

|                                                          ONNX Model                                                           | Input Size | AP (person) |       Description        |
| :---------------------------------------------------------------------------------------------------------------------------: | :--------: | :---------: | :----------------------: |
|                 [YOLOX-l](https://drive.google.com/file/d/1w9pXC8tT0p9ndMN-CArp1__b2GbzewWI/view?usp=sharing)                 |  640x640   |      -      |     trained on COCO      |
| [YOLOX-nano](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip) |  416x416   |    38.9     | trained on HumanArt+COCO |
| [YOLOX-tiny](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_tiny_8xb8-300e_humanart-6f3252f9.zip) |  416x416   |    47.7     | trained on HumanArt+COCO |
|    [YOLOX-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_s_8xb8-300e_humanart-3ef259a7.zip)    |  640x640   |    54.6     | trained on HumanArt+COCO |
|    [YOLOX-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip)    |  640x640   |    59.1     | trained on HumanArt+COCO |
|    [YOLOX-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_l_8xb8-300e_humanart-ce1d7a62.zip)    |  640x640   |    60.2     | trained on HumanArt+COCO |
|    [YOLOX-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_x_8xb8-300e_humanart-a39d44ed.zip)    |  640x640   |    61.3     | trained on HumanArt+COCO |

</details>

### Pose Estimators

<details open>
<summary><b>Body 17 Keypoints</b></summary>

|                                                                     ONNX Model                                                                      | Input Size | AP (COCO) |      Description      |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :-------: | :-------------------: |
| [RTMPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip) |  256x192   |   65.9    | trained on 7 datasets |
| [RTMPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-body7_pt-body7_420e-256x192-acd4a1ef_20230504.zip) |  256x192   |   69.7    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7_420e-256x192-e48f03d0_20230504.zip) |  256x192   |   74.9    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7_420e-256x192-4dba18fc_20230504.zip) |  256x192   |   76.7    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7_420e-384x288-3f5a1437_20230504.zip) |  384x288   |   78.3    | trained on 7 datasets |
| [RTMPose-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7_700e-384x288-71d7b7e9_20230629.zip) |  384x288   |   78.8    | trained on 7 datasets |
|           [RTMO-s](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-s_8xb32-600e_body7-640x640-dac2bf74_20231211.zip)           |  640x640   |   68.6    | trained on 7 datasets |
|          [RTMO-m](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-m_16xb16-600e_body7-640x640-39e78cc4_20231211.zip)           |  640x640   |   72.6    | trained on 7 datasets |
|          [RTMO-l](https://download.openmmlab.com/mmpose/v1/projects/rtmo/onnx_sdk/rtmo-l_16xb16-600e_body7-640x640-b37118ce_20231211.zip)           |  640x640   |   74.8    | trained on 7 datasets |

</details>

<details open>
<summary><b>Body 26 Keypoints</b></summary>

|                                                                     ONNX Model                                                                      | Input Size | AUC (Body8) |      Description      |
| :-------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :-------: | :-------------------: |
| [RTMPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7-halpe26_700e-256x192-6020f8a6_20230605.zip) |  256x192   |   66.35    | trained on 7 datasets |
| [RTMPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-body7_pt-body7-halpe26_700e-256x192-7f134165_20230605.zip) |  256x192   |   68.62    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip) |  256x192   |   71.91    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7-halpe26_700e-256x192-2abb7558_20230605.zip) |  256x192   |   73.19    | trained on 7 datasets |
| [RTMPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-384x288-89e6428b_20230605.zip) |  384x288   |   73.56    | trained on 7 datasets |
| [RTMPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-body7_pt-body7-halpe26_700e-384x288-734182ce_20230605.zip) |  384x288   |   74.38    | trained on 7 datasets |
| [RTMPose-x](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-x_simcc-body7_pt-body7-halpe26_700e-384x288-7fb6e239_20230606.zip) |  384x288   |   74.82    | trained on 7 datasets |

</details>

<details open>
<summary><b>WholeBody 133 Keypoints</b></summary>

|                                                                     ONNX Model                                                                     | Input Size |   AP (Whole)   |           Description           |
| :------------------------------------------------------------------------------------------------------------------------------------------------: | :--------: | :--: | :-----------------------------: |
| [DWPose-t](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-ucoco_dw-ucoco_270e-256x192-dcf277bf_20230728.zip) |  256x192   | 48.5 | trained on COCO-Wholebody+UBody |
| [DWPose-s](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-s_simcc-ucoco_dw-ucoco_270e-256x192-3fd922c8_20230728.zip) |  256x192   | 53.8 | trained on COCO-Wholebody+UBody |
| [DWPose-m](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-ucoco_dw-ucoco_270e-256x192-c8b76419_20230728.zip) |  256x192   | 60.6 | trained on COCO-Wholebody+UBody |
| [DWPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-ucoco_dw-ucoco_270e-256x192-4d6dfc62_20230728.zip) |  256x192   | 63.1 | trained on COCO-Wholebody+UBody |
| [DWPose-l](https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-l_simcc-ucoco_dw-ucoco_270e-384x288-2438fd99_20230728.zip) |  384x288   | 66.5 | trained on COCO-Wholebody+UBody |
|          [RTMW-m](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-m-s_simcc-cocktail14_270e-256x192_20231122.zip)          |  256x192   | 58.2 |     trained on 14 datasets      |
|          [RTMW-l](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-x-l_simcc-cocktail14_270e-256x192_20231122.zip)          |  256x192   | 66.0 |     trained on 14 datasets      |
|          [RTMW-l](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-dw-x-l_simcc-cocktail14_270e-384x288_20231122.zip)          |  384x288   | 70.1 |     trained on 14 datasets      |
|   [RTMW-x](https://download.openmmlab.com/mmpose/v1/projects/rtmw/onnx_sdk/rtmw-x_simcc-cocktail13_pt-ucoco_270e-384x288-0949e3a9_20230925.zip)    |  384x288   | 70.2 |     trained on 14 datasets      |

</details>

### Visualization

|                                            MMPose-style                                             |                                            OpenPose-style                                             |
| :-------------------------------------------------------------------------------------------------: | :---------------------------------------------------------------------------------------------------: |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/c9e6fbaa-00f0-4961-ac87-d881edca778b"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/9afc996a-59e6-4200-a655-59dae10b46c4"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/b12e5f60-fec0-42a1-b7b6-365e93894fb1"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/5acf7431-6ef0-44a8-ae52-9d8c8cb988c9"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/091b8ce3-32d5-463b-9f41-5c683afa7a11"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/4ffc7be1-50d6-44ff-8c6b-22ea8975aad4"> |
| <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/6fddfc14-7519-42eb-a7a4-98bf5441f324"> | <img width="357" alt="result" src="https://github.com/Tau-J/rtmlib/assets/13503330/2523e568-e0c3-4c2e-8e54-d1a67100c537"> |

### Citation

```bibtex
@misc{rtmlib,
  title={rtmlib},
  author={Jiang, Tao},
  year={2023},
  howpublished = {\url{https://github.com/Tau-J/rtmlib}},
}

@misc{jiang2023,
  doi = {10.48550/ARXIV.2303.07399},
  url = {https://arxiv.org/abs/2303.07399},
  author = {Jiang, Tao and Lu, Peng and Zhang, Li and Ma, Ningsheng and Han, Rui and Lyu, Chengqi and Li, Yining and Chen, Kai},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose},
  publisher = {arXiv},
  year = {2023},
  copyright = {Creative Commons Attribution 4.0 International}
}

@misc{lu2023rtmo,
      title={{RTMO}: Towards High-Performance One-Stage Real-Time Multi-Person Pose Estimation},
      author={Peng Lu and Tao Jiang and Yining Li and Xiangtai Li and Kai Chen and Wenming Yang},
      year={2023},
      eprint={2312.07526},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{jiang2024rtmwrealtimemultiperson2d,
      title={RTMW: Real-Time Multi-Person 2D and 3D Whole-body Pose Estimation}, 
      author={Tao Jiang and Xinchen Xie and Yining Li},
      year={2024},
      eprint={2407.08634},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.08634}, 
}
```

## Acknowledgement

Our code is based on these repos:

- [MMPose](https://github.com/open-mmlab/mmpose)
- [RTMPose](https://github.com/open-mmlab/mmpose/tree/dev-1.x/projects/rtmpose)
- [DWPose](https://github.com/IDEA-Research/DWPose/tree/opencv_onnx)

            

            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 47467,
            "upload_time": "2025-01-14T00:32:07",
            "upload_time_iso_8601": "2025-01-14T00:32:07.513273Z",
            "url": "https://files.pythonhosted.org/packages/c2/ba/0bc3abdcc674a9f0f8f45143efc69d615ba26e72550b7bd2e9cd371d443a/rtmlib_flip-1.0.9-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "14a4542a11aed68d0e082be10a94d67d138f4df61db4d0ec3f6586fc00de5fac",
                "md5": "f5d3bdc1c38ad41011293caec1dfa871",
                "sha256": "2fa841c11b794eaa1d61b14706c5f5d50c9f68bb80107a9039562d9c66251bf6"
            },
            "downloads": -1,
            "filename": "rtmlib_flip-1.0.9.tar.gz",
            "has_sig": false,
            "md5_digest": "f5d3bdc1c38ad41011293caec1dfa871",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 39810,
            "upload_time": "2025-01-14T00:32:09",
            "upload_time_iso_8601": "2025-01-14T00:32:09.800746Z",
            "url": "https://files.pythonhosted.org/packages/14/a4/542a11aed68d0e082be10a94d67d138f4df61db4d0ec3f6586fc00de5fac/rtmlib_flip-1.0.9.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-01-14 00:32:09",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "philip-fu",
    "github_project": "rtmlib",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "requirements": [
        {
            "name": "numpy",
            "specs": []
        },
        {
            "name": "onnxruntime",
            "specs": []
        },
        {
            "name": "opencv-contrib-python",
            "specs": []
        },
        {
            "name": "opencv-python",
            "specs": []
        },
        {
            "name": "tqdm",
            "specs": []
        }
    ],
    "lcname": "rtmlib-flip"
}
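
The `digests` entries in the metadata above can be used to verify a downloaded artifact before installing it. A minimal sketch (the local file path in the usage comment is illustrative, and `sha256_of` is a helper name chosen here, not part of rtmlib): it streams a file through SHA-256 and compares the result to the published digest.

```python
import hashlib

# SHA-256 digest of rtmlib_flip-1.0.9-py3-none-any.whl, copied from the
# "digests" entry in the metadata above.
WHEEL_SHA256 = "57146636f4ab7ddd475b83ddedd3365ae864f66e6696c0fbd9721d239249def1"


def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in chunks and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


# Usage (path is illustrative; point it at the wheel pip downloaded):
#   assert sha256_of("rtmlib_flip-1.0.9-py3-none-any.whl") == WHEEL_SHA256
```

Streaming in fixed-size chunks keeps memory flat regardless of artifact size; a mismatch against the published digest means the download is corrupt or has been tampered with and should not be installed.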