zmkj-rknn-tools


Name: zmkj-rknn-tools
Version: 0.1.0
Home page: https://github.com/xuntee/rknn-tools
Summary: A helper toolkit for rapid on-device validation on Rockchip chips
Upload time: 2025-08-11 11:37:04
Author: 壹世朱名
Requires Python: >=3.6
Keywords: rknn, rockchip, yolov8, object detection, edge computing
# RKNN-Tools

[![PyPI version](https://badge.fury.io/py/rknn-tools.svg)](https://badge.fury.io/py/rknn-tools)
[![Python 3.6+](https://img.shields.io/badge/python-3.6+-blue.svg)](https://www.python.org/downloads/release/python-360/)

## Introduction

RKNN-Tools is an on-device inference helper toolkit designed for Rockchip chips, aimed at simplifying the model conversion, deployment, and validation workflow. It supports converting YOLOv8 models to RKNN models and provides complete inference, post-processing, and visualization functionality.

## Features

- **Model conversion**: YOLOv8 model (.pt) → ONNX → RKNN conversion pipeline
- **Inference engine**: efficient RKNN model inference with image and video input
- **Optimized post-processing**: efficient YOLOv8 post-processing, including Distribution Focal Loss (DFL) decoding and fast NMS
- **Visualization tools**: integrates the Supervision library for rich, high-quality rendering of detection results
- **Performance optimizations**: image downscaling, caching, and frame-skipping techniques
- **Chinese documentation**: thorough Chinese comments and documentation
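
A minimal NumPy sketch of the DFL decoding mentioned above (illustrative only, not this package's internal implementation): YOLOv8's head predicts each box edge as a discrete distribution over `reg_max` bins, and decoding takes the softmax expectation over the bin indices.

```python
import numpy as np

def dfl_decode(dist, reg_max=16):
    """Decode Distribution Focal Loss (DFL) box regression.

    dist: array of shape (..., 4, reg_max) holding raw logits for the
    left/top/right/bottom edge distances of each box.
    Returns the expected distance per edge, shape (..., 4).
    """
    # Softmax over the bin axis turns logits into a probability distribution
    exp = np.exp(dist - dist.max(axis=-1, keepdims=True))
    prob = exp / exp.sum(axis=-1, keepdims=True)
    # Expected value over bin indices 0..reg_max-1 gives the edge distance
    bins = np.arange(reg_max, dtype=np.float32)
    return (prob * bins).sum(axis=-1)
```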

## Installation

```bash
pip install zmkj-rknn-tools
```

### Installing the Visualization Dependency

The visualization in the examples uses the supervision library; to run the example code, install it:

```bash
pip install supervision
```

### Development Install

To debug or modify the code, clone the repository and install it in editable mode:

```bash
python3 -m venv venv_tools
source venv_tools/bin/activate
pip install -e .

# Run the bundled examples
python examples/model_conversion.py --pt examples/yolov8s.pt
python examples/video_inference.py --model examples/yolov8s.rknn --input examples/people-walking.mp4
python examples/image_inference.py --model examples/yolov8s.rknn --input examples/bus.jpg
python examples/image_inference.py --model examples/yolov8s.rknn --input examples/people-walking.jpg

# Build and publish a release
pip install --upgrade build twine
python -m build
twine upload dist/*

```


The supervision library provides high-quality rendering of detection results, but it is optional. You can implement your own visualization in the examples as needed.

## Requirements

- Python 3.6+
- Tested on: RK3576, Python 3.10.12

## Quick Start

### Model Conversion

#### 1. PyTorch model to ONNX

```python
from rknn_tools.converter import pt_to_onnx

# Convert a YOLOv8 model to ONNX
pt_to_onnx(model_path="yolov8s.pt", output_path="yolov8s.onnx")
```

#### 2. ONNX model to RKNN

```python
from rknn_tools.converter import onnx_to_rknn

# Convert an ONNX model to RKNN
onnx_to_rknn(
    model_path="yolov8s.onnx",
    output_path="yolov8s.rknn",
    platform="rk3576",
    do_quantization=False
)
```

### Image Inference

```python
from rknn_tools.detector import YOLOv8Detector, CLASSES
import cv2
import supervision as sv
import numpy as np

# Custom visualization helper
def visualize_detections(image, boxes, classes, scores):
    # Work on a copy of the image
    img_result = image.copy()

    # If nothing was detected, return the original image
    if len(boxes) == 0:
        return img_result

    # Convert to supervision's Detections format
    detections = sv.Detections(
        xyxy=boxes,
        class_id=classes.astype(int),
        confidence=scores
    )

    # Label annotator
    label_annotator = sv.LabelAnnotator(
        text_position=sv.Position.TOP_LEFT,
        text_scale=0.5,
        text_thickness=1,
        text_padding=5,
        color_lookup=sv.ColorLookup.CLASS
    )

    # Bounding-box annotator
    box_annotator = sv.BoxAnnotator(
        thickness=2,
        color_lookup=sv.ColorLookup.CLASS
    )

    # Build the labels
    labels = [f"{CLASSES[class_id]} {confidence:.2f}"
              for class_id, confidence in zip(detections.class_id, detections.confidence)]

    # Draw boxes and labels
    img_result = box_annotator.annotate(scene=img_result, detections=detections)
    img_result = label_annotator.annotate(scene=img_result, detections=detections, labels=labels)

    return img_result

# Initialize the detector
detector = YOLOv8Detector(model_path="yolov8s.rknn")

# Read the input image
image = cv2.imread("test.jpg")

# Run detection
boxes, classes, scores, inference_time, process_time = detector.detect(
    image=image,
    conf_thresh=0.25,
    nms_thresh=0.45
)

# Visualize the results
result_image = visualize_detections(image, boxes, classes, scores)

# Display the result
cv2.imshow("Result", result_image)
cv2.waitKey(0)

# Release resources
detector.release()
```

### Video / Camera Inference

```python
from rknn_tools.detector import YOLOv8Detector, CLASSES
import cv2
import time
import supervision as sv
import numpy as np

# Custom visualization helper
def visualize_detections(image, boxes, classes, scores):
    # Work on a copy of the image
    img_result = image.copy()

    # If nothing was detected, return the original image
    if len(boxes) == 0:
        return img_result

    # Convert to supervision's Detections format
    detections = sv.Detections(
        xyxy=boxes,
        class_id=classes.astype(int),
        confidence=scores
    )

    # Label annotator
    label_annotator = sv.LabelAnnotator(
        text_position=sv.Position.TOP_LEFT,
        text_scale=0.5,
        text_thickness=1,
        text_padding=5,
        color_lookup=sv.ColorLookup.CLASS
    )

    # Bounding-box annotator
    box_annotator = sv.BoxAnnotator(
        thickness=2,
        color_lookup=sv.ColorLookup.CLASS
    )

    # Build the labels
    labels = [f"{CLASSES[class_id]} {confidence:.2f}"
              for class_id, confidence in zip(detections.class_id, detections.confidence)]

    # Draw boxes and labels
    img_result = box_annotator.annotate(scene=img_result, detections=detections)
    img_result = label_annotator.annotate(scene=img_result, detections=detections, labels=labels)

    return img_result

# Custom FPS overlay. Note: supervision has no TextAnnotator class, so plain
# cv2.putText is used here instead.
def draw_fps(image, fps, position=(10, 30), text_scale=0.7, color=(0, 0, 255), thickness=2):
    # Work on a copy of the frame
    img_result = image.copy()

    # Format and draw the FPS text
    fps_text = f"FPS: {fps:.2f}"
    cv2.putText(img_result, fps_text, position, cv2.FONT_HERSHEY_SIMPLEX,
                text_scale, color, thickness)

    return img_result

# Initialize the detector
detector = YOLOv8Detector(model_path="yolov8s.rknn")

# Open a video file or camera
cap = cv2.VideoCapture("test.mp4")  # or a camera: cv2.VideoCapture(0)

# Performance counters
frame_count = 0
start_time = time.time()

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # Run detection
    boxes, classes, scores, inference_time, process_time = detector.detect(
        image=frame,
        conf_thresh=0.25,
        nms_thresh=0.45
    )

    # Update the counters
    frame_count += 1

    # Visualize the results
    result_frame = visualize_detections(frame, boxes, classes, scores)

    # Compute and overlay the FPS
    elapsed_time = time.time() - start_time
    fps = frame_count / elapsed_time
    result_frame = draw_fps(result_frame, fps)

    # Display the result
    cv2.imshow("Result", result_frame)

    # Press ESC to quit
    if cv2.waitKey(1) == 27:
        break

# Release resources
cap.release()
cv2.destroyAllWindows()
detector.release()
```

## Advanced Usage

### Tuning Performance Parameters

```python
from rknn_tools.detector import YOLOv8Detector
from rknn_tools.config import update_config

# Update configuration parameters
update_config({
    'downscale_factor': 2,     # downscale factor for high-resolution images
    'use_fast_nms': True,      # use the fast NMS algorithm
    'use_parallel': True,      # use parallel processing
    'skip_frames': 1,          # frame skipping (0 = no skipping)
    'cache_size': 10           # cache size
})

# Initialize the detector
detector = YOLOv8Detector(model_path="yolov8s.rknn")
```
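
The `skip_frames` option above trades accuracy for throughput: the detector runs only on every (N+1)-th frame, and the latest detections are reused in between. The scheduling logic can be sketched independently of the detector (`detect_fn` here is a placeholder for something like `detector.detect`):

```python
def run_with_frame_skipping(frames, detect_fn, skip_frames=1):
    """Run detect_fn on every (skip_frames + 1)-th frame, reusing the
    previous result for skipped frames.

    frames: iterable of frames; detect_fn: frame -> detections.
    Yields (frame, detections, ran_detector) tuples.
    """
    cached = None
    for i, frame in enumerate(frames):
        if i % (skip_frames + 1) == 0:
            cached = detect_fn(frame)       # fresh inference
            yield frame, cached, True
        else:
            yield frame, cached, False      # reuse cached detections
```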

### Custom Post-processing

```python
from rknn_tools.postprocess import post_process

# Custom post-processing; model_outputs holds the raw output tensors
# returned by RKNN inference
boxes, classes, scores = post_process(
    input_data=model_outputs,
    conf_thresh=0.25,
    nms_thresh=0.45
)
```
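
For reference, a plain NumPy version of greedy NMS (the textbook algorithm; the package's `use_fast_nms` implementation may differ internally):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.45):
    """Greedy non-maximum suppression on xyxy boxes; returns kept indices."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the kept box against the remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop candidates overlapping the kept box above the threshold
        order = order[1:][iou <= iou_thresh]
    return keep
```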

## Contributing

Issues and pull requests are welcome to help improve this project!

## License

[MIT](LICENSE)

            
