<h3 align="center">
Workflow-based Multi-platform AI Deployment Tool
</h3>
## Features
nndeploy is a workflow-based multi-platform AI deployment tool with the following capabilities:
### 1. Efficiency Tool for AI Deployment
- **Visual Workflow**: Deploy AI algorithms through a drag-and-drop interface
- **Function Calls**: Export workflows as JSON configuration files that can be invoked through the Python/C++ APIs
- **Multi-platform Inference**: One workflow, multi-platform deployment. Zero-abstraction-cost integration with 13 mainstream inference frameworks, covering cloud, desktop, mobile, and edge platforms
| Framework | Support Status |
| :------- | :------ |
| [PyTorch](https://pytorch.org/) | ✅ |
| [TensorRT](https://github.com/NVIDIA/TensorRT) | ✅ |
| [OpenVINO](https://github.com/openvinotoolkit/openvino) | ✅ |
| [ONNXRuntime](https://github.com/microsoft/onnxruntime) | ✅ |
| [MNN](https://github.com/alibaba/MNN) | ✅ |
| [TNN](https://github.com/Tencent/TNN) | ✅ |
| [ncnn](https://github.com/Tencent/ncnn) | ✅ |
| [CoreML](https://github.com/apple/coremltools) | ✅ |
| [AscendCL](https://www.hiascend.com/zh/) | ✅ |
| [RKNN](https://www.rock-chips.com/a/cn/downloadcenter/BriefDatasheet/index.html) | ✅ |
| [TVM](https://github.com/apache/tvm) | ✅ |
| [SNPE](https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk) | ✅ |
| [Custom Inference Framework](docs/zh_cn/inference/README_INFERENCE.md) | ✅ |
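The export-to-JSON idea above can be sketched in plain Python. Note the workflow schema and node names below are illustrative only, not nndeploy's actual exported format: the point is simply that a node graph serialized as JSON can be loaded and executed programmatically.

```python
import json

# Hypothetical workflow JSON -- field names are illustrative, not nndeploy's schema.
workflow_json = """
{
  "nodes": [
    {"name": "decode", "type": "Decode"},
    {"name": "infer",  "type": "Infer"},
    {"name": "post",   "type": "Postprocess"}
  ],
  "edges": [
    {"from": "decode", "to": "infer"},
    {"from": "infer",  "to": "post"}
  ]
}
"""

# Plain Python callables standing in for real processing nodes.
NODE_IMPLS = {
    "Decode": lambda x: x + ["decoded"],
    "Infer": lambda x: x + ["inferred"],
    "Postprocess": lambda x: x + ["postprocessed"],
}

def run_workflow(config: dict, payload):
    """Run nodes in the order implied by the edges (a simple linear chain here)."""
    successors = {e["from"]: e["to"] for e in config["edges"]}
    by_name = {n["name"]: n for n in config["nodes"]}
    # The entry node is the one that never appears as an edge target.
    targets = {e["to"] for e in config["edges"]}
    current = next(n["name"] for n in config["nodes"] if n["name"] not in targets)
    while current is not None:
        payload = NODE_IMPLS[by_name[current]["type"]](payload)
        current = successors.get(current)
    return payload

config = json.loads(workflow_json)
result = run_workflow(config, [])
print(result)  # ['decoded', 'inferred', 'postprocessed']
```

The real tool drives exported workflows through its own Python/C++ runtime rather than a hand-rolled interpreter like this one.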
### 2. Performance Tool for AI Deployment
- **Parallel Optimization**: Support for serial, pipeline-parallel, and task-parallel execution modes
- **Memory Optimization**: Zero-copy, memory pools, memory reuse, and other optimization strategies
- **High-Performance Optimization**: Built-in nodes optimized with C++/CUDA/SIMD implementations
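Pipeline parallelism, one of the execution modes listed above, can be illustrated with a toy two-stage pipeline: each stage runs in its own thread connected by queues, so stage 1 can start on the next item while stage 2 is still processing the previous one. This is only a sketch of the concept; nndeploy implements its execution modes in optimized C++.

```python
import queue
import threading

def stage(fn, inbox, outbox):
    """Worker loop for one pipeline stage; a None item shuts the stage down."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)  # forward the sentinel to the next stage
            return
        outbox.put(fn(item))

# Wire two stages together with FIFO queues (order is preserved end to end).
q_in, q_mid, q_out = queue.Queue(), queue.Queue(), queue.Queue()
t1 = threading.Thread(target=stage, args=(lambda x: x * 2, q_in, q_mid))
t2 = threading.Thread(target=stage, args=(lambda x: x + 1, q_mid, q_out))
t1.start()
t2.start()

for item in [1, 2, 3]:
    q_in.put(item)
q_in.put(None)  # signal end of input

results = []
while (item := q_out.get()) is not None:
    results.append(item)
t1.join()
t2.join()
print(results)  # [3, 5, 7]
```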
### 3. Creative Tool for AI Deployment
- **Custom Nodes**: Support for Python/C++ custom nodes that integrate seamlessly into the visual interface without any frontend code
- **Algorithm Composition**: Flexible combination of different algorithms to rapidly build innovative AI applications
- **What You Tune Is What You See**: Visually adjust all node parameters of a deployed AI algorithm in the frontend and quickly preview the effect of each change
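The custom-node and composition ideas above follow a common pattern: a node declares tunable parameters and a `run()` method, and a graph chains nodes together. The class and method names below are illustrative, not nndeploy's actual Python API.

```python
class Node:
    """Illustrative base class; nndeploy's real node API may differ."""
    def __init__(self, **params):
        self.params = params  # parameters a frontend could expose for tuning

    def run(self, data):
        raise NotImplementedError

class Threshold(Node):
    """Keep only values at or above a threshold."""
    def run(self, data):
        t = self.params.get("threshold", 0)
        return [x for x in data if x >= t]

class Scale(Node):
    """Multiply every value by a factor."""
    def run(self, data):
        k = self.params.get("factor", 1)
        return [x * k for x in data]

class Graph:
    """Run nodes sequentially, feeding each node's output to the next."""
    def __init__(self, *nodes):
        self.nodes = nodes

    def run(self, data):
        for node in self.nodes:
            data = node.run(data)
        return data

# Compose two nodes and run the resulting mini-workflow.
g = Graph(Threshold(threshold=3), Scale(factor=10))
print(g.run([1, 4, 9]))  # [40, 90]
```

Because parameters live in a plain dict, a frontend can edit them and re-run the graph, which is the "what you tune is what you see" loop in miniature.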
## More information
Everything else can be found on the nndeploy GitHub page: [nndeploy](https://github.com/nndeploy/nndeploy)
## Raw data
{
"_id": null,
"home_page": "https://github.com/nndeploy/nndeploy",
"name": "nndeploy",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "deep-learning, neural-network, model-deployment, inference, ai",
"author": "nndeploy team",
"author_email": "595961667@qq.com",
"download_url": null,
"platform": null,
"description": "\n<h3 align=\"center\">\nWorkflow-based Multi-platform AI Deployment Tool\n</h3>\n\n## Features\n\nnndeploy is a workflow-based multi-platform AI deployment tool with the following capabilities:\n\n### 1. Efficiency Tool for AI Deployment\n\n- **Visual Workflow**: Deploy AI algorithms through drag-and-drop interface\n\n- **Function Calls**: Export workflows as JSON configuration files, supporting Python/C++ API calls\n\n- **Multi-platform Inference**: One workflow, multi-platform deployment. Zero-abstraction cost integration with 13 mainstream inference frameworks, covering cloud, desktop, mobile, and edge platforms\n\n | Framework | Support Status |\n | :------- | :------ |\n | [PyTorch](https://pytorch.org/) | \u2705 |\n | [TensorRT](https://github.com/NVIDIA/TensorRT) | \u2705 |\n | [OpenVINO](https://github.com/openvinotoolkit/openvino) | \u2705 |\n | [ONNXRuntime](https://github.com/microsoft/onnxruntime) | \u2705 |\n | [MNN](https://github.com/alibaba/MNN) | \u2705 |\n | [TNN](https://github.com/Tencent/TNN) | \u2705 |\n | [ncnn](https://github.com/Tencent/ncnn) | \u2705 |\n | [CoreML](https://github.com/apple/coremltools) | \u2705 |\n | [AscendCL](https://www.hiascend.com/zh/) | \u2705 |\n | [RKNN](https://www.rock-chips.com/a/cn/downloadcenter/BriefDatasheet/index.html) | \u2705 |\n | [TVM](https://github.com/apache/tvm) | \u2705 |\n | [SNPE](https://developer.qualcomm.com/software/qualcomm-neural-processing-sdk) | \u2705 |\n | [Custom Inference Framework](docs/zh_cn/inference/README_INFERENCE.md) | \u2705 |\n\n### 2. Performance Tool for AI Deployment\n\n- **Parallel Optimization**: Support for serial, pipeline parallel, and task parallel execution modes\n\n- **Memory Optimization**: Zero-copy, memory pools, memory reuse and other optimization strategies\n \n- **High-Performance Optimization**: Built-in nodes optimized with C++/CUDA/SIMD implementations\n\n### 3. 
Creative Tool for AI Deployment\n\n- **Custom Nodes**: Support Python/C++ custom nodes with seamless integration into visual interface without frontend code\n\n- **Algorithm Composition**: Flexible combination of different algorithms to rapidly build innovative AI applications\n\n- **What You Tune Is What You See**: Frontend visual adjustment of all node parameters in AI algorithm deployment with quick preview of post-tuning effects\n\n## More infomation\nYou can get everything in nndeploy github main page : [nndeploy](https://github.com/nndeploy/nndeploy)\n",
"bugtrack_url": null,
"license": "Apache License 2.0",
"summary": "Workflow-based Multi-platform AI Deployment Tool",
"version": "0.2.2",
"project_urls": {
"Bug Reports": "https://github.com/nndeploy/nndeploy/issues",
"Homepage": "https://github.com/nndeploy/nndeploy",
"Source": "https://github.com/nndeploy/nndeploy"
},
"split_keywords": [
"deep-learning",
" neural-network",
" model-deployment",
" inference",
" ai"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "27d0a708ef7d713119475d59791e2db2bfb9c143dba47913347d6112331a38d3",
"md5": "42d34b7b78bf22403003d754c2cd0075",
"sha256": "ef061fb2d8efbd2e30643dcea25ec33294ee30522a51c1a068d2b93e0138bdb6"
},
"downloads": -1,
"filename": "nndeploy-0.2.2-cp313-cp313-macosx_13_0_arm64.whl",
"has_sig": false,
"md5_digest": "42d34b7b78bf22403003d754c2cd0075",
"packagetype": "bdist_wheel",
"python_version": "cp313",
"requires_python": ">=3.10",
"size": 40085045,
"upload_time": "2025-08-01T03:37:02",
"upload_time_iso_8601": "2025-08-01T03:37:02.058058Z",
"url": "https://files.pythonhosted.org/packages/27/d0/a708ef7d713119475d59791e2db2bfb9c143dba47913347d6112331a38d3/nndeploy-0.2.2-cp313-cp313-macosx_13_0_arm64.whl",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-01 03:37:02",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "nndeploy",
"github_project": "nndeploy",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [
{
"name": "cython",
"specs": []
},
{
"name": "packaging",
"specs": []
},
{
"name": "setuptools",
"specs": [
[
"<=",
"68.0.0"
]
]
},
{
"name": "gitpython",
"specs": [
[
">=",
"3.1.30"
]
]
},
{
"name": "aiofiles",
"specs": [
[
">=",
"24.1.0"
]
]
},
{
"name": "PyYAML",
"specs": [
[
">=",
"5.3.1"
]
]
},
{
"name": "pytest",
"specs": []
},
{
"name": "jsonschema",
"specs": []
},
{
"name": "multiprocess",
"specs": []
},
{
"name": "numpy",
"specs": []
},
{
"name": "opencv-python",
"specs": [
[
">=",
"4.8.0"
]
]
},
{
"name": "onnxruntime",
"specs": [
[
">=",
"1.18.0"
]
]
},
{
"name": "torch",
"specs": [
[
">=",
"2.0.0"
]
]
},
{
"name": "requests",
"specs": [
[
">=",
"2.31.0"
]
]
},
{
"name": "fastapi",
"specs": [
[
">=",
"0.104.0"
]
]
},
{
"name": "uvicorn",
"specs": [
[
">=",
"0.24.0"
]
]
},
{
"name": "websockets",
"specs": [
[
">=",
"11.0"
]
]
},
{
"name": "python-multipart",
"specs": [
[
">=",
"0.0.6"
]
]
},
{
"name": "pydantic",
"specs": [
[
">=",
"2.0.0"
]
]
}
],
"lcname": "nndeploy"
}