qai-hub-models

Name: qai-hub-models
Version: 0.23.1
Home page: https://github.com/quic/ai-hub-models
Summary: Models optimized for export to run on device.
Upload time: 2025-02-14 23:39:21
Author: Qualcomm® Technologies, Inc.
Requires Python: <3.13,>=3.9
License: BSD-3
            [![Qualcomm® AI Hub Models](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/quic-logo.jpg)](https://aihub.qualcomm.com)

# [Qualcomm® AI Hub Models](https://aihub.qualcomm.com/)

[![Release](https://img.shields.io/github/v/release/quic/ai-hub-models)](https://github.com/quic/ai-hub-models/releases/latest)
[![Tag](https://img.shields.io/github/v/tag/quic/ai-hub-models)](https://github.com/quic/ai-hub-models/releases/latest)
[![PyPi](https://img.shields.io/pypi/v/qai-hub-models)](https://pypi.org/project/qai-hub-models/)
![Python 3.9, 3.10, 3.11, 3.12](https://img.shields.io/badge/python-3.9%2C%203.10%20(Recommended)%2C%203.11%2C%203.12-yellow)

The Qualcomm® AI Hub Models are a collection of
state-of-the-art machine learning models optimized for deployment on Qualcomm® devices.

* [List of Models by Category](#model-directory)
* [On-Device Performance Data](https://aihub.qualcomm.com/models)
* [Device-Native Sample Apps](https://github.com/quic/ai-hub-apps)

See supported: [On-Device Runtimes](#on-device-runtimes), [Hardware Targets & Precision](#device-hardware--precision), [Chipsets](#chipsets), [Devices](#devices)

&nbsp;

## Setup

### 1. Install Python Package

The package is available via pip:

```shell
# NOTE for Snapdragon X Elite users:
# Only AMD64 (64-bit) Python is supported on Windows.
# Installation will fail when using Windows ARM64 Python.

pip install qai_hub_models
```

Some models (e.g. [YOLOv7](https://github.com/quic/ai-hub-models/tree/main/qai_hub_models/models/yolov7)) require
additional dependencies that can be installed as follows:

```shell
pip install "qai_hub_models[yolov7]"
```

&nbsp;

### 2. Configure AI Hub Access

Many features of AI Hub Models _(such as model compilation, on-device profiling, etc.)_ require access to Qualcomm® AI Hub:

-  [Create a Qualcomm® ID](https://myaccount.qualcomm.com/signup), and use it to [login to Qualcomm® AI Hub](https://app.aihub.qualcomm.com/).
-  Configure your [API token](https://app.aihub.qualcomm.com/account/): `qai-hub configure --api_token API_TOKEN`

&nbsp;

## Getting Started

### Export and Run a Model on a Physical Device

All [models in our directory](#model-directory) can be compiled and profiled on a hosted
Qualcomm® device:

```shell
pip install "qai_hub_models[yolov7]"

python -m qai_hub_models.models.yolov7.export [--target-runtime ...] [--device ...] [--help]
```

_Using Qualcomm® AI Hub_, the export script will:

1. **Compile** the model for the chosen device and target runtime (see: [Compiling Models on AI Hub](https://app.aihub.qualcomm.com/docs/hub/compile_examples.html)).
2. If applicable, **quantize** the model (see: [Quantization on AI Hub](https://app.aihub.qualcomm.com/docs/hub/quantize_examples.html)).
3. **Profile** the compiled model on a real device in the cloud (see: [Profiling Models on AI Hub](https://app.aihub.qualcomm.com/docs/hub/profile_examples.html)).
4. **Run inference** with sample input data on a real device in the cloud, and compare the on-device model output with the PyTorch output (see: [Running Inference on AI Hub](https://app.aihub.qualcomm.com/docs/hub/inference_examples.html)).
5. **Download** the compiled model to disk.

&nbsp;

### End-To-End Model Demos

Most [models in our directory](#model-directory) contain CLI demos that run the model _end-to-end_:

```shell
pip install "qai_hub_models[yolov7]"
# Predict and draw bounding boxes on the provided image
python -m qai_hub_models.models.yolov7.demo [--image ...] [--on-device] [--help]
```

_End-to-end_ demos:
1. **Preprocess** human-readable input into model input
2. Run **model inference**
3. **Postprocess** model output to a human-readable format

**Many end-to-end demos use AI Hub to run inference on a real cloud-hosted device** _(if the `--on-device` flag is set)_. All end-to-end demos also run locally via PyTorch.
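The three demo stages can be sketched in plain numpy. This is a generic illustration of the preprocess → inference → postprocess pattern, not the actual qai-hub-models pipeline; the resize logic, tensor layout, and threshold below are illustrative assumptions:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 640) -> np.ndarray:
    """Convert an HWC uint8 image into a normalized NCHW float tensor."""
    # Nearest-neighbor resize via index sampling keeps this sketch dependency-free.
    h, w, _ = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]
    # Scale to [0, 1] and reorder to NCHW, the layout most detection models expect.
    tensor = resized.astype(np.float32) / 255.0
    return tensor.transpose(2, 0, 1)[np.newaxis, ...]

def postprocess(raw_boxes: np.ndarray, scores: np.ndarray, threshold: float = 0.5):
    """Keep only detections whose confidence clears the threshold."""
    keep = scores >= threshold
    return raw_boxes[keep], scores[keep]

image = np.zeros((480, 640, 3), dtype=np.uint8)       # stand-in for a loaded photo
model_input = preprocess(image)                       # 1. preprocess
# 2. model inference would run here (locally via PyTorch, or on-device via AI Hub)
boxes = np.array([[10, 10, 50, 50], [0, 0, 5, 5]], dtype=np.float32)
scores = np.array([0.9, 0.2], dtype=np.float32)
kept_boxes, kept_scores = postprocess(boxes, scores)  # 3. postprocess
```

In the real demos, step 2 is the only part that changes between local and `--on-device` execution; the pre- and post-processing stay the same.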

&nbsp;

### Sample Applications

**Native** applications that can run our models (with pre- and post-processing) on physical devices are published in the [AI Hub Apps repository](https://github.com/quic/ai-hub-apps/).

**Python** applications are defined for all models [(from qai_hub_models.models.\<model_name> import App)](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/yolov7/app.py). These apps wrap model inference with pre- and post-processing steps written using torch & numpy. **These apps are optimized to be easy-to-follow examples, rather than to minimize prediction time.**

&nbsp;

## Model Support Data

### On-Device Runtimes

| Runtime | Supported OS |
| -- | -- |
| [Qualcomm AI Engine Direct](https://www.qualcomm.com/developer/artificial-intelligence#overview) | Android, Linux, Windows |
| [LiteRT (TensorFlow Lite)](https://www.tensorflow.org/lite) | Android, Linux |
| [ONNX](https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html) | Android, Linux, Windows |

### Device Hardware & Precision

| Device Compute Unit | Supported Precision |
| -- | -- |
| CPU | FP32, INT16, INT8 |
| GPU | FP32, FP16 |
| NPU (includes [Hexagon DSP](https://developer.qualcomm.com/software/hexagon-dsp-sdk/dsp-processor), [HTP](https://developer.qualcomm.com/hardware/qualcomm-innovators-development-kit/ai-resources-overview/ai-hardware-cores-accelerators)) | FP16\*, INT16, INT8 |

\*Some older chipsets do not support FP16 inference on their NPU.
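To see what INT8 support implies in practice, here is a minimal sketch of affine (scale/zero-point) quantization, the scheme commonly used to map FP32 tensors into INT8. This is a generic illustration, not AI Hub's actual quantizer; the per-tensor range calibration below is an assumption for the example:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Affine-quantize a float32 tensor to int8 with a per-tensor scale/zero-point."""
    lo, hi = float(x.min()), float(x.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # the representable range must include 0
    scale = (hi - lo) / 255.0 or 1.0      # int8 spans 256 levels (-128..127)
    zero_point = int(round(-128 - lo / scale))
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, scale, zp = quantize_int8(x)
x_hat = dequantize(q, scale, zp)
max_err = float(np.abs(x - x_hat).max())  # rounding error is bounded by scale / 2
```

The trade-off this sketch makes visible: INT8 shrinks each weight or activation to one byte at the cost of a quantization error of at most half a step, which is why calibration of the range matters.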

### Chipsets
* Snapdragon [8 Elite](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-elite-mobile-platform), [8 Gen 3](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-3-mobile-platform), [8 Gen 2](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-2-mobile-platform), and [8 Gen 1](https://www.qualcomm.com/products/mobile/snapdragon/smartphones/snapdragon-8-series-mobile-platforms/snapdragon-8-gen-1-mobile-platform) Mobile Platforms
* [Snapdragon X Elite](https://www.qualcomm.com/products/mobile/snapdragon/pcs-and-tablets/snapdragon-x-elite) Compute Platform
* SA8255P, SA8295P, SA8650P, and SA8775P Automotive Platforms
* [QCS 6490](https://www.qualcomm.com/products/internet-of-things/industrial/building-enterprise/qcs6490), [QCS 8250](https://www.qualcomm.com/products/internet-of-things/consumer/cameras/qcs8250), and [QCS 8550](https://www.qualcomm.com/products/technology/processors/qcs8550) IoT Platforms
* QCS8450 XR Platform

and many more.

### Devices
* Samsung Galaxy S21, S22, S23, and S24 Series
* Xiaomi 12 and 13
* Snapdragon X Elite CRD (Compute Reference Device)
* Qualcomm RB3 Gen 2, RB5

and many more.

&nbsp;

## Model Directory
{PUBLIC_MODEL_TABLE}

## Need help?
Slack: https://aihub.qualcomm.com/community/slack

GitHub Issues: https://github.com/quic/ai-hub-models/issues

Email: ai-hub-support@qti.qualcomm.com

## LICENSE

Qualcomm® AI Hub Models is licensed under BSD-3. See the [LICENSE file](LICENSE).
