# mindspore-lite

- Name: mindspore-lite
- Version: 2.0.0
- Download: https://github.com/mindspore-ai/mindspore/tags
- Homepage: https://www.mindspore.cn
- Summary: MindSpore is an open-source deep learning training/inference framework for mobile, edge, and cloud scenarios.
- Upload time: 2023-06-16 03:06:00
- Author: The MindSpore Authors
- Requires Python: >=3.7
- License: Apache 2.0
- Keywords: mindspore, lite
- Requirements: none recorded
[View the Chinese README](./README_CN.md)

## What Is MindSpore Lite

MindSpore Lite is a high-performance, lightweight, open-source inference framework for AI applications on mobile devices. It focuses on deploying AI technology effectively on end devices, and has been integrated into HMS (Huawei Mobile Services) to power inference for applications such as image classification, object detection, and OCR. MindSpore Lite aims to promote the development and enrichment of the AI software/hardware application ecosystem.

<img src="../../docs/MindSpore-Lite-architecture.png" alt="MindSpore Lite Architecture" width="600"/>

For more details please check out our [MindSpore Lite Architecture Guide](https://www.mindspore.cn/lite/docs/en/master/architecture_lite.html).

### MindSpore Lite features

1. Cooperative work with MindSpore training
   - Provides training, optimization, and deployment in one workflow.
   - A unified IR enables integrated device-cloud AI applications.

2. Lightweight
   - Provides model compression, which reduces model size and can also improve performance.
   - Provides MindSpore Micro, an ultra-lightweight inference solution for extreme environments such as smart watches and headphones.

3. High-performance
   - The built-in high-performance kernel library NNACL supports multiple convolution optimization algorithms, such as sliding window, im2col+GEMM, and Winograd.
   - Hand-written assembly code improves the performance of kernel operators. CPU, GPU, and NPU backends are supported.

4. Versatility
   - Supports iOS and Android.
   - Supports LiteOS.
   - Supports mobile devices, smart screens, tablets, and IoT devices.
   - Supports third-party models such as TensorFlow Lite, Caffe, and ONNX.

## MindSpore Lite AI deployment procedure

1. Model selection and personalized training

   Select a new model or incrementally train an existing model with labeled data. When designing a model for mobile devices, you need to balance model size, accuracy, and computational cost.

   The MindSpore team provides a set of pre-trained models for image classification and object detection that you can use in your application.

   Pre-trained models provided by MindSpore: [Image Classification](https://download.mindspore.cn/model_zoo/official/lite/). More models will be provided in the future.

   MindSpore also allows you to retrain pre-trained models to perform other tasks (see the fine-tuning sketch after this procedure).

2. Model converter and optimization

   If you use a MindSpore model or a third-party model, use the [MindSpore Lite Model Converter Tool](https://www.mindspore.cn/lite/docs/en/master/use/converter_tool.html) to convert it into the MindSpore Lite format. The converter accepts TensorFlow Lite, Caffe, and ONNX models, and fusion and quantization optimizations can be applied during conversion (see the conversion sketch after this procedure).

   MindSpore also provides a tool for converting models that run on IoT devices.

3. Model deployment

   This stage covers model deployment in production, including model management, deployment, and operations and maintenance (O&M) monitoring.

4. Inference

   Load the model and perform inference. [Inference](https://www.mindspore.cn/lite/docs/en/master/use/runtime.html) is the process of running input data through the model to obtain output (see the inference sketch after this procedure).

   MindSpore provides [examples](https://www.mindspore.cn/lite/examples/en) of deploying pre-trained models on mobile devices.
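
To make step 1 concrete, here is a minimal fine-tuning sketch in MindSpore. It is illustrative only: `TinyClassifier`, the checkpoint path, and the dataset are hypothetical placeholders, not part of the MindSpore model zoo.

```python
import mindspore as ms
import mindspore.nn as nn

# Hypothetical sketch: load pretrained weights into a network and retrain
# it on labeled data. TinyClassifier and "pretrained.ckpt" are placeholders.
class TinyClassifier(nn.Cell):
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.SequentialCell(nn.Dense(224, 64), nn.ReLU())
        self.head = nn.Dense(64, num_classes)

    def construct(self, x):
        return self.head(self.backbone(x))

net = TinyClassifier()
params = ms.load_checkpoint("pretrained.ckpt")          # placeholder checkpoint
ms.load_param_into_net(net, params, strict_load=False)  # tolerate missing keys

loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
model = ms.Model(net, loss_fn=loss, optimizer=opt, metrics={"accuracy"})
# model.train(5, train_dataset)  # train_dataset: your labeled mindspore dataset
```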
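
For step 2, here is a minimal conversion sketch using this package's Python API. It assumes the 2.0-era `mindspore_lite.Converter` interface (parameter names have changed across releases), and the TensorFlow Lite model path is a placeholder:

```python
import mindspore_lite as mslite

# Sketch: convert a TensorFlow Lite model to the MindSpore Lite format (.ms).
# "mobilenet_v2.tflite" is a placeholder path; the Converter signature below
# follows the 2.0-era Python API and may differ in other releases.
converter = mslite.Converter(fmk_type=mslite.FmkType.TFLITE,
                             model_file="mobilenet_v2.tflite",
                             output_file="mobilenet_v2")
converter.converter()  # writes mobilenet_v2.ms
```

The same conversion can also be performed with the `converter_lite` command-line tool described in the converter documentation linked above.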
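
For step 4, a matching inference sketch, again assuming the 2.x Python runtime API (the context/device setup and the `predict` signature vary between releases); the model path and input shape are placeholders:

```python
import numpy as np
import mindspore_lite as mslite

# Sketch: load a converted .ms model and run one inference on the CPU.
# Assumes the 2.x Python runtime API; Context/device setup and predict()
# differ between releases.
context = mslite.Context(thread_num=4)
context.append_device_info(mslite.CPUDeviceInfo())  # run on CPU

model = mslite.Model()
model.build_from_file("mobilenet_v2.ms", mslite.ModelType.MINDIR_LITE, context)

inputs = model.get_inputs()
inputs[0].set_data_from_numpy(
    np.zeros((1, 224, 224, 3), dtype=np.float32))   # placeholder input

outputs = model.predict(inputs)  # some releases use predict(inputs, outputs)
print(outputs[0].get_data_to_numpy().shape)
```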

## MindSpore Lite benchmark test result

We benchmarked several networks on a HUAWEI Mate 40 (HiSilicon Kirin 9000E) phone; the results below are provided for reference.

| Network             | Thread Number | Average Run Time (ms) |
| ------------------- | ------------- | --------------------- |
| basic_squeezenet    | 4             | 6.415                |
| inception_v3        | 4             | 36.767               |
| mobilenet_v1_10_224 | 4             | 4.936                |
| mobilenet_v2_10_224 | 4             | 3.644                |
| resnet_v2_50        | 4             | 25.071               |
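
As a rough cross-check, the average run time per inference can be approximated from Python with a simple timing loop. This sketch assumes `model` and `inputs` are set up as in the inference sketch above; the warm-up and loop counts are arbitrary:

```python
import time

# Sketch: approximate "Average Run Time (ms)" for a loaded model.
# Assumes `model` and `inputs` from the inference sketch above.
for _ in range(3):              # warm-up iterations
    model.predict(inputs)

loops = 100
start = time.perf_counter()
for _ in range(loops):
    model.predict(inputs)
elapsed_ms = (time.perf_counter() - start) * 1000 / loops
print(f"average run time: {elapsed_ms:.3f} ms")
```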



            
