# mlia

* Name: mlia
* Version: 0.8.0
* Summary: ML Inference Advisor
* Home page: <https://git.mlplatform.org/ml/mlia.git>
* Author: Arm Ltd
* Requires Python: >=3.9.0
* License: Apache License 2.0
* Keywords: ml, arm, ethos-u, tflite
* Upload time: 2024-02-28 15:37:37

            <!---
SPDX-FileCopyrightText: Copyright 2022-2023, Arm Limited and/or its affiliates.
SPDX-License-Identifier: Apache-2.0
--->
# ML Inference Advisor - Introduction

The ML Inference Advisor (MLIA) helps AI developers design and optimize
neural network models for efficient inference on Arm® targets (see
[supported targets](#target-profiles)). MLIA provides insights into how an
ML model will perform on Arm hardware early in the model development cycle.
By passing a model file and specifying an Arm hardware target, users get an
overview of possible areas of improvement and actionable advice. The advice
can cover operator compatibility, performance analysis and model
optimization (e.g. pruning and clustering). With the ML Inference Advisor,
we aim to make Arm ML IP accessible to developers at all levels of
abstraction, with differing levels of knowledge of hardware optimization
and machine learning.

## Inclusive language commitment

This product conforms to Arm's inclusive language policy and, to the best of
our knowledge, does not contain any non-inclusive language.

If you find something that concerns you, email terms@arm.com.

## Releases

Release notes can be found in [MLIA releases](https://review.mlplatform.org/plugins/gitiles/ml/mlia/+/refs/tags/0.8.0/RELEASES.md).

## Getting support

If you need support, want to report an issue, give us feedback or simply
ask a question about MLIA, please send an email to mlia@arm.com.

Alternatively, use the
[AI and ML forum](https://community.arm.com/support-forums/f/ai-and-ml-forum)
to get support by marking your post with the **MLIA** tag.

## Reporting vulnerabilities

Information on reporting security issues can be found in
[Reporting vulnerabilities](https://review.mlplatform.org/plugins/gitiles/ml/mlia/+/refs/tags/0.8.0/SECURITY.md).

## License

ML Inference Advisor is licensed under [Apache License 2.0](https://review.mlplatform.org/plugins/gitiles/ml/mlia/+/refs/tags/0.8.0/LICENSES/Apache-2.0.txt).

## Trademarks and copyrights

* Arm®, Arm® Ethos™-U, Arm® Cortex®-A, Arm® Cortex®-M, Arm® Corstone™ are
  registered trademarks or trademarks of Arm® Limited (or its subsidiaries) in
  the U.S. and/or elsewhere.
* TensorFlow™ is a trademark of Google® LLC.
* Keras™ is a trademark by François Chollet.
* Linux® is the registered trademark of Linus Torvalds in the U.S. and
  elsewhere.
* Python® is a registered trademark of the PSF.
* Ubuntu® is a registered trademark of Canonical.
* Microsoft and Windows are trademarks of the Microsoft group of companies.

# General usage

## Prerequisites and dependencies

It is recommended to use a virtual environment for MLIA installation, and a
typical setup requires:

* Ubuntu® 20.04.03 LTS (other operating systems may work; the ML Inference
  Advisor has been tested on this one specifically)
* Python® >= 3.9.0
* Ethos™-U Vela dependencies (Linux® only)
   * For more details, please refer to the
     [prerequisites of Vela](https://pypi.org/project/ethos-u-vela/)

## Installation

MLIA can be installed with `pip` using the following command:

```bash
pip install mlia
```

It is highly recommended to create a new virtual environment for the installation.
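
For example, a fresh virtual environment can be created with Python's
built-in `venv` module and activated before installing (the environment
path used here is arbitrary):

```bash
# Create and activate a new virtual environment, then install MLIA into it
python3 -m venv ~/mlia-venv
source ~/mlia-venv/bin/activate
pip install mlia
```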

## First steps

After the installation, you can check that MLIA is installed correctly by
opening your terminal, activating the virtual environment and typing the
following command that should print the help text:

```bash
mlia --help
```

The ML Inference Advisor works with sub-commands; in general, a command
looks like this:

```bash
mlia [sub-command] [arguments]
```

The following sub-commands are available:

* ["check"](#check): perform compatibility or performance checks on the model
* ["optimize"](#optimize): apply specified optimizations

Detailed help about the different sub-commands can be shown like this:

```bash
mlia [sub-command] --help
```

The following sections go into further detail regarding the usage of MLIA.

# Sub-commands

This section gives an overview of the available sub-commands for MLIA.

## **check**

### compatibility

Lists the model's operators with information about their compatibility with
the specified target.

*Examples:*

```bash
# List operator compatibility with Ethos-U55 with 256 MAC
mlia check ~/models/mobilenet_v1_1.0_224_quant.tflite --target-profile ethos-u55-256

# List operator compatibility with Cortex-A
mlia check ~/models/mobilenet_v1_1.0_224_quant.tflite --target-profile cortex-a

# Get help and further information
mlia check --help
```

### performance

Estimates the model's performance on the specified target and prints out
statistics.

*Examples:*

```bash
# Use default parameters
mlia check ~/models/mobilenet_v1_1.0_224_quant.tflite \
    --target-profile ethos-u55-256 \
    --performance

# Explicitly specify the target profile and backend(s) to use
# with --backend option
mlia check ~/models/ds_cnn_large_fully_quantized_int8.tflite \
    --target-profile ethos-u65-512 \
    --performance \
    --backend "vela" \
    --backend "corstone-300"

# Get help and further information
mlia check --help
```

## **optimize**

This sub-command applies optimizations to a Keras model (.h5 or SavedModel) or
a TensorFlow Lite model and shows the performance improvements compared to
the original unoptimized model.

There are currently three optimization techniques available:

* **pruning**: Sets insignificant model weights to zero until the specified
    sparsity is reached.
* **clustering**: Groups the weights into the specified number of clusters and
    then replaces the weight values with the cluster centroids.
* **rewrite**: Replaces certain subgraphs/layers of the pre-trained model with
    candidates from the rewrite library, with or without training using a
    small portion of the training data, to achieve local performance gains.

More information about pruning and clustering can be found in the
[TensorFlow model optimization guides](https://www.tensorflow.org/model_optimization/guide).

**Note:** A ***Keras model*** (.h5 or SavedModel) is required as input to
perform pruning and clustering. A ***TensorFlow Lite model*** is required as input
to perform a rewrite.

*Examples:*

```bash
# Custom optimization parameters: pruning=0.6, clustering=16
mlia optimize ~/models/ds_cnn_l.h5 \
    --target-profile ethos-u55-256 \
    --pruning \
    --pruning-target 0.6 \
    --clustering \
    --clustering-target 16

# Get help and further information
mlia optimize --help

# An example for using rewrite
mlia optimize ~/models/ds_cnn_large_fp32.tflite \
    --target-profile ethos-u55-256 \
    --rewrite \
    --dataset input.tfrec \
    --rewrite-target fully_connected \
    --rewrite-start MobileNet/avg_pool/AvgPool \
    --rewrite-end MobileNet/fc1/BiasAdd
```

# Target profiles

The currently supported targets are described in the sections below.
All sub-commands require a target profile as an input parameter.
The target profile can be either the name of a built-in target profile
or the path to a custom profile file. MLIA saves the target profile that
was used for a run in the output directory.

Support for the above sub-commands on different targets is provided via
backends that need to be installed separately; see the
[Backend installation](#backend-installation) section.

## Ethos-U

There are a number of predefined profiles for Ethos-U with the following
attributes:

| Profile name  | MAC | System config               | Memory mode    |
|---------------|-----|-----------------------------|----------------|
| ethos-u55-256 | 256 | Ethos_U55_High_End_Embedded | Shared_Sram    |
| ethos-u55-128 | 128 | Ethos_U55_High_End_Embedded | Shared_Sram    |
| ethos-u65-512 | 512 | Ethos_U65_High_End          | Dedicated_Sram |
| ethos-u65-256 | 256 | Ethos_U65_High_End          | Dedicated_Sram |

Example:

```bash
mlia check ~/model.tflite --target-profile ethos-u65-512 --performance
```

Ethos-U is supported by these backends:

* [Corstone-300](#corstone-300)
* [Corstone-310](#corstone-310)
* [Vela](#vela)

## Cortex-A

The profile *cortex-a* can be used to get information about supported
operators for Cortex-A CPUs when using the Arm NN TensorFlow Lite Delegate.
More details can be found in the section on the
[corresponding backend](#arm-nn-tensorflow-lite-delegate).

## TOSA

The target profile *tosa* can be used for TOSA compatibility checks of your
model. It requires the [TOSA Checker](#tosa-checker) backend.
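
Once the TOSA Checker backend is installed, a compatibility check follows
the same pattern as the other `check` examples:

```bash
# Check TOSA compatibility of a model
mlia check ~/models/mobilenet_v1_1.0_224_quant.tflite --target-profile tosa
```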

For more information, see TOSA Checker's:

* [repository](https://review.mlplatform.org/plugins/gitiles/tosa/tosa_checker/+/refs/heads/main)
* [pypi.org page](https://pypi.org/project/tosa-checker/)

## Custom target profiles

For _custom target profiles_, the configuration file is passed as a path
and needs to conform to the TOML file format. Each target in MLIA has a
pre-defined set of parameters which need to be present in the config file.
The built-in target profiles (in `src/mlia/resources/target_profiles`)
can be used to understand which parameters apply to each target.

*Example:*

```bash
# Example usage of a custom target profile
mlia check sample_model.tflite --target-profile ~/my_custom_profile.toml
```
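
As a starting point, one option is to write a small profile modelled on the
built-in ones. The sketch below is illustrative only: the exact parameter
keys are an assumption here, so consult the built-in profiles under
`src/mlia/resources/target_profiles` for the authoritative set of
parameters for your target.

```bash
# A minimal sketch of a custom Ethos-U55 style profile. The keys below are
# assumptions modelled on the attributes in the Ethos-U table above; check
# the built-in profiles for the parameters your target actually requires.
cat > ~/my_custom_profile.toml << 'EOF'
target = "ethos-u55"                           # target name (assumed key)
mac = 128                                      # number of MACs
system_config = "Ethos_U55_High_End_Embedded"  # Vela system configuration
memory_mode = "Shared_Sram"                    # Vela memory mode
EOF

mlia check sample_model.tflite --target-profile ~/my_custom_profile.toml
```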

# Backend installation

The ML Inference Advisor is designed to use backends to provide different
metrics for different target hardware. Some backends come pre-installed,
while others can be added and managed using the command `mlia-backend`,
which provides the following functionality:

* **install**
* **uninstall**
* **list**

*Examples:*

```bash
# List backends installed and available for installation
mlia-backend list

# Install Corstone-300 backend for Ethos-U
mlia-backend install Corstone-300 --path ~/FVP_Corstone_SSE-300/

# Uninstall the Corstone-300 backend
mlia-backend uninstall Corstone-300

# Get help and further information
mlia-backend --help
```

**Note:** Some, but not all, backends can be downloaded automatically if no
path is provided.

## Available backends

This section lists the available backends. Since not all backends work on
every platform, the following table shows compatibility information:

| Backend                          | Linux                  | Windows        | Python           |
|----------------------------------|------------------------|----------------|------------------|
| Arm NN TensorFlow Lite Delegate  | x86_64                 | Windows 10     | Python>=3.8      |
| Corstone-300                     | x86_64                 | Not compatible | Python>=3.8      |
| Corstone-310                     | x86_64                 | Not compatible | Python>=3.8      |
| TOSA Checker                     | x86_64 (manylinux2014) | Not compatible | 3.7<=Python<=3.9 |
| Vela                             | x86_64                 | Windows 10     | Python~=3.7      |

### Arm NN TensorFlow Lite Delegate

This backend provides general information about the compatibility of operators
with the Arm NN TensorFlow Lite Delegate for Cortex-A. It comes pre-installed.

For version 23.05, the classic delegate is used.

For more information see:

* [Arm NN TensorFlow Lite Delegate documentation](https://arm-software.github.io/armnn/latest/delegate.xhtml)

### Corstone-300

Corstone-300 is a backend that provides performance metrics for systems based
on Cortex-M55 and Ethos-U. It is only available on the Linux platform.

*Examples:*

```bash
# Download and install Corstone-300 automatically
mlia-backend install Corstone-300
# Point to a local version of Corstone-300 installed using its installation script
mlia-backend install Corstone-300 --path YOUR_LOCAL_PATH_TO_CORSTONE_300
```

For further information about Corstone-300 please refer to:
<https://developer.arm.com/Processors/Corstone-300>

### Corstone-310

Corstone-310 is a backend that provides performance metrics for systems based
on Cortex-M85 and Ethos-U. It is available as Arm Virtual Hardware (AVH) only,
i.e. it cannot be downloaded automatically.

* For access to AVH for Corstone-310 please refer to:
  <https://developer.arm.com/Processors/Corstone-310>
* For examples of using MLIA with Corstone-310, see:
  <https://github.com/ARM-software/open-iot-sdk>
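
Since Corstone-310 cannot be downloaded automatically, installation uses the
`--path` variant shown for Corstone-300. The backend name below follows the
same convention; it can be verified with `mlia-backend list`:

```bash
# Point to a local Corstone-310 obtained via Arm Virtual Hardware
mlia-backend install Corstone-310 --path YOUR_LOCAL_PATH_TO_CORSTONE_310
```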

### TOSA Checker

The TOSA Checker backend provides operator compatibility checks against the
TOSA specification.

Please install it into the same environment as MLIA using this command:

```bash
mlia-backend install tosa-checker
```

Additional resources:

* Source code: <https://review.mlplatform.org/admin/repos/tosa/tosa_checker>
* PyPI package: <https://pypi.org/project/tosa-checker/>

### Vela

The Vela backend provides performance metrics for Ethos-U based systems. It
comes pre-installed.

Additional resources:

* <https://pypi.org/project/ethos-u-vela/>
