onnx-tool

Name: onnx-tool
Version: 0.9.0
Home page: https://github.com/ThanatosShinji/onnx-tool
Summary: A parser, editor and profiler tool for ONNX models.
Author: Luo Yu
License: MIT
Requires: onnx, numpy, tabulate
Uploaded: 2024-02-28 15:50:09
            <a href="README_CN.md">简体中文</a>
# onnx-tool

**A tool for ONNX model:**

* *[Parse and edit](#basic-parse-edit): [Constant folding](data/ConstantFolding.md); [OPs fusion](data/GraphFusion.md).*
* *[Model profiling](#shapeinfer-profile): Rapid shape inference; MACs statistics*
* *[Compute Graph and Shape Engine](#compute_graph-header).*
* *[Model memory compression](#memory-compression): activation compression and weight compression.*
* *[Quantized models and sparse models](#models) are supported.*

Supported Models:

* NLP: BERT, T5, GPT, LLaMa, MPT([TransformerModel](benchmark/transfomer_models.py))
* Diffusion: Stable Diffusion(TextEncoder, VAE, UNET)
* CV: [BEVFormer](benchmark/compression.py), MobileNet, YOLO, ...
* Audio: sovits, LPCNet

---

## Basic Parse and Edit
<a id="basic-parse-edit"></a>
You can load any ONNX file with onnx_tool.Model;  
change the graph structure with onnx_tool.Graph;  
change op attributes and IO tensors with onnx_tool.Node;  
and change tensor data or type with onnx_tool.Tensor.  
To apply your changes, call the save_model method of onnx_tool.Model or onnx_tool.Graph.

Please refer to [benchmark/examples.py](benchmark/examples.py).
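
A minimal load-edit-save sketch based on the classes above. The constructor and `save_model` call follow the description; the `nodemap` container and the attributes read from it are assumptions about the API and may differ between versions:

```python
import onnx_tool

# 'model.onnx' is a placeholder path.
m = onnx_tool.Model('model.onnx')
g = m.graph                                # onnx_tool.Graph (assumed attribute name)

# Assumed container: a name -> onnx_tool.Node map kept on the Graph.
for name, node in list(g.nodemap.items())[:3]:
    print(name, node.op_type)              # op_type assumed mirrored from the ONNX proto

# Persist edits back to an ONNX file, per the description above.
m.save_model('model_edited.onnx')
```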

---

## Shape Inference & Profile Model
<a id="shapeinfer-profile"></a>
All profiling data is built on shape-inference results.  
ONNX graph with tensor shapes:
<p align="center">  
  <img src="data/shape_inference.jpg">
</p>  
Regular model profiling table:  
<p align="center">
  <img src="data/macs_counting.png">
</p>
<br><br>
Sparse profiling table:
<p id="sparsity" align="center">
  <img src="data/sparse_model.png">
</p>
<br><br>  

Introduction: [data/Profile.md](data/Profile.md).  
PyTorch usage: [data/PytorchUsage.md](data/PytorchUsage.md).  
TensorFlow usage: [data/TensorflowUsage.md](data/TensorflowUsage.md).  
Examples: [benchmark/examples.py](benchmark/examples.py).
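
A profiling sketch under the same assumptions ('resnet50.onnx' and the 'data' input name are placeholders; `shape_infer`, `profile`, and `print_node_map` follow the project's example scripts but may vary between versions):

```python
import numpy
import onnx_tool

m = onnx_tool.Model('resnet50.onnx')        # placeholder path
# Shape inference first: feed a concrete input to resolve dynamic dims.
m.graph.shape_infer({'data': numpy.zeros((1, 3, 224, 224), dtype=numpy.float32)})
m.graph.profile()                            # accumulate per-node MACs and parameters
m.graph.print_node_map()                     # print the profiling table to the console
m.graph.print_node_map('profile.csv')        # or write it to a CSV file
```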

---

## Compute Graph with Shape Engine
<a id="compute_graph-header"></a>
From a raw graph to a compute graph:
<p id="compute_graph" align="center">
  <img src="data/compute_graph.png">
</p>  

Remove the shape-calculation layers (created by ONNX export) to get a *Compute Graph*. Use the *Shape Engine* to update
tensor shapes at runtime.  
Examples: [benchmark/shape_regress.py](benchmark/shape_regress.py) and
[benchmark/examples.py](benchmark/examples.py).  
Integrate the *Compute Graph* and *Shape Engine* into a C++ inference
engine: [data/inference_engine.md](data/inference_engine.md).
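
To illustrate the idea (this is not onnx-tool's actual API), a shape engine can be thought of as a set of precomputed expressions mapping the graph's symbolic input dims (e.g. batch size) to every tensor shape, so a runtime shape update costs a few arithmetic operations instead of a full shape-inference pass. A self-contained toy sketch:

```python
# Toy "shape engine" with hypothetical contents; illustrative only.
from typing import Callable, Dict, Tuple

class ToyShapeEngine:
    def __init__(self, rules: Dict[str, Callable[[Dict[str, int]], Tuple[int, ...]]]):
        self.rules = rules        # tensor name -> shape expression
        self.variables = {}       # current values of symbolic dims

    def update_variable(self, name: str, value: int) -> None:
        self.variables[name] = value

    def shape_of(self, tensor: str) -> Tuple[int, ...]:
        return self.rules[tensor](self.variables)

# Shapes of a two-layer MLP expressed in terms of the symbolic 'batch' dim.
engine = ToyShapeEngine({
    'input':  lambda v: (v['batch'], 224 * 224),
    'hidden': lambda v: (v['batch'], 1024),
    'output': lambda v: (v['batch'], 10),
})
engine.update_variable('batch', 8)
print(engine.shape_of('hidden'))   # (8, 1024)
```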

---

## Memory Compression
<a id="memory-compression"></a>

### Activation Compression
Activation memory, also called temporary memory, is created by each op's output. Only the activations marked as model
outputs must survive the whole inference; every other activation tensor is live for just part of the graph, so instead
of reserving memory for every activation separately, tensors with non-overlapping lifetimes can share a much smaller,
optimized pool of buffers.

For large language models and high-resolution CV models, activation memory compression is key to saving memory.  
The compression method shrinks activation memory to roughly 5% of its native size on most models.   
For example:

 model                         | Native Memory Size(MB) | Compressed Memory Size(MB) | Compression Ratio(%) 
-------------------------------|------------------------|----------------------------|----------------------
 StableDiffusion(VAE_encoder)  | 14,245                 | 540                        | 3.7                  
 StableDiffusion(VAE_decoder)  | 25,417                 | 1,140                      | 4.48                 
 StableDiffusion(Text_encoder) | 215                    | 5                          | 2.5                  
 StableDiffusion(UNet)         | 36,135                 | 2,232                      | 6.2                  
 GPT2                          | 40                     | 2                          | 6.9                  
 BERT                          | 2,170                  | 27                         | 1.25                 

code example: [benchmark/compression.py](benchmark/compression.py)
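
To see why the compressed size can be so much smaller, note that two activations whose lifetimes don't overlap can share one buffer. A toy greedy allocator over (first_use, last_use, bytes) intervals (illustrative only, not onnx-tool's actual algorithm):

```python
# Toy lifetime-based buffer reuse; illustrative only.
def reuse_memory(tensors):
    buffers = []  # list of (step at which the buffer becomes free, size)
    for birth, death, size in sorted(tensors):
        for i, (free_at, bsize) in enumerate(buffers):
            # Reuse only buffers freed strictly before this tensor is born:
            # producer and consumer tensors overlap at the step that links them.
            if free_at < birth and bsize >= size:
                buffers[i] = (death, bsize)
                break
        else:
            buffers.append((death, size))   # no reusable buffer: allocate a new one
    return sum(size for _, size in buffers)

acts = [(0, 1, 100), (1, 2, 100), (2, 3, 100), (3, 4, 100)]   # a chain of ops
print(reuse_memory(acts), 'vs naive', sum(s for *_, s in acts))  # 200 vs 400
```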

### Weight Compression
An fp32 model with 7B parameters takes 28GB of disk space and memory; you cannot run the model at all if your device
doesn't have that much memory. Weight compression is therefore critical for running large language models. As a reference, a 7B
model with int4 symmetric per-block(32) quantization (llama.cpp's q4_0 quantization method) is only ~0.156x the size of the fp32 model.
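
The ~0.156x ratio can be checked directly from the block layout: each block of 32 weights stores 32 four-bit values plus one scale (assuming a 4-byte fp32 scale, which is what reproduces the quoted number):

```python
# Bytes per block of 32 weights under q4_0-style quantization.
block = 32
quant_bytes = block * 4 // 8    # 16 bytes of packed 4-bit values
scale_bytes = 4                 # one fp32 scale per block (assumption matching ~0.156x)
fp32_bytes = block * 4          # 128 bytes for the same weights in fp32

print((quant_bytes + scale_bytes) / fp32_bytes)            # 0.15625 ~= 0.156x
print(7e9 * 4 / 1e9, 'GB for a 7B-parameter fp32 model')   # 28.0 GB, as stated above
```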

Current support:   
* [fp16]
* [int8]x[symmetric/asymmetric]x[per tensor/per channel/per block]  
* [int4]x[symmetric/asymmetric]x[per tensor/per channel/per block]  

Code examples: [benchmark/examples.py](benchmark/examples.py).  


---

## How to install
    
`pip install onnx-tool`

OR

`pip install --upgrade git+https://github.com/ThanatosShinji/onnx-tool.git`  

python>=3.6

If `pip install onnx-tool` fails because of onnx's installation, try installing a lower onnx version first, e.g. `pip install onnx==1.8.1`,  
then run `pip install onnx-tool` again.


---

## Known Issues
* Loop op is not supported
* Sequence type is not supported
  
---

## Results of [ONNX Model Zoo](https://github.com/onnx/models) and SOTA models
<a id='models'></a>
Some models have dynamic input shapes, and their MACs vary with input shape. The input shapes used for these results are written in [data/public/config.py](data/public/config.py).
These ONNX models, with all tensor shapes included, can be downloaded from [baidu drive](https://pan.baidu.com/s/1eebBP-n-wXvOhSmIH-NUZQ)(code: p91k) or [google drive](https://drive.google.com/drive/folders/1H-ya1wTvjIMg2pMcMITWDIfWNSnjYxTn?usp=sharing).
<p id="results" align="center">
<table>
<tr>
<td>

Model | Params(M) | MACs(M)
---|---|---
<a href="benchmark/transfomer_models.py">GPT-J 1 layer</a> | 464 | 173,398  
<a href="benchmark/transfomer_models.py">MPT 1 layer</a> | 261 | 79,894
[text_encoder](https://huggingface.co/bes-dev/stable-diffusion-v1-4-onnx/tree/main)| 123.13 | 6,782
[UNet2DCondition](https://huggingface.co/bes-dev/stable-diffusion-v1-4-onnx/tree/main)| 859.52 | 888,870
[VAE_encoder](https://huggingface.co/bes-dev/stable-diffusion-v1-4-onnx/tree/main) | 34.16 | 566,371
[VAE_decoder](https://huggingface.co/bes-dev/stable-diffusion-v1-4-onnx/tree/main) | 49.49 | 1,271,959
[SqueezeNet 1.0](https://github.com/onnx/models/tree/main/vision/classification/squeezenet) | 1.23 | 351
[AlexNet](https://github.com/onnx/models/tree/main/vision/classification/alexnet) | 60.96 | 665
[GoogleNet](https://github.com/onnx/models/tree/main/vision/classification/inception_and_googlenet/googlenet) | 6.99 | 1,606
[googlenet_age](https://github.com/onnx/models/tree/main/vision/body_analysis/age_gender) | 5.98 | 1,605
[LResNet100E-IR](https://github.com/onnx/models/tree/main/vision/body_analysis/arcface) | 65.22 | 12,102
[BERT-Squad](https://github.com/onnx/models/tree/main/text/machine_comprehension/bert-squad) | 113.61 | 22,767
[BiDAF](https://github.com/onnx/models/tree/main/text/machine_comprehension/bidirectional_attention_flow) | 18.08 | 9.87
[EfficientNet-Lite4](https://github.com/onnx/models/tree/main/vision/classification/efficientnet-lite4) | 12.96 | 1,361
[Emotion](https://github.com/onnx/models/tree/main/vision/body_analysis/emotion_ferplus) | 12.95 | 877
[Mask R-CNN](https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/mask-rcnn) | 46.77 | 92,077
</td>

<td>

Model | Params(M) | MACs(M)
---|-----------|---
<a href="benchmark/transfomer_models.py">LLaMa 1 layer</a> | 618       | 211,801  
[BEVFormer Tiny](https://github.com/DerryHub/BEVFormer_tensorrt) | 33.7      | 210,838
[rvm_mobilenetv3](https://github.com/PeterL1n/RobustVideoMatting) | 3.73      | 4,289
[yolov4](https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/yolov4) | 64.33     | 3,319
[ConvNeXt-L](https://github.com/facebookresearch/ConvNeXt) | 229.79    | 34,872
[edgenext_small](https://github.com/mmaaz60/EdgeNeXt) | 5.58      | 1,357
[SSD](https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/ssd) | 19.98     | 216,598
[RealESRGAN](https://github.com/xinntao/Real-ESRGAN) | 16.69     | 73,551
[ShuffleNet](https://github.com/onnx/models/tree/main/vision/classification/shufflenet) | 2.29      | 146
[GPT-2](https://github.com/onnx/models/tree/main/text/machine_comprehension/gpt-2) | 137.02    | 1,103
[T5-encoder](https://github.com/onnx/models/tree/main/text/machine_comprehension/t5) | 109.62    | 686
[T5-decoder](https://github.com/onnx/models/tree/main/text/machine_comprehension/t5) | 162.62    | 1,113
[RoBERTa-BASE](https://github.com/onnx/models/tree/main/text/machine_comprehension/roberta) | 124.64    | 688
[Faster R-CNN](https://github.com/onnx/models/blob/main/vision/object_detection_segmentation/faster-rcnn) | 44.10     | 46,018
[FCN ResNet-50](https://github.com/onnx/models/tree/main/vision/object_detection_segmentation/fcn) | 35.29     | 37,056
[ResNet50](https://github.com/onnx/models/tree/main/vision/classification/resnet) | 25        | 3,868

</td>
</tr>
</table>
</p>

            
