# onnx-graphsurgeon

- **Version:** 0.5.2
- **Summary:** ONNX GraphSurgeon
- **Author:** NVIDIA
- **License:** Apache 2.0
- **Homepage:** https://github.com/NVIDIA/TensorRT/tree/main/tools/onnx-graphsurgeon
- **Uploaded:** 2024-04-13 23:29:58
            # ONNX GraphSurgeon


## Table of Contents

- [Introduction](#introduction)
- [Installation](#installation)
- [Examples](#examples)
- [Understanding The Basics](#understanding-the-basics)
    - [Importers](#importers)
    - [IR](#ir)
        - [Tensor](#tensor)
        - [Node](#node)
        - [A Note On Modifying Inputs And Outputs](#a-note-on-modifying-inputs-and-outputs)
        - [Graph](#graph)
    - [Exporters](#exporters)
- [Advanced](#advanced)
    - [Working With Models With External Data](#working-with-models-with-external-data)

## Introduction

ONNX GraphSurgeon is a tool for easily generating new ONNX graphs and modifying existing ones.


## Installation

### Using Prebuilt Wheels
```bash
python3 -m pip install onnx_graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
```

### Building From Source

#### Using Make Targets
```
make install
```

#### Building Manually

1. Build a wheel:
```
make build
```

2. Install the wheel manually from **outside** the repository:
```
python3 -m pip install onnx_graphsurgeon/dist/onnx_graphsurgeon-*-py2.py3-none-any.whl
```


## Examples

The [examples](./examples) directory contains several examples of common use-cases of ONNX GraphSurgeon.

The visualizations provided were generated using [Netron](https://github.com/lutzroeder/netron).


## Understanding The Basics

ONNX GraphSurgeon is composed of three major components: Importers, the IR, and Exporters.

### Importers

Importers are used to import a graph into the ONNX GraphSurgeon IR.
The importer interface is defined in [base_importer.py](./onnx_graphsurgeon/importers/base_importer.py).

ONNX GraphSurgeon also provides [high-level importer APIs](./onnx_graphsurgeon/api/api.py) for ease of use:
```python
graph = gs.import_onnx(onnx.load("model.onnx"))
```

### IR

The Intermediate Representation (IR) is where all modifications to the graph are made. It can also be used to
create new graphs from scratch. The IR involves three components: [Tensor](./onnx_graphsurgeon/ir/tensor.py)s,
[Node](./onnx_graphsurgeon/ir/node.py)s, and [Graph](./onnx_graphsurgeon/ir/graph.py)s.

Nearly all of the member variables of each component can be freely modified. For details on the various
attributes of these classes, call `help(<class_or_instance>)`, which prints the help text directly in
both interactive shells and scripts, where `<class_or_instance>` is an ONNX GraphSurgeon type, or an
instance of that type.

#### Tensor

Tensors are divided into two subclasses: `Variable` and `Constant`.

- A `Constant` is a tensor whose values are known upfront, and can be retrieved as a NumPy array and modified.
    *Note: The `values` property of a `Constant` is loaded on-demand. If the property is not accessed, the values will*
    *not be loaded as a NumPy array*.
- A `Variable` is a tensor whose values are unknown until inference-time, but may contain information about data type and shape.

The inputs and outputs of Tensors are always Nodes.

**An example constant tensor from ResNet50:**
```
>>> print(tensor)
Constant (gpu_0/res_conv1_bn_s_0)
[0.85369843 1.1515082  0.9152944  0.9577646  1.0663182  0.55629414
 1.2009839  1.1912311  2.2619808  0.62263143 1.1149117  1.4921428
 0.89566356 1.0358194  1.431092   1.5360111  1.25086    0.8706703
 1.2564877  0.8524589  0.9436758  0.7507614  0.8945271  0.93587166
 1.8422242  3.0609846  1.3124607  1.2158023  1.3937513  0.7857263
 0.8928106  1.3042281  1.0153942  0.89356416 1.0052011  1.2964457
 1.1117343  1.0669073  0.91343874 0.92906713 1.0465593  1.1261675
 1.4551278  1.8252873  1.9678202  1.1031747  2.3236883  0.8831993
 1.1133649  1.1654979  1.2705412  2.5578163  0.9504889  1.0441847
 1.0620039  0.92997414 1.2119316  1.3101407  0.7091761  0.99814713
 1.3404484  0.96389204 1.3435135  0.9236031 ]
```

**An example variable tensor from ResNet50:**
```
>>> print(tensor)
Variable (gpu_0/data_0): (shape=[1, 3, 224, 224], dtype=float32)
```


#### Node

A `Node` defines an operation in the graph. A node may specify attributes; attribute values can be any
Python primitive type, as well as ONNX GraphSurgeon `Graph`s or `Tensor`s.

The inputs and outputs of Nodes are always Tensors.

**An example ReLU node from ResNet50:**
```
>>> print(node)
 (Relu)
    Inputs: [Tensor (gpu_0/res_conv1_bn_1)]
    Outputs: [Tensor (gpu_0/res_conv1_bn_2)]
```

In this case, the node has no attributes. Otherwise, attributes are displayed as an `OrderedDict`.


#### A Note On Modifying Inputs And Outputs

The `inputs`/`outputs` members of nodes and tensors have special logic that will update the inputs/outputs of all
affected nodes/tensors when you make a change. This means, for example, that you do **not** need to update the `inputs`
of a Node when you make a change to the `outputs` of its input tensor.

Consider the following node:
```
>>> print(node)
 (Relu).
    Inputs: [Tensor (gpu_0/res_conv1_bn_1)]
    Outputs: [Tensor (gpu_0/res_conv1_bn_2)]
```

The input tensor can be accessed like so:
```
>>> tensor = node.inputs[0]
>>> print(tensor)
Tensor (gpu_0/res_conv1_bn_1)
>>> print(tensor.outputs)
[ (Relu).
	Inputs: [Tensor (gpu_0/res_conv1_bn_1)]
	Outputs: [Tensor (gpu_0/res_conv1_bn_2)]
```

If we remove the node from the outputs of the tensor, this is reflected in the node inputs as well:
```
>>> del tensor.outputs[0]
>>> print(tensor.outputs)
[]
>>> print(node)
 (Relu).
    Inputs: []
    Outputs: [Tensor (gpu_0/res_conv1_bn_2)]
```


#### Graph

A `Graph` contains zero or more `Node`s and input/output `Tensor`s.

Intermediate tensors are not explicitly tracked, but are instead retrieved from the nodes contained within the graph.

The `Graph` class exposes several functions. A small subset is listed here:

- `cleanup()`: Removes unused nodes and tensors in the graph.
- `toposort()`: Topologically sorts the graph.
- `tensors()`: Returns a `Dict[str, Tensor]` mapping tensor names to tensors, by walking over all the tensors in the graph.
    This is an `O(N)` operation, and so may be slow for large graphs.

To see the full Graph API, you can run `help(onnx_graphsurgeon.Graph)` in an interactive Python shell.

### Exporters

Exporters are used to export the ONNX GraphSurgeon IR to ONNX or other types of graphs.
The exporter interface is defined in [base_exporter.py](./onnx_graphsurgeon/exporters/base_exporter.py).

ONNX GraphSurgeon also provides [high-level exporter APIs](./onnx_graphsurgeon/api/api.py) for ease of use:
```python
onnx.save(gs.export_onnx(graph), "model.onnx")
```


## Advanced

### Working With Models With External Data

Working with models that store their data externally is almost the same as working with
ordinary ONNX models in ONNX GraphSurgeon. Refer to the
[official ONNX documentation](https://github.com/onnx/onnx/blob/master/docs/PythonAPIOverview.md#loading-an-onnx-model-with-external-data)
for details on how to load such models. To import the model into ONNX GraphSurgeon, you can use the
`import_onnx` function as normal.

During export, just one additional step (step 2 below) is required:

1. Export the model from ONNX-GraphSurgeon as normal:
    ```python
    model = gs.export_onnx(graph)
    ```

2. Update the model so that it writes its data to the external location. If the location is not
    specified, it defaults to the same directory as the ONNX model:
    ```python
    from onnx.external_data_helper import convert_model_to_external_data

    convert_model_to_external_data(model, location="model.data")
    ```

3. Then you can save the model as usual:
    ```python
    onnx.save(model, "model.onnx")
    ```



            
