nnef-tools

name: nnef-tools
version: 1.0.7
summary: A package for managing NNEF files
upload time: 2024-10-28 11:16:05
author email: Viktor Gyenes <viktor.gyenes@aimotive.com>, Tamas Danyluk <9149812+tdanyluk@users.noreply.github.com>
requires python: >=3.7
license: Apache License 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
keywords: nnef
# NNEF Tools

This package contains a set of tools for converting and transforming machine learning models.

## Usage

For basic usage, you have to supply an input format, an output format and an input model. The output model name defaults to the input model name suffixed with the output format, but it can also be supplied explicitly.

```
python -m nnef_tools.convert --input-format tf --output-format nnef --input-model my_model.pb --output-model my_model.nnef
```

### Setting input shapes

If the model has (partially) undefined shapes, the concrete shapes can be supplied with the `--input-shapes` argument. The input shapes must be a Python dict expression, with string keys of input tensor names and tuple values as shapes. It is enough to supply shapes for those inputs that we want to freeze. For example:

```
--input-shapes "{'input': (1, 224, 224, 3)}"
```
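Since the argument is a Python dict expression, it can be parsed safely with the standard library. The snippet below is a sketch of one safe way to parse such an argument (an illustration only; `nnef_tools` may parse it differently internally):

```python
import ast

# Parse a dict-valued command-line argument like the one passed to --input-shapes.
# ast.literal_eval evaluates Python literals only, never arbitrary code.
arg = "{'input': (1, 224, 224, 3)}"
shapes = ast.literal_eval(arg)
assert shapes['input'] == (1, 224, 224, 3)
```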

### Transposing inputs and outputs

When converting between TF and NNEF, the (default) dimension ordering differs, and the model may be transposed (for example in case of 2D convolutional models). However, the inputs and outputs are not automatically transposed, as the converter cannot reliably decide which input and outputs represent images. Transposing inputs and outputs can be turned on by the `--io-transpose` option. There are two ways to use it: either to transpose all inputs and outputs, or to select the ones to be transposed. All inputs and outputs can be transposed by using `--io-transpose` without any further arguments, while selecting inputs and outputs can be done by providing a list of names:

```
--io-transpose "input1" "input2" "output1"
```

### Retaining input/output names

During conversion, the converter may generate new names for tensors. However, it is possible to force it to keep the names of input and output tensors using the `--keep-io-names` option.


### Folding constants

The original model may contain operations that are performed on constant tensors, mainly resulting from shapes that are known at conversion time, or that become known after setting them with the `--input-shapes` option. In this case, it can be useful to fold constant operations, because the resulting graph is simplified. Furthermore, without constant folding, the graph may not even be convertible due to the presence of non-convertible operations; constant folding may eliminate them and make the model convertible. To use it, simply turn on the `--fold-constants` option.

### Optimizing the output model

The resulting model may contain operations or sequences of operations that can be merged or even eliminated as they result in a no-op. To do so, turn on the `--optimize` flag. This works for NNEF output.

The converter can also be run with the same input and output format. In this case, the tool only reads and writes the model, with an optional optimization phase in between if the `--optimize` flag is set and an optimizer is available for the given format.

### Handling unsupported operations

When running into an unsupported operation, the converter stops the conversion process. It is possible to override this behavior by enabling mirror-conversion (one-to-one copying the operation to the destination format) using the `--mirror-unsupported` flag. This may not result in a valid output model, but may be helpful for debugging.

## Further options

The following further options can be used when the output format is NNEF:
* The `--compress` option generates a compressed `tgz` file. It can also take a further compression level argument.
* The `--annotate-shapes` flag generates the graph description with the shapes of tensors annotated in comments.
* The `--output-names` option takes a list of tensor names, and considers those as outputs, and only converts the sub-graph required to compute those outputs.
* The `--tensor-mapping` option allows saving the mapping of tensor names (from the input model to the output model) into a separate JSON file.
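Assuming the mapping file is a JSON object from input-model tensor names to output-model tensor names (a hypothetical layout, with invented names, for illustration), it can be loaded and inverted like this:

```python
import json

# Hypothetical mapping content: input-model tensor name -> output-model tensor name.
mapping_text = '{"conv1/kernel": "variable_1", "conv1/output": "tensor_1"}'
mapping = json.loads(mapping_text)

# Invert the mapping to recover the original names from output-model names.
inverse = {new: old for old, new in mapping.items()}
print(inverse['variable_1'])
```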


## Conversion from TF Python code

When starting from Python code, the first step is to export the graph into a graph-def protobuf (.pb) file, which can then be further converted to a different format. To do so, the package contains some utility functions to freeze the graph and save it. Simply import these utilities and call them in your Python code:

```
import nnef_tools.io.tf.graphdef as graphdef
# define your TF model here
with tf.Session() as sess:
    ...     # initialize variables and train graph
    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs=...)
```

If your model contains dynamic shapes, you can save the graph with concrete shapes by providing the input shapes to the save function. Furthermore, constant operations can also be folded while saving the model:

```
graphdef.save_default_graph('path/to/save.pb', session=..., outputs=...,
                            input_shapes={'input': (1, 224, 224, 3)},
                            fold_constants=True)
```

Outputs can be specified as a list of tensors, or alternatively, they can be renamed by mapping tensors to strings as new names.

### Saving composite functions as a single operation

Often, when exporting a graph, it is desirable to convert a subgraph (compound operation) into a single operation. This can be done by defining the subgraph in a Python function and annotating it with the `@composite_function` decorator of the `graphdef` module:

```
@graphdef.composite_function
def my_compound_op(x, a, b):
    return a * x + b
```

Then `graphdef.save_default_graph` will magically take care of the rest, by converting composite functions into `PyFunc` ops in the graph-def. Note, however, that if you are exporting such graphs repeatedly, you have to call `graphdef.reset_composites()` before the definition of the graph.

How exactly the signature of the function is converted depends on the invocation of the function: tensor arguments are converted to inputs, while non-tensor arguments are converted to attributes. It does not matter whether positional or keyword arguments are used. Outputs must be tensors:

```
graphdef.reset_composites()

# define the graph
x = tf.placeholder(shape=(2,3), dtype=tf.float32, name='input')
y = my_compound_op(x, a=4, b=5)   # x is treated as tensor, a and b as attributes

with tf.Session() as sess:
    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs={y: 'output'})
```

When exporting models containing composite functions, if the model has dynamic shapes it is preferable to export it with concrete shapes and folding constants during export. This is because before converting composite functions to a single op, TF can still perform shape inference and constant folding automatically, but after the conversion, it cannot infer shapes and perform the computation of the `PyFunc` operations resulting from the composite functions. If there are no composite functions in the model, then concrete shapes can be provided later as well (during conversion), accompanied by constant folding.

Collapsing composites to a single op when saving the graph can be turned off by `collapse_composites=False`. See `custom/composite_export_example.py` for more examples.


#### **Important note**

Composite functions **must not** take tensor inputs from sources other than the function arguments (such as global or class member variables). In such cases, the code must be reorganized so that the actual composite function is called with explicitly passed tensor arguments. The same practice is also useful for attributes. In general, composite functions should be stateless.


## Custom converter plugins

The coverage of the converter can be extended to custom operations. This is required for example, when one wants to convert a composite function. Such a function is exported to the protobuf model as a `PyFunc` operation, that records the name, attributes, inputs and outputs of the original composite function. However, a converter must be provided for that name. In the actual conversion process, the `PyFunc` node is replaced with an operator of the original name of the composite function, so that it can be referenced.

The conversion of operations is governed by `nnef_tools.conversion.Transform` instances mapped to operator types. To add a new operator to be converted, one needs to provide a map entry for the operator. This is done by providing a Python module to the converter that contains the mapping for custom operators in a dict with the standard name `CUSTOM_TRANSFORMS`. The module is injected to the converter with the `--custom-converters` option:

```
--custom-converters my.custom.plugin.module
```

where `my/custom/plugin/module.py` is a Python module accessible to the Python interpreter (either by providing an absolute path or by setting `PYTHONPATH`). Its contents may look like the following:

```
from nnef_tools.conversion import Transform

def my_conversion_helper_func(converter, ...):
    ...

CUSTOM_TRANSFORMS = {
    'op_type_to_convert_from':
        Transform(
            type='op_type_to_convert_into',
            name='optional_name_of_resulting_op',
            inputs=(
                # one entry for each input
            ),
            outputs=(
                # one entry for each output
            ),
            attribs={
                # one entry for each attribute
            }
        ),
}
```

Entries are for the resulting operator, and may be constant Python values or expressions to be evaluated by the Python interpreter. Such expressions are written as Python strings that start with the `!` character; for example, `'!a+2'` evaluates the expression `a+2`. The expressions are evaluated in the context of the source operator (the one converted from) and the converter context (which is defined by the input and output formats). This context consists of the following:
* The type of the source operator is accessed via the identifier `_type_`.
* The name of the source operator is accessed via the identifier `_name_`.
* Inputs of the source operator are accessed via the identifier `I`, which is a Python `list`. For example the expression `'!I[0]'` results in the first input.
* Outputs of the source operator are accessed via the identifier `O`, which is a Python `list`. For example, the expression `'!len(O)'` results in the number of outputs.
* Attributes of the source operator are accessed via identifiers that match the names of the attributes. For example if the source operator has attribute `a` then the expression `'!a'` takes its value.
* Furthermore, the following can be used in building complex expressions:
    * All built-in Python operators and functions.
    * All public member functions (not starting with `_`) defined by the converter in effect.
    * All public functions (not starting with `_`) defined in the custom module. Such functions must take a converter as their first argument, but otherwise can take arbitrary arguments. The public methods of the converter can be used in their definition.
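A minimal sketch of how such `!`-prefixed expression strings could be evaluated against an operator context (an illustration only, not the actual `nnef_tools` implementation):

```python
# Evaluate a transform entry: '!'-prefixed strings are Python expressions
# evaluated with the operator's inputs, outputs and attributes in scope;
# any other value is a plain constant and passes through unchanged.
def evaluate(value, context):
    if isinstance(value, str) and value.startswith('!'):
        return eval(value[1:], {}, dict(context))
    return value

context = {'I': ['t0', 't1'], 'O': ['t2'], '_type_': 'concat', 'axis': 1}
print(evaluate('!axis + 2', context))   # → 3
print(evaluate('!I[0]', context))       # → 't0'
print(evaluate(42, context))            # → 42
```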

The `Transform` can further contain a `using={'id': '!expr', ...}` field, which may define intermediate expressions that are evaluated first and can be used in other expressions for attributes/inputs/outputs. If the dictionary is ordered, the entries may depend on each other.

Furthermore, by adding an optional `cond='!expr'` field to the `Transform`, it is possible to achieve conditional conversion, only when the given expression evaluates to `True`. Otherwise, the converter treats it as if there was no converter provided for the given operator. This is to allow conversion of operations with only certain attribute values.
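For illustration, a transform using both fields might look like the following sketch; the operator name `my_clamped_relu`, the attribute `max_value`, and the `num_inputs` intermediate are all invented for the example:

```python
from nnef_tools.conversion import Transform

CUSTOM_TRANSFORMS = {
    # Hypothetical op converted to 'relu' only when its clamp bound is infinite;
    # otherwise it is treated as if no converter were provided for it.
    'my_clamped_relu':
        Transform(
            type='relu',
            cond='!max_value == float("inf")',
            using={'num_inputs': '!len(I)'},
            inputs=('!I[0]',),
            outputs=('!O[0]',),
            attribs={},
        ),
}
```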

See `custom/custom_transforms_example.py` for more details.

Similarly to the above mechanism, custom shape inference functions and custom operator definitions (fragments) can be plugged into converters that convert from NNEF, using the `--custom-shapes` and `--custom-fragments` options. This may be required for custom NNEF operators defined as fragments in the input when such fragments are not decomposed. The fragments and shape inference functions must be defined in Python module(s) supplied after the `--custom-shapes` or `--custom-fragments` option. The module may look like this:

```
def my_custom_shape_function(input1_shape, ..., attrib1, ...):
    ...     # assert validity of input shapes / attribs
    ...     # return calculated output shape(s)

CUSTOM_SHAPES = {
    'my_custom_op': my_custom_shape_function,
}
```

or

```
op_fragment = """
# NNEF fragment declaration/definition goes here
"""

CUSTOM_FRAGMENTS = {
    'op-name': op_fragment,
}
```
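As a concrete sketch of a shape function, a hypothetical custom op `my_conv` that behaves like a 2D convolution with 'valid' padding might compute its output shape as follows (the signature is invented for the example; the real one depends on the op's attributes):

```python
# Shape inference for a hypothetical conv-like custom op.
# NNEF shapes are channels-first: (batch, channels, height, width).
def my_conv_shape(input_shape, filter_shape, stride):
    n, c, h, w = input_shape
    out_c, in_c, kh, kw = filter_shape
    assert c == in_c, 'channel mismatch between input and filter'
    sh, sw = stride
    # 'valid' padding: output extent = floor((extent - kernel) / stride) + 1
    return (n, out_c, (h - kh) // sh + 1, (w - kw) // sw + 1)

CUSTOM_SHAPES = {
    'my_conv': my_conv_shape,
}

print(my_conv_shape((1, 3, 224, 224), (16, 3, 3, 3), (1, 1)))   # → (1, 16, 222, 222)
```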

Furthermore, the `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).

Additionally, with a similar mechanism, custom optimization passes can also be injected into the converter. The optimizer can match sequential sub-graphs (chains) and replace them with another sequence of operations. To provide custom optimizer passes, the chains of operations to be replaced must be mapped onto functions that generate the replacement sequence, after checking the chain to be replaced for validity:

```
def replace_my_chain(a, b, c):   # a, b, c will contain the matched chain of ops in order when this is called
    ...     # check attributes of the chain a, b, c to see if it should really be replaced;
            # if not, return False (do not modify the graph before all checks)
    ...     # create new tensors and operations in the graph that will replace the chain
    ...     # either return nothing (None), or any non-False value

CUSTOM_OPTIMIZERS = {
    ('a', 'b', 'c'): replace_my_chain,      # use a tuple as key, since list is not hashable
}
```

See `custom/custom_optimizers_example.py` for more info.
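To make the chain-matching idea concrete, here is a toy illustration (not the `nnef_tools` matcher itself) that finds where a chain of operator types occurs in a linear list of ops:

```python
# Return the start indices of every occurrence of 'chain' in 'op_types'.
def find_chains(op_types, chain):
    n = len(chain)
    return [i for i in range(len(op_types) - n + 1)
            if tuple(op_types[i:i + n]) == chain]

ops = ['conv', 'a', 'b', 'c', 'relu', 'a', 'b', 'c']
print(find_chains(ops, ('a', 'b', 'c')))   # → [1, 5]
```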

## Executing a model and saving activations

A separate tool (`execute.py`) is available for executing a model. It requires a model and a format to be specified.

The inputs may be read from the (binary) input stream and outputs may be written to the (binary) output stream. Tensor data files can be piped as inputs and outputs:

```
python -m nnef_tools.execute my_model.pb --format tf < input.dat > output.dat
```

Alternatively, inputs can be randomly generated, and selected activations may be written to a folder, optionally under a different name:

```
python -m nnef_tools.execute my_model.pb --format tf --random "uniform(0,1)" --seed 0 --output-path . --output-names "{'tensor-name1': 'save-name1', ...}"
```

Further options to the model executor:

* The `--batch-size` option can be used to perform batched execution when a model specifies a batch size of 1 in its inputs, by supplying the desired batch size. If the supplied batch size is 0, the (common) batch size of the actual inputs is used. Furthermore, when the supplied batch size equals the one defined by the model, execution is performed one-by-one instead of in a single batch, which may be useful for reducing the memory footprint.
* The `--statistics` flag (followed by an optional output file path) can be used to generate activation statistics and save them in JSON format.
* The `--tensor-mapping` option can be used to provide a tensor name mapping obtained from the conversion step to the executor, used in remapping tensor names when generating statistics. This may be useful for comparing executions of the same model in different formats.
* Inputs and outputs (or activations) may need transposing before feeding into execution or after execution upon saving. This can be achieved with the `--io-transpose` flag. If no further arguments are listed, all tensors are transposed, but the transposed tensors can be controlled by enumerating a list of tensor names (as separate args). Inputs read from the input stream are transposed from channels first to last, while the outputs that are written to the output stream or saved are transposed from channels last to first if the format dictates so (TF/Lite).
* The `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).
* The `--custom-operators` option can be used to inject custom operators to the executor by supplying a python module after the option. The contents of the module may look like this:

```
def my_custom_op(input1, ..., attrib1, ...):
    ...     # calculate output using inputs / attribs

CUSTOM_OPERATORS = {
    'my_custom_op': my_custom_op,
}
```

See `custom/custom_operators_example.py` for more info.
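To make the `--io-transpose` behaviour concrete: transposing from channels-last (NHWC, as used by TF) to channels-first (NCHW, as used by NNEF) corresponds to the axis permutation (0, 3, 1, 2), and the reverse direction to (0, 2, 3, 1). A short sketch using NumPy:

```python
import numpy as np

# An NHWC tensor such as a TF image batch...
nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)

# ...becomes NCHW under the (0, 3, 1, 2) axis permutation.
nchw = np.transpose(nhwc, (0, 3, 1, 2))
print(nchw.shape)   # → (1, 3, 224, 224)
```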

Further tools are available for generating random tensors (`random_tensor.py`) and converting images to tensors (`image_tensor.py`). These tools write their results to the output stream and can be directed into a file or piped to `execute.py`.

## Visualizing a model

NNEF models can be visualized with the `visualize.py` tool. The tool generates an svg/pdf/png rendering of the NNEF graph:

```
python -m nnef_tools.visualize my_model.nnef --format svg
```

By default, the render only contains the names of operations and tensors. In case of an svg output, _tooltips_ contain more details about nodes (op attributes, tensor dtypes and shapes). The shapes are only calculated if the `--infer-shapes` flag is turned on. To include those details in the render itself, use the `--verbose` flag.

## GMAC calculation

The script `gmac.py` can be used to calculate the GMACs required to execute a model. By default, it only calculates linear operations (convolutions, matrix multiplies), but it is possible to add other groups of operations (pooling, normalization, reduction, up-sampling) into the calculation:

```
python -m nnef_tools.gmac my_model.nnef --include-pooling
```

The calculation requires shape inference, so in case of custom operators, the `--custom-shapes` option should be used (same as for `convert.py`).
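For reference, the standard MAC count of a 2D convolution is one multiply-accumulate per kernel element, per input channel, per output position, per output channel (the exact figure reported by `gmac.py` may differ for grouped or strided variants):

```python
# MACs of a plain 2D convolution: out_h * out_w * k_h * k_w * in_c * out_c.
def conv2d_macs(out_h, out_w, in_channels, out_channels, kernel_h, kernel_w):
    return out_h * out_w * kernel_h * kernel_w * in_channels * out_channels

# e.g. a 3x3 convolution producing a 64-channel 112x112 output from 3 input channels:
macs = conv2d_macs(112, 112, 3, 64, 3, 3)
print(macs)         # → 21676032
print(macs / 1e9)   # in GMACs
```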

## Troubleshooting

Several things can go wrong during the various stages of conversion, and sometimes it is hard to find exactly where. Here are a few tips on how to get started:
* If the export process starts from Python code in a framework such as TensorFlow or PyTorch, the first step is saving the model into a framework specific format, such as TensorFlow protobuf or ONNX in case of PyTorch.
    * Check the resulting model to see if it accurately reflects the framework code. TensorBoard or Netron viewer can be used for this purpose.
    * If there is an error in this step, try to turn off certain flags during saving. For example in `nnef_tools.io.tf.graphdef.save_default_graph`, try turning off the `fold_constants` and `collapse_composites` flags. The first folds operations on constant tensors, the second collapses composite functions into single operations. By turning them off, errors in these transformation steps can be excluded.
* If the conversion from any model format to NNEF fails, typical reasons are as follows:
    * Conversion of some operator is not implemented. In this case, adding a custom converter using the `--custom-converters` option can solve the problem.
    * There is a bug in the converter; for example it does not support some parameter/version of an operator. In this case file a bug for `nnef_tools`.
* After the conversion to NNEF succeeds, check the converted model by executing it (`nnef_tools.execute`) on some (maybe random) inputs.
    * Execution may itself fail if there are custom operators in the model, in which case custom executors can be injected with the `--custom-operators` option.
    * If executed on non-random inputs, the outputs can be compared to results obtained from executing the same model in the original framework, or after saving it and executing the saved model (`nnef_tools.execute`). By comparing the results of those three stages, it is possible to tell in which stage something goes wrong. However, make sure to feed the same inputs to all stages, and beware that NNEF dimension order (channels first) is different from TensorFlow dimension order (channels last).
    * If the failing stage is the saving step, see above for turning off certain options to see if those are the culprits.
    * If the failing stage is the conversion step, first make sure to isolate optimizations by not using the `--optimize` option. The same goes for the `--fold-constants` option to see if that causes problems.
    * If conversion fails even without optimization and constant folding, it is usually due to the conversion of one of the operations, which must be found. Ideally, one would compare the intermediate tensors after each operation in a sequence, but exact comparison is hard to do automatically due to non 1-1 mappings during the conversion. However, generating statistics (`nnef_tools.execute --statistics`) for the same input for both models allows comparison of how execution proceeds in the two models and finding where the first difference occurs.
* When in doubt about some of the tools, and this documentation does not provide enough information, check the help of the command-line tool itself (the `-h` or `--help` option).
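When comparing statistics from two executions of the same model, the goal is to locate the tensor where the first large deviation occurs. Assuming the statistics are saved as a JSON object mapping tensor names to summary values (a hypothetical layout for illustration), a comparison might look like:

```python
import json

# Two statistics files from executing the same model in two formats
# (inlined here as strings; in practice they would be read from disk).
stats_a = json.loads('{"t1": {"mean": 0.50}, "t2": {"mean": 0.10}}')
stats_b = json.loads('{"t1": {"mean": 0.50}, "t2": {"mean": 0.35}}')

# Per-tensor absolute difference of means, over the tensors present in both.
diffs = {name: abs(stats_a[name]['mean'] - stats_b[name]['mean'])
         for name in stats_a if name in stats_b}
worst = max(diffs, key=diffs.get)   # tensor with the largest discrepancy
print(worst)   # → t2
```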

            

It can also take a further compression level argument.\n* The `--annotate-shapes` flag generates the graph description with the shapes of tensors annotated in comments.\n* The `--output-names` option takes a list of tensor names, and considers those as outputs, and only converts the sub-graph required to compute those outputs.\n* The `--tensor-mapping` option allows to save the mapping of tensor names (mapping from the input model to the output model) into a separate json file.\n\n\n## Conversion from TF Python code\n\nWhen starting from Python code, the first step is to export the graph into a graph-def protobuf (.pb) file, which can then be further converted to a different format. To do so, the package contains some utility functions to freeze the graph and save it. Simply import these utilities and call them in your Python code:\n\n```\nimport nnef_tools.io.tf.graphdef as graphdef\n# define your TF model here\nwith tf.Session() as sess:\n    ...     # initialize variables and train graph\n    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs=...)\n```\n\nIf your model contains dynamic shapes, you can save the graph with concrete shapes by providing the input shapes to the save function. Furthermore, constant operations can also be folded while saving the model:\n\n```\ngraphdef.save_default_graph('path/to/save.pb', session=..., outputs=...,\n                            input_shapes={'input': (1, 224, 224, 3)},\n                            fold_constants=True)\n```\n\nOutputs can be specified as a list of tensors, or alternatively, they can be renamed by mapping tensors to strings as new names.\n\n### Saving composite functions as a single operation\n\nOften, when exporting a graph, it is desirable to convert a subgraph (compound operation) into a single operation. 
This can be done by defining the subgraph in a Python function and annotating it with `@composite_function` of the `graphdef` module:\n\n```\n@graphdef.composite_function\ndef my_compound_op( x, a, b ):\n    return a * x + b\n```\n\nThen `graphdef.save_default_graph` will magically take care of the rest, by converting composite functions into `PyFunc` ops in the graph-def. Note however, that if you are exporting such graphs repeatedly, you have to call `graphdef.reset_composites()`  before the definition of the graph.\n\nHow exactly the signature of the function is converted depends on the invocation of the function: tensor arguments are converted to inputs, while non-tensor arguments are converted to attributes. It does not matter whether positional or keyword arguments are used. Outputs must be tensors:\n\n```\ngraphdef.reset_composites()\n\n# define the graph\nx = tf.placeholder(shape=(2,3), dtype=tf.float32, name='input')\ny = my_compound_op(x, a=4, b=5)   # x is treated as tensor, a and b as attributes\n\nwith tf.Session() as sess:\n    graphdef.save_default_graph('path/to/save.pb', session=sess, outputs={y: 'output'})\n```\n\nWhen exporting models containing composite functions, if the model has dynamic shapes it is preferable to export it with concrete shapes and folding constants during export. This is because before converting composite functions to a single op, TF can still perform shape inference and constant folding automatically, but after the conversion, it cannot infer shapes and perform the computation of the `PyFunc` operations resulting from the composite functions. If there are no composite functions in the model, then concrete shapes can be provided later as well (during conversion), accompanied by constant folding.\n\nCollapsing composites to a single op when saving the graph can be turned off by `collapse_composites=False`. 
See `custom/composite_export_example.py` for more examples.\n\n\n#### **Important note**\n\nComposite functions **must not** get tensor inputs from other sources than the function arguments (such as global or class member variables). In that case, the code must be reorganized to make the actual composite function be called with explicitly marked tensor arguments. The same practice is also useful for attributes. In general, composite functions should be stateless.\n\n\n## Custom converter plugins\n\nThe coverage of the converter can be extended to custom operations. This is required for example, when one wants to convert a composite function. Such a function is exported to the protobuf model as a `PyFunc` operation, that records the name, attributes, inputs and outputs of the original composite function. However, a converter must be provided for that name. In the actual conversion process, the `PyFunc` node is replaced with an operator of the original name of the composite function, so that it can be referenced.\n\nThe conversion of operations is governed by `nnef_tools.conversion.Transform` instances mapped to operator types. To add a new operator to be converted, one needs to provide a map entry for the operator. This is done by providing a Python module to the converter that contains the mapping for custom operators in a dict with the standard name `CUSTOM_TRANSFORMS`. The module is injected to the converter with the `--custom-converters` option:\n\n```\n--custom-converters my.custom.plugin.module\n```\n\nwhere `my/custom/plugin/module.py` is a Python module accessible to the Python interpreter (either by providing an absolute path or by setting `PYTHON_PATH`). 
Its contents may look like the following:\n\n```\nfrom nnef_tools.conversion import Transform\n\ndef my_conversion_helper_func(converter, ...):\n    ...\n\nCUSTOM_TRANSFORMS = {\n    'op_type_to_convert_from':\n        Transform(\n            type='op_type_to_convert_into',\n            name='optional_name_of_resulting op',\n            inputs=(\n                # one entry for each input\n            ),\n            outputs=(\n                # one entry for each output\n            ),\n            attribs={\n                # one entry for each attribute\n            }\n        ),\n}\n```\n\nEntries are for the resulting operator, and may be constant Python values or expressions to be evaluated by the Python interpreter. Such expressions are written as Python strings that start with the `!` character, for example `'!a+2'` evaluates the expression `a+2`. The expressions are evaluated in the context of the source operator (the one converted from) and the converter context (that is defined by the input and output formats). It consists of the following:\n* The type of the source operator is accessed via the identifier `_type_`.\n* The name of the source operator is accessed via the identifier `_name_`.\n* Inputs of the source operator are accessed via the identifier `I`, which is a Python `list`. For example the expression `'!I[0]'` results in the first input.\n* Outputs of the source operator are accessed via the identifier `O`, which is a Python `list`. For example, the expression `'len(O)'` results in the number of outptus.\n* Attributes of the source operator are accessed via identifiers that match the names of the attributes. 
For example if the source operator has attribute `a` then the expression `'!a'` takes its value.\n* Furthermore, the following can be used in building complex expressions:\n    * All built-in Python operators and functions.\n    * All public member functions (not starting with `_`) defined by the converter in effect.\n    * All public functions (not starting with `_`) defined in the custom module. Such functions must take a converter as their first argument, but otherwise can take arbitrary arguments. The public methods of the converter can be used in their definition.\n\nThe `Transform` can further contain a `using={'id': '!expr', ...}` field, which may define intermediate expressions that are evaluated first and can be used in other expressions for attributes/inputs/outputs. If the dictionary is ordered, the entries may depend on each other.\n\nFurthermore, by adding an optional `cond='!expr'` field to the `Transform`, it is possible to achieve conditional conversion, only when the given expression evaluates to `True`. Otherwise, the converter treats it as if there was no converter provided for the given operator. This is to allow conversion of operations with only certain attribute values.\n\nSee `custom/custom_transforms_example.py` for more details.\n\nSimilarly to the above mechanism, custom shape inference functions and custom operator definitions (fragments) can be plugged in to converters that convert from NNEF using the `--custom-shapes` and `--custom-fragments` option. This may be required for custom NNEF operators defined as fragments in the input when such fragments are not decomposed. The fragments and shape inference functions must be defined in python module(s) supplied after the `--custom-shape` or `--custom-fragments` option. The module may look like this:\n\n```\ndef my_custom_shape_function(intput1_shape, ..., attrib1, ...)\n    ...     # assert validity of input shapes / attribs\n    ...     
# return calculated output shape(s)\n\nCUSTOM_SHAPES = {\n    'my_custom_op': my_custom_shape_function,\n}\n```\n\nor\n\n```\nop_fragment =\n\"\"\"\n# NNEF fragment declaration/definition goes here\n\"\"\"\n\nCUSTOM_FRAGMENTS = {\n    'op-name': op_fragment,\n}\n```\n\nFurthermore, the `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).\n\nAdditionally, with a similar mechanism, custom optimization passes can also be injected to the converter. The optimizer can match sequential sub-graphs (chains), and replace them with another sequence of operations. To provide custom optimizer passes, the chains of operations to be replaced must be mapped onto functions that perform generate the replacement sequence after checking the chain to bre replaced for validity:\n\n```\ndef replace_my_chain(a, b, c):   # a, b, c will contain the matched chain of ops in order when this is called\n    ...     # check attributes of the chain a, b, c to see if it should really be replaced;\n            # if not, return False (do not modify the graph before all checks)\n    ...     # create new tensors and operations in the graph that will replace the chain\n    ...     # either return nothing (None), or any non-False value\n\nCUSTOM_OPTIMIZERS = {\n    ('a', 'b', 'c'): replace_my_chain,      # use a tuple as key, since list is not hashable\n}\n```\n\nSee `custom/custom_optimizers_example.py` for mode info.\n\n## Executing a model and saving activations\n\nA separate tool (`execute.py`) is available for executing a model. It requires a model and a format to be specified.\n\nThe inputs may be read from the (binary) input stream and outputs may be written to the (binary) output stream. 
Tensor data files can be piped as inputs and outputs:\n\n```\npython -m nnef_tools.execute < input.dat my_model.pb --format tf > output.dat\n```\n\nAlternatively, inputs can be random generated, and selected activations may be written to a folder, allowing to specify a different name:\n\n```\npython -m nnef_tools.execute my_model.pb --format tf --random \"uniform(0,1)\" --seed 0 --output-path . --output-names \"{'tensor-name1': 'save-name1', ...}\"\n```\n\nFurther options to the model executor:\n\n* The `--batch-size` option can be used to perform batched execution if a model specifies batch size of 1 in its inputs, supplying the desired batch size. If the supplied batch size is 0, it means that the (common) batch size of the actual inputs is used. Furthermore, when the supplied batch size equals the one defined by the model, execution will be done one-by-one instead of a single batch, which may be useful for reducing the memory footprint.\n* The `--statistics` flag (followed by an optional output file path) can be used to generate activation statistics and save it in json format.\n* The `--tensor-mapping` option can be used to provide a tensor name mapping obtained from the conversion step to the executor, used in remapping tensor names when generating statistics. This may be useful for comparing executions of the same model in different formats.\n* Inputs and outputs (or activations) may need transposing before feeding into execution or after execution upon saving. This can be achieved with the `--io-transpose` flag. If no further arguments are listed, all tensors are transposed, but the transposed tensors can be controlled by enumerating a list of tensor names (as separate args). 
Inputs read from the input stream are transposed from channels first to last, while the outputs that are written to the output stream or saved are transposed from channels last to first if the format dictates so (TF/Lite).\n* The `--decompose` option can be used to let the NNEF parser decompose the (composite) operators listed after the option (as separate args).\n* The `--custom-operators` option can be used to inject custom operators to the executor by supplying a python module after the option. The contents of the module may look like this:\n\n```\ndef my_custom_op(input1, ..., attrib1, ...):\n    ...     # calculate output using inputs / attribs\n\nCUSTOM_OPERATORS = {\n    'my_custom_op': my_custom_op,\n}\n```\n\nSee `custom/custom_operators_example.py` for more info.\n\nFurther tools are available for generating random tensors (`random_tensor.py`) and converting images to tensors (`image_tensor.py`). These tools write their results to the output stream and can be directed into a file or piped to `execute.py`.\n\n## Visualizing a model\n\nNNEF models can be visualized with the `visualize.py` tool. The tool generates and svg/pdf/png rendering of the NNEF graph:\n\n```\npython -m nnef_tools.visualize my_model.nnef --format svg\n```\n\nBy default, the render only contains the names of operations and tensors. In case of and svg output, _tooltips_ contain more details about nodes (op attributes, tensor dtypes and shapes). The shapes are only calculated if the `--infer-shapes` flag is turned on. To include those details in the render itself, use the `--verbose` flag.\n\n## GMAC calculation\n\nThe script `gmac.py` can be used to calculate the GMACs required to execute a model. 
By default, it only calculates linear operations (convolutions, matrix multiplies), but it is possible to add other groups of operations (pooling, normalization, reduction, up-sampling) into the calculation:\n\n```\npython -m nnef_tools.gmac my_model.nnef --include-pooling\n```\n\nThe calculation requires shape inference, so in case of custom operators, the `--custom-shapes` option should be used (same as for `convert.py`).\n\n## Troubleshooting\n\nSeveral things can go wrong during various stages of conversion, and sometimes it's hard to find where it exactly happened. Here are a few tips on how to get started:\n* If the export process starts from Python code in a framework such as TensorFlow or PyTorch, the first step is saving the model into a framework specific format, such as TensorFlow protobuf or ONNX in case of PyTorch.\n    * Check the resulting model to see if it accurately reflects the framework code. TensorBoard or Netron viewer can be used for this purpose.\n    * If there is an error in this step, try to turn off certain flags during saving. For example in `nnef_tools.io.tf.graphdef.save_default_graph`, try turning off `fold_constants` and `collapse_composites` flag. The first merges operations on constant tensors, the second one merges composite operators into a single piece. By turning them off, errors in these transformation steps can be excluded.\n* If the conversion from any model format to NNEF fails, typical reasons are as follows:\n    * Conversion of some operator is not implemented. In this case, adding a custom converter using the `--custom-converters` option can solve the problem.\n    * There is a bug in the converter; for example it does not support some parameter/version of an operator. 
In this case file a bug for `nnef_tools`.\n* After the conversion to NNEF succeeds, check the converted model by executing it (`nnef_tools.execute`) on some (maybe random) inputs.\n    * Execution may itself fail if there are custom operators in the model, in which case custom executors can be injected with the `--custom-operators` option.\n    * If executed on non-random inputs, the outputs can be compared to results obtained from executing the same model in the original framework, or after saving it and executing the saved model (`nnef_tools.execute`). By comparing the results of those three stages, it is possible to tell in which stage something goes wrong. However, make sure to feed the same inputs to all stages, and beware that NNEF dimension order (channels first) is different from TensorFlow dimension order (channels last).\n    * If the failing stage is the saving step, see above for turning off certain options too see if those are the culprits.\n    * If the failing stage is the conversion step, first make sure to isolate optimizations by not using the `--optimize` option. The same goes for the `--fold-constants` option to see if that causes problems.\n    * If conversion fails even without optimization and constant folding, it is usually due to the conversion of one of the operations, which must be found. Ideally, one would compare the intermediate tensors after each operation in a sequence, but exact comparison is hard to do automatically due to non 1-1 mappings during the conversion. However, generating statistics (`nnef_tools.execute --statistics`) for the same input for both models allows comparison of how execution proceeds in the two models and finding where the first difference occurs.\n* When in doubt about some of the tools and this documentation does not provide enough information, check the help of the command-line tool itself (`-h` or `--help`) option.\n",
    "bugtrack_url": null,
    "license": " Apache License Version 2.0, January 2004 http://www.apache.org/licenses/  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION  1. Definitions.  \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.  \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.  \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.  \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.  \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.  \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.  \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).  \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.  \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"  \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.  2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.  3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.  4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:  (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and  (b) You must cause any modified files to carry prominent notices stating that You changed the files; and  (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and  (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of 
the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.  You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.  5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.  6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.  7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.  8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.  9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.  END OF TERMS AND CONDITIONS  APPENDIX: How to apply the Apache License to your work. 
 To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets \"[]\" replaced with your own identifying information. (Don't include the brackets!)  The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same \"printed page\" as the copyright notice for easier identification within third-party archives.  Copyright [yyyy] [name of copyright owner]  Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at  http://www.apache.org/licenses/LICENSE-2.0  Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ",
    "summary": "A package for managing NNEF files",
    "version": "1.0.7",
    "project_urls": {
        "Homepage": "https://www.khronos.org/nnef",
        "Repository": "https://github.com/KhronosGroup/NNEF-Tools"
    },
    "split_keywords": [
        "nnef"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "975340fc5d119136648310a83e255afcacae91b6f51267246c22b26c7915425d",
                "md5": "d1923565ebf3a6404e7e5ce6d805c538",
                "sha256": "b0eec9f93edd74591773ec810cadabc1224eaa5672d7136f38a588d4a04cde3b"
            },
            "downloads": -1,
            "filename": "nnef_tools-1.0.7-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "d1923565ebf3a6404e7e5ce6d805c538",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.7",
            "size": 265396,
            "upload_time": "2024-10-28T11:16:04",
            "upload_time_iso_8601": "2024-10-28T11:16:04.311504Z",
            "url": "https://files.pythonhosted.org/packages/97/53/40fc5d119136648310a83e255afcacae91b6f51267246c22b26c7915425d/nnef_tools-1.0.7-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "64be69f8a099cd92f5cd1a57d75d9ea7d99b83d19fbf3642899c4cf8edd23621",
                "md5": "77c06c0d33da9ff8206ef0cdd3e178a5",
                "sha256": "29bc04fc02fb2189cf582f778dd037181a14d345cd8b80f215dfefb9f3539790"
            },
            "downloads": -1,
            "filename": "nnef_tools-1.0.7.tar.gz",
            "has_sig": false,
            "md5_digest": "77c06c0d33da9ff8206ef0cdd3e178a5",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.7",
            "size": 165293,
            "upload_time": "2024-10-28T11:16:05",
            "upload_time_iso_8601": "2024-10-28T11:16:05.702610Z",
            "url": "https://files.pythonhosted.org/packages/64/be/69f8a099cd92f5cd1a57d75d9ea7d99b83d19fbf3642899c4cf8edd23621/nnef_tools-1.0.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-10-28 11:16:05",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "KhronosGroup",
    "github_project": "NNEF-Tools",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "nnef-tools"
}
        
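The `digests` entries above can be used to verify a downloaded release file before installing it. A minimal sketch, assuming the wheel has been fetched locally (e.g. via `pip download nnef-tools==1.0.7 --no-deps`); `sha256_file` is a hypothetical helper name, and the expected digest is copied from the `nnef_tools-1.0.7-py3-none-any.whl` entry above:

```python
import hashlib

# sha256 listed for nnef_tools-1.0.7-py3-none-any.whl in the metadata above.
EXPECTED = "b0eec9f93edd74591773ec810cadabc1224eaa5672d7136f38a588d4a04cde3b"

def sha256_file(path: str) -> str:
    """Compute the sha256 hex digest of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large wheels don't load fully into memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path is an assumption about where the wheel was downloaded):
# assert sha256_file("nnef_tools-1.0.7-py3-none-any.whl") == EXPECTED
```

The same check applies to the sdist (`nnef_tools-1.0.7.tar.gz`) using its own `sha256` value from the second `digests` entry.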