
Update QONNX parsing for 1.0 #979

Open

jmitrevs wants to merge 55 commits into main
Conversation

@jmitrevs (Contributor) commented Mar 12, 2024

Description

This change updates the ONNX parser and adds support for QONNX. It replaces PR #832. It only supports ONNX that has been cleaned by the qonnx package, including converting convolutions to be channels-last and changing Gemm to MatMul and Add.
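
For reference, a minimal cleanup sketch using the qonnx package is shown below; the module paths and transformation names reflect current qonnx releases but may differ between versions, and the file names are placeholders.

# Hypothetical pre-processing: clean a quantized ONNX model before handing it to hls4ml.
from qonnx.core.modelwrapper import ModelWrapper
from qonnx.util.cleanup import cleanup_model
from qonnx.transformation.channels_last import ConvertToChannelsLastAndClean
from qonnx.transformation.gemm_to_matmul import GemmToMatMul

model = ModelWrapper('model_quant.onnx')                   # placeholder input file
model = cleanup_model(model)                               # shape inference, constant folding, general tidy-up
model = model.transform(ConvertToChannelsLastAndClean())   # convolutions to channels-last
model = model.transform(GemmToMatMul())                    # Gemm -> MatMul + Add
model = cleanup_model(model)
model.save('model_quant_clean.onnx')                       # placeholder output file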

In QONNX, Quant nodes can act on constants as well as on the datapath. To make handling this easier, we explicitly put constants into the initial graph. Some helper nodes, such as MatMul and Conv, are also introduced to support the explicit constant nodes. After the convert flow, however, no special ONNX nodes remain in the graph.

Generally, Quant nodes that have power-of-2 scales and no zero offset are converted to fixed data types, either by setting the types of constants or by adding a linear activation that is usually merged into preceding nodes. Non-power-of-2 scales result in ApplyAlpha nodes being added to scale and unscale, with propagation across some layers. This path can be further optimized and has generally been tested less.
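
To illustrate the power-of-2 case with a worked example (a simplified sketch of the mapping, not the actual optimizer code, and the helper name is made up): a signed 8-bit Quant with scale 2**-2 and no zero offset covers +/-32 in steps of 0.25, which corresponds to fixed<8,6>.

import math

def po2_quant_to_fixed(bitwidth, scale, signed=True):
    # Illustrative only: integer width = bit width + log2(scale) for power-of-2 scales.
    log2_scale = math.log2(scale)
    assert log2_scale.is_integer(), 'only power-of-2 scales map directly to a fixed type'
    integer_bits = bitwidth + int(log2_scale)
    return f'fixed<{bitwidth},{integer_bits}>' if signed else f'ufixed<{bitwidth},{integer_bits}>'

print(po2_quant_to_fixed(8, 0.25))   # fixed<8,6>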

This includes the changes from PR #855, with a few updates that will be backported and discussed there. Therefore, this PR needs to wait until that PR is merged, which is why I am making it a draft.

Note: for config_from_onnx_model I made the default granularity "name", because that enables the automatic precision inference that you need for QONNX. The way I did this is to set config['Model']['Precision'] to the default (e.g. fixed<16,6>), while all the precisions filled in from config['Model'] are 'auto'. These can be overridden if, for example, the accumulator becomes too wide; in general, though, they are set by the infer_precision.py optimizer.
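
A typical usage sketch is below; the hls_config, output_dir, and backend keyword arguments mirror the existing hls4ml converter API rather than anything specific to this PR, and the directory and backend values are placeholders.

import hls4ml

# 'model' is a cleaned (Q)ONNX model, e.g. from the qonnx cleanup sketch above.
config = hls4ml.utils.config_from_onnx_model(model, granularity='name')  # 'name' enables precision inference
config['Model']['Precision'] = 'fixed<16,6>'   # model-level default; the other precisions stay 'auto'

hls_model = hls4ml.converters.convert_from_onnx_model(
    model,
    hls_config=config,
    output_dir='my_hls_prj',   # placeholder
    backend='Vitis',           # placeholder backend name
)
hls_model.compile()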

Binary networks are not yet supported.

More information can be found in this presentation:
https://www.icloud.com/keynote/025yxvgBx8IF2m3Iso6HosqPw#QONNX_Ingestion_0p1

Type of change

  • New feature (non-breaking change which adds functionality)
  • A new research paper code implementation

Tests

The pytest suite, test_qonnx.py, is the main test; it builds some models from the QONNX model zoo.

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

@jmitrevs added and removed the "please test" (Trigger testing by creating local PR branch) label on Jul 19, 2024 and Aug 21, 2024
@@ -37,7 +29,7 @@ def parse_gemm_layer(reader, node, inputs_map, input_shapes, graph, config):
'Softmax',
'Softsign',
'Softplus',
'Clip',
# 'Clip',

Contributor:

Remove commented code?

@@ -53,70 +45,89 @@ def parse_gemm_layer(reader, node, inputs_map, input_shapes, graph, config):
'Softmax': 'Softmax',
'Softsign': 'Activation',
'Softplus': 'Activation',
'Clip': 'Clip',
# 'Clip': 'Clip',

Contributor:

Remove commented code?

)
output_shape[layer['axis']] = new_dim

elif layer['class_name'] == 'Add':

Contributor:

Just for my understanding, what is the reason that BiasAdd is no longer supported?

"""

if not (len(node.inputs) == 5 and all(node.inputs)):
raise ValueError(f'All {len(node.inputs)} BatchNormOnnnx inputs need to be defined')

Contributor:

Not sure I understand the error message here, shouldn't it just read "All 5 BatchNormOnnnx inputs need to be defined" since it will also throw the error if all inputs are defined, but their number does not equal 5?


gamma_node = node.get_input_node(node.inputs[1])
if not isinstance(gamma_node, Constant):
raise TypeError('Only consant gammas supported')

Contributor:

typo, "constant"

class FuseConsecutiveBatchNormalization(OptimizerPass):
"""
OptimizerPass to merge consecutive BatchNormalization layers,
only if the earlier one does not have quantization specified

Contributor:

The code below seems to match also in the case when the current node does not have quantization specified, not just the earlier one. Does this description need to be updated?

# # Not sure why this part is needed
# node_map = node.get_output_use_map()
# if len(node_map[node.outputs[0]]) > 1:
# return False

Contributor:

remove commented code?

Note: Consider restricting this to ApplyAlpha. Batch Normalization quantization seems to be ignored.

Note: This optimizer may not be safe if weights are updateable. May need to turn off.
"""

Contributor:

This seems to be the doc string from FuseConsecutiveBatchNormalization above and should probably be replaced with one appropriate to this function.

# # Not sure why this part is needed
# node_map = node.get_output_use_map()
# if len(node_map[node.outputs[0]]) > 1:
# return False

Contributor:

remove commented code.

# The ConvxD nodes expect the weight data to be in a different format, not (M, k1.., C)
if node.attributes['n_dim'] == 1:
newtype = Conv1D
attributes['weight_data'] = np.transpose(weight_data, (1, 2, 0))

Contributor:

Out of curiosity, why do we need to transpose the weights here when the model is supposed to have been processed by qonnx-to-channels-last already?
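
For reference, the transpose above just reorders the weight axes; a minimal numpy illustration, with shape labels taken from the (M, k1, C) comment in the diff:

import numpy as np

w = np.zeros((4, 3, 2))            # (M=4 filters, k1=3 kernel width, C=2 channels)
w_t = np.transpose(w, (1, 2, 0))   # new axis order: (k1, C, M)
print(w_t.shape)                   # (3, 2, 4)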



class MergeToApplyAlpha(OptimizerPass):
"""Convert Add, Sub, Mul, or Div Merges with consant to ApplyAlpha"""

Contributor:

typo "consant" -> "constant"

@JanFSchulte (Contributor) left a comment:

I left a bunch of very minor comments, otherwise this seems ready for merge from my point of view.

Labels: please test (Trigger testing by creating local PR branch)
Projects: None yet
3 participants