Updated onnx->symbolic_opset23.py #153702
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153702
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit b3927c5 with merge base 7e16cb9.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Is the google_test submodule intentional?
Thank you for contributing! I added comments for your consideration.
Could you revert updates to the submodules?
    symbolic_opset21,
    symbolic_opset22,
    symbolic_opset23,
We should not expose any new modules to the public. They only exist for backward-compatibility reasons. Prefer reverting the changes to this file.
def convert_grid_sample_mode(mode_s):
    return (
        "linear" if mode_s == "bilinear" else "cubic" if mode_s == "bicubic" else mode_s
    )
Is this needed?
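For reference, a small illustrative sketch (not part of the PR) of what the helper above does: it maps PyTorch's grid_sample mode names onto the names the ONNX GridSample operator expects, while other modes pass through unchanged.

# Illustrative usage only; the mapping follows the helper's own definition.
assert convert_grid_sample_mode("bilinear") == "linear"
assert convert_grid_sample_mode("bicubic") == "cubic"
assert convert_grid_sample_mode("nearest") == "nearest"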
    alpha = 1.6732632
    gamma = 1.0507
    return g.op("Selu", self, alpha_f=alpha, gamma_f=gamma)
Suggested change:
-    alpha = 1.6732632
-    gamma = 1.0507
-    return g.op("Selu", self, alpha_f=alpha, gamma_f=gamma)
+    return g.op("Selu", self)
You may keep them as the defaults unless PyTorch specifies something different.
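A minimal sketch of the suggested simplification, assuming the registration decorator used elsewhere in this file; the ONNX Selu defaults (alpha ≈ 1.67326, gamma ≈ 1.05070) already match PyTorch's SELU constants, so no attributes need to be set explicitly.

@_onnx_symbolic("aten::selu")
def selu(g, self):
    # ONNX Selu's default alpha/gamma already match torch.nn.SELU,
    # so rely on the operator defaults instead of hard-coding them.
    return g.op("Selu", self)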
@_onnx_symbolic("aten::attention") | ||
@symbolic_helper.parse_args("v", "v", "v", "v", "v", "v", "i", "i", "i", "f", "f", "i") | ||
def attention( |
These ops do not seem to exist as ATen operators. You may start by implementing scaled_dot_product_attention, which has the following signature:
pytorch/aten/src/ATen/native/native_functions.yaml (line 14880 at 3aa8477):
- func: scaled_dot_product_attention(Tensor query, Tensor key, Tensor value, Tensor? attn_mask=None, float dropout_p=0.0, bool is_causal=False, *, float? scale=None, bool enable_gqa=False) -> Tensor
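As a rough starting point, here is a hedged sketch of what such a symbolic could look like when lowered to basic ONNX ops; the decorators and parse_args descriptors mirror the existing scaled_dot_product_attention symbolic in symbolic_opset14, but the body below deliberately omits attn_mask, dropout, GQA, and the 1/sqrt(head_dim) scaling, so treat it as an illustration rather than the PR's implementation.

@_onnx_symbolic("aten::scaled_dot_product_attention")
@symbolic_helper.parse_args("v", "v", "v", "v", "f", "b", "v", "b")
def scaled_dot_product_attention(
    g, query, key, value, attn_mask=None, dropout_p=0.0,
    is_causal=False, scale=None, enable_gqa=False
):
    # Simplified: scores = Q @ K^T, softmax over the last axis, then @ V.
    # A complete implementation must also apply 1/sqrt(head_dim) scaling and
    # handle attn_mask, is_causal, dropout_p, and enable_gqa.
    key_t = g.op("Transpose", key, perm_i=[0, 1, 3, 2])
    scores = g.op("MatMul", query, key_t)
    attn = g.op("Softmax", scores, axis_i=-1)
    return g.op("MatMul", attn, value)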
@@ -353,7 +353,7 @@ def export(
             Models exported this way are probably runnable only by Caffe2.
-        opset_version (int, default 17): The version of the
+        opset_version (int, default 21): The version of the
Please revert. We don't intend to change the default opset_version
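As an aside, callers who want a newer opset can request it per export call rather than changing the library default. This is a hypothetical usage sketch: model and example_input are placeholders, and whether a given opset number (e.g. 23) is accepted depends on the installed PyTorch/exporter version.

import torch

# Request a specific opset explicitly; dynamo=True selects the newer exporter path.
# The opset value 23 is illustrative; actual support depends on the PyTorch version.
torch.onnx.export(model, (example_input,), "model.onnx", opset_version=23, dynamo=True)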
@@ -1393,7 +1393,7 @@ def _export(
     if opset_version is None:
         opset_version = _constants.ONNX_DEFAULT_OPSET

-    # torch.onnx.export does not support opset versions >=18
+    # torch.onnx.export does not support opset versions >=21
Same here
Fixes #153687
cc @EikanWang @jgong5 @wenzhe-nrv @sanchitintel