Updated onnx->symbolic_opset23.py by hamzaqureshi5 · Pull Request #153702 · pytorch/pytorch · GitHub

Updated onnx->symbolic_opset23.py #153702


Open · wants to merge 7 commits into base: main

Conversation

@hamzaqureshi5 commented May 16, 2025

pytorch-bot bot commented May 16, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153702

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b3927c5 with merge base 7e16cb9:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: onnx torch.onnx related changes that should show up in the release notes label May 16, 2025
@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label May 16, 2025
@Skylion007 (Collaborator):

Is the google_test submodule intentional?

@justinchuby justinchuby self-assigned this May 16, 2025
@justinchuby (Collaborator) left a comment:

Thank you for contributing! I added comments for your consideration.

Collaborator: Could you revert updates to the submodules?

Comment on lines +94 to +96
symbolic_opset21,
symbolic_opset22,
symbolic_opset23,
Collaborator: We should not expose any new modules to the public; they exist only for backward-compatibility reasons. Prefer reverting the changes to this file.

Comment on lines +26 to +29
def convert_grid_sample_mode(mode_s):
return (
"linear" if mode_s == "bilinear" else "cubic" if mode_s == "bicubic" else mode_s
)
Collaborator: Is this needed?
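For context, the helper in the diff above maps PyTorch's interpolation-mode names onto the spelling that newer ONNX GridSample versions use (ONNX switched from "bilinear"/"bicubic" to "linear"/"cubic" in later opsets); whether that translation is still needed in opset 23 is exactly what the reviewer is asking. A standalone sketch of the mapping:

```python
def convert_grid_sample_mode(mode_s: str) -> str:
    # PyTorch's grid_sample accepts "bilinear", "bicubic", and "nearest";
    # newer ONNX GridSample versions spell the first two "linear" and "cubic".
    # Any other mode (e.g. "nearest") passes through unchanged.
    return (
        "linear" if mode_s == "bilinear" else "cubic" if mode_s == "bicubic" else mode_s
    )

print(convert_grid_sample_mode("bilinear"))  # → linear
print(convert_grid_sample_mode("nearest"))   # → nearest
```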

Comment on lines +28 to +30
alpha = 1.6732632
gamma = 1.0507
return g.op("Selu", self, alpha_f=alpha, gamma_f=gamma)
Collaborator:
Suggested change
alpha = 1.6732632
gamma = 1.0507
return g.op("Selu", self, alpha_f=alpha, gamma_f=gamma)
return g.op("Selu", self)

You may keep them as the defaults unless PyTorch specifies something different.
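The suggestion works because, to the best of my knowledge, the ONNX Selu operator's default attributes (alpha ≈ 1.67326319, gamma ≈ 1.05070102) match the constants PyTorch's SELU uses, so omitting them changes nothing. A pure-Python sketch of the function, using the full-precision constants for illustration:

```python
import math

# Constants as used by torch.nn.SELU; the ONNX Selu defaults are the same
# values truncated to float32 precision (assumption based on both specs).
alpha = 1.6732632423543772
gamma = 1.0507009873554805

def selu(x: float) -> float:
    # SELU(x) = gamma * x                     if x > 0
    #         = gamma * alpha * (e^x - 1)     if x <= 0
    return gamma * x if x > 0 else gamma * alpha * (math.exp(x) - 1)

print(round(selu(1.0), 4))   # → 1.0507  (positive branch: gamma * x)
print(round(selu(-1.0), 4))  # negative branch is smoothly saturating
```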


@_onnx_symbolic("aten::attention")
@symbolic_helper.parse_args("v", "v", "v", "v", "v", "v", "i", "i", "i", "f", "f", "i")
def attention(
Collaborator:

These ops do not seem to exist as ATen operators. You may start by implementing scaled_dot_product_attention, which has the following signature:

- func: scaled_dot_product_attention(Tensor query, Tensor key, Tensor value, Tensor? attn_mask=None, float dropout_p=0.0, bool is_causal=False, *, float? scale=None, bool enable_gqa=False) -> Tensor
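To make the reviewer's suggestion concrete, here is a minimal pure-Python sketch of the default path of that signature (no mask, no dropout, is_causal=False): softmax(Q·Kᵀ·scale)·V. A real symbolic function would emit ONNX ops rather than compute values, so this is for illustration only:

```python
import math

def scaled_dot_product_attention(query, key, value, scale=None):
    """Naive SDPA over 2-D lists of floats: softmax(Q @ K^T * scale) @ V.
    Mirrors the default behavior of the aten signature above; scale defaults
    to 1/sqrt(head_dim) when not given."""
    d = len(query[0])
    scale = scale if scale is not None else 1.0 / math.sqrt(d)

    # scores[i][j] = (Q_i . K_j) * scale
    scores = [[sum(q * k for q, k in zip(qr, kr)) * scale for kr in key]
              for qr in query]

    def softmax(row):
        m = max(row)                      # subtract max for numerical stability
        exps = [math.exp(x - m) for x in row]
        s = sum(exps)
        return [e / s for e in exps]

    attn = [softmax(row) for row in scores]

    # output = attn @ V
    return [[sum(a * v[j] for a, v in zip(row, value))
             for j in range(len(value[0]))]
            for row in attn]
```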

@@ -353,7 +353,7 @@ def export(
Models exported this way are probably runnable only by Caffe2.
- opset_version (int, default 17): The version of the
+ opset_version (int, default 21): The version of the
Collaborator:

Please revert. We don't intend to change the default opset_version.

@@ -1393,7 +1393,7 @@ def _export(
if opset_version is None:
opset_version = _constants.ONNX_DEFAULT_OPSET

- # torch.onnx.export does not support opset versions >=18
+ # torch.onnx.export does not support opset versions >=21
Collaborator:

Same here
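The pattern the two diffs touch can be sketched as a defaulting-plus-validation guard. The constant names and values below are hypothetical stand-ins, not PyTorch's actual internals; the reviewer's point is that neither the default nor the upper bound should change in this PR:

```python
# Hypothetical constants illustrating the guard pattern in the diff above.
ONNX_DEFAULT_OPSET = 17   # stand-in for _constants.ONNX_DEFAULT_OPSET
ONNX_MAX_TS_OPSET = 17    # stand-in upper bound for the TorchScript exporter

def resolve_opset(opset_version=None):
    # Default the opset when the caller did not pass one...
    if opset_version is None:
        opset_version = ONNX_DEFAULT_OPSET
    # ...then reject versions the TorchScript exporter cannot handle.
    if opset_version > ONNX_MAX_TS_OPSET:
        raise ValueError(
            f"torch.onnx.export does not support opset versions > "
            f"{ONNX_MAX_TS_OPSET}; got {opset_version}"
        )
    return opset_version
```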

@jerryzh168 jerryzh168 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 16, 2025
Labels
oncall: jit Add this issue/PR to JIT oncall triage queue open source release notes: onnx torch.onnx related changes that should show up in the release notes triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Projects
None yet
Development

Successfully merging this pull request may close these issues.

6 participants