[Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E #127592
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/127592
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ You can merge normally! (3 Unrelated Failures)
As of commit 90c1011 with merge base 786c24a:
- FLAKY: the following job failed, but likely due to flakiness present on trunk.
- BROKEN TRUNK: the following jobs failed, but were also failing on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Makes sense. I think you can also set …

@Chillee Thanks for the pointers. By "set …", do you mean this? (torch/_inductor/ir.py, line 7380 at commit 4935a01)
Maybe add a UT that gets the generated code and checks the number and post ops of `QLinearBinary`.
torch/_inductor/ir.py (outdated)

```diff
@@ -7307,6 +7308,12 @@ def codegen(self, wrapper):
         if isinstance(self.layout, Layout):
             self.codegen_size_asserts(wrapper)

+    def get_mutation_names(self):
+        return [self.inputs[self.idx_for_inplace_sum].get_name()]
```
Since post op `sum` and binary `add` reuse the same IR, and in-place mutation is only valid for post op `sum`, I think we should add a check for that branch here.
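A minimal sketch of such a guard, assuming the IR node stores its binary post op in an attribute like `binary_attr` (a hypothetical name here; the actual field in the PR may differ):

```python
# Sketch only: a stand-in for the real Inductor IR node, to show the guard.
class QLinearPointwiseBinaryPT2E:
    def __init__(self, inputs, binary_attr, idx_for_inplace_sum):
        self.inputs = inputs
        self.binary_attr = binary_attr  # "sum" or "add" (assumed attribute name)
        self.idx_for_inplace_sum = idx_for_inplace_sum

    def get_mutation_names(self):
        # In-place accumulation is only legal for post op "sum"; binary
        # "add" shares this IR node but must not mutate its input buffer.
        assert self.binary_attr == "sum", (
            f"in-place mutation only valid for post op 'sum', got {self.binary_attr!r}"
        )
        return [self.inputs[self.idx_for_inplace_sum].get_name()]
```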
Good idea. How can I get the generated code in a UT?
There are some existing UTs in …
Thanks. Checks are added in the UT.
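For context, Inductor tests commonly capture generated code with `torch._inductor.utils.run_and_get_cpp_code`; below is a sketch of the kind of check being discussed. The function name, op-name substring, and expected count are illustrative assumptions, not the exact ones from this PR.

```python
import torch
from torch._inductor.utils import run_and_get_cpp_code

def check_qlinear_binary_codegen(mod, example_inputs, expected_count=1):
    # Compile the module and capture the C++ code Inductor generates for it.
    compiled = torch.compile(mod)
    _, code = run_and_get_cpp_code(compiled, *example_inputs)
    # Illustrative check: the fused qlinear-binary op should survive into the
    # generated code the expected number of times (i.e., it was not
    # eliminated as a dead node).
    assert code.count("qlinear_pointwise") == expected_count, code
    return code
```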
I'm actually a little bit worried about the mutation support here. Actually, from reading the code, I would think that …
Thanks for your comments. I will double-check the logic behind it.
Hi @Chillee. I did some investigation and may have found something, though it looks quite complicated to me. The thing is that, if we don't have that (torch/_inductor/scheduler.py, line 1665 at commit d6963e7), the QLinearPointwiseBinary node in the graph, which is the output, is marked `is_weak=True`. Then, during `dead_node_elimination`, the QLinearPointwiseBinary node, along with its input nodes, is removed from the graph, so the `qlinear_binary` op is missing from the final generated code. It therefore looks like this `get_mutation_names` method is necessary here. Could you please help take a look? Thanks.
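To make the failure mode concrete: a scheduler typically keeps a node alive only if its outputs are read or it declares a mutation of pre-existing state. A rough, self-contained sketch of that idea (not Inductor's actual scheduler code):

```python
# Rough sketch of dead-node elimination, not Inductor's implementation.
class Node:
    def __init__(self, name, users=None, mutation_names=None):
        self.name = name
        self.users = users or []                    # nodes reading this output
        self.mutation_names = mutation_names or []  # buffers this node mutates

def dead_node_elimination(nodes):
    # A node with no users and no declared mutations contributes nothing
    # observable, so it is pruned. Declaring buffers via get_mutation_names()
    # is what keeps an in-place node like QLinearPointwiseBinary
    # (post op "sum") alive.
    return [n for n in nodes if n.users or n.mutation_names]

# The fused node writes into its "other" input in place; without
# mutation_names it would be dropped, and qlinear_binary would vanish
# from the generated code.
fused = Node("QLinearPointwiseBinary", users=[], mutation_names=["buf0"])
assert dead_node_elimination([fused]) == [fused]
```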
Hi @Chillee. Could you please take a look? Thanks!
@Xia-Weiwen what's supposed to happen when I run the test case? It passes for me.
@pytorchbot merge
Merge failed
Reason: This PR needs a `release notes:` label. If your changes are user facing and intended to be a part of release notes, please use a label starting with `release notes:`. If not, please add the `topic: not user facing` label.
To add a label, you can comment to pytorchbot, for example: `@pytorchbot label "topic: not user facing"`
For more information, see the PyTorch AutoLabel Bot wiki.

Details for Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
[Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (pytorch#127592)

Fixes pytorch#127402
- Revert some changes to `ir.MutationOutput` and inductor/test_flex_attention.py
- Add checks of mutation for QLinearPointwiseBinaryPT2E

Pull Request resolved: pytorch#127592
Approved by: https://github.com/leslie-fang-intel, https://github.com/Chillee
[Quant][Inductor] Bug fix: mutation nodes not handled correctly for QLinearPointwiseBinaryPT2E (#128591)

Port #127592 from main to release/2.4

Fixes #127402
- Revert some changes to `ir.MutationOutput` and inductor/test_flex_attention.py
- Add checks of mutation for QLinearPointwiseBinaryPT2E

Pull Request resolved: #127592
Approved by: https://github.com/leslie-fang-intel, https://github.com/Chillee
Pull Request resolved: #128591
Approved by: https://github.com/jgong5, https://github.com/Chillee
Fixes #127402
- Revert some changes to `ir.MutationOutput` and inductor/test_flex_attention.py
- Add checks of mutation for QLinearPointwiseBinaryPT2E

cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @voznesenskym @penguinwu @EikanWang @Guobing-Chen @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @peterbell10 @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @ColinPeppler @amjames @desertfire @chauhang