Reland '[Inductor] GEMM shape padding improvements (#118522)' by eellison · Pull Request #125773 · pytorch/pytorch · GitHub

Reland '[Inductor] GEMM shape padding improvements (#118522)' #125773


Closed
wants to merge 9 commits

Conversation

eellison
Contributor
@eellison eellison commented May 8, 2024

Relanding just the pad-in-a-single-pass portion of the PR, not including
the transpose logic.

[ghstack-poisoned]
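For readers skimming the conversation: the change being relanded applies all of a GEMM operand's paddings together instead of one dimension at a time. Below is a minimal sketch of that idea using a hypothetical helper; it is not the code in this PR.

```python
# Minimal sketch of "pad in a single pass" (hypothetical helper, not this PR's code):
# both padded dimensions of an mm operand go through one F.pad / constant_pad_nd
# call, so only one intermediate tensor is materialized.
import torch
import torch.nn.functional as F

def pad_mat_single_pass(mat: torch.Tensor, m_pad: int, k_pad: int) -> torch.Tensor:
    # F.pad takes pads for the last dimension first:
    # (cols_left, cols_right, rows_left, rows_right) for a 2-D tensor.
    if m_pad == 0 and k_pad == 0:
        return mat
    return F.pad(mat, (0, k_pad, 0, m_pad))

# Pad a 127x255 operand up to 128x256 with a single pad call.
a = torch.randn(127, 255)
assert pad_mat_single_pass(a, m_pad=1, k_pad=1).shape == (128, 256)
```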
pytorch-bot bot commented May 8, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125773

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit 5d998e2 with merge base afda668:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

eellison added a commit that referenced this pull request May 8, 2024
Relanding just the pad in a single pass portion of the pr. Not including
the transpose logic:

ghstack-source-id: 710d916
Pull Request resolved: #125773
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
@eellison eellison requested review from kadeng and shunting314 May 9, 2024 19:39
eellison added 2 commits May 9, 2024 13:05
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
@shunting314
Contributor

Test failure looks related.

@kadeng are you fine with the split?

@eellison
Contributor Author

@shunting314 I made a late-night change to the bias padding that fixes a benchmark regression, and it looks like that change is causing an issue; will investigate. (Before the recent change I think we were padding the bias when it didn't need to be, causing us to miss out on padding an mm, which regressed the benchmark.)
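To make the suspected regression concrete, here is a hypothetical illustration (shapes and names below are invented for the example, not taken from the PR): a bias that stays 1-D and broadcastable can be handled by a single addmm call, while a bias materialized into a padded 2-D tensor up front forces a separate mm plus add.

```python
import torch
import torch.nn.functional as F

m, k, n, n_pad = 64, 64, 60, 4
x, w, bias = torch.randn(m, k), torch.randn(k, n), torch.randn(n)

# Fast path: the 1-D bias broadcasts inside a single addmm call.
out_fast = torch.addmm(bias, x, w)

# Slow path: padding/expanding the bias to 2-D ahead of time means the epilogue
# can no longer be expressed as addmm's broadcast bias, so we get mm + add.
w_padded = F.pad(w, (0, n_pad))                          # (k, n + n_pad)
bias_2d = F.pad(bias, (0, n_pad)).expand(m, n + n_pad)   # materialized 2-D bias
out_slow = torch.mm(x, w_padded) + bias_2d

assert torch.allclose(out_fast, out_slow[:, :n], atol=1e-4)
```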

…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
@@ -163,19 +145,34 @@ def pad_addmm(
                 input = pad_dim(input, n_padded_length, 1)
             elif input.dim() == 1 and input.shape[0] != 1:
                 input = pad_dim(input, n_padded_length, 0)
-            elif m_padded_length != 0 and input.dim() == 2 and input.shape[0] != 1:
+        if m_padded_length != 0 and input.dim() == 2 and input.shape[0] != 1:
Contributor Author

Now that I remember: I believe the regression was because we were expanding the 1-D bias in a way that meant we would no longer hit cuBLAS addmm. In any case, a multi-dimensional bias is extremely uncommon, and I've added tests here for all of the different ways the bias might be hit.
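For reference, here is a rough sketch of what a `pad_dim`-style helper in the diff above is assumed to do; this is an assumption about its behavior, not the actual Inductor implementation.

```python
import torch

def pad_dim(x: torch.Tensor, pad: int, dim: int) -> torch.Tensor:
    # Append `pad` zeros along dimension `dim`; no-op when pad == 0.
    if pad == 0:
        return x
    pad_shape = list(x.shape)
    pad_shape[dim] = pad
    return torch.cat([x, x.new_zeros(pad_shape)], dim=dim)

# e.g. pad a (8, 60) bias along N up to 64 columns.
assert pad_dim(torch.randn(8, 60), 4, 1).shape == (8, 64)
```

Note that in the diff above the M padding is only applied to a 2-D bias that is not broadcast (`input.shape[0] != 1`), which is what keeps the common 1-D bias on the addmm fast path.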

eellison added 3 commits May 13, 2024 12:42
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
@eellison eellison added the ciflow/trunk Trigger trunk jobs on your pull request label May 14, 2024
…'"


Relanding just the pad in a single pass portion of [the pr](#118522). Not including
the transpose logic:

This was previously accepted and reviewed. 

cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx peterbell10 ipiszy yf225 chenyang78 kadeng muchulee8 ColinPeppler amjames desertfire chauhang

[ghstack-poisoned]
@eellison
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge failed

Reason: This PR needs a release notes: label
If your changes are user facing and intended to be a part of release notes, please use a label starting with release notes:.

If not, please add the topic: not user facing label.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "topic: not user facing"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

Details for Dev Infra team: raised by workflow job.

@eellison
Contributor Author

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

pytorchmergebot pushed a commit that referenced this pull request May 15, 2024
For mm inputs which are not inputs of the graph, assume that we can memory-plan them into the aten.cat and exclude the padding cost from the benchmarking comparison. Technically we also have to do a small amount of zero-writing, but that should be relatively small and covered by the `1.1` weighting of the padding time.

Pull Request resolved: #125780
Approved by: https://github.com/shunting314
ghstack dependencies: #125772, #125773
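A hypothetical sketch of the comparison that commit message describes (names and structure are illustrative, not Inductor's actual code):

```python
def should_pad_mm(ori_mm_time: float, padded_mm_time: float,
                  pad_time: float, operand_is_graph_input: bool) -> bool:
    # Operands produced inside the graph are assumed to be memory-planned into
    # the surrounding aten.cat, so their padding cost is not counted.
    counted_pad_time = pad_time if operand_is_graph_input else 0.0
    # The 1.1 weighting keeps the estimate conservative about the small amount
    # of zero-writing that still happens.
    return padded_mm_time + 1.1 * counted_pad_time < ori_mm_time

print(should_pad_mm(100.0, 80.0, 15.0, operand_is_graph_input=False))  # True
print(should_pad_mm(100.0, 80.0, 25.0, operand_is_graph_input=True))   # False
```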
pytorchmergebot pushed a commit that referenced this pull request May 17, 2024
Otherwise you get an error in constant_pad_nd.

Pull Request resolved: #126475
Approved by: https://github.com/huydhn
ghstack dependencies: #125772, #125773, #125780
ZelboK pushed a commit to ZelboK/pytorch that referenced this pull request May 19, 2024
…ytorch#125773)

Relanding just the pad in a single pass portion of [the pr](pytorch#118522). Not including
the transpose logic:

This was previously accepted and reviewed.

Pull Request resolved: pytorch#125773
Approved by: https://github.com/shunting314
ghstack dependencies: pytorch#125772
@github-actions github-actions bot deleted the gh/eellison/645/head branch June 15, 2024 02:07