[Intel GPU] Enable fp64 GEMM by ZhiweiYan-96 · Pull Request #140677 · pytorch/pytorch · GitHub

[Intel GPU] Enable fp64 GEMM #140677


Closed
wants to merge 24 commits

Conversation

ZhiweiYan-96
Collaborator
@ZhiweiYan-96 ZhiweiYan-96 commented Nov 14, 2024

[ghstack-poisoned]
pytorch-bot bot commented Nov 14, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/140677

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit b358c61 with merge base 880e176:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label Nov 14, 2024
ZhiweiYan-96 added a commit that referenced this pull request Nov 14, 2024
ghstack-source-id: 36adebf
Pull Request resolved: #140677
@ZhiweiYan-96 ZhiweiYan-96 added module: xpu Intel XPU related issues topic: not user facing topic category labels Nov 14, 2024
@ZhiweiYan-96 ZhiweiYan-96 marked this pull request as draft November 14, 2024 06:27
@ZhiweiYan-96 ZhiweiYan-96 added ciflow/xpu Run XPU CI tasks ciflow/trunk Trigger trunk jobs on your pull request labels Nov 14, 2024
[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 14, 2024
ghstack-source-id: 72b12f8
Pull Request resolved: #140677
@ZhiweiYan-96 ZhiweiYan-96 self-assigned this Nov 14, 2024
[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 14, 2024
ghstack-source-id: 38c4397
Pull Request resolved: #140677
@@ -8,6 +8,7 @@
#include <Utils.h>

#include <oneapi/dnnl/dnnl.hpp>
#include "c10/core/ScalarType.h"
Collaborator

Suggested change
#include "c10/core/ScalarType.h"
#include <c10/core/ScalarType.h>

Collaborator Author

Thanks for the suggestion; the code has been changed here.

TORCH_CHECK(
    false, "Double and complex datatype matmul is not supported in oneDNN");
if (self.is_complex()) {
  AT_ERROR("Complex datatype matmul is not supported in oneDNN");
Collaborator

AT_ERROR has been deprecated.

Collaborator Author

Thanks for the suggestion; AT_ERROR in this file has been changed to TORCH_CHECK.
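For context on the deprecation discussed above: AT_ERROR(msg) raises unconditionally, while TORCH_CHECK(cond, msg) raises only when cond is false, so an `if (cond) { AT_ERROR(msg); }` block collapses into `TORCH_CHECK(!cond, msg)`. A minimal standalone sketch of that pattern (MY_CHECK below is a hypothetical stand-in for TORCH_CHECK so the snippet compiles without PyTorch headers):

```cpp
#include <sstream>
#include <stdexcept>

// Stand-in for TORCH_CHECK: throws when the condition is false.
#define MY_CHECK(cond, msg)                \
  do {                                     \
    if (!(cond)) {                         \
      std::ostringstream oss;              \
      oss << msg;                          \
      throw std::runtime_error(oss.str()); \
    }                                      \
  } while (0)

// Before: if (is_complex) { AT_ERROR("..."); }
// After:  a single negated check expresses the same intent.
void check_dtype(bool is_complex) {
  MY_CHECK(!is_complex,
           "Complex datatype matmul is not supported in oneDNN");
}
```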

if (self.is_complex() || self.scalar_type() == ScalarType::Double) {
TORCH_CHECK(
false, "Double and complex datatype matmul is not supported in oneDNN");
if (self.is_complex()) {
Collaborator

`TORCH_CHECK(!self.is_complex(), "error message");`

Collaborator Author

modified

Comment on lines 53 to 56
// complex case
if (mat1.is_complex()) {
AT_ERROR("Complex datatype matmul is not supported in oneDNN");
}
Collaborator

Suggested change
// complex case
if (mat1.is_complex()) {
AT_ERROR("Complex datatype matmul is not supported in oneDNN");
}
// complex case
TORCH_CHECK(!mat1.is_complex(), "Complex datatype matmul is not supported in oneDNN");

Collaborator Author

modified

@@ -277,73 +280,6 @@ Tensor baddbmm(
return r;
}

Tensor& addbmm_out(
Collaborator

Does Intel GPU not support addbmm_out?

Collaborator Author
@ZhiweiYan-96 ZhiweiYan-96 Nov 14, 2024

We do not need to write this glue code, as CUDA/CPU/XPU share an entry in native_functions.yaml. They share the same implementation (op_stub or composite cases) in at::native::addbmm_out; the implementation of addbmm is generic, as it does the job by calling addmm, for which we already have code.
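The decomposition that lets addbmm route through addmm rests on a simple identity: a batch-summed matmul equals one matmul of the stacked operands, sum_b A_b·B_b = [A_0 … A_{b-1}] · [B_0; …; B_{b-1}]. A standalone sketch on nested vectors (no PyTorch dependency; the helper names are illustrative):

```cpp
#include <cstddef>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Naive dense matmul: (n x k) * (k x m) -> (n x m).
Mat matmul(const Mat& a, const Mat& b) {
  std::size_t n = a.size(), k = b.size(), m = b[0].size();
  Mat c(n, std::vector<double>(m, 0.0));
  for (std::size_t i = 0; i < n; ++i)
    for (std::size_t p = 0; p < k; ++p)
      for (std::size_t j = 0; j < m; ++j)
        c[i][j] += a[i][p] * b[p][j];
  return c;
}

// The "bmm then sum over the batch" side of the identity.
Mat batched_sum(const std::vector<Mat>& as, const std::vector<Mat>& bs) {
  Mat acc = matmul(as[0], bs[0]);
  for (std::size_t b = 1; b < as.size(); ++b) {
    Mat c = matmul(as[b], bs[b]);
    for (std::size_t i = 0; i < acc.size(); ++i)
      for (std::size_t j = 0; j < acc[0].size(); ++j)
        acc[i][j] += c[i][j];
  }
  return acc;
}

// The "single mm of the stacked operands" side: hstack the A_b,
// vstack the B_b, then do one ordinary matmul.
Mat stacked(const std::vector<Mat>& as, const std::vector<Mat>& bs) {
  Mat wide(as[0].size());
  for (const Mat& a : as)
    for (std::size_t i = 0; i < a.size(); ++i)
      wide[i].insert(wide[i].end(), a[i].begin(), a[i].end());
  Mat tall;
  for (const Mat& b : bs)
    tall.insert(tall.end(), b.begin(), b.end());
  return matmul(wide, tall);
}
```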

@@ -93,9 +94,13 @@ Tensor& addmm_out(
}
} else {
if (alpha.to<float>() == 1.f && beta_ == 1.f) {
bias = self;
bias = is_inplace ? self.clone() : self;
Collaborator

It would be better to add some comments to elaborate on why the clone is required here.

Collaborator Author

Sure, the comment has been added here.

Contributor

You don't need to clone if #144759 is merged; we should use a post-op sum instead of a post-op binary in that case.
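Why the clone matters: when addmm_out runs in-place (the output aliases self), passing self as the bias means the kernel reads the bias buffer while overwriting it with the result. A toy two-pass sketch of the hazard (illustrative only, not the actual oneDNN code path):

```cpp
#include <cstddef>
#include <vector>

// Toy model of "write the GEMM result, then add the bias in a second pass".
// If `bias` aliases `out`, pass 1 overwrites the bias values before pass 2
// reads them, yielding acc + acc instead of acc + original bias. Cloning
// the bias before the call breaks the alias and restores correctness.
void gemm_then_bias(std::vector<double>& out,
                    const std::vector<double>& acc,
                    const std::vector<double>& bias) {
  for (std::size_t i = 0; i < out.size(); ++i) out[i] = acc[i];    // pass 1
  for (std::size_t i = 0; i < out.size(); ++i) out[i] += bias[i];  // pass 2
}
```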

[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 18, 2024
ghstack-source-id: 56d0d04
Pull Request resolved: #140677
[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 20, 2024
ghstack-source-id: 531139b
Pull Request resolved: #140677
[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 20, 2024
ghstack-source-id: c7c11a0
Pull Request resolved: #140677
[ghstack-poisoned]
ZhiweiYan-96 added a commit that referenced this pull request Nov 20, 2024
ghstack-source-id: 36a1992
Pull Request resolved: #140677
@guangyey guangyey added release notes: xpu release notes category and removed topic: not user facing topic category labels Feb 14, 2025
@guangyey
Collaborator

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 jobs have failed, first few of them are: xpu / linux-jammy-xpu-2025.0-py3.9 / test (default, 2, 4, linux.idc.xpu)

Details for Dev Infra team Raised by workflow job

@guangyey
Collaborator

@pytorchbot rebase -b main

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/main. Check the current status here

[ghstack-poisoned]
@pytorchmergebot
Collaborator

Successfully rebased gh/ZhiweiYan-96/40/orig onto refs/remotes/origin/main, please pull locally before adding more changes (for example, via ghstack checkout https://github.com/pytorch/pytorch/pull/140677)

pytorchmergebot pushed a commit that referenced this pull request Feb 14, 2025
ghstack-source-id: 6df76df
Pull Request resolved: #140677
@guangyey
Collaborator
guangyey commented Feb 14, 2025

inductor/test_torchinductor_opinfo.py::TestInductorOpInfoXPU::test_comprehensive_linalg_eigvals_xpu_float64
@ZhiweiYan-96 The failure appears to be related to this PR. Please investigate and address the issue.

[ghstack-poisoned]
@ZhiweiYan-96 ZhiweiYan-96 added the keep-going Don't stop on first failure, keep running tests until the end label Feb 17, 2025
ZhiweiYan-96 added a commit that referenced this pull request Feb 17, 2025
ghstack-source-id: b2f08f4
Pull Request resolved: #140677
@ZhiweiYan-96
Collaborator Author
ZhiweiYan-96 commented Feb 17, 2025

inductor/test_torchinductor_opinfo.py::TestInductorOpInfoXPU::test_comprehensive_linalg_eigvals_xpu_float64 @ZhiweiYan-96 The failure appears to be related to this PR. Please investigate and address the issue.

@guangyey Some new skipped UTs should have been added, but this PR fixes them. I have retriggered CI and am waiting for all the fixed UTs to show up in the CI results. After that, I will fix them in a single commit.

@EikanWang
Collaborator

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here

Labels
ciflow/inductor ciflow/trunk Trigger trunk jobs on your pull request ciflow/xpu Run XPU CI tasks keep-going Don't stop on first failure, keep running tests until the end Merged module: cpu CPU specific problem (e.g., perf, algorithm) module: inductor module: xpu Intel XPU related issues open source release notes: xpu release notes category
Projects
Status: Done
7 participants