[SDPA][EZ] Abate narrowing conversion warning spam in `flash_api.cpp` by eqy · Pull Request #153643 · pytorch/pytorch · GitHub

[SDPA][EZ] Abate narrowing conversion warning spam in flash_api.cpp #153643


Closed
eqy wants to merge 2 commits

Conversation

eqy
Collaborator
@eqy eqy commented May 15, 2025

For messages like:
/workspace/pytorch/aten/src/ATen/native/transformers/cuda/flash_attn/flash_api.cpp:1396:38: warning: narrowing conversion of ‘(char)(& q)->at::Tensor::<anonymous>.at::TensorBase::get_device()’ from ‘char’ to ‘c10::DeviceIndex’ {aka ‘signed char’}
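
For context, a minimal self-contained sketch of the pattern (stand-in names, not the PyTorch source): brace-initialization rejects narrowing conversions, and the build above reports the plain (char) cast as narrowing when it is list-initialized into c10::DeviceIndex, which the warning shows aliasing signed char. Spelling the cast as signed char, the alias's underlying type, is what this PR does. Whether a standalone build reproduces the warning depends on the toolchain, for example on whether plain char is signed or unsigned there.

    // Minimal sketch with stand-in names: Guard mirrors at::cuda::CUDAGuard's
    // DeviceIndex constructor, get_device() mirrors Tensor::get_device().
    #include <cstdint>

    using DeviceIndex = int8_t;  // c10::DeviceIndex aliases a signed 8-bit integer

    struct Guard {
      explicit Guard(DeviceIndex index) : index_(index) {}
      DeviceIndex index_;
    };

    DeviceIndex get_device() { return 0; }

    int main() {
      // The original spelling list-initializes a DeviceIndex from a plain char,
      // which -Wnarrowing can flag (as in the build log above):
      //   Guard guard{(char)get_device()};
      // The fix casts to the target's underlying signed char instead:
      Guard guard{(signed char)get_device()};
      return guard.index_;
    }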

cc @ptrblck @msaroufim @jerryzh168

@eqy eqy added the module: cuda (Related to torch.cuda, and CUDA support in general), module: build warnings (Related to warnings during build process), open source, topic: not user facing (topic category), and module: sdpa (All things related to torch.nn.functional.scaled_dot_product_attention) labels on May 15, 2025
pytorch-bot bot commented May 15, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153643

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ No Failures, 1 Pending

As of commit ed8f7d9 with merge base 00e5cb3:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@@ -479,7 +479,7 @@ mha_fwd(const at::Tensor &q, // batch_size x seqlen_q x num_heads x head

   // Otherwise the kernel will be launched from cuda:0 device
   // Cast to char to avoid compiler warning about narrowing
-  at::cuda::CUDAGuard device_guard{(char)q.get_device()};
+  at::cuda::CUDAGuard device_guard{(signed char)q.get_device()};
Collaborator

Minor nit: would be great if they were static casts.
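
For reference, the static_cast spelling suggested in this nit (not what the PR merges) would presumably read:

    at::cuda::CUDAGuard device_guard{static_cast<c10::DeviceIndex>(q.get_device())};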

@eqy
Collaborator Author
eqy commented May 16, 2025

@pytorchmergebot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label on May 16, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

Labels
ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
module: build warnings (Related to warnings during build process)
module: cuda (Related to torch.cuda, and CUDA support in general)
module: sdpa (All things related to torch.nn.functional.scaled_dot_product_attention)
open source
topic: not user facing (topic category)
3 participants