Optimize shard_dim_alltoall to use alltoall_single by wanchaol · Pull Request #148868 · pytorch/pytorch


Closed · wants to merge 2 commits

Conversation

@wanchaol (Collaborator) commented Mar 10, 2025

As titled. Previously, shard_dim_alltoall used `all_to_all`, which can incur many copies when the tensor becomes non-contiguous during the splits; `all_to_all` itself also incurs copies.

This PR uses `alltoall_single` instead, so that we minimize tensor copies.
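To make the difference concrete, below is a minimal standalone sketch of the two call patterns for a dim-0 to dim-1 shard change. It illustrates the Python-level collectives, not the code this PR changes; the 2-rank `torchrun` launch, the NCCL backend, and the tensor shapes are all assumptions:

```python
# Minimal sketch, not the PR's change: contrasts the list-based and the
# single-tensor collective for a dim-0 -> dim-1 shard change.
# Assumptions: 2 ranks via `torchrun --nproc_per_node=2 demo.py`, NCCL backend,
# one GPU per rank.
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)

# Each rank owns a [4, 8] row shard of an [8, 8] global tensor; after the
# shard-dim change each rank should own an [8, 4] column shard.
local = (torch.arange(32, dtype=torch.float32).reshape(4, 8) + 100 * rank).cuda()

# (a) all_to_all on a list of chunks: chunking along dim 1 yields
#     non-contiguous views, so each one is copied via .contiguous() before the
#     collective, and world-size output chunks are allocated and concatenated.
in_chunks = [c.contiguous() for c in torch.chunk(local, world, dim=1)]
out_chunks = [torch.empty_like(c) for c in in_chunks]
dist.all_to_all(out_chunks, in_chunks)
resharded_a = torch.cat(out_chunks, dim=0)  # [8, 4]

# (b) all_to_all_single on one contiguous buffer: a single rearrangement puts
#     the per-destination blocks on the leading dim, then flat buffers are
#     exchanged with an even split (no per-chunk tensors).
inp = local.reshape(4, world, 8 // world).transpose(0, 1).contiguous()  # [world, 4, 4]
out = torch.empty_like(inp)
dist.all_to_all_single(out, inp)
resharded_b = out.reshape(-1, 8 // world)  # [8, 4]

assert torch.equal(resharded_a, resharded_b)
dist.destroy_process_group()
```

The list-based path pays one `.contiguous()` copy per chunk plus the output-chunk allocations and a final `cat`, while the single-tensor path does one contiguous rearrangement and exchanges flat buffers, which is the kind of copy saving described above.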

Tested on all the shard-dim change tests and they pass:

pytest test/distributed/tensor/test_redistribute.py -s -k shard_dim_alltoall

Fixes #ISSUE_NUMBER

cc @H-Huang @awgu @kwen2501 @fegin @fduwjj @wz337 @wconstab @d4l3k @c-p-i-o

@pytorch-bot pytorch-bot bot added the oncall: distributed and release notes: distributed (c10d) labels Mar 10, 2025
pytorch-bot bot commented Mar 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148868

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 1 Pending, 5 Unrelated Failures

As of commit c63b4a2 with merge base d789c22:

NEW FAILURES - The following jobs have failed:

FLAKY - The following jobs failed but were likely due to flakiness present on trunk:

UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@wanchaol wanchaol added the release notes: distributed (dtensor) label and removed the release notes: distributed (c10d) label Mar 10, 2025
@wanchaol wanchaol added the ciflow/periodic label Mar 10, 2025
@wanchaol wanchaol added the ciflow/trunk label Mar 10, 2025
@wanchaol wanchaol requested a review from tianyu-l March 10, 2025 16:38
@wanchaol (Collaborator, Author)

CI failures are not related to the PR

@tianyu-l (Contributor) left a comment

sounds good to me

Comment on lines +555 to +557
std::vector<int64_t> out_split_sizes;
std::vector<int64_t> in_split_sizes;
c10d::AllToAllOptions opts;
@tianyu-l (Contributor) commented Mar 10, 2025

these are optional, so they don't have to be fed into alltoall_base?

@wanchaol (Collaborator, Author)

They are required by the alltoall_base APIs :(
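
For context, here is a hedged Python-level sketch (an illustration, not code from this PR) of how the same split sizes surface in `torch.distributed.all_to_all_single`: there they are optional keyword arguments, and leaving them unset requests an even split, which, as I understand it, corresponds to passing empty split-size vectors to the C++ `alltoall_base`:

```python
# Hedged sketch; assumes a torchrun launch with an NCCL process group,
# as in the earlier example.
import torch
import torch.distributed as dist

dist.init_process_group("nccl")
rank, world = dist.get_rank(), dist.get_world_size()
torch.cuda.set_device(rank)

inp = torch.randn(4 * world, 8, device="cuda")
out = torch.empty_like(inp)

# Even split: no split sizes passed at the Python level
# (empty vectors on the C++ side).
dist.all_to_all_single(out, inp)

# Explicit split: per-rank row counts along dim 0; each list must sum to
# the dim-0 size.
sizes = [4] * world
dist.all_to_all_single(out, inp, output_split_sizes=sizes, input_split_sizes=sizes)

dist.destroy_process_group()
```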

@wanchaol (Collaborator, Author)

@pytorchbot merge -i "failures are not related to the PR"

pytorch-bot bot commented Mar 10, 2025

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: unrecognized arguments: failures are not related to the PR

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci,cherry-pick,close} ...

Try @pytorchbot --help for more info.

@wanchaol (Collaborator, Author)

@pytorchbot merge -i

Labels
ciflow/periodic · ciflow/trunk · Merged · oncall: distributed · open source · release notes: distributed (dtensor)