[Distributed][CI] Rework continuous TestCase #153653
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153653
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit 95f6d5d with merge base fa85434.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
LGTM
LGTM, but the test failures are real, due to Python 3.10 syntax (PyTorch still supports 3.9).
```diff
     # Rendezvous file
     rdvz_file: Optional[str] = None
     # timeout configured per class
     timeout: timedelta = timedelta(seconds=120)

     @classmethod
     @abc.abstractmethod
-    def backend_str(cls) -> str:
+    def backend_str(cls) -> str | None:
```
PyTorch is still using Python 3.9, which doesn't support `str | None` (this syntax is only supported in Python 3.10+).
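For illustration, a Python 3.9-compatible version of that signature would spell the return type with `typing.Optional` (a minimal sketch, not the exact class body from this PR):

```python
import abc
from typing import Optional


class MultiProcContinousTest(abc.ABC):
    @classmethod
    @abc.abstractmethod
    def backend_str(cls) -> Optional[str]:
        # Optional[str] is the Python 3.9-compatible equivalent of the
        # `str | None` union syntax introduced in Python 3.10 (PEP 604).
        ...
```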
unblock
```diff
     # number of test processes
-    world_size: int = 2
+    world_size: int = -2  # unset state
```
why are we changing this to -2?
It will be determined at run time by the device count.
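As a rough illustration of that answer, the unset sentinel could be resolved against the visible device count at run time; the helper below is a hypothetical sketch, not the code added in this PR:

```python
import torch


def _resolve_world_size(configured: int) -> int:
    # Hypothetical helper: keep an explicitly configured world_size,
    # otherwise (unset state, e.g. -2) fall back to the device count.
    if configured > 0:
        return configured
    return torch.cuda.device_count() if torch.cuda.is_available() else 1
```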
```python
    @classmethod
    def _run_test_given_id(cls, test_id: str, **kwargs) -> None:
        # self.id() == e.g. '__main__.TestDistributed.TestAdditive.test_get_rank'
```
nit: remove?
No, this is an example.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Starting merge as part of PR stack under #153677
Merge failed. Reason: 1 job has failed; the first few of them are: trunk / linux-jammy-rocm-py3.10 / test (default, 1, 2, linux.rocm.gpu.2). Details for Dev Infra team: raised by workflow job.
@pytorchbot merge -f "the RoCM node prep timeout does not seem related"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
@pytorchbot revert -m "More fixes needed" -c=nosignal
@pytorchbot successfully started a revert job. Check the current status here.
This reverts commit 0d5c628. Reverted #153653 on behalf of https://github.com/kwen2501 due to More fixes needed ([comment](#153653 (comment)))
@kwen2501 your PR has been successfully reverted.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Use `MultiProcContinousTest` to avoid re-creating the ProcessGroup in each test instance.
Pull Request resolved: #153677
Approved by: https://github.com/fegin, https://github.com/Skylion007, https://github.com/ngimel
ghstack dependencies: #153653
Fixes #154373, #154391, #154408, #154443, #154481.
Because MultiProcContinousTest [now executes the tests with 8 GPUs instead of 2](#153653), our PP tests comparing gradients have become flakier due to the longer pipeline. The gradients are still close, but we need to relax the tolerance.
Pull Request resolved: #154856
Approved by: https://github.com/Skylion007
A 2D AllToAllv shuffle is illustrated below (`world_size` = 2, `ne` = 2, where `ne` is the number of experts per rank):
```
Source: | Rank 0            | Rank 1            |
        | c0 | c1 | c2 | c3 | d0 | d1 | d2 | d3 |
Dest  : | Rank 0            | Rank 1            |
        | c0 | d0 | c1 | d1 | c2 | d2 | c3 | d3 |
```
where each `c_i` / `d_i` is a slice of the `input` tensor targeting expert `i`, with length indicated by the input splits (in `in_out_splits[0]`). That is, the 2D AllToAllv shuffle achieves a transpose from rank-major order at input to expert-major order at output.
Pull Request resolved: #155058
Approved by: https://github.com/ngimel
ghstack dependencies: #153653, #153677
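As a plain-Python illustration of the rank-major to expert-major transpose described above (no distributed calls; the chunk labels match the diagram, everything else is a sketch):

```python
# Sketch of the 2D AllToAllv shuffle (world_size=2, ne=2): rank-major chunks
# at the source become expert-major chunks at the destination.
world_size, ne = 2, 2

# Source: each rank holds one chunk per (global) expert, in expert order.
src = {
    0: ["c0", "c1", "c2", "c3"],  # Rank 0
    1: ["d0", "d1", "d2", "d3"],  # Rank 1
}

dest = {rank: [] for rank in range(world_size)}
for expert in range(world_size * ne):
    dst_rank = expert // ne            # expert i is hosted on rank i // ne
    for rank in range(world_size):     # gather expert i's chunk from every rank
        dest[dst_rank].append(src[rank][expert])

print(dest)  # {0: ['c0', 'd0', 'c1', 'd1'], 1: ['c2', 'd2', 'c3', 'd3']}
```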
The downstream consumer of the 2D all-to-all-v is often a grouped GEMM. Today the GEMM often has an alignment requirement on the chunk sizes within the grouped sequence, where each chunk carries the tokens headed for an expert. For example, `torch._group_mm` requires an alignment of 8.
This PR adds that alignment capability: when the user passes in a `major_align` argument, no extra padding step is needed. The key to supporting that is making the output offsets aligned to this value. (Output offsets are returned to the user in the 3rd row of `in_out_splits`, on device. The 2nd row, output splits, is unaffected by this alignment value -- i.e. it reflects the true number of tokens per expert.)
The algorithm is as follows. [Figure: illustration of the alignment algorithm]
In the detailed implementation, we use a warp scan to calculate the prefix sum on the "block" illustrated above. As a result, the "block" size, i.e. `npes`, is currently limited to the warp size, 32.
Pull Request resolved: #155172
Approved by: https://github.com/ngimel
ghstack dependencies: #153653, #153677, #155058
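A minimal sketch of the offset-alignment idea in plain Python (the function name and the round-up are illustrative, not the CUDA warp-scan implementation described above):

```python
def aligned_output_offsets(output_splits, major_align):
    # Output splits keep the true token counts per expert; only the offsets
    # are padded up so every chunk starts on a multiple of `major_align`,
    # as the grouped GEMM mentioned above expects.
    offsets, cursor = [], 0
    for split in output_splits:
        offsets.append(cursor)
        cursor += -(-split // major_align) * major_align  # round up to alignment
    return offsets


print(aligned_output_offsets([5, 3, 9], major_align=8))  # [0, 8, 16]
```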
Stack from ghstack (oldest at bottom):
Reworked `MultiProcContinousTest` to spawn processes during `setUpClass` instead of `main` (so that we can support multiple TestClasses in one file). The child processes now run an infinite loop, monitoring test IDs passed from the main process via a task queue. Reciprocally, the child processes inform the main process of a test's completion via a completion queue (see the sketch below).
Added a test template.
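The task-queue / completion-queue hand-off described above could look roughly like the sketch below (the names `_worker_loop`, `run ... sentinel` protocol, and the simplified dispatch are assumptions for illustration, not the exact code in this PR):

```python
import torch.multiprocessing as mp


def _run_test_given_id(test_id: str) -> str:
    # Placeholder for the real dispatch, which looks a test up by id,
    # e.g. '__main__.TestDistributed.TestAdditive.test_get_rank'.
    return f"ran {test_id}"


def _worker_loop(rank, task_queue, completion_queue):
    # Child processes (spawned once, e.g. in setUpClass) sit in this loop:
    # block on the task queue for the next test id, run it, then report back.
    while True:
        test_id = task_queue.get()
        if test_id is None:                 # sentinel: main process asks us to exit
            break
        _run_test_given_id(test_id)
        completion_queue.put((rank, test_id, "ok"))


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    tasks, done = ctx.Queue(), ctx.Queue()
    procs = [ctx.Process(target=_worker_loop, args=(r, tasks, done)) for r in range(2)]
    for p in procs:
        p.start()
    tasks.put("__main__.TestDistributed.TestAdditive.test_get_rank")
    print(done.get())                       # wait for one completion report
    for _ in procs:                         # one shutdown sentinel per worker
        tasks.put(None)
    for p in procs:
        p.join()
```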
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k