[ca][dynamo] always run eager checkpoint region's recomputation in eager #153300


Closed · xmfan wants to merge 16 commits

Conversation

@xmfan (Member) commented May 10, 2025

Stack from ghstack (oldest at bottom):

I slap a disable on the recomputation hook; otherwise the partitioner may save fewer or more activations than the eager count that checkpoint expects, and they mismatch. See the code comment Note: [compiled autograd and checkpoint unpack hook].
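For illustration, here is a minimal sketch of the mechanism (hypothetical hook names and bodies, not this PR's actual diff): checkpointing is built on saved-tensor hooks, recomputation happens inside the unpack hook, and wrapping that hook in torch._dynamo.disable keeps the recomputation in eager so a compiler can't change the saved-activation count:

```python
import torch

def pack_hook(t):
    # real checkpointing would stash inputs/RNG state instead of the tensor
    return t

@torch._dynamo.disable  # keep the recomputation path out of the compiled graph
def unpack_hook(saved):
    # real checkpointing reruns the forward here to rebuild activations
    return saved

x = torch.randn(4, requires_grad=True)
with torch.autograd.graph.saved_tensors_hooks(pack_hook, unpack_hook):
    y = x.sin().cos()
y.sum().backward()
```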

This fixes all non-nested checkpointing tests. I also wrapped the nested checkpointing tests, and a few of them still fail; a reference example of nesting is sketched below.
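For reference (standard torch.utils.checkpoint API, not code from this PR), "nested" means a checkpointed region inside another checkpointed region:

```python
import torch
from torch.utils.checkpoint import checkpoint

def inner(x):
    return x.sin()

def outer(x):
    # a checkpointed region nested inside another checkpointed region
    return checkpoint(inner, x, use_reentrant=False).cos()

x = torch.randn(4, requires_grad=True)
out = checkpoint(outer, x, use_reentrant=False)
out.sum().backward()
```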

This also seems to fix all PYTORCH_TEST_WITH_DYNAMO checkpointing tests except for TestAutograd.test_checkpointing_without_reentrant_custom_function_works. For those tests, it looks like we fail to HOPify the checkpointed region, so when the backward executes the unpack hooks, dynamo tries to trace them. This messes up checkpointing's internal state tracking: some tests raise _StopRecomputationError and others raise the same count-mismatch error as CA.

FIXES #127115

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov

@pytorch-bot (bot) commented May 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153300

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (1 Unrelated Failure)

As of commit a2844d7 with merge base 20ba8fe:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

xmfan added commits that referenced this pull request May 10-12, 2025
@xmfan added the topic: not user facing label May 12, 2025
xmfan added a commit that referenced this pull request May 13, 2025
@xmfan added the keep-going label (Don't stop on first failure, keep running tests until the end) May 13, 2025
@pytorchmergebot (Collaborator):

Starting merge as part of PR stack under #152735

@pytorchmergebot (Collaborator):

Starting merge as part of PR stack under #152689

pytorchmergebot pushed a commit that referenced this pull request May 15, 2025
@malfet (Contributor) commented May 15, 2025

@pytorchbot revert -m "Looks like it breaks rocm, see https://hud.pytorch.org/hud/pytorch/pytorch/fa8543454ab5d3deda1d15c1f8d24e9ebe14f340/1?per_page=50&name_filter=slow%20%2F%20linux-jammy-rocm&mergeEphemeralLF=true" -c nosignal

1 similar comment

@pytorchmergebot (Collaborator):

@pytorchbot successfully started a revert job. Check the current status here.

@pytorchmergebot (Collaborator):

@xmfan your PR has been successfully reverted.

@pytorchmergebot added the Reverted and ci-no-td (Do not run TD on this PR) labels May 15, 2025
@xmfan (Member, Author) commented May 15, 2025

The 4 checkpointing @slowTest tests are close to CI's 1-hour timeout, so I'm moving them to run the CA graph without torch.compile.
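For context, a sketch of what running the CA graph without torch.compile can look like, assuming the torch._dynamo.compiled_autograd.enable context manager and an identity compiler_fn (the names here are illustrative, not this PR's test code):

```python
import torch
from torch._dynamo import compiled_autograd

def eager_compiler_fn(gm):
    # compiler_fn receives the captured FX GraphModule; returning it
    # unchanged executes the CA graph eagerly, with no Inductor compile
    return gm

model = torch.nn.Linear(8, 8)
x = torch.randn(2, 8)
with compiled_autograd.enable(eager_compiler_fn):
    model(x).sum().backward()
```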

@pytorchmergebot (Collaborator):

Starting merge as part of PR stack under #152735

pytorchmergebot pushed commits that referenced this pull request May 16, 2025
The C++ Reducer is silently incorrect under CA; its implementation no-ops the collective. I'm guessing it was no-op'd because in DDP + python reducer, the C++ reducer is still being initialized.

Pull Request resolved: #152735
Approved by: https://github.com/fegin
ghstack dependencies: #153300, #152689
Labels
ci-no-td · ciflow/inductor · keep-going · Merged · module: inductor · Reverted · topic: not user facing