FSDP + DTensor Loss Flatlines Randomly #117471
Comments
We confirmed this affects the 2.2.0 final RC.

Hi @mvpatel2000 and @Skylion007
@atalman unfortunately I do not have a minimal repro, nor am I able to share the code for this run at this time :( We run a transformer model with DTensor + FSDP (passing in a device mesh). The only different thing we do is that some weights are manually wrapped with DTensor and pre-sharded before FSDP -- I'm pretty sure this won't matter, so reproducing on your end shouldn't be too hard, but I'm not 100% confident.
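For reference, a minimal sketch of the kind of setup described above, under stated assumptions: `MyTransformer` is a placeholder model (not the actual code from this run), the script is launched with `torchrun`, and the torch-2.2-era `init_device_mesh` and FSDP `device_mesh` APIs are assumed. The manual DTensor pre-sharding of some weights mentioned in the comment is omitted here.

```python
# Minimal sketch (assumptions: launched via torchrun, torch 2.2-era APIs).
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP


class MyTransformer(nn.Module):
    """Placeholder standing in for the real transformer model."""

    def __init__(self, d_model: int = 1024):
        super().__init__()
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        return self.proj(x)


def main():
    dist.init_process_group("nccl")
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

    # 1D device mesh over all ranks, passed explicitly to FSDP.
    mesh = init_device_mesh("cuda", (dist.get_world_size(),))

    model = MyTransformer().cuda()

    # Wrap with FSDP, passing in the device mesh as described in the comment.
    fsdp_model = FSDP(model, device_mesh=mesh, use_orig_params=True)

    optim = torch.optim.AdamW(fsdp_model.parameters(), lr=1e-4)
    x = torch.randn(8, 1024, device="cuda")
    loss = fsdp_model(x).sum()
    loss.backward()
    optim.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```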
@atalman I just checked our release branch; in addition to #117020, we'll also need #116122 to resolve the merge conflicts. I can also confirm that I hit similar numeric issues (not a loss flatline, but a loss NaN problem that looks similar to the issue @mvpatel2000 met). These two fixes helped me resolve the NaN problem, so it would be great if we can include both in the release branch :)
Fixed in dev |
🐛 Describe the bug
We have been training with DTensor off torch nightly builds (in anticipation of 2.2), and we very often see the loss flatline. We do not see this at all on the current nightly (as of 4 days ago), and at this point we are very confident there is a regression/bug in the current release candidate (for 2.2) that breaks FSDP training (at least with DTensor).
Our best guess is that one of the two linked PRs fixes it:
To be safe, I would personally also want to include the no-grad bug fix:
Versions
Torch 2.2 branch
cc @zhaojuanmao @mrshenli @rohan-varma @awgu @fegin @penguinwu @kwen2501 @wanchaol @XilunWu @tianyu-l