autograd: Add VJP and JVP rules for aten::aminmax #151186
Open: vijayabhaskar-ev wants to merge 3 commits into pytorch:main from vijayabhaskar-ev:fix-aminmax-autograd-rules (base: main)
soulitzer (reviewer): Why do we need special handling of NaNs for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?
vijayabhaskar-ev (author): NaNs can affect the results differently in aminmax compared to separate amin or amax calls, since those functions return scalars. However, I have now updated the handling of NaNs in aminmax to be consistent with how NaNs are handled elsewhere in the codebase.
soulitzer (reviewer): Sorry for the late reply. Do you mind giving an example of how their NaN behavior can differ? We definitely need to document this better if that is the case.
vijayabhaskar-ev (author): When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN. However, when using aminmax with dimension reduction, we need special handling because:
soulitzer (reviewer): Hmm, I'm not sure I understand. amin and amax separately don't always return scalars, right? Isn't this by default what would happen if you just called amin and amax separately, if ANY value along a reduction dimension is NaN?
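(For reference, the forward behavior being asked about can be checked directly. This is an illustrative snippet, not part of the PR; the expected outputs in the comments assume the documented NaN-propagating behavior of these reductions.)

```cpp
#include <torch/torch.h>
#include <iostream>
#include <limits>

int main() {
  const float nan = std::numeric_limits<float>::quiet_NaN();
  // Row 0 contains a NaN, row 1 does not.
  auto x = torch::tensor({1.0f, nan, 3.0f,
                          2.0f, 0.5f, 4.0f}).reshape({2, 3});

  // Reducing over dim=1: the slice containing a NaN produces NaN.
  std::cout << x.amin({1}) << "\n";  // expected: [nan, 0.5]
  std::cout << x.amax({1}) << "\n";  // expected: [nan, 4.0]

  // aminmax performs the same reduction once and returns both outputs.
  auto [mn, mx] = x.aminmax(/*dim=*/1);
  std::cout << mn << "\n" << mx << std::endl;  // expected: [nan, 0.5] and [nan, 4.0]
}
```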
vijayabhaskar-ev (author): Sorry if my earlier explanation caused any confusion. The extra NaN branch in aminmax_backward is identical to the logic in standalone amin/amax. min()/max() use

```cpp
bool is_nan = value.isnan().item<bool>();
auto mask = is_nan ? input.isnan() : (input == value);
```

via evenly_distribute_backward. Because aminmax returns two tensors (min, max) and has two incoming gradients (grad_min, grad_max), we simply inline that same scalar NaN check twice (once for the min branch, once for the max), then call scale_grad_by_count and sum the results.
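(To make that concrete, here is a rough sketch, not the PR's actual code: the helper name is invented, and the scalar NaN check quoted above is generalized to a per-slice mask so it applies to a dim reduction. Gradient goes to the NaN positions when the reduced value is NaN, otherwise to the positions equal to the reduced value, split evenly over ties, and the min and max contributions are summed.)

```cpp
#include <torch/torch.h>
#include <iostream>

// Sketch only (not the PR's code): distribute grad_min/grad_max back to the
// input following the evenly_distribute_backward idea described above.
torch::Tensor aminmax_vjp_sketch(const torch::Tensor& grad_min,
                                 const torch::Tensor& grad_max,
                                 const torch::Tensor& input,
                                 const torch::Tensor& min_vals,
                                 const torch::Tensor& max_vals,
                                 int64_t dim) {
  auto branch = [&](const torch::Tensor& grad, const torch::Tensor& value) {
    auto v = value.unsqueeze(dim);  // restore the reduced dim for broadcasting
    // NaN slices send gradient to NaN positions; others to positions equal
    // to the reduced value.
    auto mask = torch::where(v.isnan(), input.isnan(), input == v);
    auto count = mask.sum(dim, /*keepdim=*/true).clamp_min(1);
    return grad.unsqueeze(dim) * mask.to(input.scalar_type()) / count;
  };
  // One contribution per output of aminmax, summed.
  return branch(grad_min, min_vals) + branch(grad_max, max_vals);
}

int main() {
  auto x = torch::tensor({1.0f, 3.0f, 3.0f,
                          2.0f, 0.5f, 0.5f}).reshape({2, 3});
  auto [mn, mx] = x.aminmax(/*dim=*/1);
  auto gin = aminmax_vjp_sketch(torch::ones_like(mn), torch::ones_like(mx),
                                x, mn, mx, /*dim=*/1);
  std::cout << gin << std::endl;  // tied extrema share the gradient evenly
}
```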
soulitzer (reviewer): Why do we care about what min/max do? amin/amax is different?
vijayabhaskar-ev (author): amin and amax are single-reduction operations; aminmax simultaneously returns both the minimum and maximum values. In amin/amax, the gradient is distributed to the elements matching the reduction result. In aminmax, we need to handle gradient distribution for both the min and the max values simultaneously.
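(For comparison, an illustrative check of what reducing with amin and amax separately produces today; a combined aminmax rule would be expected to match this, with ties sharing the gradient evenly within each branch.)

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  auto x = torch::tensor({1.0f, 3.0f, 3.0f,
                          2.0f, 0.5f, 0.5f})
               .reshape({2, 3})
               .requires_grad_(true);

  // Two separate reductions, each with its own backward formula.
  auto mn = x.amin({1});
  auto mx = x.amax({1});
  (mn.sum() + mx.sum()).backward();

  // Each row's min and max positions receive gradient; ties split it evenly.
  std::cout << x.grad() << std::endl;  // expected: [[1.0, 0.5, 0.5], [1.0, 0.5, 0.5]]
}
```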
vijayabhaskar-ev (author): @soulitzer I hope the explanation above resolves your question about the special handling of NaNs in aminmax.