autograd: Add VJP and JVP rules for aten::aminmax #151186
Conversation
Adds functionally correct backward (VJP) and forward (JVP) autograd rules for the aten::aminmax operator to derivatives.yaml using existing helper functions. This ensures correct eager mode differentiation. Fixes pytorch#148808
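For context, here is a minimal sketch (not part of this PR's diff) of how both rules can be exercised once they land: gradcheck validates the backward/VJP, and check_forward_ad=True exercises the forward/JVP.

import torch
from torch.autograd import gradcheck

x = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
# Full reduction and a dim-reduction variant; gradcheck checks both outputs of aminmax.
assert gradcheck(torch.aminmax, (x,), check_forward_ad=True, check_batched_grad=False)
assert gradcheck(lambda t: torch.aminmax(t, dim=1), (x,), check_forward_ad=True, check_batched_grad=False)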
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/151186
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures. As of commit a76690e with merge base b081016.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Didn't find the following labels among repository labels: release notes: bug fix, topic: autograd
test/test_autograd.py (Outdated)

@@ -11369,6 +11369,53 @@ def test_scatter_index_reduce_amin_amax_backprops_to_all_values(self, device):

        gradcheck(fn, (input, 0, idx, src, reduction), check_batched_grad=False)

    def test_aminmax_backprops_to_all_values(self, device):
Thanks for the thorough test; it would be nice, though, if we could consolidate it with the existing testing of amin/amax.
You can wrap this in a function that exposes only one of the outputs and test it basically the same way as amin/amax, e.g. amin_ = lambda x: torch.aminmax(x)[0]
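For illustration, the suggested consolidation might look roughly like this (a hypothetical sketch, not the PR's actual test code; the aminmax entries assume this PR's autograd rules are in place):

import torch
from torch.autograd import gradcheck

# Wrappers that expose one aminmax output each, so they can be checked
# exactly like amin/amax.
amin_ = lambda x: torch.aminmax(x)[0]
amax_ = lambda x: torch.aminmax(x)[1]

x = torch.randn(4, 4, dtype=torch.double, requires_grad=True)
for fn in (torch.amin, torch.amax, amin_, amax_):
    assert gradcheck(fn, (x,), check_batched_grad=False)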
I’ve updated the test case in the existing test_min_max_median_backprops_to_all_values. Let me know if any further changes are needed.
We also have a test suite entry for aminmax that we should modify here to indicate that autograd is supported.
… to handle amin and amax correctly.
- Modified derivatives.yaml to reflect the changes in autograd behavior.
- Updated test cases in test_autograd.py to validate the updated rules.
- Updated common_methods_invocations.py to include amin and amax in relevant test cases.
Added the aminmax_backward implementation to correctly handle the case where all elements are NaN.
Thanks for the update, looking great. Just added a few more small comments.
auto grad_max_scaled = grad_max.defined()
    ? scale_grad_by_count(
          restore_reduced_dims(grad_max, dims_, keepdim), mask_max, dims_)
    : at::zeros_symint(self.sym_sizes(), self.options());
We don't need to materialize a zeros tensor or do an addition when one of grad_min/grad_max is not defined. You can initialize an undefined tensor (see linear_double_backward, where it does Tensor grad_self;). Start by computing the grad contribution from min (or max). Then, when you do the max (or min), you can check whether the tensor is still undefined and decide whether you need to add or not.
It would also be nice to do the addition in-place sometimes. You can do a.add_(b) if b is not a subclass. (Search for usages of areAnyTensorSubclassLike(...) in this file.)
I have updated the gradient computation based on your suggestions. Now, the grad contribution from min (or max) is computed first, and the addition is only performed if both tensors are defined. I've also incorporated in-place addition where applicable, following the usage of areAnyTensorSubclassLike as referenced in the file. Thank you for the feedback.
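For reference, a hypothetical Python sketch of the accumulation pattern described above (make_min_contrib/make_max_contrib are placeholder names standing in for the real scale_grad_by_count logic, not functions from the codebase):

def accumulate_grad_self(grad_min, grad_max, make_min_contrib, make_max_contrib):
    grad_self = None  # start "undefined", like an empty Tensor in the C++ code
    if grad_min is not None:
        grad_self = make_min_contrib(grad_min)
    if grad_max is not None:
        contrib = make_max_contrib(grad_max)
        # Only add when both branches contributed; add in place when it is safe.
        grad_self = contrib if grad_self is None else grad_self.add_(contrib)
    return grad_self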
  mask_max = at::ones_like(self, at::kBool);
} else {
  mask_max = (max_reduced == self);
}
Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?
NaNs can affect the results differently in aminmax compared to separate amin or amax calls, since those functions return scalars. However, I have now updated the handling of NaNs in aminmax to be consistent with how NaNs are handled elsewhere in the codebase.
Sorry for the late reply - Do you mind giving an example of how their nan behavior can differ? We definitely need to document this better if this is the case.
# Example 1: Regular case without NaN
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print("Regular tensor:")
print(f"amin: {torch.amin(x)}")        # 1.0
print(f"amax: {torch.amax(x)}")        # 4.0
print(f"aminmax: {torch.aminmax(x)}")  # (1.0, 4.0)

# Example 2: With NaN
x = torch.tensor([1.0, float('nan'), 3.0, 4.0])
print("\nTensor with NaN:")
print(f"amin: {torch.amin(x)}")        # nan
print(f"amax: {torch.amax(x)}")        # nan
print(f"aminmax: {torch.aminmax(x)}")  # (nan, nan)
When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.
However, when using aminmax with dimension reduction, we need special handling because:
- We need to track NaN values consistently across both min and max computations
- We need to preserve the relationship between corresponding min/max pairs in the output
- If all values along a reduction dimension are NaN, both min and max should be NaN
- If some values are NaN and some are not, we need consistent handling for both outputs
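A short forward-only illustration of the per-dimension case described above (this is just standard aminmax behavior, not code from the PR):

import torch

x = torch.tensor([[1.0, float('nan'), 3.0],
                  [4.0, 5.0, 6.0]])
mn, mx = torch.aminmax(x, dim=1)
print(mn)  # tensor([nan, 4.]) -- a row containing NaN reduces to NaN for the min
print(mx)  # tensor([nan, 6.]) -- and likewise for the max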
Hmm I'm not sure I understand
When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.
amin and amax separately don't always return scalars, right?
We need to track NaN values consistently across both min and max computations
Isn't this by default what would happen if you just called amin and amax separately?
If all values along a reduction dimension are NaN, both min and max should be NaN
If ANY value along a reduction dimension is NaN?
Sorry if my earlier explanation caused any confusion.
The extra NaN branch in aminmax_backward is identical to the logic in standalone min()/max(), which use

bool is_nan = value.isnan().item<bool>();
auto mask = is_nan ? input.isnan() : (input == value);

via evenly_distribute_backward.
Because aminmax returns two tensors (min, max) and has two incoming grads (grad_min, grad_max), we simply inline that same scalar NaN check twice (once for the min branch, once for the max), then call scale_grad_by_count and sum the results.
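An illustrative Python analogue of inlining that check once per output (a sketch, not the actual C++ implementation; aminmax_masks is a hypothetical name):

import torch

def aminmax_masks(x):
    mn, mx = torch.aminmax(x)
    # If the reduced value is NaN, every NaN element "ties" with it; otherwise
    # the usual equality mask is used. The same rule is applied per output.
    mask_min = x.isnan() if bool(mn.isnan()) else (x == mn)
    mask_max = x.isnan() if bool(mx.isnan()) else (x == mx)
    return mask_min, mask_max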
Why do we care about what min/max do? amin/amax is different?
Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?
amin and amax are single reduction operations. aminmax simultaneously returns both the minimum and maximum values.
In amin/amax, the gradient is distributed to the elements matching the reduction result. In aminmax, we need to handle gradient distribution for both min and max values simultaneously.
@soulitzer I hope the above explanation resolves your query on the special handling of NaNs for aminmax.
Makes the backward pass more efficient by:
- Avoiding zeros tensor creation when only one gradient is defined
- Using in-place operations where possible
@IvanKobzarev Is this PR good to go? Can you review this once?
Hmm I still feel like I'm missing something. Could you explain why I cannot just do amin's backward and add that result to amax's backward?
aminmax returns two outputs (min, max), so its backward gets two gradient tensors, and either one of them can be missing (undefined) if the user only uses one of the outputs. aminmax_backward is needed because it handles missing gradients, NaN inputs, and tie cases without crashing or double-counting.
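A minimal illustration of the one-output-used case (this assumes the autograd rules from this PR; the expected values reflect the even tie-splitting that scale_grad_by_count performs):

import torch

x = torch.tensor([1.0, 3.0, 3.0], requires_grad=True)
mn, mx = torch.aminmax(x)
mx.backward()   # only the max output participates, so grad_min arrives undefined
print(x.grad)   # expected tensor([0.0, 0.5, 0.5]): grad split evenly among tied maxima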
Yes, that is fine. Could you comment more on the NaN handling part, though?
aminmax needs the NaN guard because it has two separate gradients that can land on the same NaN elements; the guard splits or skips them so backprop is correct. For example, with x = [nan, nan, nan], every element of x counts as both the min and the max, because NaN ties with itself, so the loss mn + mx should yield grads of 2/3 each (summing to 2). Without the guard you would double that to 4, or even crash if only one output's grad is present.
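The 2/3 example can be reproduced today with min()/max(), which already route through evenly_distribute_backward; with the guard in place, aminmax is expected to match this behavior:

import torch

x = torch.tensor([float('nan')] * 3, requires_grad=True)
(x.min() + x.max()).backward()
print(x.grad)  # tensor([0.6667, 0.6667, 0.6667]) -- 1/3 from min plus 1/3 from max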