autograd: Add VJP and JVP rules for aten::aminmax #151186

Open · wants to merge 3 commits into main

Conversation

@vijayabhaskar-ev commented Apr 13, 2025

Adds functionally correct backward (VJP) and forward (JVP) autograd rules for the aten::aminmax operator to derivatives.yaml using existing helper functions. This ensures correct eager mode differentiation.

Fixes #148808

pytorch-bot bot commented Apr 13, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/151186

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit a76690e with merge base b081016:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot commented Apr 13, 2025

Didn't find following labels among repository labels: release notes: bug fix

pytorch-bot bot commented Apr 13, 2025

Didn't find following labels among repository labels: topic: autograd

@soulitzer added the release notes: autograd (release notes category) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels Apr 13, 2025
@@ -11369,6 +11369,53 @@ def test_scatter_index_reduce_amin_amax_backprops_to_all_values(self, device):

gradcheck(fn, (input, 0, idx, src, reduction), check_batched_grad=False)

def test_aminmax_backprops_to_all_values(self, device):
@soulitzer (Contributor) commented Apr 13, 2025

Thanks for the thorough test; it would be nice, though, if we could consolidate it with the existing testing of amin/amax.

You can wrap this in a function that exposes only one of the outputs and test it basically in the same way as amin/amax, e.g. amin_ = lambda x: torch.aminmax(x)[0].
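For illustration, a minimal sketch of that wrapping approach (the shapes and the gradcheck call here are illustrative, not taken from the PR, and assume the backward added by this PR is available):

import torch
from torch.autograd import gradcheck

# Wrap each aminmax output so it can go through the same gradcheck-style
# test path already used for amin/amax.
amin_ = lambda x: torch.aminmax(x)[0]
amax_ = lambda x: torch.aminmax(x)[1]

x = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
assert gradcheck(amin_, (x,))
assert gradcheck(amax_, (x,))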

vijayabhaskar-ev (Author)

I’ve updated the test case in the existing test_min_max_median_backprops_to_all_values. Let me know if any further changes are needed.

@soulitzer (Contributor)

We also have a test suite entry for aminmax that we should modify here to indicate that autograd is supported:

https://github.com/pytorch/pytorch/blame/ddfc14b3ae71dadfe0f831f6f2eff70b54d0f83f/torch/testing/_internal/common_methods_invocations.py#L14649-L14656
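As an aside, once that OpInfo entry advertises autograd support, the new JVP rule can also be exercised manually with forward-mode AD; a purely illustrative check (not part of the PR, and assuming the PR's rules are in place) could look like:

import torch
import torch.autograd.forward_ad as fwAD

# Push a tangent through aminmax and read the tangents of both outputs.
x = torch.randn(3, 4, dtype=torch.double)
t = torch.randn_like(x)
with fwAD.dual_level():
    xd = fwAD.make_dual(x, t)
    mn, mx = torch.aminmax(xd, dim=1)
    print(fwAD.unpack_dual(mn).tangent)
    print(fwAD.unpack_dual(mx).tangent)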

@albanD albanD removed their request for review April 14, 2025 15:27
… to handle amin and amax correctly.

- Modified derivatives.yaml to reflect the changes in autograd behavior.
- Updated test cases in test_autograd.py to validate the updated rules.
- Updated common_methods_invocations.py to include amin and amax in relevant test cases.
@vijayabhaskar-ev (Author)

Added the aminmax_backward implementation to correctly handle the case where all elements are NaN.

@soulitzer (Contributor) left a comment

Thanks for the update, looking great. Just added a few more small comments.

auto grad_max_scaled = grad_max.defined()
? scale_grad_by_count(
restore_reduced_dims(grad_max, dims_, keepdim), mask_max, dims_)
: at::zeros_symint(self.sym_sizes(), self.options());
@soulitzer (Contributor) commented Apr 15, 2025

We don't need to materialize a zeros tensor / do an addition when one of grad_min/grad_max is not defined. You can initialize an undefined tensor (see linear_double_backward, where it does Tensor grad_self;). Start with computing the grad contribution from min (or max). Then, when you do the max (or min), you can check whether the tensor is still undefined and decide whether you need to add or not.

It would also be nice to do the addition in-place sometimes. You can do a.add_(b) if b is not a subclass. (Search for usages of areAnyTensorSubclassLike(...) in this file.)
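A rough Python-flavored sketch of the accumulation pattern being described (the actual implementation is C++ in FunctionsManual.cpp; the function and argument names here are hypothetical):

import torch
from typing import Optional

def accumulate_aminmax_grads(contrib_min: Optional[torch.Tensor],
                             contrib_max: Optional[torch.Tensor]) -> Optional[torch.Tensor]:
    # contrib_min / contrib_max are the already-scaled per-output
    # contributions, or None when that output's gradient is undefined.
    grad_self = None
    if contrib_min is not None:
        grad_self = contrib_min
    if contrib_max is not None:
        # Only add when something is already accumulated; no zeros tensor is
        # ever materialized. (The in-place vs. out-of-place choice in the C++
        # code is guarded by areAnyTensorSubclassLike.)
        grad_self = contrib_max if grad_self is None else grad_self + contrib_max
    return grad_self

g = accumulate_aminmax_grads(torch.ones(3), None)
print(g)  # only the min contribution; nothing is added for the undefined max grad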

vijayabhaskar-ev (Author)

I have updated the gradient computation based on your suggestions. Now the grad contribution from min (or max) is computed first, and the addition is only performed if both tensors are defined. I've also incorporated in-place addition where applicable, following the usage of areAnyTensorSubclassLike as referenced in the file. Thank you for the feedback.

mask_max = at::ones_like(self, at::kBool);
} else {
mask_max = (max_reduced == self);
}
Contributor

Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?

vijayabhaskar-ev (Author)

NaNs can affect the results differently in aminmax compared to separate amin or amax calls, since those functions return scalars. However, I have now updated the handling of NaNs in aminmax to be consistent with how NaNs are handled elsewhere in the codebase.

@soulitzer (Contributor) commented Apr 24, 2025

Sorry for the late reply - Do you mind giving an example of how their nan behavior can differ? We definitely need to document this better if this is the case.

vijayabhaskar-ev (Author)

# Example 1: Regular case without NaN
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print("Regular tensor:")
print(f"
8000
amin: {torch.amin(x)}")  # 1.0
print(f"amax: {torch.amax(x)}")  # 4.0 
print(f"aminmax: {torch.aminmax(x)}")  # (1.0, 4.0)

# Example 2: With NaN
x = torch.tensor([1.0, float('nan'), 3.0, 4.0])
print("\nTensor with NaN:")
print(f"amin: {torch.amin(x)}")  # nan
print(f"amax: {torch.amax(x)}")  # nan
print(f"aminmax: {torch.aminmax(x)}")  # (nan, nan)

When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.

However, when using aminmax with dimension reduction, we need special handling (see the small example after this list) because:

  • We need to track NaN values consistently across both min and max computations
  • We need to preserve the relationship between corresponding min/max pairs in the output
  • If all values along a reduction dimension are NaN, both min and max should be NaN
  • If some values are NaN and some are not, we need consistent handling for both outputs
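To make the dim-reduction case concrete, a small illustrative addition to the examples above (forward values only, not gradients):

# Example 3 (illustrative): NaN with dimension reduction
x = torch.tensor([[1.0, float('nan')],
                  [3.0, 4.0]])
mn, mx = torch.aminmax(x, dim=1)
print(mn)  # tensor([nan, 3.]) -- a row containing NaN propagates NaN
print(mx)  # tensor([nan, 4.])    to both outputs for that row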

Contributor

Hmm, I'm not sure I understand.

> When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.

amin and amax separately don't always return scalars, right?

> We need to track NaN values consistently across both min and max computations

Isn't this by default what would happen if you just called amin and amax separately?

> If all values along a reduction dimension are NaN, both min and max should be NaN

If ANY value along a reduction dimension is NaN?

@vijayabhaskar-ev (Author) commented Apr 29, 2025

Sorry if my earlier explanation caused any confusion.

The extra NaN branch in aminmax_backward is identical to the logic in standalone amin/amax.

min()/max() use, via evenly_distribute_backward:

bool is_nan = value.isnan().item<bool>();
auto mask = is_nan ? input.isnan() : (input == value);

Because aminmax returns two tensors (min, max) and has two incoming grads (grad_min, grad_max), we simply inline that same scalar NaN check twice: once for the min branch and once for the max, then call scale_grad_by_count and sum the results.
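In Python terms, the reason the plain equality mask is not enough when the reduced value is NaN (an illustrative check, not code from the PR):

import torch

x = torch.tensor([1.0, float('nan'), 3.0])
value = torch.amax(x)   # tensor(nan): NaN propagates through the reduction
print(x == value)       # tensor([False, False, False]) -- NaN never compares equal
print(x.isnan())        # tensor([False,  True, False]) -- the mask used instead when the result is NaN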

Contributor

Why do we care about what min/max do? amin/amax is different?

@vijayabhaskar-ev (Author) commented May 3, 2025

> Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?

amin and amax are single-reduction operations; aminmax simultaneously returns both the minimum and maximum values.

In amin/amax, the gradient is distributed to the elements matching the reduction result. In aminmax, we need to handle gradient distribution for both the min and the max values simultaneously.

vijayabhaskar-ev (Author)

@soulitzer I hope the above explanation resolves your query on the special handling of NaNs for aminmax.

Makes backward pass more efficient by:
- Avoiding zeros tensor creation when only one gradient is defined
- Using in-place operations where possible
@vijayabhaskar-ev (Author)

@IvanKobzarev Is this PR good to go? Can you review this once?

@soulitzer (Contributor)

Hmm I still feel like I'm missing something. Could you explain why I cannot just do amin's backward and add that result to amax's backward?

- name: amax(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor
  self: scale_grad_by_count(restore_reduced_dims(grad, dim, keepdim), restore_reduced_dims(result, dim, keepdim) == self, dim)
  result: amaxamin_jvp(self_p, self_t, result, dim, keepdim)
- name: amin(Tensor self, int[1] dim=[], bool keepdim=False) -> Tensor
  self: scale_grad_by_count(restore_reduced_dims(grad, dim, keepdim), restore_reduced_dims(result, dim, keepdim) == self, dim)
  result: amaxamin_jvp(self_p, self_t, result, dim, keepdim)
a = restore_reduced_dims(grad, dim, keepdim)
b_max = restore_reduced_dims(max, dim, keepdim) == self
b_min = restore_reduced_dims(min, dim, keepdim) == self
out = scale_grad_by_count(a, b_max, dim) + scale_grad_by_count(a, b_min, dim)

@vijayabhaskar-ev (Author)

aminmax returns two outputs (min, max), so backward gets two gradient tensors:

  • grad_min for the min output
  • grad_max for the max output

Either one of them can be missing (undefined) if the user only uses one of the outputs.

aminmax_backward is needed because it handles missing gradients, NaN inputs, and tie cases without crashing or double-counting.
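For example, in user code like the following, only grad_min reaches the backward and grad_max arrives undefined (an illustrative snippet, assuming this PR's rules are in place):

import torch

x = torch.randn(4, 5, dtype=torch.double, requires_grad=True)
mn, _ = torch.aminmax(x, dim=1)   # only the min output is used downstream
mn.sum().backward()               # the backward sees a defined grad_min and an undefined grad_max
print(x.grad.shape)               # torch.Size([4, 5])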

@soulitzer (Contributor)

Yes, that is fine. Could you comment more on the NaN handling part, though?

@vijayabhaskar-ev (Author)

aminmax needs the NaN guard because it has two separate gradients that can land on the same NaN elements. The guard splits or skips them so backprop is correct (e.g., with x = [nan, nan, nan], the loss mn + mx should yield grads of 2/3 each, summing to 2; without the guard you would double to 4, or even crash if only one output's grad is present).

x = torch.tensor([float('nan'), float('nan'), float('nan')], requires_grad=True)
mn, mx = torch.aminmax(x)  # both mn and mx are nan

Then every element of x counts as both the min and the max, because NaN ties with itself.
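A runnable version of that scenario (assuming this PR's backward is in place; the expected gradient values are taken from the explanation above, not independently verified here):

import torch

# All-NaN input: every element ties for both the min and the max.
x = torch.full((3,), float('nan'), dtype=torch.double, requires_grad=True)
mn, mx = torch.aminmax(x)
(mn + mx).backward()
print(x.grad)  # per the explanation above: 1/3 from min + 1/3 from max = 2/3 per element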

Labels
open source · release notes: autograd (release notes category) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

[Inductor] Inference failed with the compiled model with aminmax operator
3 participants