autograd: Add VJP and JVP rules for aten::aminmax by vijayabhaskar-ev · Pull Request #151186 · pytorch/pytorch · GitHub

autograd: Add VJP and JVP rules for aten::aminmax #151186


Open · wants to merge 3 commits into base: main
9 changes: 7 additions & 2 deletions test/test_autograd.py
@@ -11339,12 +11339,17 @@ def fn():
 # Generic device type autograd tests.
 class TestAutogradDeviceType(TestCase):
     def test_min_max_median_backprops_to_all_values(self, device):
-        for f in [torch.min, torch.max, torch.median, torch.nanmedian]:
+        amin_ = lambda x: torch.aminmax(x)[0]
+        amax_ = lambda x: torch.aminmax(x)[1]
+
+        for f in [torch.min, torch.max, torch.median, torch.nanmedian, amin_, amax_]:
             x1 = torch.tensor(
                 [1.0, 0.0, 1.0, 0.0, 1.0, 0.0], device=device, requires_grad=True
             )
             x2 = torch.tensor(
-                [float("nan"), float("nan"), float("nan")], requires_grad=True
+                [float("nan"), float("nan"), float("nan")],
+                device=device,
+                requires_grad=True,
             )
             for x in [x1, x2]:
                 y = f(x)
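The all-NaN input is the interesting new case for the aminmax lambdas: under the backward added in this PR, the gradient should still be distributed evenly across the NaN elements. A minimal sketch of that expectation (not taken from the test body):

import torch

x = torch.tensor([float("nan")] * 3, requires_grad=True)
torch.aminmax(x)[0].backward()
# Every element matches the NaN minimum, so grad_min is split three ways.
print(x.grad)  # expected: tensor([0.3333, 0.3333, 0.3333])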
5 changes: 5 additions & 0 deletions tools/autograd/derivatives.yaml
@@ -1201,6 +1201,11 @@
   self: scale_grad_by_count(restore_reduced_dims(grad, dim, keepdim), restore_reduced_dims(result, dim, keepdim) == self, dim)
   result: amaxamin_jvp(self_p, self_t, result, dim, keepdim)
 
+- name: aminmax(Tensor self, *, int? dim=None, bool keepdim=False) -> (Tensor min, Tensor max)
+  self: aminmax_backward(self, dim, keepdim, grad_min, grad_max, min, max)
+  min: 'amaxamin_jvp(self_p, self_t, min, dim.has_value() ? IntArrayRef{dim.value()} : IntArrayRef{}, keepdim)'
+  max: 'amaxamin_jvp(self_p, self_t, max, dim.has_value() ? IntArrayRef{dim.value()} : IntArrayRef{}, keepdim)'
+
 - name: mm(Tensor self, Tensor mat2) -> Tensor
   self: mm_mat1_backward(grad, mat2, self.sym_sizes(), self.sym_strides(), self.layout(), 1)
   mat2: mm_mat2_backward(grad, self, mat2.sym_sizes(), mat2.sym_strides(), mat2.layout(), 1)
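As a rough sanity check (not part of this diff), both the reverse-mode rule (self) and the forward-mode rules (min, max) can be exercised with torch.autograd.gradcheck once the PR is applied; check_forward_ad=True covers the JVP path:

import torch
from torch.autograd import gradcheck

x = torch.randn(3, 4, dtype=torch.double, requires_grad=True)

# Full reduction (dim=None) and a keepdim reduction both hit the new formulas.
assert gradcheck(lambda t: torch.aminmax(t), (x,), check_forward_ad=True)
assert gradcheck(lambda t: torch.aminmax(t, dim=1, keepdim=True), (x,), check_forward_ad=True)

With double-precision random inputs, ties that would make the subgradient choice ambiguous are unlikely, so the numerical and analytical gradients should agree.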
39 changes: 39 additions & 0 deletions torch/csrc/autograd/FunctionsManual.cpp
@@ -218,6 +218,45 @@ Tensor amaxamin_jvp(
return at::where(mask, dx, 0.).sum(dim, keepdim) / mask.sum(dim, keepdim);
}

Tensor aminmax_backward(
const Tensor& self,
c10::optional<int64_t> dim,
bool keepdim,
const Tensor& grad_min,
const Tensor& grad_max,
const Tensor& min,
const Tensor& max) {
auto dims = dim.has_value() ? IntArrayRef{*dim} : IntArrayRef{};

auto min_reduced = restore_reduced_dims(min, dims, keepdim);
auto max_reduced = restore_reduced_dims(max, dims, keepdim);

auto min_mask =
at::isnan(min).all().item<bool>() ? self.isnan() : self == min_reduced;
auto max_mask =
at::isnan(max).all().item<bool>() ? self.isnan() : self == max_reduced;

Tensor result;
if (grad_min.defined()) {
result = scale_grad_by_count(grad_min, min_mask, dims);

if (grad_max.defined()) {
auto grad_max_result = scale_grad_by_count(grad_max, max_mask, dims);
if (!areAnyTensorSubclassLike({result, grad_max_result})) {
result.add_(grad_max_result);
} else {
result = result + grad_max_result;
}
}
} else if (grad_max.defined()) {
result = scale_grad_by_count(grad_max, max_mask, dims);
} else {
result = at::zeros_symint(self.sym_sizes(), self.options());
}
Contributor:
Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?

Author:
NaNs can affect the results differently in aminmax compared to separate amin or amax calls, since those functions return scalars. However, I have now updated the handling of NaNs in aminmax to be consistent with how NaNs are handled elsewhere in the codebase.

Contributor (@soulitzer, Apr 24, 2025):
Sorry for the late reply - Do you mind giving an example of how their nan behavior can differ? We definitely need to document this better if this is the case.

Author:
# Example 1: Regular case without NaN
x = torch.tensor([1.0, 2.0, 3.0, 4.0])
print("Regular tensor:")
print(f"amin: {torch.amin(x)}")  # 1.0
print(f"amax: {torch.amax(x)}")  # 4.0 
print(f"aminmax: {torch.aminmax(x)}")  # (1.0, 4.0)

# Example 2: With NaN
x = torch.tensor([1.0, float('nan'), 3.0, 4.0])
print("\nTensor with NaN:")
print(f"amin: {torch.amin(x)}")  # nan
print(f"amax: {torch.amax(x)}")  # nan
print(f"aminmax: {torch.aminmax(x)}")  # (nan, nan)

When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.

However, when using aminmax with dimension reduction, we need special handling (see the sketch after this list) because:

  • We need to track NaN values consistently across both min and max computations
  • We need to preserve the relationship between corresponding min/max pairs in the output
  • If all values along a reduction dimension are NaN, both min and max should be NaN
  • If some values are NaN and some are not, we need consistent handling for both outputs
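A small forward-only illustration of the reduction-dimension case (assuming the standard NaN propagation of aminmax; not part of the original comment):

import torch

x = torch.tensor([[1.0, float("nan"), 3.0],
                  [4.0, 5.0, 6.0]])

# Row 0 contains a NaN, so both its reduced min and max are NaN;
# row 1 is NaN-free, so its min/max are 4.0 and 6.0.
mn, mx = torch.aminmax(x, dim=1)
print(mn)  # tensor([nan, 4.])
print(mx)  # tensor([nan, 6.])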

Contributor:
Hmm I'm not sure I understand

> When called separately, amin and amax will return scalars that are NaN if any value in the input is NaN.

amin and amax separately don't always return scalars, right?

> We need to track NaN values consistently across both min and max computations

Isn't this by default what would happen if you just called amin and amax separately?

> If all values along a reduction dimension are NaN, both min and max should be NaN

If ANY value along a reduction dimension is NaN?

Author (@vijayabhaskar-ev, Apr 29, 2025):
Sorry if my earlier explanation caused any confusion.

The extra NaN branch in aminmax_backward is identical to the logic in standalone amin/amax.

min()/max() use, via evenly_distribute_backward:

bool is_nan = value.isnan().item<bool>();
auto mask = is_nan ? input.isnan() : (input == value);

Because aminmax returns two tensors (min, max) and has two incoming grads (grad_min, grad_max), we simply inline that same scalar NaN check twice, once for the min branch and once for the max branch, then call scale_grad_by_count and sum the results.
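A minimal Python sketch of that per-output branch, for the full-reduction case only; distribute_evenly here is a made-up stand-in for the scale_grad_by_count step, not an existing helper:

import torch

def distribute_evenly(grad, inp, value):
    # If the reduced value is NaN, the NaN elements produced it;
    # otherwise the elements equal to the value did.
    mask = inp.isnan() if torch.isnan(value).all() else (inp == value)
    # Split the incoming gradient evenly across the matching elements.
    return mask * (grad / mask.sum())

x = torch.tensor([1.0, 1.0, 3.0])
mn, mx = torch.aminmax(x)
grad_x = distribute_evenly(torch.tensor(1.0), x, mn) + distribute_evenly(torch.tensor(1.0), x, mx)
print(grad_x)  # tensor([0.5000, 0.5000, 1.0000])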

Contributor:
Why do we care about what min/max do? amin/amax is different?

Author (@vijayabhaskar-ev, May 3, 2025):
> Why do we need special handling for nans for aminmax? Shouldn't we behave just like amax and amin, which don't have this extra handling?

amin and amax are single reduction operations. aminmax simultaneously returns both the minimum and maximum values.

In amin/amax, the gradient is distributed to the elements that match the reduction result. In aminmax, we need to handle gradient distribution for both the min and the max output simultaneously.
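For instance, assuming the backward added here, with ties in both the minimum and the maximum the two incoming gradients are each split across their matching elements and then summed into one input gradient:

import torch

x = torch.tensor([2.0, 2.0, 5.0, 5.0], requires_grad=True)
mn, mx = torch.aminmax(x)
(mn + mx).backward()

# grad_min is split over the two tied minima, grad_max over the two tied maxima.
print(x.grad)  # tensor([0.5000, 0.5000, 0.5000, 0.5000])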

Author:
@soulitzer I hope the above explanation resolves your query about the special handling of NaNs in aminmax.


return result;
}

std::tuple<Tensor, Tensor> _euclidean_dist_backward(
const Tensor& grad,
const Tensor& x1,
8 changes: 8 additions & 0 deletions torch/csrc/autograd/FunctionsManual.h
@@ -816,6 +816,14 @@ Tensor amaxamin_jvp(
const Tensor& result,
IntArrayRef dim,
bool keepdim);
Tensor aminmax_backward(
const at::Tensor& self,
c10::optional<int64_t> dim,
bool keepdim,
const at::Tensor& grad_min,
const at::Tensor& grad_max,
const at::Tensor& min,
const at::Tensor& max);
std::tuple<Tensor, Tensor, Tensor> layer_norm_double_backward(
const Tensor& input,
const std::optional<Tensor>& gamma,
2 changes: 1 addition & 1 deletion torch/testing/_internal/common_methods_invocations.py
@@ -14647,7 +14647,7 @@ def sample_inputs_alias_copy(op_info, device, dtype, requires_grad, **kwargs):
            dtypes=all_types_and(torch.bool, torch.float16, torch.bfloat16),
            dtypesIfHpu=custom_types(torch.float32, torch.bfloat16, torch.int32, torch.int8),
            decorators=(onlyNativeDeviceTypes,),
-           supports_autograd=False,
+           supports_autograd=True,
            sample_inputs_func=sample_inputs_aminmax,
            error_inputs_func=error_inputs_aminmax_amax_amin),
     OpInfo('as_strided',