Derivative for aten::linalg_pinv is not implemented #66618


Closed

IvanYashchuk opened this issue Oct 14, 2021 · 6 comments
Labels

module: autograd (Related to torch.autograd, and the autograd engine in general)
module: linear algebra (Issues related to specialized linear algebra operations in PyTorch; includes matrix multiply matmul)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

IvanYashchuk (Collaborator) commented Oct 14, 2021

🐛 Bug

This PR #66092 added an explicit derivative rule for the following signature:

name: linalg_pinv(Tensor self, float rcond=1e-15, bool hermitian=False) -> Tensor

but there is also an overload that accepts an rcond argument of type Tensor. Differentiation with a Tensor rcond is currently broken; it worked before #66092.
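
For contrast, a minimal sketch (assuming a build from around this time) of the two call forms; only the float form received the explicit derivative rule:

import torch

a = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
torch.linalg.pinv(a, rcond=1e-15).sum().backward()  # float overload: has a derivative
# The Tensor overload below raises "derivative for aten::linalg_pinv is not implemented":
# torch.linalg.pinv(a, rcond=torch.tensor(1e-15)).sum().backward()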

To Reproduce

Steps to reproduce the behavior:

import torch

a = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
# Passing rcond as a Tensor dispatches to the overload that lacks a derivative rule:
torch.autograd.gradcheck(lambda x: torch.linalg.pinv(x, rcond=torch.tensor(1e-15)), [a])
# RuntimeError: derivative for aten::linalg_pinv is not implemented

Expected behavior

It should work as it did before #66092.

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7 @jianyuh @mruberry @walterddr @IvanYashchuk @xwang233

IvanYashchuk added the module: autograd and module: linear algebra labels on Oct 14, 2021
soulitzer added the triaged label on Oct 14, 2021
albanD (Collaborator) commented Oct 14, 2021

Is this actually a regression?

mruberry (Collaborator) commented

Do we not have sample inputs covering the tensor rcond case?

nikitaved (Collaborator) commented Oct 14, 2021

No, and I totally missed this signature. It doesn't look like a very common use case to pass a tensor rcond, or any rcond for that matter.

IvanYashchuk (Collaborator, Author) commented

If it's not in our tests, that doesn't mean there's no use case for it. torch.linalg.pinv is in the 1.9.0 release, where we claimed the linalg module to be stable, and it's an actual regression not to be able to differentiate through the function variant that does the same thing but accepts a different type for rcond. Moreover, when passing rcond as a tensor it was possible to find derivatives with respect to this variable.

import torch

a = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)
rcond = torch.tensor(1e-15, dtype=torch.float64, requires_grad=True)
# Check the derivative with respect to rcond itself:
torch.autograd.gradcheck(lambda rcond: torch.linalg.pinv(a, rcond=rcond), [rcond])
# True in 1.9.0 and errors on master

Here's a use case for a differentiable rcond:
Consider a regularized least-squares problem: find x such that ∥Ax−b∥² + λ∥x∥² is minimized. The solution to this problem is very close to pinv(A, rcond=some_func_of(λ)) @ b, so optimization could be used to tweak the rcond parameter; a sketch follows below. lstsq was not differentiable at all in the 1.9.0 release, so someone could potentially have used pinv to solve a regularized least-squares problem. I don't have a concrete application in mind, though.
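
A minimal sketch of that idea; the mapping from λ to rcond below is purely illustrative (some_func_of is hypothetical), not a derived equivalence:

import torch

torch.manual_seed(0)
A = torch.randn(20, 5, dtype=torch.float64)
b = torch.randn(20, dtype=torch.float64)
lam = 1e-3

# Closed-form Tikhonov (ridge) solution: x = (A^T A + lam * I)^{-1} A^T b
I = torch.eye(A.shape[1], dtype=A.dtype)
x_ridge = torch.linalg.solve(A.T @ A + lam * I, A.T @ b)

# pinv-based solution; sqrt(lam) is just a placeholder choice of rcond here.
rcond = torch.tensor(lam ** 0.5, dtype=torch.float64)
x_pinv = torch.linalg.pinv(A, rcond=rcond) @ b

print(torch.linalg.norm(x_ridge - x_pinv))  # small for well-conditioned A and small lam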

nikitaved (Collaborator) commented Oct 15, 2021

I think that solving a regularized problem like that is very confusing. Besides, rcond is a hard threshold that cuts off small singular values of A, while regularization bumps the singular values of A up by conditioning the matrix A^T A + lambda I.
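
To illustrate the distinction, a minimal sketch (rcond semantics as documented for torch.linalg.pinv: singular values below rcond times the largest one are treated as zero):

import torch

torch.manual_seed(0)
A = torch.randn(6, 4, dtype=torch.float64)
U, S, Vh = torch.linalg.svd(A, full_matrices=False)

# Hard threshold (what rcond does): drop singular values below rcond * S.max()
rcond = 0.5
S_inv_hard = torch.where(S > rcond * S.max(), 1.0 / S, torch.zeros_like(S))
pinv_hard = Vh.T @ torch.diag(S_inv_hard) @ U.T
print(torch.linalg.norm(pinv_hard - torch.linalg.pinv(A, rcond=rcond)))  # ~0

# Tikhonov regularization instead shrinks every 1/sigma smoothly:
lam = 1e-2
S_inv_tikh = S / (S**2 + lam)  # sigma / (sigma^2 + lambda), never an abrupt cutoff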

IvanYashchuk (Collaborator, Author) commented

The problem will be fixed by #63102; I also included the missing sample inputs there.

wconstab pushed a commit that referenced this issue Oct 20, 2021
Summary:
This pull request introduces new keyword arguments for `torch.linalg.matrix_rank` and `torch.linalg.pinv`: `atol` and `rtol`.

Currently, only the tensor overload has default values for atol and rtol; the float overload requires both arguments to be specified.

FC compatibility: #63102 (comment)

Fixes #54151. Fixes #66618.

cc jianyuh nikitaved pearu mruberry walterddr IvanYashchuk xwang233 Lezcano

Pull Request resolved: #63102

Reviewed By: H-Huang

Differential Revision: D31641456

Pulled By: mruberry

fbshipit-source-id: 4c765508ab1657730703e42975fc8c0d0a60eb7c
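
For reference, a minimal sketch of the atol/rtol API the commit describes (per the PyTorch docs, singular values below max(atol, rtol * largest_singular_value) are treated as zero):

import torch

a = torch.randn(3, 3, dtype=torch.float64, requires_grad=True)

# Float tolerances:
p = torch.linalg.pinv(a, atol=1e-10, rtol=1e-5)

# Tensor tolerances cover the overload this issue was about, and the
# derivative with respect to the input matrix passes gradcheck:
rtol = torch.tensor(1e-5, dtype=torch.float64)
torch.autograd.gradcheck(lambda x: torch.linalg.pinv(x, rtol=rtol), [a])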