Support torch.linalg.trace (#62714)
In the C++ implementation (the file defining trace_cpu):

@@ -1079,6 +1079,19 @@ Tensor trace_cpu(const Tensor& self) {
  return result;
}
// TODO: this routine should be implemented without diag and sum for perf problems,
// see https://github.com/pytorch/pytorch/pull/47305
Tensor linalg_trace(const Tensor& self, int64_t offset) {
  return at::diagonal(self, offset, -2, -1).sum(-1);
}

Review comment: This is also missing a …

Review comment: I think it'd also be nice to check that offset is in the range [-dim, dim) and throw a nice error message otherwise.
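A minimal sketch of the range check the second reviewer proposes, written in Python for illustration (the real check would live in the C++ function above); the helper name, the square-matrix assumption, and the exact error wording are mine, not the PR's:

    import torch

    def linalg_trace_reference(A, offset=0):
        # Mirror the PR's diagonal-plus-sum composite, adding the proposed
        # bounds check. Assumes a square matrix so that the reviewer's
        # [-dim, dim) wording applies directly.
        dim = A.size(-1)
        if not -dim <= offset < dim:
            raise ValueError(f"offset must be in the range [{-dim}, {dim}), got {offset}")
        return torch.diagonal(A, offset=offset, dim1=-2, dim2=-1).sum(-1)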
Tensor linalg_trace_backward(const Tensor & grad, IntArrayRef input_sizes, int64_t offset) {
  auto grad_input = at::zeros(input_sizes, grad.options());
  auto diag = grad_input.diagonal(offset, -2, -1);
  diag.copy_(grad.unsqueeze(-1));
  return grad_input;
}
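The backward scatters the incoming gradient onto the offset diagonal of an otherwise-zero tensor; the unsqueeze broadcasts each batch element's scalar gradient along its diagonal. A quick autograd check of the equivalent composite expression (assuming this is the intended behavior) illustrates the result:

    >>> import torch
    >>> x = torch.zeros(3, 3, requires_grad=True)
    >>> torch.diagonal(x, offset=1, dim1=-2, dim2=-1).sum(-1).backward()
    >>> x.grad
    tensor([[0., 1., 0.],
            [0., 0., 1.],
            [0., 0., 0.]])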
void impl_func_prod(
    const Tensor& self,
    IntArrayRef dims,

@@ -1093,6 +1106,13 @@ void impl_func_prod(
  }
}
Tensor prod(const Tensor& self, int64_t dim, bool keepdim, c10::optional<ScalarType> opt_dtype) {
  ScalarType dtype = get_dtype_from_self(self, opt_dtype, true);
  Tensor result = create_reduction_result(self, dim, keepdim, dtype);
  native::prod_out_impl(result, self, dim, keepdim, dtype);
  return result;
}

Review comment: Is this change needed in this PR?

TORCH_IMPL_FUNC(prod_out)
(const Tensor& self,
 int64_t dim,
In native_functions.yaml:

@@ -6230,6 +6230,18 @@
    CPU: trace_cpu
    CUDA: trace_cuda
- func: linalg_trace(Tensor self, int offset=0) -> Tensor
  python_module: linalg
  variants: method, function
  dispatch:
    CPU, CUDA: linalg_trace

Review comment on the dispatch line: This line is overwritten by …

Suggested change:
-    CPU, CUDA: linalg_trace
+    CompositeExplicitAutograd: linalg_trace

Review comment: On a second thought, using …
- func: linalg_trace_backward(Tensor grad, int[] sizes, int offset) -> Tensor
  variants: function
  device_check: NoCheck
  device_guard: False

Review comment: Could you please remove this entry from native_functions.yaml?

- func: trace_backward(Tensor grad, int[] sizes) -> Tensor
  variants: function
  device_check: NoCheck
In the torch.linalg Python docstrings:

@@ -2042,3 +2042,19 @@
>>> torch.dist(Q.transpose(-2, -1) @ Q, torch.eye(4))
tensor(6.2158e-07)
""")
trace = _add_docstr(_linalg.linalg_trace, r"""
trace(input, offset=0) -> Tensor

Returns the sum of the elements of the diagonal.

Example::

    >>> x = torch.arange(1., 10.).view(3, 3)
    >>> x
    tensor([[ 1.,  2.,  3.],
            [ 4.,  5.,  6.],
            [ 7.,  8.,  9.]])
    >>> torch.linalg.trace(x)
    tensor(15.)
""")

Review comment on the signature line: input, *, offset=0

Review comment on "Returns the sum of the elements of the diagonal": What diagonal? Given that we have the parameter offset … Followed by an explanation of how the …

Review comment on the example: When working with a single tensor prefer …

Review comment on the example: This is cool! Add an example showing how to use offset, too
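A sketch of the offset example the last reviewer asks for, reusing the matrix from the docstring above; these exact lines are an assumption about what the updated docstring might look like, not what the PR contains:

    >>> torch.linalg.trace(x, offset=1)   # superdiagonal: 2. + 6.
    tensor(8.)
    >>> torch.linalg.trace(x, offset=-1)  # subdiagonal: 4. + 8.
    tensor(12.)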
Review comment on the TODO in linalg_trace: File a new issue to improve the performance of linalg.trace and point to that issue in this comment.

Reply: Actually (see below) I recommend we address this in this PR and not file a follow-up issue.

Reply: Actually, I'd say this would take quite some effort to do. I don't think that this function is going to be super popular, so I'd propose we stick with the current implementation.
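For context on that thread: the composite implementation builds a strided diagonal view and then reduces it. One conceivable fused alternative for the offset=0 case is an einsum contraction; this is only an illustration of the kind of rewrite being discussed, not a claim about what https://github.com/pytorch/pytorch/pull/47305 actually does:

    >>> import torch
    >>> x = torch.arange(1., 10.).view(3, 3)
    >>> torch.einsum('...ii->...', x)                # single fused contraction
    tensor(15.)
    >>> torch.diagonal(x, dim1=-2, dim2=-1).sum(-1)  # the PR's composite form
    tensor(15.)

Note that einsum only covers offset=0, which hints at why a general fused implementation would take the extra effort the last reply mentions.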