Add torch.linalg.norm
#42749
Conversation
💊 CI failures summary and remediations

As of commit d069246 (more details on the Dr. CI page):

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:
Hey @kurtamohler, let me know when this is done being drafted and ready for review.
(force-pushed from 9e0eb53 to 0f0d85d)
Will do, @mruberry. I believe I just need to update the documentation, and perhaps add a few more test cases, and then I'll let you know it's ready.
(force-pushed from 1ec8b6c to 0c88fc8)
(force-pushed from 866dcf3 to 896319a)
    }
  }
  resize_output(result, result_.sizes());
  result.copy_(result_);
Note: I don't think we can do the `set_` pattern used above here because it would replace `result`'s storage. This might be OK, but our current thinking is to preserve given storages where possible.
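For readers outside the review, a minimal Python-level sketch of the `out=` contract being discussed, using the `torch.linalg.norm` this PR adds (illustrative, not a reproduction of the ATen code above): the passed-in tensor is resized and filled in place, rather than having its storage swapped via `set_`.

```python
import torch

x = torch.randn(3, 3)
out = torch.empty(0)  # empty placeholder; norm resizes it in place

ret = torch.linalg.norm(x, out=out)

# The same tensor object is handed back and written in place,
# per the preserve-given-storages thinking in the comment above.
assert ret is out
print(out)  # the Frobenius norm of x
```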
Sure. Definitely couldn't hurt.
* Avoid unnecessary clone
* Use torch.empty rather than torch.tensor in unit test
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Looks like
@kurtamohler xla tests are usually failing with discontiguous tensors - do your tests produce those? If that's the case, those tests just have to be disabled on the xla side (either decorate with @onlyCPUandCUDA or something like that, or ask @ailzhang to disable them on the xla side).
@ngimel none of the inputs or outputs of `nuclear_norm` in that test are discontiguous. But it's possible that something inside
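As background for this exchange, a quick illustration of what "discontiguous" means here; the tensors are made up for the example:

```python
import torch

x = torch.randn(4, 4)
assert x.is_contiguous()

# Views that reuse storage with non-dense strides are discontiguous;
# transposes and strided slices are the usual sources in tests.
assert not x.t().is_contiguous()
assert not x[:, ::2].is_contiguous()
```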
Just skip the test by decorating it with https://github.com/pytorch/xla/blob/19fc8124d5cbefd18796b2439ddf3fded5c7a9ba/test/pytorch_test_base.py#L51. If you really wanted to you could run the XLA tests by installing PyTorch/XLA and using their test runner, but I don't think it's worthwhile in this case. This test is failing because XLA doesn't know to skip it and it's added to TestTorchDeviceType, which spawns "child" per-device classes (TestTorchDeviceTypeCPU, TestTorchDeviceTypeCUDA, TestTorchDeviceTypeXLA) depending on what software and hardware is available on the platform. Decorating the test with
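To make that mechanism concrete, a hedged sketch of a device-generic test, assuming the decorator @ngimel mentioned (spelled `onlyCPUAndCUDA` in `torch.testing._internal.common_device_type` at the time) and a hypothetical test class name:

```python
import torch
from torch.testing._internal.common_utils import TestCase, run_tests
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests,
    onlyCPUAndCUDA,
)

class TestLinalgNorm(TestCase):
    # onlyCPUAndCUDA keeps this test out of the generated XLA child
    # class, so PyTorch/XLA's runner never picks it up.
    @onlyCPUAndCUDA
    def test_nuclear_norm_out(self, device):
        x = torch.randn(5, 5, device=device)
        out = torch.empty(0, device=device)
        torch.linalg.norm(x, ord='nuc', out=out)
        self.assertEqual(out, torch.linalg.norm(x, ord='nuc'))

# Spawns per-device child classes (TestLinalgNormCPU,
# TestLinalgNormCUDA, ...) based on available software and hardware.
instantiate_device_type_tests(TestLinalgNorm, globals())

if __name__ == '__main__':
    run_tests()
```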
Lint failures are in base, not your PR.
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
This has landed and tests look good. Nice work, @kurtamohler. It's great to have this centerpiece for torch.linalg in!
* Add cross signature
* Add det specification
* Add diagonal specification
* Add inv specification
* Add norm specification
* Add outer product specification
* Add outer specification
* Add trace specification
* Add transpose
* Update index
* Fix type annotation
* Update norm behavior for multi-dimensional arrays
* Support all of NumPy's norms
  Full support for all of NumPy's norms depends on pending updates to Torch (see pytorch/pytorch#42749).
* Switch order
* Split into separate tables
* Escape markup
* Escape markup
* Add matrix_transpose interface
  Interface inspired by TensorFlow (see https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose) and Torch (see https://pytorch.org/docs/stable/generated/torch.transpose.html). Allows transposing a stack of matrices.
* Rename parameters
* Remove matrix_transpose signature
Adds a `torch.linalg.norm` function that matches the behavior of `numpy.linalg.norm`.

Additional changes:

* Changes to `frobenius_norm` and `nuclear_norm`
* Fixes `out` argument behavior for `nuclear_norm`
* Fixes an issue where `frobenius_norm` allowed duplicates in the `dim` argument
* Changes to `_norm_matrix`

Closes #24802
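For reference, a short usage sketch of the new function mirroring `numpy.linalg.norm` (illustrative; not taken from the PR's test suite):

```python
import torch

a = torch.arange(9, dtype=torch.float32).reshape(3, 3)

# Default for a 2-D input is the Frobenius norm, as in NumPy
print(torch.linalg.norm(a))

# Matrix norms are selected via ord, mirroring numpy.linalg.norm
print(torch.linalg.norm(a, ord='fro'))  # Frobenius norm
print(torch.linalg.norm(a, ord='nuc'))  # nuclear norm
print(torch.linalg.norm(a, ord=2))      # largest singular value

# dim selects the dimensions to reduce; here, one Frobenius norm
# per matrix in a batch
b = torch.randn(2, 3, 4)
print(torch.linalg.norm(b, dim=(1, 2)))
```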