Add `torch.linalg.norm` by kurtamohler · Pull Request #42749 · pytorch/pytorch


Conversation

@kurtamohler
Collaborator
@kurtamohler kurtamohler commented Aug 7, 2020

Adds torch.linalg.norm function that matches the behavior of numpy.linalg.norm.

Additional changes:

  • Add support for dimension wrapping in frobenius_norm and nuclear_norm
  • Fix out argument behavior for nuclear_norm
  • Fix issue where frobenius_norm allowed duplicates in dim argument
  • Add _norm_matrix

Closes #24802
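
For a quick sense of the intended parity, here is an illustrative sketch (not code from this PR); it assumes the merged signature `torch.linalg.norm(input, ord=None, dim=None, keepdim=False, out=None, dtype=None)`:

```python
# Illustrative parity check between numpy.linalg.norm and torch.linalg.norm.
import numpy as np
import torch

a = np.arange(9, dtype=np.float64).reshape(3, 3) - 4
t = torch.from_numpy(a)

# Default ord: Frobenius norm for matrices, matching NumPy.
print(np.linalg.norm(a))                       # 7.745966692414834
print(torch.linalg.norm(t).item())             # same value

# Matrix norms, including the nuclear norm touched up in this PR.
print(np.linalg.norm(a, ord='nuc'))
print(torch.linalg.norm(t, ord='nuc').item())

# Vector norms along a wrapped (negative) dimension.
print(np.linalg.norm(a, ord=2, axis=-1))
print(torch.linalg.norm(t, ord=2, dim=-1))
```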

@facebook-github-bot facebook-github-bot added the oncall: jit (Add this issue/PR to JIT oncall triage queue) label Aug 7, 2020
@dr-ci
dr-ci bot commented Aug 7, 2020

💊 CI failures summary and remediations

As of commit d069246 (more details on the Dr. CI page):


  • 3/3 failures possibly* introduced in this PR
    • 1/3 non-CircleCI failure(s)

🕵️ 2 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_test1 (1/2)

Step: "Attaching workspace" (full log | diagnosis details | 🔁 rerun)

Downloading workspace layers
Downloading workspace layers
  workflows/workspaces/2859745d-c667-4c92-b546-e25ec4a56155/0/c16fb575-bd25-4a52-a96c-01c6e046f4e7/0/105.tar.gz - 8.4 MB
Applying workspace layers
  c16fb575-bd25-4a52-a96c-01c6e046f4e7

See CircleCI build pytorch_windows_vs2019_py36_cuda10.1_test2 (2/2)

Step: "Attaching workspace" (full log | diagnosis details | 🔁 rerun)

Downloading workspace layers
Downloading workspace layers
  workflows/workspaces/2859745d-c667-4c92-b546-e25ec4a56155/0/c16fb575-bd25-4a52-a96c-01c6e046f4e7/0/105.tar.gz - 8.4 MB
Applying workspace layers
  c16fb575-bd25-4a52-a96c-01c6e046f4e7

Extra GitHub checks: 1 failed


This comment was automatically generated by Dr. CI.

Please report bugs/suggestions on the GitHub issue tracker or post in the (internal) Dr. CI Users group.


This comment has been revised 123 times.

@smessmer smessmer requested a review from mruberry August 10, 2020 21:26
@smessmer smessmer added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Aug 10, 2020
@mruberry
Collaborator

Hey @kurtamohler, let me know when this is done being drafted and ready for review.

@kurtamohler
Collaborator Author

Will do, @mruberry. I believe I just need to update the documentation, and perhaps add a few more test cases, and then I'll let you know it's ready.

@kurtamohler kurtamohler force-pushed the linalg-norm branch 2 times, most recently from 1ec8b6c to 0c88fc8, on August 11, 2020 21:55
@kurtamohler kurtamohler marked this pull request as ready for review August 11, 2020 23:25
@kurtamohler kurtamohler requested a review from apaszke as a code owner August 11, 2020 23:25
@kurtamohler kurtamohler force-pushed the linalg-norm branch 3 times, most recently from 866dcf3 to 896319a, on August 12, 2020 03:05
}
}
resize_output(result, result_.sizes());
result.copy_(result_);
Collaborator


Note: I don't think we can use the set_ pattern from above here, because it would replace result's storage. That might be OK, but our current thinking is to preserve given storages where possible.
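
(For context, an illustrative Python-level sketch of the contract being preserved; this is not the PR's code:)

```python
# Illustrative sketch of the out= contract discussed above: writing via
# copy_ keeps the caller-provided storage, so existing views of `out`
# observe the result.
import torch

x = torch.randn(3, 3)
out = torch.empty(())          # already the right shape for a full reduction
view = out.view(())            # a view sharing out's storage

torch.linalg.norm(x, out=out)  # resize_output is a no-op here; copy_ writes in place
assert view.item() == out.item()  # the pre-existing view sees the new value

# A set_-based implementation would instead rebind `out` to new storage,
# and views taken before the call would silently stop sharing data with it.
```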

@mruberry
Collaborator

> @mruberry, yes we are still OK to merge.
>
> Do you think it would be a good idea to mention in the linalg module documentation which numpy versions we're currently up to date with?

Sure. Definitely couldn't hurt.

* Avoid unnecessary clone
* Use torch.empty rather than torch.tensor in unit test
Contributor
@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@kurtamohler
Collaborator Author

Looks like test_nuclear_norm_out_xla_float32 and test_nuclear_norm_out_xla_float64 are failing. I'm trying to debug the CircleCI job over ssh, but I can't figure out how to invoke the XLA tests. Is there documentation somewhere that explains this?

@ngimel
Collaborator
ngimel commented Aug 28, 2020

@kurtamohler xla tests usually fail with discontiguous tensors - do your tests produce those? If that's the case, those tests just have to be disabled on the xla side (either decorate them with @onlyOnCPUAndCUDA or something like that, or ask @ailzhang to disable them on the xla side).

@kurtamohler
Collaborator Author

@ngimel none of the inputs or outputs of nuclear_norm in that test are discontiguous. But it's possible that something inside nuclear_norm_out is using a temporary discontiguous tensor; I'll see if I can find one.
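
(An illustrative contiguity check along these lines, assuming the failing test's rough shape; this is not the actual test code:)

```python
# Illustrative check: confirm the user-visible tensors are contiguous
# before suspecting internal temporaries.
import torch

x = torch.randn(4, 4)
out = torch.empty(0)
torch.norm(x, p='nuc', out=out)
print(x.is_contiguous(), out.is_contiguous())  # True True here on CPU
```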

@mruberry
Collaborator
mruberry commented Aug 28, 2020

> @ngimel none of the inputs or outputs of nuclear_norm in that test are discontiguous. But it's possible that something inside nuclear_norm_out is using a temporary discontiguous tensor; I'll see if I can find one.

Just skip the test by decorating it with @onlyOnCPUAndCUDA. The XLA build skips the other nuclear norm tests, too. You can see some of these skips handcoded in the PyTorch/XLA repo:

https://github.com/pytorch/xla/blob/19fc8124d5cbefd18796b2439ddf3fded5c7a9ba/test/pytorch_test_base.py#L51
https://github.com/pytorch/xla/blob/19fc8124d5cbefd18796b2439ddf3fded5c7a9ba/test/pytorch_test_base.py#L108

If you really wanted to, you could run the XLA tests by installing PyTorch/XLA and using their test runner, but I don't think it's worthwhile in this case. This test fails because XLA doesn't know to skip it and it's added to TestTorchDeviceType, which spawns "child" per-device classes (TestTorchDeviceTypeCPU, TestTorchDeviceTypeCUDA, TestTorchDeviceTypeXLA) depending on what software and hardware is available on the platform. Decorating the test with @onlyOnCPUAndCUDA will keep it out of the TestTorchDeviceTypeXLA class and avoid the issue, as in the sketch below.
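
(For reference, a sketch of what the skip looks like, assuming the device-type test harness of this era; the class and test names are hypothetical:)

```python
# Sketch of the suggested fix; TestNorms and test_nuclear_norm_out are
# hypothetical names for illustration. The decorator keeps the test out of
# the generated TestNormsXLA class.
import torch
from torch.testing._internal.common_device_type import (
    instantiate_device_type_tests, onlyOnCPUAndCUDA)
from torch.testing._internal.common_utils import TestCase, run_tests

class TestNorms(TestCase):
    @onlyOnCPUAndCUDA          # only instantiated for CPU and CUDA devices
    def test_nuclear_norm_out(self, device):
        x = torch.randn(3, 3, device=device)
        out = torch.empty(0, device=device)
        torch.norm(x, p='nuc', out=out)
        self.assertEqual(out, torch.norm(x, p='nuc'))

instantiate_device_type_tests(TestNorms, globals())

if __name__ == '__main__':
    run_tests()
```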

@mruberry
Collaborator

Lint failures are in base, not your PR.

Contributor
@facebook-github-bot facebook-github-bot left a comment


@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mruberry merged this pull request in 68b9daa.

@mruberry
Collaborator

This has landed and tests look good. Nice work, @kurtamohler. It's great to have this centerpiece for torch.linalg in!

rgommers pushed a commit to data-apis/array-api that referenced this pull request Sep 8, 2020
* Add cross signature

* Add det specification

* Add diagonal specification

* Add inv specification

* Add norm specification

* Add outer product specification

* Add outer specification

* Add trace specification

* Add transpose

* Update index

* Fix type annotation

* Update norm behavior for multi-dimensional arrays

* Support all of NumPy's norms

Further support for all of NumPy's norms comes from
pending updates to Torch (see pytorch/pytorch#42749).

* Switch order

* Split into separate tables

* Escape markup

* Escape markup

* Add matrix_transpose interface

Interface inspired by TensorFlow (see https://www.tensorflow.org/api_docs/python/tf/linalg/matrix_transpose)
and Torch (see https://pytorch.org/docs/stable/generated/torch.transpose.html).

Allows transposing a stack of matrices.

* Rename parameters

* Remove matrix_transpose signature
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
