ENH: Add a force argument to numpy()
#78564
Conversation
… force = True). Also it throws an error if tensor.is_conj() is True and force is False, and uses tensor.resolve_conj() if tensor.is_conj() is True and force is True
…ore specific with assertRaisesRegex
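A minimal test sketch of the behaviour described in the commit messages above: numpy() refuses a tensor with the conjugate bit set unless force=True. The class name and the matched error text are illustrative assumptions, not the PR's actual test code.

```python
import unittest

import torch


class TestNumpyForce(unittest.TestCase):
    def test_conj_requires_force(self):
        # conj() sets the conjugate bit lazily instead of materialising values
        t = torch.tensor([1 + 1j, 2 - 2j]).conj()
        self.assertTrue(t.is_conj())
        # force=False (the default): conversion is refused; the matched message
        # is an assumption about the wording of the RuntimeError
        with self.assertRaisesRegex(RuntimeError, "conjugate"):
            t.numpy()
        # force=True: resolve_conj() is applied before conversion
        out = t.numpy(force=True)
        self.assertEqual(out[0], 1 - 1j)


if __name__ == "__main__":
    unittest.main()
```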
❌ 2 New Failures as of commit 41c0d18 (more details on the Dr. CI page):
🕵️ 2 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.
Looks good!
Thanks for the new PR.
@pytorchbot merge this please
@janeyx99 could you look into the PyTorchBot issue?
Oh, I didn't realize the bot didn't merge that, sorry!
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here
Hey @HaoZeke.
Ah! We've migrated off the natural language commands and moved to a more CLI format; please check our wiki for the valid commands now: https://github.com/pytorch/pytorch/wiki/Bot-commands
Summary: **Reopened** to help with merge issues. See #59790 for full context. Fixes #20778. Helps #71688. Finalizes martinPasen's force argument for `Tensor.numpy()`. It is set to False by default. If it's set to True then we:
1. detach the Tensor, if requires_grad == True
2. move to cpu, if not on cpu already
3. use .resolve_conj() if .is_conj() == True
4. use .resolve_neg() if .is_neg() == True

cc albanD
Pull Request resolved: #78564 Approved by: https://github.com/albanD
Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/3f58dd18dc6fc18ed82fb1632cea48373c0a7798
Reviewed By: seemethere
Differential Revision: D36935606
Pulled By: seemethere
fbshipit-source-id: dc2dd7f569feb8da29add55db3d1625241ff8d77
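From the caller's side, the four steps in the summary amount to roughly the following. This is a hedged illustration of the documented behaviour, not the actual binding code.

```python
import torch

t = torch.randn(3, requires_grad=True)  # a GPU tensor would additionally need the cpu() step

# Spelling the steps out by hand (detach -> cpu -> resolve_conj -> resolve_neg):
manual = t.detach().cpu().resolve_conj().resolve_neg().numpy()

# With this PR, the same conversion is a single call:
forced = t.numpy(force=True)

assert (manual == forced).all()
```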
@albanD Can we allow automatic detach / force = True when doing automatic numpy conversion? E.g. when visualizing Torch tensors with matplotlib:
Do you have specific examples in mind?
My take is: the better the PyTorch -> NumPy interop becomes, the more value there is in doing the detach (or even the GPU -> CPU move) automatically (if we want to protect users, we could add a read-only bit to the NumPy arrays). At least for quick PyTorch -> matplotlib visualization the explicit device transfer is somewhat understandable, but why can't tensors attached to the graph be passed for zero-copy NumPy conversion?
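For reference, a hedged sketch of the quick-visualization case discussed here (assumes matplotlib is installed; CUDA is used only if available):

```python
import matplotlib.pyplot as plt
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
y = torch.linspace(0, 6.28, 100, device=device, requires_grad=True).sin()

# y.numpy() would raise here: the tensor requires grad (and may live on the GPU).
# The explicit workaround is y.detach().cpu().numpy(); with force it becomes:
plt.plot(y.numpy(force=True))
plt.show()
```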
Reopened to help with merge issues. See #59790 for full context.
Fixes #20778. Helps #71688.
Finalizes @martinPasen's force argument for `Tensor.numpy()`. It is set to False by default. If it's set to True then we:
1. detach the Tensor, if requires_grad == True
2. move to cpu, if not on cpu already
3. use .resolve_conj() if .is_conj() == True
4. use .resolve_neg() if .is_neg() == True

cc @albanD
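A short usage sketch of the default versus forced behaviour described above, assuming a PyTorch build that includes this change:

```python
import torch

t = torch.ones(2, requires_grad=True)

try:
    t.numpy()  # force defaults to False, so the old strict behaviour is preserved
except RuntimeError as err:
    print("numpy() without force raised:", err)

arr = t.numpy(force=True)  # detaches (and would copy from GPU to CPU if needed)
print(arr)
```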