Python-API created tensors to go through Tensor.__init__
#26566
Need to discuss if it is even possible to implement.
I'm wondering what the experience of adding a non-Python client to PyTorch is. If this use case is well supported, then one could conceivably add a more Pythonic client to PyTorch using the same API as other languages, rather than being a first-class citizen as it is now. This would also make Python type inference more consistent; the current solution relies on …
Hmm, making changes to how tensor construction works would be tricky. An explicit hook to register some code that needs to be run at the end of object creation could be put in, but even if it's a do-nothing default, that is likely to affect performance a little. If we can find a way to do this without extra overhead, then it would perhaps be a handy thing to have. Something like a creation-hook registration API (see the sketch below).
No idea if it's feasible, but it would require some global state and likely some tens of nanoseconds of overhead.
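A minimal pure-Python sketch of what such a registration mechanism could look like, under stated assumptions: the name `register_creation_hook` is hypothetical (not an existing or proposed PyTorch API), and wrapping factory functions from Python only approximates the kind of hook discussed here, since it misses views and tensors created inside C++.

```python
import functools
import torch

_creation_hooks = []

def register_creation_hook(fn):
    # Hypothetical registration API: fn(tensor) runs on every tensor returned
    # by the factory functions wrapped below, and its result replaces the tensor.
    _creation_hooks.append(fn)

def _wrap_factory(factory):
    @functools.wraps(factory)
    def wrapper(*args, **kwargs):
        t = factory(*args, **kwargs)
        for hook in _creation_hooks:
            t = hook(t)
        return t
    return wrapper

# Only covers the entry points wrapped here, not views or C++-created tensors.
for name in ("ones", "zeros", "randn", "eye"):
    setattr(torch, name, _wrap_factory(getattr(torch, name)))

# Example hook: move every newly created tensor to the GPU when one is available.
register_creation_hook(lambda t: t.cuda() if torch.cuda.is_available() else t)

x = torch.ones(2, 3)  # the hook runs here; x lands on the GPU when one exists
```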
Rather than hooking only tensor creation: shapes are not just determined at tensor creation time, but also when creating views (e.g. `view`, `transpose`, or slicing).
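To make that concrete, a small example of shapes appearing through views with no factory function involved:

```python
import torch

t = torch.ones(2, 3)    # shape (2, 3) fixed at creation
v = t.transpose(0, 1)   # a view with shape (3, 2); no factory function involved
w = t[0]                # slicing also yields a view, here with shape (3,)
print(t.shape, v.shape, w.shape)
```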
Since comprehensive support for custom tensor types and subclasses is still a work in progress (see gh-22402), I wouldn't worry about this one. Anyway, it seems like the more interesting custom types (like …)
@rgommers thanks for the in-depth response. The hook you suggest would obviate the need for having construction go through `Tensor.__init__`, since newly created tensors could then be wrapped into a custom type from the hook.
Could you define "wrap into" a little more? It sounds like you want a subclass of `torch.Tensor`, but one where tensors returned by existing PyTorch functions then have that subclass as their class?
Yes, that's the wrapping I had in mind. Come to think of it, this seems hard with an init hook; it may require manipulating CPython internal structures, if it is possible at all.
This may be what you want: #22402 (comment). It's probably fragile, but it seems to work today.
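As an illustration only (an assumption about the kind of trick meant here, not necessarily what the linked comment describes): one fragile way to "wrap" an existing tensor into a subclass is to reassign its `__class__`, which CPython permits when the subclass adds no instance layout of its own.

```python
import torch

class TaggedTensor(torch.Tensor):
    # Hypothetical subclass that could carry extra structure alongside the data.
    pass

t = torch.ones(2, 3)                 # an ordinary tensor produced by the library
t.__class__ = TaggedTensor           # in-place "wrap" into the subclass
print(type(t))                       # <class '__main__.TaggedTensor'>
print(isinstance(t, torch.Tensor))   # True; it still behaves as a regular tensor
```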
I'm going to close this because it seems like #22402 subsumes this. If you disagree, please reopen. |
It would be easier to customize PyTorch behavior if tensors constructed through the Python API called the `Tensor.__init__` method, because the user could then monkey-patch that method for new behavior. Examples:

1. CPU vs GPU placement. To run someone's CPU code on my data I had to mechanically append `.to(device)` to every instance of `torch.ones`, `torch.randn` and `torch.eye`. This could instead be done with `myutils.MoveToGpu()`. This could be a temporary workaround for [feature request] Global GPU Flag #7535 (also known as a permanent workaround ;)
2. Debugging. TorchSnooper allows tracking tensor shapes automatically. However it 1) requires modifying user code with decorators and 2) is complicated, since it is based on dynamically rewriting function representations. A monkey-patching solution would be much simpler and work without code modification, i.e. `python -m find_strange_tensors mycode.py`.
3. Supporting custom types. I'm looking at a representation where certain operations can be made more efficient by incorporating tensor structure. Monkey-patching Tensor would allow me to try it out on existing PyTorch training code without modifying that code.
In TensorFlow, shape debugging was possible through monkey-patching because all Python tensor construction went through the Python `create_op` function.
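For context, a minimal sketch of the current behavior this issue describes: monkey-patching `Tensor.__init__` today has no effect on tensors produced by factory functions such as `torch.ones` (the logging hook below is purely illustrative).

```python
import torch

seen_shapes = []
_original_init = torch.Tensor.__init__

def logging_init(self, *args, **kwargs):
    # Illustrative hook: record every tensor that actually goes through __init__.
    _original_init(self, *args, **kwargs)
    seen_shapes.append(tuple(self.shape))

torch.Tensor.__init__ = logging_init

a = torch.ones(2, 3)   # constructed in C++; Tensor.__init__ is never called
b = torch.Tensor()     # the legacy Python constructor does go through __init__

print(seen_shapes)     # only the shape of `b` shows up; `a` bypassed the patch
```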