[feature request] Global GPU Flag #7535
Comments
I am not really a fan of being so clever and managing devices automatically, because it makes it less clear which tensors are on which device, and makes multi-device code harder to read. That said, I definitely understand your needs and where you come from. I think, at least, we should have options to create …
Thanks. I understand that the kind of control PyTorch gives is unparalleled in this regard. Still, I think a global setting would be equally useful for quick prototyping.
Hi, this feature is an absolute requirement, please make it happen.
Actually, I'm in a situation where PyTorch 1.0 seems to be using the GPU automatically for all operations (even if I explicitly call .cpu() on the model and data), but I don't want it to use the GPU (because it is actually slower). How can I make sure that everything inside a torch.nn.Module is run on the CPU?
@juancamilog that's not a situation that's possible. There must be a bug in your code, and you are overlooking it. |
@soumith You are definitely right, it was a bug in my code. |
Looking forward to it being implemented. |
Well... automatically moving all of the Modules onto the GPU sounds a bit unpleasant to me, but automatically moving tensors to prevent runtime errors seems like a very intuitive thing to do.
Could this be solved by having a global […]? This would be analogous to the handling of […].
The current accepted proposal for this is #27878. |
I think this was solved in #27878 and in https://docs.pytorch.org/docs/stable/generated/torch.set_default_device.html. Should this issue be closed?
The PyTorch 0.4 Migration Guide simplifies writing device-agnostic code as follows:
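The pattern the Migration Guide recommends looks roughly like this (a minimal sketch of that device-agnostic idiom, not a verbatim quote from the guide):

```python
import torch

# Pick the device once, at the start of the script.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Then move every new tensor and module to it explicitly.
x = torch.zeros(2, 3).to(device)
model = torch.nn.Linear(3, 1).to(device)
y = model(x)
```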
However, this is still not clean. Ideally, we would like PyTorch to move everything over to the GPU, if it's available, much like TensorFlow.
I tried setting the global tensor type to a cuda tensor using the `torch.set_default_tensor_type()` method. However, there are some fundamental problems with setting the default tensor type:

- Dataloaders give normal (non-cuda) tensors by default. They have to be manually cast using the `Tensor.to()` method.
- Many methods are simply not implemented for `torch.cuda.*Tensor`. Thus, setting the global tensor type to cuda fails.
- Conversions to numpy using the `numpy()` method aren't available for cuda tensors. One has to go `x.cpu().numpy()`. Although this chain is agnostic, it defeats the purpose.
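A common workaround for the `numpy()` limitation is a small device-agnostic helper; the `to_numpy` name below is hypothetical, but the `detach().cpu().numpy()` chain it wraps is the standard one:

```python
import torch

def to_numpy(t: torch.Tensor):
    """Convert a tensor to a NumPy array regardless of which device it
    lives on: detach from autograd, move to CPU, then convert."""
    return t.detach().cpu().numpy()

arr = to_numpy(torch.ones(4, requires_grad=True))
```

This keeps the redundant chain in one place instead of scattering `.cpu().numpy()` calls throughout the code, though it does not remove the underlying asymmetry the issue complains about.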
I find that I use methods like `.to(device)` and `.cpu()` far too often in my projects. In my view, this makes the code more verbose than it needs to be, and just a little harder to read.

I think there is room for a global `use_gpu` flag that would let developers run all subsequent code on the GPU, where required. Specifically, my request is the following:
1. Abolish the need for the `.to(device)` suffix: circumvent it by letting the developer set the device using a global method like `torch.set_default_device()` or a convenience method/flag like `use_gpu`. Then, whenever an error is encountered because a CUDA tensor is expected in place of a regular tensor or vice versa, automatically cast the tensor to the expected device. Additionally:
   a. Move `nn.Module`s automatically to the default device.
   b. Move the yield of `DataLoader`s to the default device.
   Prevent the need to manually cast to the default device.
2. Add the `numpy()` method to cuda tensors: the existing way is to move the tensor to the CPU first. Thus, we have `x.cpu().numpy()`, which is agnostic but redundant.
3. Use the GPU by default if available: PyTorch is built from the ground up with the Deep Learning community in mind. With most Deep Learning done on GPUs, they should be considered the default device automatically. Let PyTorch give first preference to the GPU.
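As a later comment notes, PyTorch eventually shipped `torch.set_default_device` (via #27878), which covers much of request 1. A minimal sketch, assuming a PyTorch 2.x install (`"cpu"` is used here so the sketch runs anywhere; pass `"cuda"` where available):

```python
import torch

# Make all factory functions allocate on the chosen device by default.
torch.set_default_device("cpu")

x = torch.ones(3)          # no .to(device) suffix needed
w = torch.nn.Linear(3, 2)  # module parameters land on the default device too
```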