Easy way to switch between CPU and cuda #1668
If I run your code on a machine with PyTorch and CUDA installed, I receive this error: "AttributeError: 'module' object has no attribute 'zeros'". I think that

```python
x = torch.zeros()
if torch.cuda.is_available():
    x = x.cuda()
```

is the only way to achieve it. Am I wrong?
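A minimal runnable version of that snippet, assuming a PyTorch build where `torch.zeros` takes explicit sizes:

```python
import torch

# Create the tensor on the CPU first...
x = torch.zeros(2, 2)

# ...then move it to the GPU only if one is available.
if torch.cuda.is_available():
    x = x.cuda()
```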
In master there's something that lets you do it without conditionals. Here's an example:

```python
dtype = torch.cuda.float if torch.cuda.is_available() else torch.float
torch.zeros(2, 2, dtype=dtype)
```
I think that the right tensor type for CUDA is:

@marcomiccheli as adam said
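Note: released PyTorch versions have no `torch.cuda.float` dtype; the CUDA tensor type alluded to above is presumably `torch.cuda.FloatTensor`. A minimal sketch of that type-based switch (illustrative, not the exact code from the thread):

```python
import torch

# Pick a tensor *type* once; torch.cuda.FloatTensor is the CUDA
# counterpart of the CPU torch.FloatTensor type.
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor

x = Tensor(2, 2).zero_()  # a 2x2 zero tensor, on GPU when one is available
```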
@soumith you're right, sorry. Could anyone explain how to get the master branch of PyTorch? My current version is 0.3.0.post4. Thank you all
@marcomiccheli https://github.com/pytorch/pytorch#from-source
In TensorFlow, it will use the GPU by default. If I don't need to run on the GPU, I can simply run the code by
@BoyuanJiang I second that... If there is one CUDA-capable device on the system, I think it should use it by default, unless some global setting says otherwise, or the user specifically codes it. If there is no CUDA, then default to CPU. If you have a CUDA device and want to use the CPU instead, then I think it's OK to ask the developer to specify the CPU, as it's kind of an edge case (avoiding kernel launches for really small recurrent stuff, or whatever). In most of these cases, using the CPU instead of CUDA is an optimization, so only the developer can figure that out anyway. As it stands, requiring the developer to specifically say "I want to run this on GPU" seems a bit odd, since that's kind of the point of deep learning, at least in its current state of the art.
We now have the

@gchanan can you please provide a link? I've googled, searched the PyTorch docs, tried to find it in my local 0.4.0 build, and I can't find the

@gobbedy check the release notes for device-agnostic code in https://github.com/pytorch/pytorch/releases/tag/v0.4.0
Thank you @fmassa. I see this as a step in the right direction. Now code can be written device-agnostically by adding .to(device) wherever you create a tensor. This removes an if statement around each creation of a tensor, which is great. However, this remains a very code-heavy and not backward-compatible solution. It does not address @BoyuanJiang and @DuaneNielsen's request to have code run on GPU by default. I'd like to be able to add a single line to my otherwise CPU-only code to make it run on a GPU. Something like (or a more flexible/smart function that allows you to pick the device, yet achieves the same result). Then I'd like any subsequent code such as this to automatically run on GPU without requiring either .cuda() or .to(device), nor dtype or device arguments. Just plain pythonic KISS.
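Something close to that single-line switch does exist: `torch.set_default_tensor_type` (and, in much later releases, `torch.set_default_device`). A sketch, assuming a CUDA-capable build:

```python
import torch

# One line at the top of an otherwise CPU-only script: float tensors
# created without an explicit device now land on the GPU.
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')

x = torch.zeros(2, 2)  # allocated on cuda:0; no .cuda() or .to(device) needed
```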
This is still a problem in PyTorch; switching between CPU and GPU is really very annoying.
After a while of digging, I suggest using:

and then replacing .cuda() with .to(device)
@jinfagang this is the recommended way of writing device-agnostic code nowadays, and is very handy, while still giving fine-grained control to the users.
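For completeness, a minimal sketch of this recommended pattern (the model and shapes are illustrative):

```python
import torch

# Select the device once, at the top of the script.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Then use .to(device) instead of .cuda() everywhere.
model = torch.nn.Linear(4, 2).to(device)
x = torch.randn(8, 4).to(device)
y = model(x)
```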
no such thing as need or not, say, discuss any no matter what.
@jinfagang

and replacing every
AttributeError: module 'torch' has no attribute 'device'

@MarStarck you probably have an old version of PyTorch.
You've got a typo in
As per the documentation, it is better to do the following:

The previous comment would create the tensor on the CPU and then transfer it to the GPU, whereas this code creates the tensor on the GPU directly.
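A sketch of the contrast being drawn, reusing the `device` variable from earlier in the thread:

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Allocates on the CPU first, then copies to the target device.
a = torch.ones(2, 2).to(device)

# Allocates directly on the target device; no intermediate CPU tensor.
b = torch.ones(2, 2, device=device)
```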
Looks like this was all fixed by 1.3. Thanks! https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics
if the code is

Try this:
How about:

```python
output = model(input)  # input and output are on cuda:0
loss = loss_fn(true_output, output)

with torch.cpu():  # proposed: any GPU tensor in this scope is automatically sent to CPU
    plt.plot(input, true_output)
    plt.plot(input, output)
```
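No `torch.cpu()` context manager exists in PyTorch; today the closest equivalent is to move tensors to the host explicitly before plotting. A sketch, assuming 1-D `input`/`output` tensors:

```python
import matplotlib.pyplot as plt

# .detach() drops autograd history; .cpu() copies the tensor back to host memory.
plt.plot(input.cpu().numpy(), true_output.cpu().numpy())
plt.plot(input.cpu().numpy(), output.detach().cpu().numpy())
```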
I have a large PyTorch project (at least 200+ places where tensors of different types are created, and many on the fly). I need to migrate all the tensors to
Right now, as far as I know, there is no single simple way to write code which runs seamlessly on both CPU and GPU. We need to resort to switches like

One reason for this is that the functions which create tensors, like torch.ones etc., create them on the CPU. If there were functions like torch.cuda.ones etc. available, we could use code like this, which runs seamlessly:

This is just one way to make the switch easy; there might be a better way to do the same.