Easy way to switch between CPU and cuda · Issue #1668 · pytorch/pytorch · GitHub

Closed
chsasank opened this issue May 27, 2017 · 27 comments

@chsasank
Contributor

Right now, as far as I know, there is no single simple way to write code which runs seamlessly on both CPU and GPU. We need to resort to switches like

x = torch.zeros(2, 2)
if torch.cuda.is_available():
    x = x.cuda()

One reason for this is that tensor-creating functions like torch.ones create on the CPU.
If functions like torch.cuda.ones were available, we could write code like this which runs seamlessly:

if torch.cuda.is_available():
    import torch.cuda as t
else:
    import torch as t

x = t.zeros(2, 2)
y = Variable(x)

This is just one way to make the switch easy; there might be a better way to do the same.
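Until something like this exists, one workaround is a small factory wrapper. A minimal sketch of the idea (the zeros helper here is hypothetical, not a PyTorch API):

```python
import torch

def zeros(*size):
    """Hypothetical helper: create a zero tensor on GPU when available, else CPU."""
    t = torch.zeros(*size)
    return t.cuda() if torch.cuda.is_available() else t

# The calling code no longer needs a conditional at every creation site.
x = zeros(2, 2)  # lives on the GPU when one is present, on the CPU otherwise
```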

@marcomick
marcomick commented Mar 2, 2018

If I run your code on a machine with PyTorch and CUDA installed, I receive this error:

"AttributeError: 'module' object has no attribute 'zeros'"

I think that

x = torch.zeros(2, 2)
if torch.cuda.is_available():
    x = x.cuda()

is the only way to achieve it. Am I wrong?

@apaszke
Contributor
apaszke commented Mar 2, 2018

In master there's something that lets you do it without conditionals. Here's an example:

dtype = torch.cuda.float if torch.cuda.is_available() else torch.float
torch.zeros(2, 2, dtype=dtype)
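For readers on a released build: from PyTorch 0.4 onward, the equivalent is usually written with a plain dtype plus a device argument rather than a cuda-prefixed dtype. A minimal sketch of that pattern:

```python
import torch

# torch.float is an alias for torch.float32; the device argument picks CPU vs GPU.
dtype = torch.float
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.zeros(2, 2, dtype=dtype, device=device)
```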

@marcomick

I think that the right tensor type for CUDA is:
torch.cuda.FloatTensor
and
torch.zeros()
has no argument "dtype".

@soumith
Member
soumith commented Mar 2, 2018

@marcomiccheli as Adam said, the dtype argument is available in the PyTorch master branch, not in the binary releases yet.

@marcomick

@soumith you're right, sorry. Could anyone explain how to get the master branch of pytorch? My current version is 0.3.0.post4

Thank you all

@soumith
Member
soumith commented Mar 2, 2018

@marcomiccheli https://github.com/pytorch/pytorch#from-source

@BoyuanJiang

In TensorFlow, it will use the GPU by default. If I don't need to run on the GPU, I can simply run the code with CUDA_VISIBLE_DEVICES=" " python main.py
Can PyTorch have a similar feature?
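For what it's worth, PyTorch honors the same environment variable, since CUDA_VISIBLE_DEVICES is interpreted by the CUDA runtime itself. A minimal sketch (the variable must be set before CUDA is initialized):

```python
import os

# Hide all GPUs from the CUDA runtime; this must happen before
# torch initializes CUDA, so set it before the first CUDA call.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

on_gpu = torch.cuda.is_available()  # False: no devices are visible
```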

@DuaneNielsen

@BoyuanJiang I second that...

If there is one CUDA-capable device on the system, I think it should be used by default, unless some global setting says otherwise or the user specifically codes it. If there is no CUDA, then default to CPU.

If you have a CUDA device and want to use the CPU instead, then I think it's OK to ask the developer to specify the CPU, as it's kind of an edge case (avoiding kernel launches for really small recurrent stuff, or whatever). In most of these cases, using the CPU instead of CUDA is an optimization, so only the developer can figure that out anyway.

As it stands, requiring the developer to specifically say "I want to run this on the GPU" seems a bit odd, since that's kind of the point of deep learning, at least in its current state of the art.

@gchanan
Contributor
gchanan commented Apr 24, 2018

We now have the to method.
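A minimal sketch of how the to method reads in practice (the tensor names here are illustrative):

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.zeros(2, 2).to(device)     # move an existing tensor (no-op on CPU)
y = torch.ones(2, 2, device=device)  # or create directly on the target device
z = x + y                            # both operands live on the same device
```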

@gchanan gchanan closed this as completed Apr 24, 2018
@gobbedy
gobbedy commented Jun 15, 2018

@gchanan can you please provide a link? I've googled, searched the pytorch doc, tried to find it in my local 0.4.0 build, and I can't find the to method. It's such a common word that it's practically impossible to do a refined search.

@fmassa
Member
fmassa commented Jun 15, 2018

@gobbedy check the release notes for device agnostic code in https://github.com/pytorch/pytorch/releases/tag/v0.4.0

@gobbedy
gobbedy commented Jun 15, 2018

Thank you @fmassa.

I see this as a step in the right direction. Now code can be written device-agnostically by adding .to(device) wherever you create a tensor.

This removes an if statement around each creation of a tensor, which is great.

However, this remains a very code-heavy and not backward-compatible solution. It does not address @BoyuanJiang and @DuaneNielsen's request to have code run on GPU by default.

I'd like to be able to add a single line to my otherwise CPU-only code to make it run on a GPU.

Something like
if torch.cuda.is_available():
    torch.set_gpu_as_default_device()

(or a more flexible/smart function that allows you to pick the device, yet achieves the same result.)

Then I'd like any subsequent code such as this
my_tensor = torch.empty(3,3)

to automatically run on GPU without requiring either .cuda() or .to(device) -- nor dtype or device arguments. Just plain pythonic KISS.
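As it happens, much later releases added essentially this switch: PyTorch 2.0 introduced torch.set_default_device, which routes factory calls (torch.empty, torch.zeros, ...) to a chosen device. A minimal sketch, assuming such a version:

```python
import torch

# One line at the top of the script; every subsequent factory call
# allocates on this device without .cuda() or .to(device).
torch.set_default_device('cuda' if torch.cuda.is_available() else 'cpu')

my_tensor = torch.empty(3, 3)  # lands on the chosen default device
```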

@lucasjinreal

This is still a problem in PyTorch; switching between CPU and GPU is really very annoying.

@svaisakh

#7535
I second @gobbedy.

A global device flag would be useful.

@lucasjinreal
lucasjinreal commented Sep 14, 2018

After a while of digging, I suggest using:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

and then replace .cuda() with .to(device)
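Put together, this pattern looks something like the following in a typical script (the model and batch here are illustrative):

```python
import torch
import torch.nn as nn

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(4, 2).to(device)        # was: model.cuda()
batch = torch.randn(8, 4, device=device)  # was: torch.randn(8, 4).cuda()
out = model(batch)                        # runs on whichever device was picked
```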

@fmassa
Member
fmassa commented Sep 14, 2018

@jinfagang this is the recommended way of writing device-agnostic code nowadays, and is very handy, while still giving fine-grained control to the users.

@asdfzxh8

In master there's something that lets you do it without conditionals. Here's an example:

dtype = torch.cuda.float if torch.cuda.is_available() else torch.float
torch.zeros(2, 2, dtype=dtype)

no such thing as need or not, say, discuss any no matter what.

@ArnoutDevos

@jinfagang
To (spell) correct your answer, I found this to be working:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

and replacing every .cuda() with .to(device)

houseroad added a commit to houseroad/pytorch that referenced this issue Dec 4, 2018
…013547

Summary:
Previous import was 6b34743d2e361bbc0acb29dd73536478cb92562e

Included changes:
- **[4280470](onnx/onnx@4280470)**: Changes done internally at Facebook (pytorch#1668) <Lu Fang>
- **[f85221f](onnx/onnx@f85221f)**: Fuse MatMul and Add into Gemm (pytorch#1542) <vloncar>
- **[022230e](onnx/onnx@022230e)**: Replace np.long by np.int64 (pytorch#1664) <G. Ramalingam>
- **[0ab3c95](onnx/onnx@0ab3c95)**: Infer shape from data in Constant nodes (pytorch#1667) <Shinichiro Hamaji>

Differential Revision: D13330082

fbshipit-source-id: 8bc0362533482e0edc5438642151a46eca67f18f
facebook-github-bot pushed a commit that referenced this issue Dec 5, 2018
…013547 (#14777)

Summary:
Pull Request resolved: #14777

Previous import was 6b34743d2e361bbc0acb29dd73536478cb92562e

Included changes:
- **[4280470](onnx/onnx@4280470)**: Changes done internally at Facebook (#1668) <Lu Fang>
- **[f85221f](onnx/onnx@f85221f)**: Fuse MatMul and Add into Gemm (#1542) <vloncar>
- **[022230e](onnx/onnx@022230e)**: Replace np.long by np.int64 (#1664) <G. Ramalingam>
- **[0ab3c95](onnx/onnx@0ab3c95)**: Infer shape from data in Constant nodes (#1667) <Shinichiro Hamaji>

Reviewed By: bddppq

Differential Revision: D13330082

fbshipit-source-id: 13cf328626cf872d0983bbd2154d95c45da70f1c
@MarStarck

@jinfagang
To (spell) correct your answer, I found this to be working:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

and replacing every .cuda() with .to(device)

AttributeError: module 'torch' has no attribute 'device'

@gchanan
Contributor
gchanan commented Nov 5, 2019

@MarStarck you probably have an old version of pytorch.

@kkulczak
kkulczak commented Dec 6, 2019

After a while digging. I suggest using:

device = torch.device('cuda:0' if torch.cuda.is_avaliable() else 'cpu')

and then replace .cuda() with .to(device)

You've got a typo in torch.cuda.is_avaliable().
Pls edit this :)

@farid-fari

As per the documentation, it is better to do the following:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
t = torch.tensor(range(15), device=device)

The previous comment would create the tensor on the CPU and then transfer it to the GPU, whereas this code creates the tensor on the GPU directly.

@DuaneNielsen

Looks like this was all fixed by 1.3. Thanks!

https://pytorch.org/docs/stable/notes/cuda.html#cuda-semantics

@foreseez

@jinfagang
To (spell) correct your answer, I found this to be working:

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

and replacing every .cuda() with .to(device)

If the code is model = model.cuda(x_tensor), how should I modify it?

@kkulczak

Try this:
model = model.to(x_tensor.device)

@patel-zeel
Contributor
patel-zeel commented Sep 30, 2021

How about a with environment? For example, if one needs to do some plotting after training:

output = model(input) # input and output are on cuda:0
loss = loss_fn(true_output, output)

with torch.cpu(): # Any GPU tensor in this environment is automatically sent to cpu
    plt.plot(input, true_output)
    plt.plot(input, output)
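Something close to this did land later: in PyTorch 2.0+, a torch.device can itself be used as a context manager, although it only redirects newly created tensors to that device rather than moving existing GPU tensors as proposed above. A minimal sketch, assuming such a version:

```python
import torch

# Factory calls inside the block allocate on CPU, regardless of any
# default device; pre-existing tensors are NOT moved automatically.
with torch.device('cpu'):
    t = torch.ones(3)
```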


@lahiri-phdworks
lahiri-phdworks commented Dec 9, 2021

I have a large PyTorch project (at least 200+ places where tensors of different types are created, many on the fly). I need to migrate all the tensors to GPU. As an experienced developer, I see it as a way to learn by reading the code and making the changes as necessary, BUT HAVING A DIRECT SWITCH / having GPU tensors as the default would be really cool. Otherwise I keep writing .to(device) or .cuda() the whole day.

zasdfgbnm pushed a commit that referenced this issue May 9, 2022
akashveramd pushed a commit to akashveramd/pytorch that referenced this issue Apr 9, 2025