Add accelerate API support for Word Language Model example #1345
base: main
Changes from all commits: f56fb9c, a6d10a5, 39bf17a, c50a636
word_language_model/main.py

```diff
@@ -37,10 +37,6 @@
                     help='tie the word embedding and softmax weights')
 parser.add_argument('--seed', type=int, default=1111,
                     help='random seed')
-parser.add_argument('--cuda', action='store_true', default=False,
-                    help='use CUDA')
-parser.add_argument('--mps', action='store_true', default=False,
-                    help='enables macOS GPU training')
 parser.add_argument('--log-interval', type=int, default=200, metavar='N',
                     help='report interval')
 parser.add_argument('--save', type=str, default='model.pt',
```
```diff
@@ -51,25 +47,20 @@
                     help='the number of heads in the encoder/decoder of the transformer model')
 parser.add_argument('--dry-run', action='store_true',
                     help='verify the code and the model')
+parser.add_argument('--accel', action='store_true', help='Enables accelerated training')
 args = parser.parse_args()

 # Set the random seed manually for reproducibility.
 torch.manual_seed(args.seed)
-if torch.cuda.is_available():
-    if not args.cuda:
-        print("WARNING: You have a CUDA device, so you should probably run with --cuda.")
-if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
-    if not args.mps:
-        print("WARNING: You have mps device, to enable macOS GPU run with --mps.")

-use_mps = args.mps and torch.backends.mps.is_available()
-if args.cuda:
-    device = torch.device("cuda")
-elif use_mps:
-    device = torch.device("mps")
+if args.accel and torch.accelerator.is_available():
+    device = torch.accelerator.current_accelerator()
 else:
     device = torch.device("cpu")

+print("Using device:", device)
+
 ###############################################################################
 # Load data
 ###############################################################################
```
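As an aside, here is a minimal standalone sketch of the selection pattern this hunk adopts (illustrative, not part of the diff; `select_device` is a hypothetical helper, and PyTorch >= 2.6 is assumed, since that is where `torch.accelerator` landed):

```python
# Sketch: device selection via the torch.accelerator API (PyTorch >= 2.6).
import torch

def select_device(use_accel: bool) -> torch.device:
    """Pick the autodetected accelerator when requested, else fall back to CPU."""
    if use_accel and torch.accelerator.is_available():
        # Returns the current accelerator device, e.g. cuda, mps, or xpu.
        return torch.accelerator.current_accelerator()
    return torch.device("cpu")

print("Using device:", select_device(use_accel=True))
```

The appeal over the old `--cuda`/`--mps` flags is that a single code path covers every backend PyTorch can autodetect.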
```diff
@@ -243,11 +234,11 @@ def export_onnx(path, batch_size, seq_len):

 # Load the best saved model.
 with open(args.save, 'rb') as f:
-    model = torch.load(f)
+    model = torch.load(f, weights_only=False)
```
**Reviewer:** Can you, please, extract this change to a separate PR? It also needs an update for the required torch version.

**Author:** If I extract the change and update the requirements to 2.7, it won't work. This change allows the example to run with the simplest code change, since leaving it as it was fails to work.

**Reviewer:** In PyTorch 2.6, the default value of `weights_only` in `torch.load` changed to `True`, which is why the unmodified call fails on the pickled model. In this pull request, we can integrate the use of the accelerator API on top of that.

**Reviewer:** From 2.6 actually. See https://docs.pytorch.org/docs/2.6/accelerator.html#module-torch.accelerator. I believe you are doing changes in the wrong order: first, update the requirement to be able to use the latest PyTorch and fix the issues which appear; next, as a second step, introduce the new APIs.

**Author:** I did run the modified example, but agreed: first, we need to run the example with the latest PyTorch and fix any issues in a separate PR. Thanks for the feedback @dvrogozh.

**Reviewer:** Yes, but you don't need to close this PR. Just mark it as a draft while working on the update-requirements PR. Here is a PR to update the torch version requirement as I would do it:
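To make the `weights_only` point concrete, a small sketch of the two loading styles (illustrative, assuming PyTorch >= 2.6, where the default flipped to `weights_only=True`; file names are placeholders):

```python
# Sketch: why loading a fully pickled model needs weights_only=False
# under PyTorch >= 2.6.
import torch
import torch.nn as nn

model = nn.LSTM(input_size=8, hidden_size=8)

# The example pickles the whole model object, so loading it back requires
# the full unpickler, i.e. weights_only=False:
torch.save(model, "model.pt")
model = torch.load("model.pt", weights_only=False)

# The safer long-term alternative is to persist only the state_dict, which
# loads fine under the new weights_only=True default:
torch.save(model.state_dict(), "weights.pt")
model.load_state_dict(torch.load("weights.pt"))  # weights_only=True by default
```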
```diff
 # after load the rnn params are not a continuous chunk of memory
 # this makes them a continuous chunk, and will speed up forward pass
 # Currently, only rnn model supports flatten_parameters function.
-if args.model in ['RNN_TANH', 'RNN_RELU', 'LSTM', 'GRU']:
+if args.model in ['RNN_TANH', 'RNN_RELU', 'LSTM', 'GRU'] and device.type == 'cuda':
     model.rnn.flatten_parameters()

 # Run on test data.
```

**Reviewer:** What was the error you were getting?

**Author:** Seems to be an oversight on my part. This was needed when trying a safe approach of only loading the weights, but apparently it is no longer needed. I will remove it to prevent any unwanted changes.
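For context on the guard being discussed, a rough sketch of what `flatten_parameters()` does (illustrative; as the thread concludes, the `device.type == 'cuda'` check is being dropped again):

```python
# Sketch: flatten_parameters() re-compacts RNN weights after deserialization.
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=16, hidden_size=32, num_layers=2)
if torch.cuda.is_available():
    rnn = rnn.cuda()

# After torch.load, the RNN weights may no longer sit in one contiguous
# memory block. flatten_parameters() compacts them so cuDNN can run its
# fused kernels; away from cuDNN it effectively does nothing.
rnn.flatten_parameters()
```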
**Reviewer:** I suggest dropping this from the example command line, and maybe adding a note that the example supports running on acceleration devices, listing which were tried (CUDA, MPS, XPU).
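A note along the suggested lines might read roughly as follows (illustrative wording; `--accel` is the flag this PR adds, and the invocation mirrors the example's existing README commands): the example runs on the CPU by default, and passing `--accel`, e.g. `python main.py --accel --epochs 6`, selects the current accelerator via `torch.accelerator` when one is available; CUDA, MPS, and XPU are the backends mentioned as tried in this thread.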