Add accelerate API support for Word Language Model example by framoncg · Pull Request #1345 · pytorch/examples · GitHub

Draft · wants to merge 4 commits into main
Conversation

framoncg

Refactor the Word Language Model example to use the torch.accelerator API. The torch.accelerator API abstracts accelerator specifics away from user scripts; by leveraging it, the code becomes more adaptable to various hardware accelerators.

Updated word_language_model/main.py with accelerator flag
Updated word_language_model/generate.py with accelerator flag
Updated README to match word_language_model/main.py flags
Updated run_python_examples.sh to add new accelerator flag

CC: @msaroufim, @malfet, @dvrogozh
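The device-selection change described above can be sketched as follows (a minimal sketch, not the PR's actual diff; it assumes torch >= 2.6, where torch.accelerator first appeared, and the getattr guard lets it degrade to CPU on older builds):

```python
import torch

# Pick the current accelerator (CUDA, MPS, XPU, ...) if one is present,
# otherwise fall back to the CPU. getattr keeps this working on torch
# versions that predate the torch.accelerator module.
accel = getattr(torch, "accelerator", None)
if accel is not None and accel.is_available():
    device = accel.current_accelerator()
else:
    device = torch.device("cpu")
print(device)
```

The point of the abstraction is that the same script no longer needs separate `--cuda` / `--mps` style branches per backend.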

netlify bot commented May 15, 2025

Deploy Preview for pytorch-examples-preview canceled.

Latest commit: c50a636
Latest deploy log: https://app.netlify.com/projects/pytorch-examples-preview/deploys/6827a112331af90008c6ab16

# After load, the RNN params are not a contiguous chunk of memory.
# This makes them a contiguous chunk and will speed up the forward pass.
# Currently, only RNN models support the flatten_parameters function.
if args.model in ['RNN_TANH', 'RNN_RELU', 'LSTM', 'GRU']:
if args.model in ['RNN_TANH', 'RNN_RELU', 'LSTM', 'GRU'] and device.type == 'cuda':
Member

What was the error you were getting?

Author

Seems to be an oversight on my part. This was needed when trying a safe approach of only loading the weights, but apparently it is no longer needed. I will remove it to prevent any unwanted changes.
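For context, the check being discussed guards a call like the following (a minimal sketch, not the example's actual model; layer sizes here are arbitrary):

```python
import torch.nn as nn

# After torch.load, the RNN weights may no longer sit in one contiguous
# chunk of memory. flatten_parameters() re-packs them into a single
# contiguous block, which speeds up the forward pass on CUDA; on CPU it
# is effectively a no-op, so the device-type check is optional.
model = nn.LSTM(input_size=8, hidden_size=16, num_layers=2)
model.flatten_parameters()
```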

@@ -243,11 +234,11 @@ def export_onnx(path, batch_size, seq_len):

# Load the best saved model.
with open(args.save, 'rb') as f:
model = torch.load(f)
model = torch.load(f, weights_only=False)
Contributor

Can you please extract this change into a separate PR? It also needs an update to the required torch version:

Author

If I extract the change and update the requirements to 2.7, it won't work on its own. This change lets the example run with the simplest possible modification, since leaving it as it was fails to work.

Contributor

In PyTorch 2.6, the default value for weights_only was set to True, and PyTorch 2.7 introduced support for the accelerator API.

We can integrate the accelerator API in this pull request. Meanwhile, the update for saving and loading models via state_dict will be addressed in a separate PR.
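The two loading styles under discussion can be sketched side by side (a minimal sketch with a hypothetical nn.Linear stand-in, not the example's actual model):

```python
import os
import tempfile

import torch
import torch.nn as nn

# Since PyTorch 2.6, torch.load defaults to weights_only=True, which
# refuses to unpickle arbitrary objects such as a whole nn.Module.
# A model saved with torch.save(model, path) therefore needs
# weights_only=False to load.
model = nn.Linear(4, 2)
tmpdir = tempfile.mkdtemp()

full_path = os.path.join(tmpdir, "model.pt")
torch.save(model, full_path)
reloaded = torch.load(full_path, weights_only=False)

# The state_dict route (planned for a separate PR) keeps the safe default:
sd_path = os.path.join(tmpdir, "weights.pt")
torch.save(model.state_dict(), sd_path)
fresh = nn.Linear(4, 2)
fresh.load_state_dict(torch.load(sd_path, weights_only=True))
```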

Contributor

PyTorch 2.7 introduced support for the accelerator API. <...>In this pull request, we can integrate the use of the accelerator API in this PR.

From 2.6, actually. See https://docs.pytorch.org/docs/2.6/accelerator.html#module-torch.accelerator.

To integrate torch.accelerator, we must update the torch requirement to >=2.6; otherwise the tests will simply fail. I suspect that you did not actually run the modified run_python_examples.sh.

If I extract the change and update the requirements to 2.7 it won't work

I believe you are doing the changes in the wrong order. First, update the requirement so the example can use the latest PyTorch, and fix the issues that appear. Then, as a second step, introduce the new APIs.

Author

I did run the modified run_python_examples.sh, but maybe I am doing this in the wrong order. So the suggestion is to first update the requirements and fix the issues in a separate PR, close this one, and create a new one for the new API?

Contributor

First, we need to run the example with latest PyTorch and fix any issue in a separate PR.

Thanks for the feedback @dvrogozh.

Contributor

the suggestion here is to first update requirements and fix the issues in a separate PR, close this one and create

Yes, but you don't need to close this PR. Just mark it as a draft while working on the requirements-update PR.

Contributor

Here is a PR to update torch version requirement as I would do it:
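Such a requirements bump might look like the following fragment (hypothetical; the actual PR may differ, and the link above was not preserved in this scrape):

```
torch>=2.6
```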

Contributor

@framoncg, #1347 got merged. Please rebase your PR.

python main.py --cuda --epochs 6 --tied # Train a tied LSTM on Wikitext-2 with CUDA.
python main.py --cuda --tied # Train a tied LSTM on Wikitext-2 with CUDA for 40 epochs.
python main.py --cuda --epochs 6 --model Transformer --lr 5
python main.py --accel --epochs 6 # Train an LSTM on Wikitext-2 with CUDA.
Contributor

with CUDA

I suggest dropping this from the example command line, and maybe adding a note that the example supports running on acceleration devices, listing the ones that were tried (CUDA, MPS, XPU).
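The flag change in the README diff above boils down to replacing per-backend switches like `--cuda` with a single `--accel` switch. A minimal sketch of the argument parsing (names mirror the PR, but the surrounding script is simplified):

```python
import argparse

# Single --accel switch instead of backend-specific flags such as --cuda.
parser = argparse.ArgumentParser(description="Wikitext-2 language model")
parser.add_argument("--accel", action="store_true",
                    help="use the current accelerator (CUDA, MPS, XPU, ...) if available")
parser.add_argument("--epochs", type=int, default=40,
                    help="upper epoch limit")
args = parser.parse_args(["--accel", "--epochs", "6"])
print(args.accel, args.epochs)
```

With `--accel` set, the script would then pick the device via torch.accelerator rather than hard-coding a backend.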

framoncg and others added 2 commits May 16, 2025 14:32
Co-authored-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Co-authored-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
@framoncg framoncg marked this pull request as draft May 16, 2025 23:45