[training] Adding NUMA support for pytorch #150597
Conversation
This appears to be a diff that was exported from Phabricator, but the PR author does not have sufficient permissions to run CI. @efiks, please do step 2 of the internal wiki to get write access so you do not need CI approvals in the future. If you think this is a mistake, please contact the PyTorch Dev Infra team.
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/150597
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 New Failure as of commit 1348578 with merge base 52d172e.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D72321369
Summary: Add entry point and environment variable to control NUMA binding / assignment for distributed PyTorch runs (training). Test Plan: build and run tests for modified libraries locally. Differential Revision: D72321369
Summary: X-link: facebookresearch/FBGEMM#1014 X-link: pytorch/pytorch#150597 Add entry point and environment variable to control NUMA binding / assignment for distributed PyTorch runs (training). Differential Revision: D72321369
Summary: X-link: pytorch/FBGEMM#3926 X-link: facebookresearch/FBGEMM#1014 Add entry point and environment variable to control NUMA binding / assignment for distributed PyTorch runs (training). Test Plan: build and run tests for modified libraries locally. Differential Revision: D72321369
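To make the summary concrete, here is a minimal sketch of how an opt-in environment variable could gate NUMA binding on Linux. The variable name TORCH_NUMA_BINDING, the helper names, and the sysfs-based pinning are all assumptions for illustration; only maybe_enable_numa_binding (in the diff below) comes from the PR itself.

    import os

    def _cpus_of_node(node):
        # Parse the node's sysfs cpulist (e.g. "0-31,64-95") into CPU ids.
        with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
            spec = f.read().strip()
        cpus = set()
        for part in spec.split(","):
            lo, _, hi = part.partition("-")
            cpus.update(range(int(lo), int(hi or lo) + 1))
        return cpus

    def bind_to_numa_node(node):
        # Linux-only: pin the calling process to the CPUs of `node`.
        os.sched_setaffinity(0, _cpus_of_node(node))

    # Hypothetical opt-in knob; the PR's real variable name may differ.
    if os.environ.get("TORCH_NUMA_BINDING", "0") == "1":
        bind_to_numa_node(0)  # placeholder node; real code derives it per device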
    # If device_id is provided, try to bind the current process to the
    # NUMA node attached to the device
    if device_id is not None:
        maybe_enable_numa_binding(device_type=device_id.type, device_id=device_id.index)
    else:
        maybe_enable_numa_binding()
Just curious - would it be a bit late to do binding here?
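For context on this question, here is a hedged sketch of what a helper like maybe_enable_numa_binding has to do to find a CUDA device's NUMA node. It assumes Linux and the pynvml package; the function name and approach are illustrative, not the PR's actual implementation.

    import pynvml

    def numa_node_of_cuda_device(index):
        # Return the NUMA node attached to CUDA device `index`, or -1 if
        # the kernel reports no affinity. Sketch only; the PR may differ.
        pynvml.nvmlInit()
        try:
            handle = pynvml.nvmlDeviceGetHandleByIndex(index)
            bus = pynvml.nvmlDeviceGetPciInfo(handle).busId
        finally:
            pynvml.nvmlShutdown()
        if isinstance(bus, bytes):  # older pynvml versions return bytes
            bus = bus.decode()
        # NVML reports an 8-hex-digit PCI domain; sysfs paths use 4 digits.
        domain, rest = bus.split(":", 1)
        sysfs_id = f"{int(domain, 16):04x}:{rest.lower()}"
        with open(f"/sys/bus/pci/devices/{sysfs_id}/numa_node") as f:
            return int(f.read().strip())

Since os.sched_setaffinity only constrains scheduling from the moment it is called, binding this late in process startup may leave earlier allocations on a remote node, which is presumably what the question above is getting at.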
Looks like this PR hasn't been updated in a while, so we're going to go ahead and mark this as Stale.
Test Plan:
build and run tests for modified libraries locally
buck2 build arvr/mode/platform010/opt //xplat/caffe2:pytorch_ovrsource
buck run arvr/mode/win/debug-md -c python.package_style=inplace //xplat/caffe2:pytorch_test_ovrsource
buck test arvr/mode/linux/opt -c python.package_style=inplace //xplat/caffe2:pytorch_test_ovrsource
buck test mode/opt //caffe2/fb/test:_utils_internal_test
Differential Revision: D72321369
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k