Remove redundant type aliases of _device_t for torch.Device (#152952) by sanjai-11 · Pull Request #153007 · pytorch/pytorch · GitHub
Remove redundant type aliases of _device_t for torch.Device (#152952) #153007


Open
wants to merge 6 commits into base: main
Conversation

@sanjai-11 sanjai-11 commented May 7, 2025

pytorch-bot bot commented May 7, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/153007

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the module: cpu CPU specific problem (e.g., perf, algorithm) label May 7, 2025
@sanjai-11
Author

@pytorchbot label "topic: not user facing"

@pytorch-bot pytorch-bot bot added the topic: not user facing topic category label May 7, 2025
@@ -23,7 +23,7 @@ def empty_cache() -> None:
     torch._C._xpu_emptyCache()


-def reset_peak_memory_stats(device: _device_t = None) -> None:
+def reset_peak_memory_stats(device: torch.types.Device = None) -> None:
Contributor

Maybe line 10 is no longer needed.

_device_t = Union[Device, str, int, None]

Author

Thank you for the review. Will change

@sanjai-11 sanjai-11 requested a review from shink May 7, 2025 04:15
Contributor
@shink shink left a comment


LGTM

@janeyx99 janeyx99 added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label May 7, 2025
Contributor
@janeyx99 janeyx99 left a comment


sure

@janeyx99 janeyx99 added the suppress-bc-linter Suppresses the failures of API backward-compatibility linter (Lint/bc_linter) label May 7, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team. Raised by workflow job.

Failing merge rule: Core Maintainers

@albanD albanD removed their request for review May 8, 2025 15:46
@sanjai-11
Author

@pytorchbot label "suppress-bc-linter"

@sanjai-11
Author

Hi @janeyx99,
Can you please help me with this?
I believe that I have solved the issue. However, the reviewers section shows many reviewers who were added by mistake; please remove the reviewers who are not supposed to review my PR.
Lint / lintrunner-noclang / linux-job (pull_request) has been failing, and I do not know how to handle it. Can you please suppress this issue?

Thank you for your hard work!

@omkar-334
omkar-334 commented May 11, 2025

@sanjai-11 Go through the test that has failed and you'll know what to fix.

  1. Fix the line length in torch/cuda/__init__.py (it should be less than 88 characters).
  2. Remove the unused imports in torch/xpu/memory.py.
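The first fix above amounts to wrapping the over-long signature. A hypothetical sketch in the usual black/flake8 style, keeping every line under the 88-character limit (the annotation is a string here only so the example runs without torch installed):

```python
# Wrapping one parameter per line keeps the definition under 88 columns
# without changing behavior:
def reset_peak_memory_stats(
    device: "torch.types.Device" = None,
) -> None:
    ...


# The wrapped definition is still callable exactly as before:
print(reset_peak_memory_stats(None))  # None
```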

@pytorch-bot pytorch-bot bot removed the ciflow/trunk Trigger trunk jobs on your pull request label May 12, 2025
@sanjai-11
Author

@omkar-334 This is very useful, thanks

@janeyx99
Contributor

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label May 12, 2025
@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@pytorchmergebot
Collaborator

Merge failed

Reason: 1 mandatory check(s) failed. The first few are:

Dig deeper by viewing the failures on hud

Details for Dev Infra team. Raised by workflow job.

Failing merge rule: Core Maintainers

@omkar-334

@sanjai-11 You haven't fixed everything yet. You still have to remove the unused Union import and fix the line length; lint errors are still occurring.
Alternatively, just install the pre-commit hooks and run `pre-commit run --all-files`. It should take care of this.

@pytorch-bot pytorch-bot bot removed the ciflow/trunk Trigger trunk jobs on your pull request label May 12, 2025
@sanjai-11
Author

@omkar-334 I tried my best to resolve the issues. I need to learn a lot. Thanks for your feedback.

@jeffdaily
Collaborator

> Hi @janeyx99 Can you please help me with this please, I believe that I have solved the issue. But, in the reviewers section I can see there are many reviewers which was added by mistake - Please remove reviewers who are not supposed to review my PR. Lint / lintrunner-noclang / linux-job (pull_request) - this has been failing, I do not know how to handle this, Can you please suppress this issue.
>
> Thank you for your hardwork!

Your PR diff is showing too many changed files that have nothing to do with the intent of your PR. At some point during your development there was likely an incorrect merge or incorrect git pull rebase or similar. I suggest creating a new branch off of latest tip of upstream main, applying only the changes you intended, and then creating a new PR to replace this one.

Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

Labels
module: amp (automated mixed precision) autocast
module: compiled autograd compiled_autograd
module: cpu CPU specific problem (e.g., perf, algorithm)
module: dynamo
module: inductor
module: mkldnn Related to Intel IDEEP or oneDNN (a.k.a. mkldnn) integration
oncall: distributed Add this issue/PR to distributed oncall triage queue
open source
release notes: distributed (checkpoint)
release notes: inductor (aoti)
release notes: quantization release notes category
Stale
suppress-bc-linter Suppresses the failures of API backward-compatibility linter (Lint/bc_linter)
topic: not user facing topic category
triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Remove redundant type aliases of _device for torch.Device
7 participants