Closed
Labels: triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
🚀 The feature, motivation and pitch
PyTorch-2.1 is a big release with a lot of new features, so we need to make sure that:
- CUDA
  - pypi binaries with slimmed dependencies are usable in standard AWS containers (amazonlinux:2 regression in 1.13)
  - pypi binaries with slimmed dependencies are usable with stock Ubuntu-20.04: "torch-2.0.0-rc1 and torch-1.13.1 can not be installed on Ubuntu 20.04" #91067. Test: https://github.com/pytorch/builder/actions/runs/6760038762/job/18373329780#step:11:1724
  - PyTorch+CUDA11.8 and CUDA 12.1 have a working FFT on 4090
  - Check cuda 1.12.1 update issue: "`torch.linalg.eigh` fails on GPU" #94772 with small wheels
- `torch.compile`
  - Basic test works (for example, see the test mentioned in "Search for `libdevice` relative to shared library" triton-lang/triton#1176) in the PyTorch docker container
  - `torch.compile` produces a binary which can be used on 3090
  - `torch.compile` raises an error if used on Windows. Test: https://github.com/pytorch/builder/actions/runs/6760038762/job/18373336797#step:9:12915
  - `torch.compile` works on 3.11. Test: https://github.com/pytorch/builder/actions/runs/6760038762/job/18373330949#step:11:16049
- MPS
  - Resnet is usable out of the box (i.e. https://github.com/pytorch/builder/blob/main/test/smoke_test/smoke_test.py passes for the MPS device). Test: https://github.com/pytorch/builder/actions/runs/6760038762/job/18373333594#step:9:260
- Validate docker release builds
Alternatives
No response
Additional context
No response