8000 Release 2.4 windows wheels are not compatible with numpy 2.0 · Issue #131668 · pytorch/pytorch · GitHub
Release 2.4 windows wheels are not compatible with numpy 2.0 #131668

Closed
atalman opened this issue Jul 24, 2024 · 9 comments
Assignees
Labels
ciflow/binaries Trigger all binary build and upload jobs on the PR high priority triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Milestone

Comments

@atalman
Contributor
atalman commented Jul 24, 2024

🐛 Describe the bug

When NumPy is not preinstalled on the machine, the latest torch install produces the following: https://github.com/pytorch/builder/actions/runs/10080038080/job/27868882276#step:9:438

++ python ./test/smoke_test/smoke_test.py

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

then later:
std::thread::hardware_concurrency() : 16
Environment variables:
	OMP_NUM_THREADS : [not set]
	MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP

Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu
  File "C:\actions-runner\_work\builder\builder\pytorch\builder\test\smoke_test\smoke_test.py", line 330, in main
    test_numpy()
  File "C:\actions-runner\_work\builder\builder\pytorch\builder\test\smoke_test\smoke_test.py", line 72, in test_numpy
    torch.tensor(x)
RuntimeError: Could not infer dtype of numpy.int64
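The failing step above can be reduced to a short interop call. This is a hedged sketch based on the traceback, not the actual smoke_test.py contents; it assumes torch and numpy are installed:

```python
# Minimal repro sketch: converting a NumPy int64 array to a torch tensor
# crosses the NumPy C ABI, and raised
# "RuntimeError: Could not infer dtype of numpy.int64"
# on the broken torch 2.4.0 Windows wheels with numpy 2.x installed.
import numpy as np
import torch

x = np.arange(5, dtype=np.int64)
t = torch.tensor(x)  # raised RuntimeError on the affected wheels
assert t.dtype == torch.int64
```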

Cause:
The numpy 2.0.1 release (https://pypi.org/project/numpy/2.0.1/) is not compatible with these torch wheels.
The numpy dependency comes from the torchvision METADATA:

Requires-Dist: numpy
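The mitigation described later in this thread (publishing Windows binaries with a numpy<2 constraint) amounts to tightening this metadata line. As an illustration only (not the exact metadata torchvision shipped), a platform-scoped constraint in PEP 508 marker syntax could look like:

```
Requires-Dist: numpy <2 ; sys_platform == "win32"
```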

Versions

2.4.0 - torchvision 0.19

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim

@atalman atalman added ciflow/binaries Trigger all binary build and upload jobs on the PR high priority labels Jul 24, 2024
@atalman atalman changed the title Release 2.4 not compatible with numpy 2.0 Release 2.4 windows wheels are not compatible with numpy 2.0 Jul 24, 2024
@atalman
Contributor Author
atalman commented Jul 24, 2024

Looks like Linux and macOS are not affected:

pip install torch torchvision
Collecting torch
  Downloading torch-2.4.0-cp310-cp310-manylinux1_x86_64.whl.metadata (26 kB)
Collecting torchvision
  Downloading torchvision-0.19.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.0 kB)
Collecting filelock (from torch)
  Using cached filelock-3.15.4-py3-none-any.whl.metadata (2.9 kB)
Collecting typing-extensions>=4.8.0 (from torch)
  Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting sympy (from torch)
  Downloading sympy-1.13.1-py3-none-any.whl.metadata (12 kB)
Collecting networkx (from torch)
  Using cached networkx-3.3-py3-none-any.whl.metadata (5.1 kB)
Collecting jinja2 (from torch)
  Using cached jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting fsspec (from torch)
  Using cached fsspec-2024.6.1-py3-none-any.whl.metadata (11 kB)
Collecting nvidia-cuda-nvrtc-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-cupti-cu12==12.1.105 (from torch)
  Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu12==9.1.0.70 (from torch)
  Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu12==12.1.3.1 (from torch)
  Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cufft-cu12==11.0.2.54 (from torch)
  Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu12==10.3.2.106 (from torch)
  Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cusolver-cu12==11.4.5.107 (from torch)
  Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu12==12.1.0.106 (from torch)
  Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nccl-cu12==2.20.5 (from torch)
  Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl.metadata (1.8 kB)
Collecting nvidia-nvtx-cu12==12.1.105 (from torch)
  Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl.metadata (1.7 kB)
Collecting triton==3.0.0 (from torch)
  Downloading triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.3 kB)
Collecting nvidia-nvjitlink-cu12 (from nvidia-cusolver-cu12==11.4.5.107->torch)
  Using cached nvidia_nvjitlink_cu12-12.5.82-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting numpy (from torchvision)
  Downloading numpy-2.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 60.9/60.9 kB 4.8 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0 (from torchvision)
  Downloading pillow-10.4.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (9.2 kB)
Collecting MarkupSafe>=2.0 (from jinja2->torch)
  Using cached MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting mpmath<1.4,>=1.1.0 (from sympy->torch)
  Using cached mpmath-1.3.0-py3-none-any.whl.metadata (8.6 kB)
Downloading torch-2.4.0-cp310-cp310-manylinux1_x86_64.whl (797.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 797.2/797.2 MB 3.6 MB/s eta 0:00:00
Using cached nvidia_cublas_cu12-12.1.3.1-py3-none-manylinux1_x86_64.whl (410.6 MB)
Using cached nvidia_cuda_cupti_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (14.1 MB)
Using cached nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (23.7 MB)
Using cached nvidia_cuda_runtime_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (823 kB)
Downloading nvidia_cudnn_cu12-9.1.0.70-py3-none-manylinux2014_x86_64.whl (664.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 664.8/664.8 MB 4.7 MB/s eta 0:00:00
Using cached nvidia_cufft_cu12-11.0.2.54-py3-none-manylinux1_x86_64.whl (121.6 MB)
Using cached nvidia_curand_cu12-10.3.2.106-py3-none-manylinux1_x86_64.whl (56.5 MB)
Using cached nvidia_cusolver_cu12-11.4.5.107-py3-none-manylinux1_x86_64.whl (124.2 MB)
Using cached nvidia_cusparse_cu12-12.1.0.106-py3-none-manylinux1_x86_64.whl (196.0 MB)
Using cached nvidia_nccl_cu12-2.20.5-py3-none-manylinux2014_x86_64.whl (176.2 MB)
Using cached nvidia_nvtx_cu12-12.1.105-py3-none-manylinux1_x86_64.whl (99 kB)
Downloading triton-3.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (209.4 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 209.4/209.4 MB 18.3 MB/s eta 0:00:00
Downloading torchvision-0.19.0-cp310-cp310-manylinux1_x86_64.whl (7.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.0/7.0 MB 127.4 MB/s eta 0:00:00
Downloading pillow-10.4.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.5/4.5 MB 120.4 MB/s eta 0:00:00
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Using cached filelock-3.15.4-py3-none-any.whl (16 kB)
Using cached fsspec-2024.6.1-py3-none-any.whl (177 kB)
Using cached jinja2-3.1.4-py3-none-any.whl (133 kB)
Using cached networkx-3.3-py3-none-any.whl (1.7 MB)
Downloading numpy-2.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (19.5 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.5/19.5 MB 96.4 MB/s eta 0:00:00
Downloading sympy-1.13.1-py3-none-any.whl (6.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 120.9 MB/s eta 0:00:00
Using cached MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Using cached mpmath-1.3.0-py3-none-any.whl (536 kB)
Using cached nvidia_nvjitlink_cu12-12.5.82-py3-none-manylinux2014_x86_64.whl (21.3 MB)
Installing collected packages: mpmath, typing-extensions, sympy, pillow, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, numpy, networkx, MarkupSafe, fsspec, filelock, triton, nvidia-cusparse-cu12, nvidia-cudnn-cu12, jinja2, nvidia-cusolver-cu12, torch, torchvision


Successfully installed MarkupSafe-2.1.5 filelock-3.15.4 fsspec-2024.6.1 jinja2-3.1.4 mpmath-1.3.0 networkx-3.3 numpy-2.0.1 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-9.1.0.70 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.20.5 nvidia-nvjitlink-cu12-12.5.82 nvidia-nvtx-cu12-12.1.105 pillow-10.4.0 sympy-1.13.1 torch-2.4.0 torchvision-0.19.0 triton-3.0.0 typing-extensions-4.12.2
(rpy310) atalman@ip-10-200-84-248:~/temo$ cd builder/
(rpy310) atalman@ip-10-200-84-248:~/temo/builder$ cd test/smoke_test/
(rpy310) atalman@ip-10-200-84-248:~/temo/builder/test/smoke_test$ python smoke_test.py --package torchonly
torch: 2.4.0+cu121
ATen/Parallel:
        at::get_num_threads() : 48
        at::get_num_interop_threads() : 48
OpenMP 201511 (a.k.a. OpenMP 4.5)
        omp_get_max_threads() : 48
Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
        mkl_get_max_threads() : 48
Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
std::thread::hardware_concurrency() : 96
Environment variables:
        OMP_NUM_THREADS : [not set]
        MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP

Skip version check for channel None as stable version is None
Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu
Testing smoke_test_compile for cuda and torch.float16
Testing smoke_test_compile for cuda and torch.float32
Testing smoke_test_compile for cuda and torch.float64
Testing smoke_test_compile with mode 'max-autotune' for torch.float32
AUTOTUNE convolution(64x32x26x26, 64x32x3x3)
  triton_convolution_12 0.0512 ms 100.0%
  triton_convolution_9 0.0522 ms 98.0%
  triton_convolution_10 0.0553 ms 92.6%
  triton_convolution_7 0.0563 ms 90.9%
  triton_convolution_11 0.0594 ms 86.2%
  convolution 0.0737 ms 69.4%
  triton_convolution_6 0.0748 ms 68.5%
  triton_convolution_8 0.1526 ms 33.6%
SingleProcess AUTOTUNE benchmarking takes 3.5510 seconds and 0.4397 seconds precompiling
AUTOTUNE convolution(64x1x28x28, 32x1x3x3)
  convolution 0.0205 ms 100.0%
  triton_convolution_1 0.0215 ms 95.2%
  triton_convolution_5 0.0225 ms 90.9%
  triton_convolution_2 0.0236 ms 87.0%
  triton_convolution_0 0.0246 ms 83.3%
  triton_convolution_3 0.0266 ms 76.9%
  triton_convolution_4 0.0266 ms 76.9%
SingleProcess AUTOTUNE benchmarking takes 0.7732 seconds and 0.0005 seconds precompiling
AUTOTUNE addmm(64x1, 64x9216, 9216x1)
  addmm 0.0195 ms 100.0%
  triton_mm_17 0.0543 ms 35.8%
  triton_mm_23 0.0543 ms 35.8%
  triton_mm_24 0.0573 ms 33.9%
  triton_mm_15 0.0645 ms 30.2%
  triton_mm_16 0.0686 ms 28.4%
  triton_mm_20 0.0809 ms 24.1%
  triton_mm_14 0.0840 ms 23.2%
  triton_mm_22 0.0860 ms 22.6%
  triton_mm_21 0.1219 ms 16.0%

Validation workflows: https://github.com/pytorch/builder/actions/runs/10080038080

@malfet malfet added this to the 2.4.1 milestone Jul 24, 2024
@rgommers
Collaborator

From the linked CI log it indeed seems likely that the 2.4.0 torch wheels on PyPI were built against numpy 1.x rather than 2.0.

The CI job confuses the matter slightly because:

  1. numpy 1.26.4 is installed with conda
  2. pip3 install --force-reinstall torch torchvision torchaudio ends up installing numpy 2.0.1 from PyPI

However, that shouldn't affect the result as far as I can tell. A plain pip install torch numpy on Windows in a clean environment should reproduce this problem.

@atalman
Contributor Author
atalman commented Jul 24, 2024

To mitigate the issue, torchvision Windows binaries were published with a numpy<2 constraint:
https://pypi.org/project/torchvision/#files

All the tests are successful now: https://github.com/pytorch/builder/actions/runs/10080038080/job/27877807547

@nawfalhasan

@atalman I don't have torchvision. I was getting the error I quoted above with plain torch.

@Xiao-Chenguang

@atalman I don't have torchvision. I was getting the error I quoted above with plain torch.

same error here.

@atalman atalman self-assigned this Jul 29, 2024
@atalman
Contributor Author
atalman commented Jul 29, 2024

Todo: Add a smoke test to https://github.com/pytorch/builder/blob/main/test/smoke_test/smoke_test.py covering both numpy 1.x and 2.x installs.
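A hedged sketch of what such a check could look like (function names are illustrative, not the actual smoke_test.py additions): exercise the NumPy/torch interop in both directions, so the same test can run once in a numpy<2 environment and once in a numpy>=2 environment:

```python
# Sketch of a NumPy-interop smoke test (illustrative names; assumes torch
# and numpy are installed). It fails loudly on a wheel built against an
# incompatible NumPy ABI, whichever NumPy major is present.
import numpy as np
import torch

def smoke_test_numpy_interop():
    major = int(np.__version__.split(".")[0])
    x = np.arange(4, dtype=np.int64)
    t = torch.tensor(x)          # the call that failed on the broken wheels
    assert t.dtype == torch.int64
    assert np.array_equal(t.numpy(), x)  # the round-trip also crosses the ABI
    print(f"NumPy {major}.x / torch interop OK")

smoke_test_numpy_interop()
```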

@Ark-kun
Ark-kun commented Aug 13, 2024

JFYI: For older PyTorch versions (e.g. 2.3.1) this is still an issue (or at least a warning...).

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):
  File "exllamav2\exllamav2\ext.py", line 291, in <module>
    none_tensor = torch.empty((1, 1), device = "meta")
exllamav2\exllamav2\ext.py:291: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
  none_tensor = torch.empty((1, 1), device = "meta")

Such breakages inevitably happen when a package (e.g. torch) does not specify an upper bound for a dependency (e.g. numpy). Eventually an incompatible version is released and all existing torch releases become broken.
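The point above can be illustrated with the `packaging` library (the `>=1.21,<2` cap here is hypothetical, not torch's actual specifier; the version number is from this issue):

```python
# Illustration (not torch's actual metadata): an unbounded "numpy"
# requirement admits the incompatible 2.0.1 release, while an upper
# bound would have excluded it at dependency-resolution time.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

unbounded = SpecifierSet("")         # what a bare "Requires-Dist: numpy" allows
capped = SpecifierSet(">=1.21,<2")   # hypothetical upper-bounded requirement

v = Version("2.0.1")
assert v in unbounded                # 2.0.1 satisfies the open range
assert v not in capped               # the cap would have blocked it
```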

@atalman
Contributor Author
atalman commented Aug 14, 2024

This issue is fixed in nightly build by:
pytorch/builder#1945

Test passing:

conda list | grep numpy
numpy                     2.0.0                    pypi_0    pypi

 python smoke_test.py --package torchonly
torch: 2.5.0.dev20240814+cpu
ATen/Parallel:
        at::get_num_threads() : 8
        at::get_num_interop_threads() : 8
OpenMP 2019
        omp_get_max_threads() : 8
Intel(R) oneAPI Math Kernel Library Version 2024.2.1-Product Build 20240722 for Intel(R) 64 architecture applications
        mkl_get_max_threads() : 8
Intel(R) MKL-DNN v3.4.2 (Git Hash 1137e04ec0b5251ca2b4400a4fd3c667ce843d67)
std::thread::hardware_concurrency() : 16
Environment variables:
        OMP_NUM_THREADS : [not set]
        MKL_NUM_THREADS : [not set]
ATen parallel backend: OpenMP

Skip version check for channel None as stable version is None
Testing smoke_test_conv2d
Testing smoke_test_linalg on cpu

@atalman
Contributor Author
atalman commented Aug 23, 2024

Fixed in rc 2.4.1. Validations with numpy 2.0.0 : https://github.com/pytorch/builder/actions/runs/10528142201/job/29172994583#step:9:486

@atalman atalman closed this as completed Aug 23, 2024
telamonian added a commit to Comfy-Org/comfy-cli that referenced this issue Aug 30, 2024
- can be removed once pytorch 2.4.1 releases
  - see: pytorch/pytorch#131668 (comment)
jamesobutler added a commit to jamesobutler/MONAI that referenced this issue Feb 26, 2025
PyTorch added support for numpy 2 starting with PyTorch 2.3.0. This allows for numpy 1 or numpy 2 to be used with torch>=2.3.0.

A special case is being handled on Windows as PyTorch Windows binaries had compatibility issues with numpy 2 that were fixed in torch 2.4.1 (see pytorch/pytorch#131668 (comment)).

Signed-off-by: James Butler <james.butler@revvity.com>
ericspod pushed a commit to Project-MONAI/MONAI that referenced this issue Mar 4, 2025
…able numpy 2 compatibility (#8368)

This is a follow-up to the comments made in
#8296 (comment).

### Description

This bumps the minimum required `torch` version from 1.13.1 to 2.2.0 in
the first commit.

See GHSA-5pcm-hx3q-hm94 and
GHSA-pg7h-5qx3-wjr3 for more details
regarding the "High" severity scoring.

- https://nvd.nist.gov/vuln/detail/CVE-2024-31580
- https://nvd.nist.gov/vuln/detail/CVE-2024-31583

Additionally, PyTorch added support for numpy 2 starting with PyTorch
2.3.0. The second commit in this PR allows for numpy 1 or numpy 2 to be
used with torch>=2.3.0. I have included this commit in this PR as
upgrading to torch 2.2 means you might as well update to 2.3 to get the
numpy 2 compatibility.

A special case is being handled on Windows as PyTorch Windows binaries
had compatibility issues with numpy 2 that were fixed in torch 2.4.1
(see
pytorch/pytorch#131668 (comment)).

Maintainers will need to update the required status checks for the
[`dev`](https://github.com/Project-MONAI/MONAI/tree/dev) branch to:
- Remove min-dep-pytorch (2.0.1)

### Types of changes
<!--- Put an `x` in all the boxes that apply, and remove the not
applicable items -->
- [X] Breaking change (fix or new feature that would cause existing
functionality to change).
- [ ] Integration tests passed locally by running `./runtests.sh -f -u
--net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick
--unittests --disttests`.

---------

Signed-off-by: James Butler <james.butler@revvity.com>
Can-Zhao pushed a commit to Can-Zhao/MONAI that referenced this issue Mar 10, 2025
…able numpy 2 compatibility (Project-MONAI#8368)