10000 What version or package name will be used in aarch64 release for 2.6 on pypi? · Issue #138971 · pytorch/pytorch · GitHub
@dilililiwhy

Description


🐛 Describe the bug

A +cpu suffix was added to the aarch64 (CPU) nightly package in #138588.

Before

# https://download.pytorch.org/whl/nightly/cpu/torch-2.6.0.dev20241022-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
>>> torch.__version__
'2.6.0.dev20241022'

After

# https://download.pytorch.org/whl/nightly/cpu/torch-2.6.0.dev20241023%2Bcpu-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
>>> torch.__version__
'2.6.0.dev20241023+cpu'

Since the aarch64 (CPU) package did not carry a +cpu suffix on PyPI in previous releases, what version or package name will be used for the 2.6 aarch64 release on PyPI?

Versions

PyTorch version: 2.6.0.dev20241023+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (AltArch) (aarch64)
GCC version: (GCC) 10.2.1 20210130 (Red Hat 10.2.1-11)
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.17

Python version: 3.11.9 (main, Jul 15 2024, 06:10:42) [GCC 10.2.1 20210130 (Red Hat 10.2.1-11)] (64-bit runtime)
Python platform: Linux-4.18.0-80.7.2.el7.aarch64-aarch64-with-glibc2.17
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: aarch64
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Thread(s) per core: 1
Core(s) per socket: 16
Socket(s): 2
NUMA node(s): 2
Model: 0
CPU max MHz: 2600.0000
CPU min MHz: 2600.0000
BogoMIPS: 200.00
L1d cache: 64K
L1i cache: 64K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm

Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.6.0.dev20241023+cpu
[conda] Could not collect

cc @ezyang @gchanan @zou3519 @kadeng @msaroufim

Metadata

Labels
high priority
module: regression — It used to work, and now it doesn't
oncall: releng — In support of CI and Release Engineering
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Status

Done