Weird dataloader performance degradation caused by torch and numpy import order #101188


Open · hubert0527 opened this issue May 11, 2023 · 7 comments
Labels
module: openmp · Related to OpenMP (omp) support in PyTorch
triaged · This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

hubert0527 commented May 11, 2023

🐛 Describe the bug

Hi,

I recently noticed a weird behavior in PyTorch: the import order of torch and numpy can have a significant impact on dataloader performance. In short (see the complete example script below):

# Setting A: faster. 100 iterations take 91 seconds. Average load time 0.035.
import torch
import numpy as np

# Setting B: slower. 100 iterations take 158 seconds. Average load time ~0.45.
import numpy as np
import torch

It seems the performance is determined by the order in which the two packages are first imported anywhere in the process: for example, importing them in the main script in B order and in the dataloader script in A order still ends up with B performance.
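A quick way to see what actually changes with the import order (an editor's sketch, not part of the original report; Linux-only) is to list the OpenMP runtimes the process ends up loading:

# Sketch: print the OpenMP runtimes loaded for a given import order.
# Swap the two imports below to compare setting A against setting B.
import torch
import numpy as np

with open("/proc/self/maps") as f:
    loaded = {line.rsplit("/", 1)[-1].strip() for line in f if ".so" in line}

# libgomp is the GNU OpenMP runtime, libiomp the Intel one; a different
# set per import order would support the OpenMP theory in the comments below.
print(sorted(name for name in loaded if "omp" in name))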

Reproduction

I used AWS EC2 machines (specifically, p4de.24xlarge instances in zone us-west-2). I am not sure whether this is reproducible elsewhere.

Step 1: Dockerfile

FROM nvidia/cuda:11.3.0-devel-ubuntu20.04

# Fix NV docker problem
RUN apt-key adv --fetch-keys https://developer.download.nvidia.cn/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu2004/x86_64/7fa2af80.pub

# Install handy tools
RUN apt-get update && apt-get install vim htop tmux sudo git wget -y && rm -rf /var/lib/apt/lists/*

# Create user with sudo privilege
RUN addgroup --gid 1000 ubuntu
RUN adduser --disabled-password --gecos '' --uid 1000 --gid 1000 ubuntu 
RUN adduser ubuntu sudo
RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
USER ubuntu
USER 1000:1000
RUN sudo chown ubuntu:ubuntu /home/ubuntu/ # Actually, not sure if this is even needed

# Conda
ENV PATH="/home/ubuntu/miniconda3/bin:${PATH}"
ARG PATH="/home/ubuntu/miniconda3/bin:${PATH}"
RUN sudo apt-get install -y wget
RUN cd /home/ubuntu/ \
    && wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
    && bash Miniconda3-latest-Linux-x86_64.sh -b -p /home/ubuntu/miniconda3/ \
    && rm -f Miniconda3-latest-Linux-x86_64.sh 
RUN conda install python=3.9 pytorch torchvision torchaudio pytorch-cuda=11.8 xformers -c pytorch -c nvidia -c xformers -y
RUN pip install tqdm
RUN conda clean --all -y
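(Assumed usage, not spelled out in the report: build the image with docker build -t repro . and enter it with docker run --gpus all -it repro bash before continuing with Step 2.)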

Step 2: After getting into the environment

Add these lines to ~/.bashrc, then source ~/.bashrc to initialize conda:

conda_root="/home/ubuntu/miniconda3/"
conda_setup_bin="${conda_root}bin/conda"
__conda_setup="$($conda_setup_bin 'shell.bash' 'hook')"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "${conda_root}etc/profile.d/conda.sh" ]; then
        . "${conda_root}etc/profile.d/conda.sh"
    else
        export PATH="${conda_root}bin:$PATH"
    fi
fi
unset __conda_setup

Step 3: Example script

# Switching these two lines will get a different performance
import torch
import numpy as np

from torch.utils.data import Dataset
import time

class MyDataset(Dataset):
    def __getitem__(self, i):
        st = time.time()
        data = {k: np.random.rand(3, 512, 512) for k in range(6)} # it seems only numpy has the issue
        print(" [*] Worker load time {:.4f}".format(time.time()-st))
        return data

    def __len__(self):
        return 1000000

if __name__ == "__main__":
    from torch.utils.data import DataLoader
    from tqdm import tqdm

    dataloader = DataLoader(MyDataset())
    for data in tqdm(dataloader):
        pass

Versions

Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31

Python version: 3.9.16 (main, Mar 8 2023, 14:00:05) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.228-132.418.amzn2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB

Nvidia driver version: 470.161.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping: 7
CPU MHz: 1916.685
BogoMIPS: 5999.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB
L1i cache: 1.5 MiB
L2 cache: 48 MiB
L3 cache: 71.5 MiB
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Itlb multihit: KVM: Vulnerable
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Vulnerable
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke

Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.0
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.6 py39h417a72b_1
[conda] mkl_random 1.2.2 py39h417a72b_1
[conda] numpy 1.24.3 py39hf6e8229_1
[conda] numpy-base 1.24.3 py39h060ed82_1
[conda] pytorch 2.0.0 py3.9_cuda11.8_cudnn8.7.0_0 pytorch
[conda] pytorch-cuda 11.8 h7e8668a_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchaudio 2.0.0 py39_cu118 pytorch
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision 0.15.0 py39_cu118 pytorch

cc @ssnl @VitalyFedyunin @ejguan @NivekT @dzhulgakov

vadimkantorov (Contributor) commented

Import order can change which OpenMP library gets loaded, and I think some OpenMP libraries mess up CPU core affinities...
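If that is what happens here, it should be directly observable; a minimal probe (an editor's sketch, Linux-only, not from this comment):

# Report how many CPUs each DataLoader worker is allowed to run on.
# A value of 1 would mean the worker is pinned to a single core.
import os
from torch.utils.data import DataLoader, Dataset

class AffinityProbe(Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, i):
        return len(os.sched_getaffinity(0))  # size of this worker's affinity mask

if __name__ == "__main__":
    for n in DataLoader(AffinityProbe(), num_workers=2):
        print("worker may run on", int(n), "CPUs")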

@cpuhrsch cpuhrsch added module: dataloader Related to torch.utils.data.DataLoader and Sampler triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels May 12, 2023
@ejguan ejguan added module: openmp Related to OpenMP (omp) support in PyTorch and removed module: dataloader Related to torch.utils.data.DataLoader and Sampler labels May 12, 2023
Andredance commented

I faced the same problem, and it is reproducible with the code and environment provided by @hubert0527. If you specify num_workers>0 for the DataLoader and numpy is imported first, the script uses only 1 CPU core (even with, say, num_workers=30 and 30 available CPU cores). I solved this by downgrading the mkl library from 2023.1.0 (something seems wrong with the latest mkl release and the DataLoader) to 2021.4.0. Another possible fix is to put the pytorch import above the first numpy import. I first hit this with pytorch 1.11, but the problem persists with pytorch 2.0.
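A commonly suggested workaround for MKL/OpenMP clashes, which may or may not apply to this exact bug (an editor's sketch, untested here), is to pin MKL's threading layer before either library is first imported:

# Force MKL onto the GNU OpenMP layer so numpy and torch share one runtime
# regardless of import order. Must run before numpy/torch are first imported.
import os
os.environ["MKL_THREADING_LAYER"] = "GNU"

import numpy as np
import torch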

To be a little bit more specific, here is my setup:
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.16.3
Libc version: glibc-2.31

Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA A10
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 57 bits virtual
CPU(s): 30
On-line CPU(s) list: 0-29
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 30
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping: 6
CPU MHz: 2593.954
BogoMIPS: 5187.90
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 960 KiB
L1i cache: 960 KiB
L2 cache: 120 MiB
L3 cache: 480 MiB
NUMA node0 CPU(s): 0-29
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.24.3
[pip3] torch==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
[conda] pytorch 2.0.0 py3.10_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch

TjuJianyu commented

I faced the same problem. Interesting.

gau-nernst commented Sep 12, 2023

I'm getting the same problem too. It seems to be related to the latest Intel libraries (2023.1); on another machine with older versions of the Intel libraries (2021.4), I don't see it.

For those who need it, I used the command below to install older versions of the Intel libraries in a fresh conda environment. For some reason, trying to install mkl=2021.* before or after the pytorch installation results in strange/conflicting packages.

conda install pytorch pytorch-cuda=11.8 "mkl=2021.*" -c pytorch -c nvidia
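To confirm which MKL build the environment actually resolved to, something like this should work (an editor's sketch; assumes the mkl-service package, which these conda environments include):

import mkl                       # provided by the mkl-service package
print(mkl.get_version_string())  # should report a 2021.x build after the pin

import numpy as np
np.show_config()                 # prints BLAS/LAPACK linkage, should mention mkl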

MajorDavidZhang commented

Same problem when using OpenCLIP: the average usage per CPU core is about 2% when using DDP with num_workers>0 in the PyTorch DataLoader. After deleting 'import numpy as np', CPU usage becomes normal (nearly 100%).
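If the workers are indeed pinned, one possible mitigation (an editor's sketch, Linux-only, not verified against this bug) is to widen each worker's affinity mask as it starts:

# Hypothetical mitigation: reset CPU affinity in worker_init_fn, in case an
# OpenMP runtime pinned the parent process to a single core before the fork.
import os
from torch.utils.data import DataLoader, Dataset

class Noop(Dataset):  # stand-in for the real dataset
    def __len__(self):
        return 8

    def __getitem__(self, i):
        return i

def widen_affinity(worker_id):
    os.sched_setaffinity(0, range(os.cpu_count()))  # allow all CPUs again

loader = DataLoader(Noop(), num_workers=4, worker_init_fn=widen_affinity)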

ice-tong commented

I encountered the same problem and fixed it by reinstalling numpy.

daviddwlee84 commented

See also: #37377 (comment), #67011
