🐛 Describe the bug
TunableOp ignores the list of GEMMs in tunableop_untuned.csv during offline tuning. Instead of tuning only the GEMMs listed in the CSV file, it discovers and tunes all GEMMs in the workload.
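For reference, the offline-tuning entry point that should consume this file is torch.cuda.tunable.tune_gemm_in_file (also shown commented out in the workload script below). A minimal sketch of the intended usage; the enable/tuning_enable/write_file calls around it are assumptions about the usual prerequisites, not taken from this report:

import torch
import torch.cuda.tunable as tunable

# Assumed prerequisites: turn TunableOp and tuning on before offline tuning.
tunable.enable(True)
tunable.tuning_enable(True)

# Expected: tune only the GEMM shapes listed in the untuned CSV.
# This report says the file's contents are ignored instead.
tunable.tune_gemm_in_file("tunableop_untuned0.csv")

# Persist the winning solutions (default results file, e.g. tunableop_results0.csv).
tunable.write_file()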
Steps to reproduce
- Create a simple workload script (test_tunableop.py):
import torch
import torch.cuda.tunable as tunable

# the untuned CSV is ignored even if the file is loaded explicitly in the code
# tunable.tune_gemm_in_file("tunableop_untuned0.csv")

def mm(M, N, K):
    torch.manual_seed(42)
    A = torch.randn(M, K).cuda()
    B = torch.randn(K, N).cuda()
    C = A @ B

if __name__ == "__main__":
    mm(256, 256, 512)
    mm(512, 1024, 2048)
- Run the following bash script:
#!/bin/bash
set -e
# Collect GEMMs
echo "===== Collecting gemms ====="
PYTORCH_TUNABLEOP_ENABLED=1 \
PYTORCH_TUNABLEOP_VERBOSE=2 \
PYTORCH_TUNABLEOP_ROCBLAS_ENABLED=0 \
PYTORCH_TUNABLEOP_TUNING=0 \
PYTORCH_TUNABLEOP_RECORD_UNTUNED=1 \
python test_tunableop.py
: ' Expected output:
===== Collecting gemms =====
no result, using default
no result, using default
'
# Tune GEMMs
echo "===== Tuning gemms ====="
PYTORCH_TUNABLEOP_ENABLED=1 \
PYTORCH_TUNABLEOP_VERBOSE=2 \
PYTORCH_TUNABLEOP_ROCBLAS_ENABLED=0 \
PYTORCH_TUNABLEOP_TUNING=1 \
PYTORCH_TUNABLEOP_RECORD_UNTUNED=0 \
python test_tunableop.py
: ' output:
===== Tuning gemms =====
reading tuning results from tunableop_results0.csv
could not open tunableop_results0.csv for reading tuning results
finding fastest for GemmTunableOp_float_NN(nn_256_256_512) out of 12613 candidates
Rotating buffer 4 MiB. Needed Size: 1 MiB. Needed number of param copies: 4
└──found fastest for GemmTunableOp_float_NN(nn_256_256_512) Gemm_Hipblaslt_NN_274
GemmTunableOp_float_NN(nn_256_256_512) -> Gemm_Hipblaslt_NN_274,0.0157312
finding fastest for GemmTunableOp_float_NN(nn_1024_512_2048) out of 12613 candidates
Rotating buffer 4 MiB. Needed Size: 14 MiB. Needed number of param copies: 1
└──found fastest for GemmTunableOp_float_NN(nn_1024_512_2048) Gemm_Hipblaslt_NN_274
GemmTunableOp_float_NN(nn_1024_512_2048) -> Gemm_Hipblaslt_NN_274,0.037102
'
# Modify untuned csv and rename results
echo "===== Modifying csvs to retain a subset untuned gemms ====="
cp tunableop_untuned0.csv tunableop_untuned0.csv.bak
sed -i '2,$d' "tunableop_untuned0.csv"
mv tunableop_results0.csv tunableop_results0.csv.bak
: ' output:
===== Modifying CSVs to retain a subset of untuned GEMMs =====
'
# rerun tuning
echo "===== Rerunning tuning with subset of gemms ====="
PYTORCH_TUNABLEOP_ENABLED=1 \
PYTORCH_TUNABLEOP_VERBOSE=2 \
PYTORCH_TUNABLEOP_ROCBLAS_ENABLED=0 \
PYTORCH_TUNABLEOP_TUNING=1 \
PYTORCH_TUNABLEOP_RECORD_UNTUNED=0 \
python test_tunableop.py
: ' Actual output (both GEMMs are still tuned even though tunableop_untuned0.csv was trimmed to a single entry, i.e. the file contents are ignored):
===== Rerunning tuning with subset of gemms =====
reading tuning results from tunableop_results0.csv
could not open tunableop_results0.csv for reading tuning results
finding fastest for GemmTunableOp_float_NN(nn_256_256_512) out of 12613 candidates
Rotating buffer 4 MiB. Needed Size: 1 MiB. Needed number of param copies: 4
└──found fastest for GemmTunableOp_float_NN(nn_256_256_512) Gemm_Hipblaslt_NN_274
GemmTunableOp_float_NN(nn_256_256_512) -> Gemm_Hipblaslt_NN_274,0.0157388
finding fastest for GemmTunableOp_float_NN(nn_1024_512_2048) out of 12613 candidates
Rotating buffer 4 MiB. Needed Size: 14 MiB. Needed number of param copies: 1
└──found fastest for GemmTunableOp_float_NN(nn_1024_512_2048) Gemm_Hipblaslt_NN_274
GemmTunableOp_float_NN(nn_1024_512_2048) -> Gemm_Hipblaslt_NN_274,0.0371637
'
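For completeness, a minimal sketch of the same collect-then-tune flow driven from the Python API rather than environment variables. This assumes the torch.cuda.tunable toggles (enable, tuning_enable, record_untuned_enable, tune_gemm_in_file) mirror the PYTORCH_TUNABLEOP_* variables above; the rocBLAS on/off switch is left to the environment variable, and the two passes are shown in one process only to keep the sketch short (the repro above runs them as separate processes):

import torch
import torch.cuda.tunable as tunable

def mm(M, N, K):
    torch.manual_seed(42)
    A = torch.randn(M, K).cuda()
    B = torch.randn(K, N).cuda()
    return A @ B

# Pass 1: collection only -- record untuned GEMM shapes, no tuning
# (equivalent to PYTORCH_TUNABLEOP_TUNING=0, PYTORCH_TUNABLEOP_RECORD_UNTUNED=1).
tunable.enable(True)
tunable.tuning_enable(False)
tunable.record_untuned_enable(True)
mm(256, 256, 512)
mm(512, 1024, 2048)

# Pass 2: offline tuning -- should tune only the entries left in the untuned CSV
# (equivalent to PYTORCH_TUNABLEOP_TUNING=1, PYTORCH_TUNABLEOP_RECORD_UNTUNED=0).
tunable.record_untuned_enable(False)
tunable.tuning_enable(True)
tunable.tune_gemm_in_file("tunableop_untuned0.csv")
tunable.write_file()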
Versions
Collecting environment information...
PyTorch version: 2.7.0a0+git6374332
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42131-fa1d09cbd
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.0 24455 f24aa3b4a91f6ee2fcd15629ba0b49fa545d8d6b)
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-116-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42131
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9575F 64-Core Processor
CPU family: 26
Model: 2
Thread(s) per core: 1
Core(s) per socket: 64
Socket(s): 2
Stepping: 1
Frequency boost: enabled
CPU max MHz: 5008.0068
CPU min MHz: 1500.0000
BogoMIPS: 6599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d
Virtualization: AMD-V
L1d cache: 6 MiB (128 instances)
L1i cache: 4 MiB (128 instances)
L2 cache: 128 MiB (128 instances)
L3 cache: 512 MiB (16 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-63
NUMA node1 CPU(s): 64-127
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] optree==0.14.0
[pip3] torch==2.7.0a0+git6374332
[pip3] torchao==0.9.0.dev20250214+rocm6.3
[pip3] torchtune==0.0.0
[pip3] torchvision==0.22.0a0+867521e
[pip3] triton==3.1.0
[conda] No relevant packages
cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd @jianyuh @nikitaved @mruberry @walterddr @xwang233 @lezcano