Query Regarding Memory Release API in AOTInductor for PyTorch · Issue #153363 · pytorch/pytorch · GitHub

Closed
sujuyu opened this issue May 12, 2025 · 3 comments
Labels: export-triaged · module: aotinductor · oncall: export · oncall: pt2

Comments

sujuyu commented May 12, 2025

🐛 Describe the bug

Hello,
I am currently working with a C++ service that uses a dynamically compiled library produced by AOTInductor. In my implementation, I use multiple CUDA streams and set num_models > 0, as shown in the example below:

_aotiModelContainerRunner = std::make_shared<torch::inductor::AOTIModelContainerRunnerCuda>(
    modelFilePath,
    _aotModelParallelNum,  // num_models: number of model instances kept in the container
    "cuda",
    FileUtil::getParentDir(_modelConfig.aotModelFilePath));  // directory holding the compiled kernel (cubin) files
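
For reference, a minimal sketch of how a request ends up invoking the runner (handleRequest and the stream-from-pool pattern here are illustrative, not the exact service code, and how run() interacts with the current stream may depend on the PyTorch version):

#include <torch/csrc/inductor/aoti_runner/model_container_runner_cuda.h>
#include <torch/torch.h>
#include <c10/cuda/CUDAGuard.h>
#include <c10/cuda/CUDAStream.h>

#include <vector>

// Hypothetical per-request handler: each request picks a CUDA stream from the
// pool and runs one forward pass. With num_models > 1 the container can serve
// several such calls concurrently instead of serializing on a single instance.
std::vector<torch::Tensor> handleRequest(
    torch::inductor::AOTIModelContainerRunnerCuda& runner,
    std::vector<torch::Tensor> inputs) {  // inputs already live on the CUDA device
  // Take a stream from the pool and make it current for this thread.
  c10::cuda::CUDAStream stream = c10::cuda::getStreamFromPool();
  c10::cuda::CUDAStreamGuard guard(stream);

  // Outputs are ordinary tensors; the GPU memory they own goes back to the
  // caching allocator once the last reference to them is dropped.
  return runner.run(inputs);
}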

The service is exposed to external calls and maintains stability under an average of 400 QPS (queries per second). However, over time, I observe a gradual increase in GPU memory usage by AOTI, which eventually leads to the inability to allocate sufficient memory, causing forward operations to fail.

[Image: GPU memory usage of the service gradually increasing over time]

My question is: does AOTI provide an API to release GPU memory during service operation? This would be useful for situations where the number of forward failures surpasses a certain threshold, allowing us to invoke this API to manage memory usage effectively.
Thank you for your assistance.
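
If it turns out the growth is memory cached by PyTorch's CUDA caching allocator rather than memory held by the AOTI container itself, one workaround I am considering is dropping the allocator cache from C++ once failures pile up. A rough sketch under that assumption; the failure counter and threshold plumbing are hypothetical and specific to my service, not part of AOTI:

#include <c10/cuda/CUDACachingAllocator.h>
#include <c10/cuda/CUDAGuard.h>

#include <atomic>

// Hypothetical mitigation: once forward failures cross a caller-defined
// threshold, ask the caching allocator to hand its cached-but-unused blocks
// back to the CUDA driver.
void maybeReleaseCachedCudaMemory(std::atomic<int>& failureCount,
                                  int failureThreshold,
                                  c10::DeviceIndex device) {
  if (failureCount.load() < failureThreshold) {
    return;
  }
  c10::cuda::CUDAGuard guard(device);              // select the GPU the model runs on
  c10::cuda::CUDACachingAllocator::emptyCache();   // release cached, unused blocks
  failureCount.store(0);
}

Note that emptyCache() can only return blocks that are cached but no longer in use; memory still referenced by live tensors (for example retained outputs) cannot be reclaimed this way, so it would not help if the container itself were accumulating allocations.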

Versions

$python collect_env.py
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)
GCC version: (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)
Clang version: 17.0.6 (Alibaba Cloud Compiler 17.0.6.4-24.11.20.alios7)
CMake version: version 4.0.0
Libc version: glibc-2.32

Python version: 3.10.17 | packaged by conda-forge | (main, Apr 10 2025, 22:19:12) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.32
Is CUDA available: True
CUDA runtime version: 12.4.99
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz
Stepping: 6
CPU MHz: 2899.994
CPU max MHz: 3500.0000
CPU min MHz: 800.0000
BogoMIPS: 5800.00
Virtualization: VT-x
L1d cache: 48K
L1i cache: 32K
L2 cache: 1280K
L3 cache: 49152K
NUMA node0 CPU(s): 0-31,64-95
NUMA node1 CPU(s): 32-63,96-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid fsrm md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] mypy_extensions==1.1.0
[pip3] numpy==2.2.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] torchmetrics==1.0.3
[pip3] torchrec==1.1.0
[pip3] triton==3.2.0
[conda] numpy 2.2.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchrec 1.1.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi

cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1

sujuyu commented May 12, 2025

The root cause of the problem is unlikely to be a GPU memory leak in the external framework; otherwise, at 400 QPS, a CUDA OOM (Out of Memory) error would typically occur in less than 5 minutes. From the observed behavior, this does not seem to be the case.
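
To narrow this down, I plan to periodically compare the caching allocator's statistics with the driver-level usage: if the driver-reported usage keeps growing while the allocator's reserved bytes stay flat, the growth is happening outside the PyTorch allocator. A hypothetical diagnostic helper (logGpuMemory is my own name, not an AOTI API):

#include <c10/cuda/CUDACachingAllocator.h>
#include <cuda_runtime.h>

#include <cstdio>

// Compare what the PyTorch caching allocator holds against what the CUDA
// driver reports for the device. If reserved bytes track the driver-level
// growth, the memory sits in the allocator cache; if driver usage grows while
// the allocator stats stay flat, the growth comes from somewhere else.
void logGpuMemory(int device) {
  const auto stats = c10::cuda::CUDACachingAllocator::getDeviceStats(device);
  const size_t aggregate = 0;  // index 0 holds the aggregate statistics
  const long long allocated = stats.allocated_bytes[aggregate].current;
  const long long reserved = stats.reserved_bytes[aggregate].current;

  size_t freeBytes = 0, totalBytes = 0;
  cudaSetDevice(device);
  cudaMemGetInfo(&freeBytes, &totalBytes);

  std::printf("allocator allocated=%lld B reserved=%lld B | driver used=%zu B\n",
              allocated, reserved, totalBytes - freeBytes);
}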

@williamwen42 williamwen42 added the module: aotinductor aot inductor label May 13, 2025
@williamwen42 williamwen42 added triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module and removed triaged This issue has been looked at a team member, and triaged and prioritized into an appropriate module labels May 13, 2025
avikchaudhuri (Contributor) commented

@desertfire to advise.

desertfire (Contributor) commented

> This would be useful for situations where the number of forward failures surpasses a certain threshold, allowing us to invoke this API to manage memory usage effectively.

Did you only observe the memory leak when forward failures happened, or is there a memory leak during regular runs as well?

@desertfire desertfire added the export-triaged This tag is used to tag issues that have been looked by PT2 Export team and determined the next step label May 20, 2025
@sujuyu sujuyu closed this as completed May 24, 2025