[Bug] V100 server hangs on startup with no response · Issue #3568 · InternLM/lmdeploy

Description

@muziyongshixin

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

When launching with the command below on an 8-GPU V100 machine, the server gets stuck at some point and never responds.

Reproduction

Launch command:
lmdeploy serve api_server /data/liyongzhi/hf_models/QwQ-32B/ --server-port 8080 --server-name qwq32B --tp=8 --dtype=float16 --log-level DEBUG
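
For triage, the same engine settings can be exercised without the HTTP layer through lmdeploy's Python pipeline API. A minimal sketch, assuming lmdeploy 0.8.0 and the reporter's model path (the prompt string is illustrative): if this also hangs, the fault is in the TurboMind engine itself rather than in api_server.

from lmdeploy import pipeline, TurbomindEngineConfig

# Same engine settings as the failing CLI invocation (tp=8, float16).
# If this call also stalls, the hang is below the api_server/HTTP layer.
backend_config = TurbomindEngineConfig(tp=8, dtype='float16')
pipe = pipeline('/data/liyongzhi/hf_models/QwQ-32B/', backend_config=backend_config)
print(pipe(['Hello']))  # illustrative prompt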

Last log lines printed:

[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)
[TM][DEBUG] void turbomind::UnifiedAttentionLayer::Forward(turbomind::UnifiedAttentionLayer::ForwardParam)

Environment

lmdeploy check_env
sys.platform: linux
Python: 3.10.16 (main, Dec  4 2024, 08:53:37) [GCC 9.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.105
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.6.0+cu124
PyTorch compiling details: PyTorch built with:
  - GCC 9.3                                                                                                                                                                                    
  - C++ Version: 201703                                                                                                                                                                        
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications                                                                        
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)                                                                                                                
  - OpenMP 201511 (a.k.a. OpenMP 4.5)                                                                                                                                                          
  - LAPACK is enabled (usually provided by MKL)                                                                                                                                                
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 12.4
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=2236df1770800ffea5697b11b0bb0d910b2e59e1, CUDA_VERSION=12.4, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.21.0+cu124
LMDeploy: 0.8.0+
transformers: 4.51.3
gradio: Not Found
fastapi: 0.115.8
pydantic: 2.10.6
triton: 3.2.0
NVIDIA Topology: 
        GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV1     NV2     NV1     SYS     SYS     SYS     NV2     NODE    0-19,40-59      0               N/A
GPU1    NV1      X      NV1     NV2     SYS     SYS     NV2     SYS     NODE    0-19,40-59      0               N/A
GPU2    NV2     NV1      X      NV2     SYS     NV1     SYS     SYS     PIX     0-19,40-59      0               N/A
GPU3    NV1     NV2     NV2      X      NV1     SYS     SYS     SYS     PIX     0-19,40-59      0               N/A
GPU4    SYS     SYS     SYS     NV1      X      NV2     NV2     NV1     SYS     20-39,60-79     1               N/A
GPU5    SYS     SYS     NV1     SYS     NV2      X      NV1     NV2     SYS     20-39,60-79     1               N/A
GPU6    SYS     NV2     SYS     SYS     NV2     NV1      X      NV1     SYS     20-39,60-79     1               N/A
GPU7    NV2     SYS     SYS     SYS     NV1     NV2     NV1      X      SYS     20-39,60-79     1               N/A
NIC0    NODE    NODE    PIX     PIX     SYS     SYS     SYS     SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
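
One observation on the topology above: GPUs 0-3 and 4-7 sit on different NUMA nodes and are linked only via SYS, which makes NCCL initialization a plausible place for a tp=8 startup hang. A hedged diagnostic sketch, not a confirmed fix: it relaunches the reporter's exact command with NCCL_DEBUG and NCCL_P2P_DISABLE, which are standard NCCL environment variables.

import os
import subprocess

# Relaunch the reporter's command with NCCL debug logging enabled and
# peer-to-peer transfers disabled, to test whether the hang occurs during
# NCCL initialization across the SYS links between the two NUMA nodes.
env = dict(os.environ, NCCL_DEBUG='INFO', NCCL_P2P_DISABLE='1')
subprocess.run(
    ['lmdeploy', 'serve', 'api_server', '/data/liyongzhi/hf_models/QwQ-32B/',
     '--server-port', '8080', '--server-name', 'qwq32B',
     '--tp=8', '--dtype=float16', '--log-level', 'DEBUG'],
    env=env,
)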

Error traceback

None; the process hangs without printing a traceback.
