
Build fails with Target "ggml-cuda" links to: CUDA::cublas but the target was not found. #2106

@prasanthreddy-git

Description

I need help building llama-cpp-python with CUDA support; the build fails with the error below. CMake is able to find the CUDA Toolkit installed on the machine, but the configure step still fails.
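For reference, the documented way to request a CUDA build of llama-cpp-python is to pass the GGML_CUDA flag through CMAKE_ARGS; the invocation was along these lines (--no-cache-dir just forces a fresh wheel build):

    # Build llama-cpp-python with the CUDA backend enabled
    CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir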

× Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [96 lines of output]
*** scikit-build-core 0.11.6 using CMake 3.28.3 (wheel)
*** Configuring CMake...
loading initial cache file /tmp/tmpwk3n417q/build/CMakeInit.txt
-- The C compiler identification is GNU 13.3.0
-- The CXX compiler identification is GNU 13.3.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/x86_64-linux-gnu-gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/x86_64-linux-gnu-g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMAKE_BUILD_TYPE=Release
-- Found Git: /usr/bin/git (found version "2.43.0")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- Found OpenMP_C: -fopenmp (found version "4.5")
-- Found OpenMP_CXX: -fopenmp (found version "4.5")
-- Found OpenMP: TRUE (found version "4.5")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "12.8.93")
-- CUDA Toolkit found
-- Using CUDA architectures: native
-- The CUDA compiler identification is NVIDIA 12.8.93
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- CUDA host compiler is GNU 13.3.0
-- Including CUDA backend
-- ggml version: 0.0.1
-- ggml commit: 4227c9b
  CMake Warning (dev) at CMakeLists.txt:13 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:108 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.

  CMake Warning (dev) at CMakeLists.txt:21 (install):
    Target llama has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:108 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:13 (install):
    Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:109 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:21 (install):
    Target ggml has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:109 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:13 (install):
    Target mtmd has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:162 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  CMake Warning (dev) at CMakeLists.txt:21 (install):
    Target mtmd has PUBLIC_HEADER files but no PUBLIC_HEADER DESTINATION.
  Call Stack (most recent call first):
    CMakeLists.txt:162 (llama_cpp_python_install_target)
  This warning is for project developers.  Use -Wno-dev to suppress it.
  
  -- Configuring done (18.8s)
  CMake Error at vendor/llama.cpp/ggml/src/ggml-cuda/CMakeLists.txt:110 (target_link_libraries):
    Target "ggml-cuda" links to:
  
      CUDA::cublas
  
    but the target was not found.  Possible reasons include:
  
      * There is a typo in the target name.
      * A find_package call is missing for an IMPORTED target.
      * An ALIAS target is missing.
  
  
  
  -- Generating done (0.1s)
  CMake Generate step failed.  Build files cannot be regenerated correctly.
  
  *** CMake configuration failed
  [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
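
CMake's FindCUDAToolkit module only defines the CUDA::cublas imported target when it can locate the cuBLAS libraries alongside the rest of the toolkit, so this error usually means the headers were found (as the log shows) but libcublas itself is missing or in a non-standard location. A quick check, assuming the default /usr/local/cuda layout from the log above:

    # Look for the cuBLAS libraries where FindCUDAToolkit searches;
    # these paths assume the default /usr/local/cuda layout
    ls /usr/local/cuda/lib64/libcublas*
    ls /usr/local/cuda/targets/x86_64-linux/lib/libcublas*

If neither path contains libcublas, installing the full CUDA Toolkit (or the distribution's separate cuBLAS package) and rebuilding should allow find_package(CUDAToolkit) to define the target.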
