8000 CUDA llama-cpp-python build failed. · Issue #1986 · abetlen/llama-cpp-python · GitHub
CUDA llama-cpp-python build failed. #1986
Open
@Ado012

Description

I am trying to build llama-cpp-python with CUDA support and the build is failing. I have tried some of the suggestions from similar issues here, but they aren't working, and I can't see what is wrong in the output. My system is Linux Mint 21.3. My graphics driver supports CUDA 12.5; my CUDA toolkit version is 11.5.

These are the commands I used for the installation.

python3 -m venv venv
source venv/bin/activate
CMAKE_ARGS="-DGGML_CUDA=on" LD_LIBRARY_PATH="/usr/lib/x86_64-linux-gnu"  pip install llama-cpp-python==0.3.4 --verbose 
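If the failure is CMake not locating the CUDA 11.5 toolkit (a common cause when the driver reports a newer CUDA version than the installed toolkit), a variant of the install command that points CMake at the toolkit root explicitly is sometimes worth trying. This is a sketch, not a confirmed fix: `-DCUDAToolkit_ROOT` is a standard CMake `FindCUDAToolkit` hint, but the `/usr` root below is an assumption for the Ubuntu/Mint `nvidia-cuda-toolkit` package and may differ on your machine.

```shell
# Variant of the install command pinning the toolkit location.
# The /usr root is an assumption -- adjust to where your toolkit lives
# (e.g. /usr/local/cuda-11.5 for an NVIDIA-installer toolkit).
CMAKE_ARGS="-DGGML_CUDA=on -DCUDAToolkit_ROOT=/usr" \
    pip install llama-cpp-python==0.3.4 --verbose --no-cache-dir
```

`--no-cache-dir` forces pip to rebuild rather than reuse a previously failed (or CPU-only) cached wheel.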

The full output of the install command is here:

https://gist.github.com/Ado012/70492020c3567bc60f6daa2ba89e2be5
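For anyone triaging: a quick sanity check before rebuilding is to confirm that an `nvcc` is on `PATH` at all and which toolkit release it reports, since CMake's CUDA detection starts from there. A minimal sketch (the helper name is mine, not part of any tool):

```shell
# Hypothetical pre-build sanity check: print the CUDA toolkit release
# that nvcc reports (e.g. "11.5"), or say that nvcc is missing.
check_nvcc() {
    if command -v nvcc >/dev/null 2>&1; then
        # "Cuda compilation tools, release 11.5, V11.5.119" -> "11.5"
        nvcc --version | sed -n 's/.*release \([0-9.]*\),.*/\1/p'
    else
        echo "nvcc not on PATH"
    fi
}
check_nvcc
```

If this prints nothing or "nvcc not on PATH", the build is failing before any CUDA compilation starts, and installing the toolkit package or adding its `bin` directory to `PATH` would come first.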
