fix(ci): Use LLAMA_CUDA for cuda wheels · asusevski/llama-cpp-python@4fb6fc1
Parent: b4cc923
1 file changed (+1, −1)
.github/workflows/build-wheels-cuda.yaml

Lines changed: 1 addition & 1 deletion

@@ -114,7 +114,7 @@ jobs:
       $env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib:' + $env:LD_LIBRARY_PATH
     }
     $env:VERBOSE = '1'
-    $env:CMAKE_ARGS = '-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=all'
+    $env:CMAKE_ARGS = '-DLLAMA_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=all'
     $env:CMAKE_ARGS = "-DLLAMA_CUDA_FORCE_MMQ=ON $env:CMAKE_ARGS"
     # if ($env:AVXVER -eq 'AVX') {
     $env:CMAKE_ARGS = $env:CMAKE_ARGS + ' -DLLAMA_AVX2=off -DLLAMA_FMA=off -DLLAMA_F16C=off'
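For context, the same flag rename matters when building llama-cpp-python from source outside CI: upstream llama.cpp deprecated the `LLAMA_CUBLAS` CMake option in favor of `LLAMA_CUDA`, so the old flag no longer enables the CUDA backend. A minimal local-build sketch, assuming a CUDA toolkit and compatible compiler are installed:

```shell
# Build and install llama-cpp-python with the CUDA backend enabled.
# Note: -DLLAMA_CUBLAS=on is the deprecated spelling; -DLLAMA_CUDA=on
# is the current one, matching the change in this commit.
CMAKE_ARGS="-DLLAMA_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=all" \
    pip install --no-cache-dir --force-reinstall llama-cpp-python
```

`CMAKE_CUDA_ARCHITECTURES=all` compiles kernels for every supported GPU architecture, which is what a prebuilt wheel needs; a local build targeting one GPU can set a single architecture instead to shorten compile time.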

0 commit comments

Comments
 (0)
0