I rented a server with 4 MTT S4000 GPUs at AutoDL to test deploying llama.cpp. The operating system is Ubuntu 22.04, with the latest clone of llama.cpp. I first confirmed that mthreads-gmi displays the GPU status correctly, so I believe the MTT driver and MUSA SDK are installed properly.
I compiled according to the MUSA instructions in build.md:
cmake -B build -DGGML_MUSA=ON
When I execute the build step:
cmake --build build --config Release
the following error occurs:
cmake --build build --config Release
[ 3%] Built target ggml-base
[ 3%] Building CXX object ggml/src/ggml-musa/CMakeFiles/ggml-musa.dir/mudnn.cu.o
/root/llama.cpp/ggml/src/ggml-musa/mudnn.cu:25:29: error: no member named 'EXECUTION_FAILED' in 'musa::dnn::Status'
case mudnn::Status::EXECUTION_FAILED:
~~~~~~~~~~~~~~~^
1 error generated when compiling for mp_21.
gmake[2]: *** [ggml/src/ggml-musa/CMakeFiles/ggml-musa.dir/build.make:1168: ggml/src/ggml-musa/CMakeFiles/ggml-musa.dir/mudnn.cu.o] Error 1
gmake[1]: *** [CMakeFiles/Makefile2:1783: ggml/src/ggml-musa/CMakeFiles/ggml-musa.dir/all] Error 2
gmake: *** [Makefile:146: all] Error 2
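For reference, the reproduction steps described above can be combined into one script. This is a sketch, not a verified recipe: it assumes a MUSA-capable machine with the driver and SDK already installed, and the repository URL and build.md flags are the usual ones for llama.cpp.

```shell
# Reproduction sketch for the MUSA build failure reported above.
set -e
mthreads-gmi                           # confirm the driver can see the GPUs
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_MUSA=ON          # configure per the MUSA section of build.md
cmake --build build --config Release   # fails in ggml-musa/mudnn.cu on this SDK
```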
You might want to reset to an earlier commit for a successful build. Note: the version we previously tested against is MUSA SDK rc3.1.1, which differs from the one used in the AutoDL instance.
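Finding such an earlier commit can be done with git bisect. The following is a sketch only: cdf94a1 is the failing commit from this report, while <good-commit> is a placeholder for any older commit known to build.

```shell
# Sketch: bisect between a known-good older commit and the failing one.
git bisect start
git bisect bad cdf94a1
git bisect good <good-commit>   # placeholder: an older commit that built
# at each step the build result decides good/bad:
cmake -B build -DGGML_MUSA=ON && cmake --build build --config Release \
  && git bisect good || git bisect bad
git bisect reset                # return to the original checkout when done
```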
Thank you for your response.
I have submitted the issue to AutoDL's customer service; they may help me upgrade the MUSA SDK. I have also been in contact with Moore Threads, and a newer version of the cloud resources may become available for testing later.
Git commit: cdf94a1
Operating systems: Linux
GGML backends: MUSA
Compile command: cmake -B build -DGGML_MUSA=ON && cmake --build build --config Release