Could not install llama-cpp-python 0.3.7 on Macbook Air M1 - Compilation issue · Issue #1956 · abetlen/llama-cpp-python

Open
vietanhdev opened this issue Mar 2, 2025 · 2 comments
vietanhdev commented Mar 2, 2025

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

The installation should complete successfully with:

pip install llama-cpp-python==0.3.7
# Or
pip install llama-cpp-python \
  --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

Current Behavior

Error logs: https://gist.github.com/vietanhdev/c641ec7c4acc8fd9d5f0f3a02a850189

I think this part caused the issue:

[31/66] : && /opt/homebrew/opt/llvm/bin/clang -O3 -DNDEBUG -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.2.sdk -dynamiclib -Wl,-headerpad_max_install_names  -o bin/libggml-metal.dylib -install_name @rpath/libggml-metal.dylib vendor/llama.cpp/ggml/src/ggml-metal/CMakeFiles/ggml-metal.dir/ggml-metal.m.o vendor/llama.cpp/ggml/src/ggml-metal/CMakeFiles/ggml-metal.dir/__/__/__/__/__/autogenerated/ggml-metal-embed.s.o  -Wl,-rpath,@loader_path  bin/libggml-base.dylib  -framework Foundation  -framework Metal  -framework MetalKit && :
      FAILED: bin/libggml-metal.dylib
      : && /opt/homebrew/opt/llvm/bin/clang -O3 -DNDEBUG -arch arm64 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX15.2.sdk -dynamiclib -Wl,-headerpad_max_install_names  -o bin/libggml-metal.dylib -install_name @rpath/libggml-metal.dylib vendor/llama.cpp/ggml/src/ggml-metal/CMakeFiles/ggml-metal.dir/ggml-metal.m.o vendor/llama.cpp/ggml/src/ggml-metal/CMakeFiles/ggml-metal.dir/__/__/__/__/__/autogenerated/ggml-metal-embed.s.o  -Wl,-rpath,@loader_path  bin/libggml-base.dylib  -framework Foundation  -framework Metal  -framework MetalKit && :
      Undefined symbols for architecture arm64:
        "_OBJC_CLASS_$_MTLResidencySetDescriptor", referenced from:
             in ggml-metal.m.o
      ld: symbol(s) not found for architecture arm64
      clang: error: linker command failed with exit code 1 (use -v to see invocation)

The installation works well with version 0.3.6.
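The link step that fails is driven by Homebrew's LLVM (/opt/homebrew/opt/llvm/bin/clang) rather than Apple's toolchain, and MTLResidencySetDescriptor only exists in recent macOS SDKs, so a toolchain/SDK mismatch is my current guess. A sketch of a possible workaround (unverified; CC and CXX are the standard environment variables CMake reads when the pip build configures the project):

# Unverified sketch: build with Apple's clang from the Xcode 16.2 toolchain
# instead of Homebrew LLVM, leaving everything else at its defaults.
CC=/usr/bin/clang CXX=/usr/bin/clang++ \
  pip install --no-cache-dir --force-reinstall llama-cpp-python==0.3.7

If that still fails, pinning to 0.3.6 remains my stopgap.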

Environment and Context

  • MacBook Air M1 - OS version: 15.3.1.
  • Environment: Miniconda3, tried with Python 3.10 and 3.11.
  • sw_vers
    ProductName: macOS
    ProductVersion: 15.3.1
    BuildVersion: 24D70
  • xcodebuild -version
    Xcode 16.2
  • xcrun -sdk macosx --show-sdk-version
    15.2

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

  • Install Miniconda3
  • Create a new environment and install the package:
conda create -n llama-cpp python=3.10 # or 3.11
conda activate llama-cpp
pip install llama-cpp-python==0.3.7
MonolithFoundation commented Mar 19, 2025

Same error:

Undefined symbols for architecture arm64:
  "_OBJC_CLASS_$_MTLResidencySetDescriptor", referenced from:
       in ggml-metal.m.o
ld: symbol(s) not found for architecture arm64

0.3.8 also fails:

CMAKE_ARGS="-DCMAKE_OSX_ARCHITECTURES=arm64 -DCMAKE_APPLE_SILICON_PROCESSOR=arm64 -DGGML_METAL=on" pip install --upgrade --verbose --force-reinstall --no-cache-dir llama-cpp-python

droumis commented May 12, 2025

I'm having to use the following on M1:

CMAKE_ARGS="-DLLAMA_METAL=on -DCMAKE_OSX_ARCHITECTURES=arm64" FORCE_CMAKE="1" pip install 'llama-cpp-python'
