pyinstaller hook script by earonesty · Pull Request #709 · abetlen/llama-cpp-python · GitHub

pyinstaller hook script #709


Open

earonesty wants to merge 3 commits into main

Conversation

earonesty
Contributor
@earonesty earonesty commented Sep 13, 2023

copies the dll around so pyinstaller works

if anyone needs it

works on windows/linux

osx seems to work too.
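
For context, a minimal hook along these lines looks roughly like the sketch below. It is reconstructed from the snippets quoted later in this thread, not the PR's exact file, and the library name and location vary by platform and llama-cpp-python version.

# hooks/hook-llama_cpp.py -- a minimal sketch, not the exact file from this PR
import os
import sys

from PyInstaller.utils.hooks import get_package_paths

# get_package_paths() returns (package base dir, package dir); the base dir is the
# site-packages folder that contains llama_cpp/
package_path = get_package_paths('llama_cpp')[0]

datas = []

# Library names below are assumptions; newer wheels use the lib* prefix and may
# place the file under llama_cpp/lib/ (see later comments in this thread).
if sys.platform == 'win32':
    lib_name = 'llama.dll'
elif sys.platform == 'darwin':
    lib_name = 'libllama.dylib'
else:
    lib_name = 'libllama.so'

lib_path = os.path.join(package_path, 'llama_cpp', lib_name)
if os.path.exists(lib_path):
    # Bundle the shared library next to the llama_cpp package in the frozen app
    datas.append((lib_path, 'llama_cpp'))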

@903124
903124 commented Oct 6, 2023

Very helpful thanks!

@bishwenduk029

@earonesty , till this PR gets merged, can we do this manually by modifying the existing .spec file generated from pyinstaller?

@earonesty
Contributor Author

You can just specify an additional hooks directory in the command line when you build
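
(For example, with the hook file saved in a ./hooks directory, the build is invoked roughly as pyinstaller --additional-hooks-dir=./hooks main.py, as later comments in this thread do.)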

@inferense

this is great, thank you

antoine-lizee pushed a commit to antoine-lizee/llama-cpp-python that referenced this pull request Oct 30, 2023
@robertritz

FYI on Mac I'm also seeing libllama.dylib. I edited the hook file like so and it's working great.

elif sys.platform == 'darwin':  # Mac
    so_path = os.path.join(package_path, 'llama_cpp', 'libllama.dylib')
    datas.append((so_path, 'llama_cpp'))

@abetlen force-pushed the main branch 2 times, most recently from 8c93cf8 to cc0fe43 on November 14, 2023 20:24
@demattosanthony

On Mac I'm having issues when setting n_gpu_layers to 1. Any ideas on how to fix this? I added the ggml-metal.metal file to the datas array, but still no luck.

llama_new_context_with_model: kv self size  = 1000.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x13fe76d20 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | 
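
For reference, adding the Metal shader to the hook's datas would look roughly like the sketch below. This is an assumption, not code from this PR, and as the follow-up comment notes it did not resolve the error on its own for this commenter.

import os
from PyInstaller.utils.hooks import get_package_paths

package_path = get_package_paths('llama_cpp')[0]
datas = []

# ggml-metal.metal has to end up where ggml_metal_init() looks for it at runtime;
# its location inside the wheel differs between llama-cpp-python versions.
metal_path = os.path.join(package_path, 'llama_cpp', 'ggml-metal.metal')
if os.path.exists(metal_path):
    datas.append((metal_path, 'llama_cpp'))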

@earonesty
Contributor Author
earonesty commented Dec 4, 2023 via email

@demattosanthony
demattosanthony commented Dec 4, 2023

@earonesty I added it to the datas array and rebuilt it but still failing

@eric-prog
eric-prog commented Jan 6, 2024

Hi @earonesty! I get an error when running pyinstaller --additional-hooks-dir=./hooks main.py with the hooks folder created and your script file in the folder:

Unable to find '/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/llama_cpp/llama.so' when adding binary and data files.

Any idea how to solve this?

I am trying to package a tkinter script with PyInstaller; the script imports llama-cpp-python, which is installed in the same environment.

@averypfeiffer

@eric-prog Just a guess, but I believe the name for the artifact was changed from "llama.so" to "libllama.so". Same goes for the dylib and dll artifacts.

Making that small change in the script worked for me! You can verify in your own environment by checking .venv/lib/python3.11/site-packages/llama_cpp in your project (note: replace python3.11 with the version your venv uses) to see the names of the build artifacts.
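
In practice that means the Linux line of the hook becomes something like the following sketch (verify the actual file name in your own site-packages first):

so_path = os.path.join(package_path, 'llama_cpp', 'libllama.so')
datas.append((so_path, 'llama_cpp'))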

@alexeygridnev
alexeygridnev commented Aug 3, 2024

Unfortunately, this pull request doesn't fix the issue for me on Linux. With the above-mentioned ./hooks folder and the hook-llama_cpp.py file in place (as per commit 3a9227c, with libllama.so), PyInstaller produces the executable, but running it fails with the same error

FileNotFoundError: Shared library with base name 'llama' not found

as in issue #1475.

@gudarzi
gudarzi commented Sep 20, 2024

Cool, but I had to change this:

dll_path = os.path.join(package_path, 'llama_cpp', 'llama.dll')

To this:

dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll')

For my project to work!
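
A self-contained sketch reflecting this newer layout (an assumption; only the lib/ path comes from the comment above, and the destination mirrors that subfolder so the loader can find the library at runtime):

import os
from PyInstaller.utils.hooks import get_package_paths

package_path = get_package_paths('llama_cpp')[0]
datas = []

# Probe the newer lib/ location; keep the same relative destination in the bundle.
dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'llama.dll')
if os.path.exists(dll_path):
    datas.append((dll_path, os.path.join('llama_cpp', 'lib')))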

@JulienElkaim
JulienElkaim commented Jan 29, 2025

Kudos for this solution! I was manually importing the binary like:
binaries=[('/path-to-my-python-env-folder/site-packages/llama_cpp/lib/libllama.dylib', 'llama_cpp/lib/')],
which is a pain... and not shareable with a team.

For future developers reading this:

  • The name of the file is important for PyInstaller to pick up the hook! It should be hook-<package_name>; here, do as the commit does.
  • If you keep the SAME implementation as this commit, at least add 'lib' to the path.join and use the Linux-style names, i.e. "libllama.dylib". For example: dll_path = os.path.join(package_path, 'llama_cpp', 'lib', 'libllama.dylib')
  • As of today, using the latest pyinstaller and llama_cpp, your hook can be drastically reduced to:
from PyInstaller.utils.hooks import collect_dynamic_libs

# Automatically collect all shared libraries
binaries = collect_dynamic_libs('llama_cpp')

print(f"🚀 Hook executed at compile time: {binaries}")

After running the pyinstaller command, you will see this print and the binaries it includes (the dylib etc. are successfully added!).
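
In other words, the whole hook can be just those few lines, saved as hooks/hook-llama_cpp.py and passed to PyInstaller with --additional-hooks-dir=./hooks as described earlier in this thread.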

@movingJin

It works for me, thanks!

@Eros483
Eros483 commented May 23, 2025

When I run the pyinstaller command
pyinstaller --name binary-name --additional-hooks-dir=./hooks frontend.py
I am not even getting frontend.exe.
Am I using the command wrong?
@JulienElkaim, with your file placed inside the hooks folder and named hook_llama_cpp.py, how would I go about running the pyinstaller command?
