pyinstaller hook script #709
Conversation
osx seems to use dylib
Very helpful, thanks!
@earonesty, until this PR gets merged, can we do this manually by modifying the existing .spec file generated by pyinstaller?
You can just specify an additional hooks directory on the command line when you build.
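The extra hooks directory can be passed with PyInstaller's `--additional-hooks-dir` flag; a sketch, assuming the hook file lives in `./hooks/hook-llama_cpp.py` and the entry point is `app.py` (both names are placeholders):

```shell
# Point PyInstaller at a directory containing hook-llama_cpp.py;
# the hook is applied automatically when llama_cpp is imported.
pyinstaller --additional-hooks-dir=./hooks app.py
```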
This is great, thank you!
FYI, on Mac I'm also seeing libllama.dylib. I edited the hook file like so and it's working great:

```python
elif sys.platform == 'darwin':  # Mac
    so_path = os.path.join(package_path, 'llama_cpp', 'libllama.dylib')
    datas.append((so_path, 'llama_cpp'))
```
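Putting the platform branches from this thread together, the hook's library selection could be sketched like this. The library filenames are assumptions taken from the comments here — verify them against your own `site-packages/llama_cpp` directory:

```python
import os
import sys

def llama_lib_name(platform: str) -> str:
    """Shared-library filename llama-cpp-python appears to ship per
    platform (names taken from this thread; check your wheel)."""
    if platform.startswith('win'):
        return 'llama.dll'
    if platform == 'darwin':    # macOS uses Mach-O dylibs
        return 'libllama.dylib'
    return 'libllama.so'        # Linux and other ELF platforms

def llama_hook_datas(package_path: str) -> list:
    """Build the (source, destination) pairs for the hook's `datas`."""
    so_path = os.path.join(package_path, 'llama_cpp',
                           llama_lib_name(sys.platform))
    return [(so_path, 'llama_cpp')]
```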
Force-pushed from 8c93cf8 to cc0fe43.
On Mac I'm having issues when setting n_gpu_layers to 1. Any ideas on how to fix it? I added the ggml-metal.metal file to the datas array but still no luck.
Probably need to add "ggml-metal.metal" to the list of files picked up by the hook.
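One way to follow that suggestion is to append the Metal shader next to the library in the hook's `datas` list. A sketch, assuming the wheel places `ggml-metal.metal` directly under `llama_cpp` (verify the path in your environment):

```python
import os

def add_metal_shader(datas: list, package_path: str) -> list:
    """Append ggml-metal.metal (loaded at runtime by ggml on Apple
    Silicon) to a PyInstaller hook's `datas` list. Path assumed."""
    metal_path = os.path.join(package_path, 'llama_cpp', 'ggml-metal.metal')
    datas.append((metal_path, 'llama_cpp'))
    return datas
```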
```
llama_new_context_with_model: kv self size = 1000.00 MiB
llama_build_graph: non-view tensors processed: 740/740
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: error: Error Domain=NSCocoaErrorDomain Code=260 "The file “ggml-metal.metal” couldn’t be opened because there is no such file." UserInfo={NSFilePath=ggml-metal.metal, NSUnderlyingError=0x13fe76d20 {Error Domain=NSPOSIXErrorDomain Code=2 "No such file or directory"}}
llama_new_context_with_model: ggml_metal_init() failed
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 |
```
@earonesty I added it to the datas array and rebuilt, but it's still failing.
Hi @earonesty! I get an error when running. Do you know how to solve it, potentially? I am trying to package a tkinter file using pyinstaller, and my tkinter file has llama-cpp-python installed and imported.
@eric-prog Just a guess, but I believe the name of the artifact was changed from "llama.so" to "libllama.so"; the same goes for the dylib and dll artifacts. Making that small change in the script worked for me! You can verify in your own environment by checking ".venv/lib/python3.11/site-packages/llama_cpp" in your project to see the names of the build artifacts (note: you may need to replace python3.11 with the version your venv uses).
Unfortunately, this pull request doesn't fix the issue for me. Adding the above-mentioned ./hooks folder with the hook-llama_cpp.py file (as per commit 3a9227c, with libllama.so) doesn't fix the problem for me on Linux. PyInstaller produces the executable, but when you try to run it, it fails with the same error as in issue #1475.
Cool, but I had to change this:

(snippet not captured)

to this:

(snippet not captured)

for my project to work!
Kudos to this solution! I was importing the binary manually. For future developers reading this:

```python
from PyInstaller.utils.hooks import collect_dynamic_libs

# Automatically collect all shared libraries
binaries = collect_dynamic_libs('llama_cpp')
print(f"🚀 Hook executed at compile time: {binaries}")
```

After running the pyinstaller command, you will see this print and the binaries it includes (the dylib etc. are successfully added!).
It works for me, thanks!
When I am running the pyinstaller command
Copies the DLL around so PyInstaller works, if anyone needs it. Works on Windows/Linux; OSX seems to work too.