Bug: Unable to load Mixtral-8x7B-Instruct-v0.1-GGUF on Amazon Linux with AMD EPYC 7R13 #512
Comments
Here's the output of
I don't think we support Mistral Large yet. It's blocked on another sync with llama.cpp upstream.
This is happening with
I have it with TinyLlama-1.1B from the list here: https://github.com/Mozilla-Ocho/llamafile?tab=readme-ov-file#other-example-llamafiles

```
error: Uncaught SIGSEGV (SEGV_MAPERR) at 0 on linuxdesktop pid 15045 tid 15045
RAX 00007fffcaa25af8
RBX 000010008037c370
RDI 0000100088d8f170
XMM0 ffff0000000000000000000000000000
XMM8 00000000000000000000000000000000
cosmoaddr2line /tmp/mxbai-embed-large-v1-f16.llamafile 596f03 595768 5964e9 498397 494f07 449949 401c26 4161f3 4015fb
note: pledge() sandboxing makes backtraces not as good
10008004-10008009 rw-pa- 6x automap 384kB
1453mB total mapped memory
```

The command was:

```
./mxbai-embed-large-v1-f16.llamafile --embedding --model mxbai-embed-large-v1-f16.gguf
```
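For what it's worth, the crash report above already prints the symbolizer invocation on its `cosmoaddr2line` line: the hex values are offsets into the binary, and running that line resolves them to function names and source locations. A minimal sketch, assuming the `cosmoaddr2line` tool from the Cosmopolitan toolchain is on your PATH and the binary at `/tmp/mxbai-embed-large-v1-f16.llamafile` is the one that crashed:

```
# Resolve the backtrace offsets from the SIGSEGV report to function names
# and source lines. Binary path and offsets are copied verbatim from above.
cosmoaddr2line /tmp/mxbai-embed-large-v1-f16.llamafile \
  596f03 595768 5964e9 498397 494f07 449949 401c26 4161f3 4015fb
```

Posting that resolved backtrace alongside the raw report would make the crash much easier to triage.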
Contact Details
rpchastain@proton.me
What happened?
I'm attempting to use `mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf` weights on an AWS EC2 instance with an AMD EPYC 7R13 CPU and four NVIDIA L4 GPUs. llamafile fails to load with the attached log output.
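The exact command line isn't quoted in the report. A hedged reconstruction of the kind of invocation that would hit this, assuming the weights sit in the working directory and the standard llamafile flags (`-m` to select the GGUF file, `-ngl` to offload layers to the GPUs):

```
# Hypothetical reproduction -- not the reporter's verbatim command.
# -ngl 9999 asks llamafile to offload as many layers as fit on the GPUs.
./llamafile -m mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf -ngl 9999
```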
Version
llamafile v0.8.12
What operating system are you seeing the problem on?
No response
Relevant log output