Problem description & steps to reproduce

This works:

$ llama-tts --tts-oute-default -p "The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance" && aplay output.wav
This doesn't (and aborts):
$ llama-tts --tts-oute-default -p "The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware" && aplay output.wav
The only difference is the prompt; the failing one appends:

"on a wide range of hardware"
First Bad Commit
No response
Relevant log output
|1750|><|875|><|933|><|1595|><|1406|><|861|><|437|><|747|><|1542|><|639|><|607|><|1308|><|1427|><|1141|><|1450|><|1304|><|1492|><|1656|>'
main: codes audio size: 544
/home/runner/work/llama.cpp/llama.cpp/src/llama-context.cpp:897: GGML_ASSERT((cparams.causal_attn || cparams.n_ubatch >= n_tokens_all) && "non-causal attention requires n_ubatch >= n_tokens") failed
[New LWP 260885]
[New LWP 260884]
[New LWP 260883]
[New LWP 260882]
[New LWP 260879]
[New LWP 260877]
[New LWP 260876]
[New LWP 260875]
[New LWP 260867]
This GDB supports auto-downloading debuginfo from the following URLs:
  <https://debuginfod.archlinux.org>
Enable debuginfod for this session? (y or [n]) [answered N; input not from terminal]
Debuginfod has been disabled.
To make this setting permanent, add 'set debuginfod enabled off' to .gdbinit.
Function(s) ^std::(move|forward|as_const|(__)?addressof) will be skipped when stepping.
Function(s) ^std::(shared|unique)_ptr<.*>::(get|operator) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|(forward_)?list|(unordered_|flat_)?(multi)?(map|set)|span)<.*>::(c?r?(begin|end)|front|back|data|size|empty) will be skipped when stepping.
Function(s) ^std::(basic_string|vector|array|deque|span)<.*>::operator.] will be skipped when stepping.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/libthread_db.so.1".
0x0000789957280e22 in ?? () from /usr/lib/libc.so.6
#0  0x0000789957280e22 in ?? () from /usr/lib/libc.so.6
#1  0x0000789957274fda in ?? () from /usr/lib/libc.so.6
#2  0x0000789957275024 in ?? () from /usr/lib/libc.so.6
#3  0x00007899572e592f in wait4 () from /usr/lib/libc.so.6
#4  0x00007899578b7f6d in ggml_abort () from /home/kuro/Exec5/libggml-base.so
#5  0x0000789957a03c17 in llama_context::decode(llama_batch&) () from /home/kuro/Exec5/libllama.so
#6  0x0000789957a03d78 in llama_decode () from /home/kuro/Exec5/libllama.so
#7  0x0000568382fde354 in main ()
[Inferior 1 (process 260866) detached]
Aborted (core dumped)
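For reference, the assertion quoted above fires in llama_context::decode: with non-causal attention, every token in the batch has to fit into a single micro-batch, so n_ubatch must be at least the number of tokens decoded at once. A minimal sketch of what a caller has to guarantee, using llama.h names that should be checked against the exact build (this is not what llama-tts does internally):

// Minimal sketch: size the micro-batch so a non-causal decode of up to
// max_tokens never trips
//   GGML_ASSERT((cparams.causal_attn || cparams.n_ubatch >= n_tokens_all) && ...)
#include "llama.h"

llama_context * make_ctx(llama_model * model, uint32_t max_tokens) {
    llama_context_params cparams = llama_context_default_params();
    cparams.n_batch  = max_tokens; // logical batch size
    cparams.n_ubatch = max_tokens; // physical micro-batch; must cover the whole batch
    return llama_init_from_model(model, cparams); // name as of recent llama.cpp; verify against llama.h
}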
Name and Version
Last (binaries) version from Releases:
llama-b5456-bin-ubuntu-vulkan-x64.zip
Operating systems
Linux
GGML backends
Vulkan
Hardware
System (updated today):
Models
--tts-oute-default
(OuteTTS-0.2-500M)