Name and Version

version: 5476 (17fc817)
built with gcc (GCC) 13.3.0 for x86_64-unknown-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

Other (Please specify in the next section)

Problem description & steps to reproduce

Create grammar file tlaplus-min.gbnf with contents:
root ::= ws module ws
module ::= line sp "MODULE" sp name sp line ws dline
line ::= "-"{4,}
dline ::= "="{4,}
name ::= [0-9a-zA-Z_]*[a-zA-Z][0-9a-zA-Z_]*
# Filler tokens
ws ::= (sp | nl)*
sp ::= [ \t]*
nl ::= "\r"? "\n"
Create file Test.tla with contents:

---- MODULE Test ----
====
Compile llama.cpp (in either Debug or Release), then run:

./build/bin/test-gbnf-validator tlaplus-min.gbnf Test.tla

Observe the segfault.
Note: I am building on NixOS, so I ran:
nix develop ./flake.nix
cmake -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build
First Bad Commit

No response
Relevant log output

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff76a8625 in malloc () from /nix/store/pacbfvpzqz2mksby36awvbcn051zcji3-glibc-2.40-36/lib/libc.so.6
(gdb) backtrace
#0  0x00007ffff76a8625 in malloc () from /nix/store/pacbfvpzqz2mksby36awvbcn051zcji3-glibc-2.40-36/lib/libc.so.6
#1  0x00007ffff78bc95c in operator new(unsigned long) () from /nix/store/97f3gw9vpyxvwjv2i673isvg92q65mwn-gcc-13.3.0-lib/lib/libstdc++.so.6
#2  0x00007ffff7dd62d6 in std::__new_allocator<llama_grammar_element const*>::allocate (this=<optimized out>, __n=<optimized out>)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/new_allocator.h:126
#3  std::allocator_traits<std::allocator<llama_grammar_element const*> >::allocate (__n=<optimized out>, __a=...)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/alloc_traits.h:482
#4  std::_Vector_base<llama_grammar_element const*, std::allocator<llama_grammar_element const*> >::_M_allocate (this=<optimized out>, __n=<optimized out>)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/stl_vector.h:381
#5  std::_Vector_base<llama_grammar_element const*, std::allocator<llama_grammar_element const*> >::_M_allocate (__n=<optimized out>, this=<optimized out>)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/stl_vector.h:378
#6  std::vector<llama_grammar_element const*, std::allocator<llama_grammar_element const*> >::_M_realloc_insert<llama_grammar_element const* const&> (this=this@entry=0x7fffff7ff0b0, __position=0x0)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/vector.tcc:459
#7  0x00007ffff7dcf76e in std::vector<llama_grammar_element const*, std::allocator<llama_grammar_element const*> >::push_back (__x=@0x7fffff7ff0a0: 0x41eec0, this=0x7fffff7ff0b0)
    at /nix/store/xzfmarrq8x8s4ivpya24rrndqsq2ndiz-gcc-13.3.0/include/c++/13.3.0/bits/stl_vector.h:1292
#8  llama_grammar_advance_stack (rules=std::vector of length 16, capacity 16 = {...}, stack=std::vector of length 3, capacity 4 = {...}, new_stacks=std::vector of length 1, capacity 1 = {...})
    at /home/ahelwer/src/tlaplus/llm/llama.cpp/src/llama-grammar.cpp:714
#9  0x00007ffff7dcf60d in llama_grammar_advance_stack (rules=std::vector of length 16, capacity 16 = {...}, stack=std::vector of length 2, capacity 2 = {...}, new_stacks=std::vector of length 1, capacity 1 = {...})
    at /home/ahelwer/src/tlaplus/llm/llama.cpp/src/llama-grammar.cpp:716
#10 0x00007ffff7dcf60d in llama_grammar_advance_stack (rules=std::vector of length 16, capacity 16 = {...}, stack=std::vector of length 2, capacity 2 = {...}, new_stacks=std::vector of length 1, capacity 1 = {...})
    at /home/ahelwer/src/tlaplus/llm/llama.cpp/src/llama-grammar.cpp:716
#11 0x00007ffff7dcf60d in llama_grammar_advance_stack (rules=std::vector of length 16, capacity 16 = {...}, stack=std::vector of length 3, capacity 4 = {...}, new_stacks=std::vector of length 1, capacity 1 = {...})
    at /home/ahelwer/src/tlaplus/llm/llama.cpp/src/llama-grammar.cpp:716
Stacktrace continues for a very long time, seemingly indicating infinite recursion.
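The loop is consistent with what happens when a repetition body can match the empty string: sp ::= [ \t]* can succeed without consuming a character, so expanding ws ::= (sp | nl)* can re-derive ws without ever making progress. A toy reconstruction of that expansion (my own sketch, not llama.cpp's actual code or data structures; rule names match the grammar, terminals are simplified):

```python
# Toy model of eager grammar-stack expansion. Each rule maps to a list of
# alternatives; each alternative is a list of symbols. A plain string is a
# rule reference, a tuple is a terminal (character class).
GRAMMAR = {
    "ws":   [[], ["elem", "ws"]],          # (sp | nl)* as right recursion
    "elem": [["sp"], ["nl"]],
    "sp":   [[], [("char", " "), "sp"]],   # [ \t]* -- can match empty!
    "nl":   [[("char", "\n")]],
}

def advance(stack, depth=0, limit=400):
    """Expand the top of `stack` until a terminal surfaces, loosely
    mirroring llama_grammar_advance_stack. Returns the max expansion
    depth reached, capped at `limit` so the demo itself terminates."""
    if depth >= limit:
        return depth                       # give up: would recurse forever
    if not stack or isinstance(stack[-1], tuple):
        return depth                       # terminal on top (or empty): done
    rule = stack[-1]
    best = depth
    for alt in GRAMMAR[rule]:
        # Replace the rule with one alternative's body
        # (first symbol of the body ends up on top of the stack).
        best = max(best, advance(stack[:-1] + alt[::-1], depth + 1, limit))
    return best

# sp's empty alternative removes it without consuming input, so expansion
# cycles ws -> elem -> sp -> ws; we only stop because of the depth cap:
print(advance(["ws"]))   # prints 400 (the cap)
```

With the cap the demo stops; the real recursive expansion has no such cap, so the C++ call stack and the new_stacks vector grow until allocation fails.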
This slightly modified grammar does not segfault (note the changes to the ws and sp rules):

root ::= ws module ws
module ::= line sp "MODULE" sp name sp line ws dline
line ::= "-"{4,}
dline ::= "="{4,}
name ::= [0-9a-zA-Z_]*[a-zA-Z][0-9a-zA-Z_]*
# Filler tokens
ws ::= [ \t\r\n]*
sp ::= [ \t]*
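The reason this change helps, as I understand it: a repetition X* only guarantees progress if X itself cannot match the empty string. In the original grammar the body of ws, (sp | nl), is nullable because sp ::= [ \t]* is; in the modified grammar the body of ws is the character class [ \t\r\n], which always consumes a character. A small nullability check sketching the distinction (an illustration only, not llama.cpp's validator code; nl is simplified to a plain character class):

```python
def nullable(sym, rules, seen=frozenset()):
    """Can `sym` derive the empty string? Symbols are tagged tuples."""
    kind = sym[0]
    if kind == "char":                     # character class: consumes input
        return False
    if kind == "star":                     # X* matches empty by definition
        return True
    if kind == "alt":                      # any branch nullable -> nullable
        return any(nullable(s, rules, seen) for s in sym[1])
    if kind == "rule":                     # guard against reference cycles
        return sym[1] not in seen and nullable(rules[sym[1]], rules, seen | {sym[1]})

rules = {
    "sp": ("star", ("char", " \t")),       # sp ::= [ \t]*
    "nl": ("char", "\r\n"),                # simplified stand-in for nl
}

old_ws_body = ("alt", [("rule", "sp"), ("rule", "nl")])  # ws ::= (sp | nl)*
new_ws_body = ("char", " \t\r\n")                        # ws ::= [ \t\r\n]*

print(nullable(old_ws_body, rules))  # True  -> ws can loop without progress
print(nullable(new_ws_body, rules))  # False -> every iteration consumes input
```

So a validator that rejected (or rewrote) repetitions over nullable bodies would turn this segfault into a parse-time error.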