Closed
Description
Sorry, this might be totally the wrong place to open this issue. Feel free to close.
Anyway, I'm working with a 3rd party project* that uses your awesome wrapper, and I'm having problems there, which brings me back here. Everything seems to be working, but not at the speed I'd expect after using plain llama.cpp. With some prompts it even seems to freeze completely, never finishing the task. Could I somehow raise this wrapper's logging level to make it more verbose, so I could watch it work in real time?