Add Verbose Logging Support to Diagnose Performance Issues · Issue #19 · abetlen/llama-cpp-python · GitHub
Closed
@luminalle

Description

@luminalle

Sorry, this might be totally the wrong place to open this issue. Feel free to close it.

Anyway, I'm working with a 3rd party project* that uses your awesome wrapper, and I'm having problems there, which brings me back here. Everything seems to be working, but not at the speed I expect after using plain llama.cpp. With some prompts it even seems to freeze completely, never finishing the task. Could I somehow raise this wrapper's logging level to make it more verbose, so I could watch its progress in real time?

* https://github.com/hwchase17/langchain
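(Editor's note: a hedged workaround sketch. At the time of this issue the wrapper exposed no logging hooks; newer releases of llama-cpp-python do accept a `verbose` flag on the `Llama` constructor, which is worth checking first. Failing that, you can time each generation call yourself with the standard `logging` module to see in real time whether a call is slow or genuinely stuck. `generate_fn` below is a stand-in for whatever callable you invoke, e.g. a `Llama` instance; it is an assumption, not part of the wrapper's API.)

```python
import logging
import time

# Emit DEBUG-level messages with timestamps so stalls are visible live.
logging.basicConfig(level=logging.DEBUG,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("llama_diag")

def timed_call(generate_fn, *args, **kwargs):
    """Run generate_fn, logging wall-clock time around the call.

    generate_fn is any callable (hypothetically, a llama-cpp-python
    Llama instance or LangChain LLM call); this helper only adds timing.
    """
    start = time.perf_counter()
    log.debug("call started: args=%r", args)
    result = generate_fn(*args, **kwargs)
    log.debug("call finished in %.2fs", time.perf_counter() - start)
    return result

# Usage with a trivial stand-in function instead of a real model:
print(timed_call(lambda prompt: prompt.upper(), "hello"))
```

If a prompt never produces the "call finished" line, the hang is inside the wrapped call itself rather than in LangChain's orchestration, which narrows the search considerably.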

Metadata


Assignees

No one assigned

    Labels

    enhancement (New feature or request)
