Stack trace on API server · Issue #1631 · abetlen/llama-cpp-python · GitHub
Stack trace on API server #1631
Closed
@shamitv

Description

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

The server should bring up the API without any errors.

Current Behavior

The server generates the following stack trace:

Exception ignored in: <function Llama.__del__ at 0x7771013582c0>
Traceback (most recent call last):
  File "..llama-cpp-python/llama_cpp/llama.py", line 2089, in __del__
    if self._lora_adapter is not None:
       ^^^^^^^^^^^^^^^^^^
AttributeError: 'Llama' object has no attribute '_lora_adapter'

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.

  • Physical (or virtual) hardware you are using, e.g. for Linux:

Linux PC (Intel)

  • Operating System, e.g. for Linux:

Linux i3tiny1 6.8.0-39-generic #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 21:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

  • SDK version, e.g. for Linux:

Python 3.12.3

Failure Information (for bugs)

The stack trace is generated while bringing up the API server.

Root cause:

Traceback (most recent call last):
  File ".../llama-cpp-python/llama_cpp/llama.py", line 2089, in __del__
    if self._lora_adapter is not None:
       ^^^^^^^^^^^^^^^^^^

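This matches a common Python failure pattern: __del__ reads an attribute that __init__ assigns only on some code paths (here, presumably only when a LoRA adapter is requested). A minimal sketch of that pattern, using a hypothetical stand-in class rather than the actual llama_cpp source:

```python
# Hypothetical stand-in class illustrating the suspected failure pattern;
# names and structure are assumptions, not the real llama_cpp code.

class Example:
    def __init__(self, lora_path=None):
        # The attribute is bound only when a LoRA adapter is requested;
        # the plain server-startup path never assigns it.
        if lora_path is not None:
            self._lora_adapter = object()  # stand-in for the real adapter

    def __del__(self):
        # Raises AttributeError when __init__ took the branch that skipped
        # the assignment; Python reports it as "Exception ignored in ...".
        if self._lora_adapter is not None:
            pass  # adapter cleanup would happen here


obj = Example()  # no LoRA path, as in the reproduction command below
del obj          # prints: Exception ignored in: <function Example.__del__ ...>
```
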
Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

Run the API server:

export MODEL_FILE=...models/microsoft/Phi-3-mini-4k-instruct_Q4_K_M.gguf

python3 -m llama_cpp.server --n_threads -1 --n_ctx 8096 --model $MODEL_FILE

Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python -m pip install .
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp

Failure Logs

Exception ignored in: <function Llama.__del__ at 0x7771013582c0>
Traceback (most recent call last):
  File "/home/shamit/proj/llama-cpp-python/llama_cpp/llama.py", line 2089, in __del__
    if self._lora_adapter is not None:
       ^^^^^^^^^^^^^^^^^^
AttributeError: 'Llama' object has no attribute '_lora_adapter'
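
A possible direction for a fix, sketched under the assumption that the attribute is simply never bound on this code path (a suggestion, not the project's committed patch): initialize _lora_adapter to None unconditionally in __init__, or read it defensively in __del__:

```python
def __del__(self):
    # Defensive variant: getattr() tolerates _lora_adapter never having
    # been assigned, so object teardown no longer raises AttributeError.
    if getattr(self, "_lora_adapter", None) is not None:
        ...  # adapter cleanup would go here
```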
