Description
Prerequisites
Please answer the following questions for yourself before submitting an issue.
- I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- I carefully followed the README.md.
- I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- I reviewed the Discussions, and have a new bug or useful enhancement to share.
Expected Behavior
Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python
to do.
I have installed the latest source (commit 165b4dc). To minimize confusion, I compiled with CPU-only support:
$ CMAKE_ARGS='-DLLAMA_CUDA=OFF' pip install -e .
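As a quick sanity check that the editable install is the one actually being picked up (llama_cpp exposes a `__version__` string):

```python
# Confirm which llama-cpp-python build is being imported.
import llama_cpp
print(llama_cpp.__version__)
```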
I then attempted to build a minimal servlet of my own:
```python
#!/usr/bin/env python3
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import json
from typing import Dict, Any

from llama_cpp import Llama
from llama_cpp import llama_get_timings   # unused below
from llama_cpp import llama_reset_timings  # unused below

config = json.load(open("config.json"))
# config.json:
# {
#   "model_path": "/models/llama-3-8b-instruct-f16.gguf",
#   "chat_format": "chatml",
#   "ngl": -1,
#   "n_gpu_layers": -1,
#   "n_ctx": 32000,
#   "parallel": 2,
#   "temperature": 0.3,
#   "verbose": true
# }
# (config.json also contains the "analyze_messages" list used below; omitted here)

llm = Llama(
    model_path=config['model_path'],
    chat_format=config['chat_format'],
    ngl=config['ngl'],
    n_gpu_layers=config['n_gpu_layers'],
    n_ctx=config['n_ctx'],
    parallel=config['parallel'],
    verbose=config['verbose']
)

class RequestData(BaseModel):
    prompt: str
    json_schema: Dict[str, Any]
    contact: str

app = FastAPI()

@app.post("/analyze")
def analyze_request(data: RequestData):
    user_message = f"{data.contact}\n\n{data.prompt}"
    messages = config['analyze_messages']
    messages[1]['content'] = user_message  # note: mutates the shared config dict
    result = llm.create_chat_completion(
        messages=messages,
        response_format=data.json_schema,
        temperature=config['temperature'],
        stream=False
    )
    json_result = json.loads(result['choices'][0]['message']['content'])
    return json_result

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
I run this server with:
$ uvicorn main:app --host 0.0.0.0 --port 9093 --workers 1
Then I call it via curl with:
$ curl -s -X POST -d @analyze_request.json -H "content-type: application/json" -H "accept: application/json" http://localhost:9093/analyze
where my request looks like:
```json
{
  "contact": "My name is John Doe and I'm 32 years old and my birthday is Feb 18th, 1994. I was born in New York City, USA.",
  "prompt": "Please analyze the contact and produce and tell me the first name and birthplace of the individual. Respond in JSON, with the keys first_name, last_name, birthplace.",
  "json_schema": {
    "type": "json_object",
    "properties": {
      "first_name": {
        "type": "string"
      },
      "birthplace": {
        "type": "string"
      },
      "last_name": {
        "type": "string"
      }
    }
  }
}
```
The first couple of requests work fine and return the expected result. However, inevitably, after 3, 5, 15, or sometimes 40 requests, the servlet simply hangs. What am I missing here?
Current Behavior
Please provide a detailed written description of what llama-cpp-python
did, instead.
After 3, 5, 15, or 30 requests (the count differs each run), the server simply stops responding and the final request hangs. I have tried switching the GPU on and off, imitating the async and run_in_threadpool setup in the main llama_cpp_python server library, resetting the context, and a variety of other things, all to no avail. I assume I must need to reset something, but I cannot figure out what.
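For reference, the locking variant I tried looked roughly like this (a sketch from memory, reusing the llm, config, and RequestData definitions above; the lock placement is my own guess at serializing access to the single Llama instance, not taken verbatim from the server lib):

```python
import threading

llm_lock = threading.Lock()  # serialize all access to the single Llama instance

@app.post("/analyze")
def analyze_request(data: RequestData):
    user_message = f"{data.contact}\n\n{data.prompt}"
    messages = config['analyze_messages']
    messages[1]['content'] = user_message
    with llm_lock:  # only one request may call into llama.cpp at a time
        result = llm.create_chat_completion(
            messages=messages,
            response_format=data.json_schema,
            temperature=config['temperature'],
            stream=False
        )
    return json.loads(result['choices'][0]['message']['content'])
```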
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
- Physical (or virtual) hardware you are using, e.g. for Linux:
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Vendor ID: AuthenticAMD
Model name: AMD Ryzen Threadripper PRO 5975WX 32-Cores
CPU family: 25
Model: 8
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 1
Stepping: 2
Frequency boost: enabled
CPU max MHz: 7006.6401
CPU min MHz: 1800.0000
BogoMIPS: 7186.68
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization features:
Virtualization: AMD-V
Caches (sum of all):
L1d: 1 MiB (32 instances)
L1i: 1 MiB (32 instances)
L2: 16 MiB (32 instances)
L3: 128 MiB (4 instances)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-63
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: Not affected
L1tf: Not affected
Mds: Not affected
Meltdown: Not affected
Mmio stale data: Not affected
Retbleed: Not affected
Spec rstack overflow: Vulnerable: Safe RET, no microcode
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Srbds: Not affected
Tsx async abort: Not affected
- Operating System, e.g. for Linux:
$ uname -a
Linux joe-llm 6.5.0-35-generic #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
- SDK version, e.g. for Linux:
$ python3 --version
Python 3.10.12
$ make --version
GNU Make 4.3
Built for x86_64-pc-linux-gnu
$ g++ --version
g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
- Run the provided mini servlet
- Execute the example request with the example request body
- Run the same request 100 times in a row (see the loop sketch below)
- Wait until it stalls.
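A minimal Python equivalent of that repeated-request loop (my own sketch; it assumes the server above is listening on port 9093 and that analyze_request.json is the body shown earlier):

```python
#!/usr/bin/env python3
# Repro driver sketch: send the same request repeatedly and print the
# iteration number so the stall point is visible.
import urllib.request

with open("analyze_request.json", "rb") as f:
    body = f.read()

for i in range(100):
    req = urllib.request.Request(
        "http://localhost:9093/analyze",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        print(i, resp.status, len(resp.read()))
```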
Any hints about what I might be doing wrong here or ideas about how to overcome this issue would be greatly appreciated!