Add additional verbose logs for cache · IntrinsicLabsAI/llama-cpp-python@f27393a · GitHub

Commit f27393a

Add additional verbose logs for cache
1 parent 4cefb70 commit f27393a

File tree

1 file changed: +4 −0 lines changed


llama_cpp/server/app.py

Lines changed: 4 additions & 0 deletions
@@ -119,8 +119,12 @@ def create_app(settings: Optional[Settings] = None):
     )
     if settings.cache:
         if settings.cache_type == "disk":
+            if settings.verbose:
+                print(f"Using disk cache with size {settings.cache_size}")
             cache = llama_cpp.LlamaDiskCache(capacity_bytes=settings.cache_size)
         else:
+            if settings.verbose:
+                print(f"Using ram cache with size {settings.cache_size}")
             cache = llama_cpp.LlamaRAMCache(capacity_bytes=settings.cache_size)
 
         cache = llama_cpp.LlamaCache(capacity_bytes=settings.cache_size)
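The branch this commit instruments can be sketched in isolation. The `Settings` dataclass and `pick_cache` helper below are simplified stand-ins for the server's Pydantic `Settings` and the `llama_cpp` cache construction, not the actual implementation; they only illustrate the verbose-logging pattern the diff adds:

```python
from dataclasses import dataclass


@dataclass
class Settings:
    # Simplified stand-in for the server's Settings; field names mirror the diff.
    cache: bool = True
    cache_type: str = "ram"   # "disk" or "ram"
    cache_size: int = 2 << 30  # capacity in bytes
    verbose: bool = True


def pick_cache(settings: Settings) -> str:
    """Mirror the commit's control flow: when verbose, log which cache
    type is being created before constructing it."""
    if not settings.cache:
        return "none"
    if settings.cache_type == "disk":
        if settings.verbose:
            print(f"Using disk cache with size {settings.cache_size}")
        return "disk"
    else:
        if settings.verbose:
            print(f"Using ram cache with size {settings.cache_size}")
        return "ram"
```

In the real server, the return value would instead be a `LlamaDiskCache` or `LlamaRAMCache` instance built with `capacity_bytes=settings.cache_size`.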
