8000 fix: Avoid thread starvation on many concurrent requests by making use of asyncio to lock llama_proxy context by gjpower · Pull Request #1798 · abetlen/llama-cpp-python · GitHub

Merged

Merge branch 'main' into fix/server_llama_call_thread_starvation

9e0728b
