Issues: ollama/ollama
Showing issues updated in the last three days (updated:>2025-03-09).
#9679 Update docker base image to Ubuntu 24.04 LTS · feature request · opened Mar 12, 2025 by vrampal
#9678 Unusually high VRAM usage of Gemma 3 27B · bug · opened Mar 12, 2025 by vYLQs6
#9677 The installation-free version of Ollama cannot change the model installation path · bug · opened Mar 12, 2025 by lmh87883819
#9676 qwq:32b-fp16 model fails with EOF error during inference · bug · opened Mar 12, 2025 by mrhein
#9674 Error: POST predict: Post "http://127.0.0.1:62622/completion": read tcp 127.0.0.1:62627->127.0.0.1:62622: wsarecv: The remote host has closed a connection. · bug · opened Mar 12, 2025 by mswcap
#9670 Context Modification for Stop Extended Thinking Process · feature request · opened Mar 12, 2025 by 123gggwnnggg
#9666 When answering the question, there was no mention of the SYSTEM prompt in the Modelfile · bug · opened Mar 12, 2025 by Cooooder-zc
#9663 Please document when AMD iGPU support is planned · amd, feature request · opened Mar 11, 2025 by justincranford
#9659 Compatibility with new OpenAI responses API · feature request · opened Mar 11, 2025 by pamelafox
#9658 API to get performance status/information about GPU/CPU of instance · feature request · opened Mar 11, 2025 by trollkarlen
#9652 Hard coding to not use cache · feature request · opened Mar 11, 2025 by VistritPandey
#9647 Phi4 14b with tool calling and full quantization · model request · opened Mar 11, 2025 by andrea-tomassi-sharelock
#9639 Unsupported Value NaN in Ollama log · bug · opened Mar 11, 2025 by satya-devloper
#9633 Support for AMD 9000 GPUs · feature request · opened Mar 10, 2025 by gergob
#9632 Ollama not streaming tool calling responses · bug · opened Mar 10, 2025 by fireblade2534
#9628 mistral-small:24B chat template · feature request · opened Mar 10, 2025 by logkn
#9627 removing <think> tag as an option · feature request · opened Mar 10, 2025 by Shahin-rmz
#9625 one model loaded multiple times hogging whole available memory · bug, gpu, nvidia · opened Mar 10, 2025 by tendermonster