Add config option for controlling Ollama think parameter #146000

Merged
merged 2 commits into home-assistant:dev on Jun 4, 2025

Conversation

@ViViDboarder (Contributor) commented Jun 1, 2025

Proposed change

Allows enabling or disabling thinking for supported models. Neither option will display thinking content in the chat. Future support for displaying think content will require frontend changes for formatting.
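
To make the option concrete, here is a minimal sketch (not the integration's code) of what it toggles, assuming the official `ollama` Python client 0.5.0 or newer, which added the `think` keyword; the model name and host are illustrative:

```python
# Minimal sketch, not the integration's code: toggling Ollama's think
# parameter with the official ollama Python client (>= 0.5.0).
# The model name and host are illustrative.
from ollama import Client

client = Client(host="http://localhost:11434")

response = client.chat(
    model="qwen3",  # any model with thinking support
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    think=True,  # set to False to disable thinking on supported models
)

# With think enabled, the reasoning comes back in a separate field,
# so a chat UI can show only the final answer.
print(response.message.thinking)  # reasoning tokens (not shown in the chat)
print(response.message.content)   # the final reply
```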

I verified that with an old version of Ollama the request still succeeds; however, the think tags are not filtered.
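
On those older servers the reasoning arrives inline as `<think>…</think>` tags in the message content, so filtering would have to happen client-side. A hedged sketch of what that could look like; `strip_think_tags` is a hypothetical helper, not code from this PR:

```python
# Hedged sketch: older Ollama servers ignore the think parameter and
# emit reasoning inline as <think>...</think> tags. A client that wants
# a clean reply has to strip them itself. Hypothetical helper.
import re

_THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think_tags(content: str) -> str:
    """Remove inline <think> blocks left by servers without think support."""
    return _THINK_TAG_RE.sub("", content)


print(strip_think_tags("<think>2 plus 2 is 4</think>4"))  # prints "4"
```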

Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New integration (thank you!)
  • New feature (which adds functionality to an existing integration)
  • Deprecation (breaking change to happen in the future)
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code or addition of tests

Additional information

Checklist

  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass
  • There is no commented out code in this PR.
  • I have followed the development checklist
  • I have followed the perfect PR recommendations
  • The code has been formatted using Ruff (ruff format homeassistant tests)
  • Tests have been added to verify that the new code works.

If user exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

  • The manifest file has all fields filled out correctly.
    Updated and included derived files by running: python3 -m script.hassfest.
  • New or updated dependencies have been added to requirements_all.txt.
    Updated by running python3 -m script.gen_requirements_all.
  • For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.

To help with the load of incoming pull requests:

home-assistant bot commented Jun 1, 2025

Hey there @synesthesiam, mind taking a look at this pull request as it has been labeled with an integration (ollama) you are listed as a code owner for? Thanks!

Code owner commands

Code owners of ollama can trigger bot actions by commenting:

  • @home-assistant close Closes the pull request.
  • @home-assistant rename Awesome new title Renames the pull request.
  • @home-assistant reopen Reopen the pull request.
  • @home-assistant unassign ollama Removes the current integration label and assignees on the pull request, add the integration domain after the command.
  • @home-assistant add-label needs-more-information Add a label (needs-more-information, problem in dependency, problem in custom component) to the pull request.
  • @home-assistant remove-label needs-more-information Remove a label (needs-more-information, problem in dependency, problem in custom component) on the pull request.

@allenporter (Contributor) left a comment

Please send the dependency bump first in a separate PR.

home-assistant bot marked this pull request as draft June 2, 2025 04:26
home-assistant bot commented Jun 2, 2025

Please take a look at the requested changes, and use the Ready for review button when you are done, thanks 👍

Learn more about our pull request process.

@allenporter (Contributor) commented

Appreciate you sending this, thank you. Happy if you want to tag me in the separate bump PR

@ViViDboarder (Contributor, Author) commented

Split and rebased on the other PR; will do it again and remove from draft when the bump is merged.

Allows enabling or disabling thinking for supported models. Neither option
will display thinking content in the chat. Future support for displaying
think content will require frontend changes for formatting.
ViViDboarder marked this pull request as ready for review June 2, 2025 21:02
home-assistant bot requested a review from allenporter June 2, 2025 21:02
@ViViDboarder (Contributor, Author) commented

Now that the dependency bump is merged, I've rebased this.

@allenporter (Contributor) left a comment

Looks great. I believe this needs (1) strings in the translation files for these new options (2) Documentation PR to update https://www.home-assistant.io/integrations/ollama/ via https://github.com/home-assistant/home-assistant.io
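
For reference, a hypothetical sketch (not the PR's actual code) of how a boolean option like this is commonly wired into an options-flow schema; `CONF_THINK` and the default value are assumptions. Each option key then needs a matching label in the integration's `strings.json`, which is what point (1) refers to:

```python
# Hypothetical sketch, not the PR's actual code: exposing a boolean
# "think" option in an options-flow schema. CONF_THINK and the default
# value are assumptions for illustration.
import voluptuous as vol

from homeassistant.helpers.selector import BooleanSelector

CONF_THINK = "think"  # assumed option key

OPTIONS_SCHEMA = vol.Schema(
    {
        vol.Optional(CONF_THINK, default=False): BooleanSelector(),
    }
)
```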

@ViViDboarder (Contributor, Author) commented

I will make the docs PR now, but is there anything I need to do for the strings? The docs suggest that this is automatic after merging to dev? https://developers.home-assistant.io/docs/internationalization/core#introducing-new-strings

@ViViDboarder (Contributor, Author) commented

Oh! My strings file got lost somewhere. I'll add that back.

@allenporter (Contributor) left a comment

Awesome, thank you for this. I am very excited to eval Qwen 3 and have it not take 30 seconds per request!

@allenporter merged commit e3f7e57 into home-assistant:dev Jun 4, 2025
34 checks passed
Development

Successfully merging this pull request may close these issues.

Thinking tokens are present in the Ollama output
3 participants