chore: bump llama.cpp to support tool streaming by p5 · Pull Request #1438 · containers/ramalama · GitHub

chore: bump llama.cpp to support tool streaming #1438


Merged
merged 2 commits into containers:main from bump-llama.cpp on May 27, 2025

Conversation

Contributor
@p5 commented May 26, 2025

Closes #1431

Bumps llama.cpp to the commit sha in https://github.com/ggml-org/llama.cpp/releases/tag/b5499

These commits include ggml-org/llama.cpp#12379, plus all fixes related to this in order to support running AI code assistants like Codex.

While I was able to call Codex and get semi-sane responses from it after building the CUDA container, I am not familiar enough with it to demonstrate an AI assistant doing its magic.
(screenshot attached)

And apologies for the unrelated changes - my IDE decided it wanted to format the code too. I checked through these and they don't appear to be functionally different; they just switch to a consistent number of spaces within the script.
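For context, the kind of request that previously failed with "Cannot use tools with stream" looks roughly like the sketch below. It targets the OpenAI-compatible chat completions endpoint that llama.cpp's server exposes; the port, model name, and tool definition are illustrative placeholders, not values taken from this PR or from ramalama's configuration.

# Hedged sketch of a streaming tool-call request; host, port, model name and
# the get_weather tool are placeholders, not part of this PR.
curl -sN http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "any-local-model",
    "stream": true,
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'

With the bumped llama.cpp, a request shaped like this should return streamed chunks that can include tool-call deltas instead of being rejected outright.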

Summary by Sourcery

Bump llama.cpp to the latest commit to enable tool streaming support and update the build script indentation

New Features:

  • Enable streaming support in llama.cpp backend

Enhancements:

  • Upgrade llama.cpp SHA to include recent fixes for AI code assistant integration

Chores:

  • Reformat build_llama_and_whisper.sh for consistent indentation

Signed-off-by: Robert Sturla <robertsturla@outlook.com>
Contributor
sourcery-ai bot commented May 26, 2025

Reviewer's Guide

This PR updates the llama.cpp clone target to a newer commit that supports tool streaming (including PR #12379) and reapplies consistent formatting across the build_llama_and_whisper.sh script to standardize indentation, remove extraneous semicolons, and align multiline arrays.

Sequence diagram for conceptual tool streaming with updated llama.cpp

sequenceDiagram
    actor User
    participant OllamaService as "Ollama Service\n(with updated llama.cpp)"
    participant LlamaCppInternal as "llama.cpp (b5499)"
    participant ExternalTool as "External Tool\n(e.g., Codex)"

    User->>OllamaService: Prompt requiring tool use
    OllamaService->>LlamaCppInternal: Process prompt
    LlamaCppInternal-->>ExternalTool: Call Tool API (e.g., code interpreter)
    ExternalTool-->>LlamaCppInternal: Tool Response
    LlamaCppInternal->>OllamaService: Formatted response incorporating tool output
    OllamaService->>User: Final Response

File-Level Changes

Change: Bump llama.cpp clone SHA to include streaming support
  • Updated the local llama_cpp_sha variable to the new commit
  • Re-ran clone_and_build_llama_cpp with the updated SHA (a sketch of this pinning step follows below)
  Files: container-images/scripts/build_llama_and_whisper.sh

Change: Standardize formatting in build_llama_and_whisper.sh
  • Re-indented multiline array literals for rpm lists and flags
  • Removed trailing backslashes and semicolons
  • Normalized spacing in function bodies and case blocks
  Files: container-images/scripts/build_llama_and_whisper.sh
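The pinning step itself is not shown on this page, so the following is only a minimal sketch assuming a llama_cpp_sha variable and a clone_and_build_llama_cpp helper as described above; the SHA placeholder, repository layout, and CMake flags are assumptions rather than the actual contents of container-images/scripts/build_llama_and_whisper.sh.

#!/bin/bash
# Minimal sketch of pinning llama.cpp to a fixed commit before building.
# The SHA below is a placeholder, NOT the real commit from the b5499 release.
set -euo pipefail

llama_cpp_sha="<commit-sha-from-b5499>"

clone_and_build_llama_cpp() {
  git clone https://github.com/ggml-org/llama.cpp
  cd llama.cpp
  # Check out the pinned commit so the container build is reproducible.
  git checkout "$llama_cpp_sha"
  # Configure and build out of tree; the flags here are illustrative only.
  cmake -B build -DCMAKE_BUILD_TYPE=Release
  cmake --build build -j "$(nproc)"
  cd ..
}

clone_and_build_llama_cpp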

Assessment against linked issues

Issue: #1431
Objective: Enable the use of tools in streaming mode, resolving the error "Cannot use tools with stream".

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.


Contributor
sourcery-ai bot left a comment


Hey @p5 - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟡 General issues: 1 issue found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@@ -52,7 +52,7 @@ jobs:
       sudo rm -rf \
         /usr/share/dotnet /usr/local/lib/android /opt/ghc \
         /usr/local/share/powershell /usr/share/swift /usr/local/.ghcup \
-        /usr/lib/jvm || true
+        /usr/lib/jvm /opt/hostedtoolcache/CodeQL || true
Contributor Author
@p5 May 26, 2025

Added a potential fix to the storage issues in the runner.
Removing CodeQL (which is only used when you explicitly call it) frees up an additional 5GB.
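As a rough sanity check of that kind of saving, one can compare free space before and after the cleanup step on the runner; this is only an illustrative snippet, not part of the PR's diff.

df -h /                                   # free space before cleanup
sudo rm -rf /opt/hostedtoolcache/CodeQL   # the CodeQL bundle removed in the diff above (~5GB per the comment)
df -h /                                   # free space after cleanup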

Member

Thanks for figuring this out @p5 !

Contributor Author

Unfortunately it didn't work. I'm unsure whether it helped at all, or whether it's freeing up space on the wrong disk.

Contributor Author

The newest commit frees up an additional 8GB of storage, which I'm hoping is sufficient.
If not, the next option is probably to matrix these builds into separate jobs.

Signed-off-by: Robert Sturla <robertsturla@outlook.com>
@p5 p5 force-pushed the bump-llama.cpp branch from 57d5fdd to b3adc74 Compare May 27, 2025 09:00
Member
@rhatdan commented May 27, 2025

LGTM
This will not be released until June 1.

@rhatdan rhatdan merged commit b7d45f4 into containers:main May 27, 2025
15 of 17 checks passed
Contributor Author
@p5 commented May 27, 2025

Awesome, thank you!

Development

Successfully merging this pull request may close these issues.

[RFE] Support for tools in streaming mode
3 participants