
Restores previous llama.cpp jinja behavior#2422

Merged
olliewalsh merged 1 commit into containers:main from ramalama-labs:bug/template-default-0.17.0
Feb 13, 2026

Conversation

@ieaves
Collaborator

@ieaves ieaves commented Feb 12, 2026

llama.cpp changed the default llama-serve --jinja settings in December 2025: ggml-org/llama.cpp#17911

This causes a regression when using the 0.17.0 base images for some models like hf://ggml-org/SmolVLM-500M-Instruct-GGUF:

While executing For at line 1, column 162 in source:
...}{{':'}}{% else %}{{': '}}{% endif %}{% for line in message['content'] %}{% if l...
^
Error: Expected iterable or object type in for loop: got String
srv init: init: please consider disabling jinja via --no-jinja, or use a custom chat template via --chat-template
srv init: init: for example: --no-jinja --chat-template chatml
srv operator(): operator(): cleaning up before exit...
main: exiting due to model loading error

This PR explicitly sets our default jinja settings.
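The actual change lives in the YAML engine spec, but the intended flag selection can be sketched in Python (the helper name is hypothetical; the real logic is in inference-spec/engines/llama.cpp.yaml):

```python
def jinja_flags(mmproj_path=None):
    """Pick the explicit jinja flag to pass to llama-server.

    Multimodal models (those shipping an mmproj projector) ran without
    jinja templating under the old llama.cpp defaults, so they get
    --no-jinja; everything else keeps --jinja.
    """
    return ["--no-jinja"] if mmproj_path else ["--jinja"]
```

Passing one of the two flags unconditionally means the served command no longer depends on whichever default the bundled llama.cpp happens to ship.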

Summary by Sourcery

Set explicit Jinja defaults for llama.cpp, disabling Jinja for multimodal models to restore compatibility with impacted base images.

Enhancements:

  • Add a --no-jinja flag for models with mmproj_path to align multimodal behavior with previous llama.cpp defaults.

Tests:

  • Update factory command expectation to include --no-jinja for multimodal models, ensuring the new default is covered by tests.

Signed-off-by: Ian Eaves <ian.k.eaves@gmail.com>
@sourcery-ai
Contributor

sourcery-ai bot commented Feb 12, 2026


Reviewer's Guide

Explicitly restores previous llama.cpp Jinja behavior by enabling Jinja only for non-multimodal models and disabling it for multimodal ones, and updates the test command expectations accordingly.

File-Level Changes

Change: Restore llama-serve Jinja default behavior by toggling Jinja flags based on multimodal support.
  • Add a --no-jinja command option that is used when a model has an mmproj_path (multimodal)
  • Keep the existing --jinja option but restrict it to models without an mmproj_path (non-multimodal)
Files: inference-spec/engines/llama.cpp.yaml, test/unit/command/data/engines/llama.cpp.yaml

Change: Update factory tests to reflect the new CLI flags for multimodal models.
  • Adjust the expected llama-server command string for models with mmproj to include the --no-jinja flag before --no-warmup
Files: test/unit/command/test_factory.py
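As a rough illustration of the expectation the updated test encodes (function and argument names here are hypothetical; see test/unit/command/test_factory.py for the real assertion), --no-jinja should appear before --no-warmup in the command built for a multimodal model:

```python
def build_llama_server_cmd(model_path, mmproj_path=None):
    # Hypothetical sketch of the command factory's flag ordering.
    cmd = ["llama-server", "--model", model_path]
    if mmproj_path:
        # Multimodal: restore the pre-December-2025 llama.cpp default.
        cmd += ["--mmproj", mmproj_path, "--no-jinja"]
    else:
        cmd += ["--jinja"]
    cmd.append("--no-warmup")
    return cmd

cmd = build_llama_server_cmd("model.gguf", mmproj_path="mmproj.gguf")
assert cmd.index("--no-jinja") < cmd.index("--no-warmup")
```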

Possibly linked issues

  • #: They fix the same jinja flag issue: ensuring llama-server uses correct --jinja/--no-jinja for chat templates.

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@gemini-code-assist
Contributor

Summary of Changes

Hello @ieaves, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a regression introduced by a recent llama.cpp update that altered default Jinja templating behavior. By explicitly setting the --jinja or --no-jinja flag based on whether a model is multimodal, it restores the expected functionality and prevents errors during model loading, particularly for models like SmolVLM-500M-Instruct-GGUF.

Highlights

  • Jinja Behavior Restoration: Explicitly configured Jinja templating behavior for llama.cpp models to prevent regressions caused by recent upstream changes.
  • Multimodal Model Support: Introduced the --no-jinja flag for multimodal models to correctly handle their chat templates and avoid errors.
  • Non-Multimodal Model Consistency: Ensured that non-multimodal models continue to utilize the --jinja flag as expected.
  • Unit Test Updates: Updated relevant unit tests to reflect the new conditional Jinja configuration logic for llama.cpp commands.


Changelog
  • inference-spec/engines/llama.cpp.yaml
    • Added a conditional --no-jinja command line argument for llama.cpp when model.mmproj_path is present.
  • test/unit/command/data/engines/llama.cpp.yaml
    • Added a conditional --no-jinja command line argument for llama.cpp in test data when model.mmproj_path is present.
  • test/unit/command/test_factory.py
    • Modified a test case to include --no-jinja in the expected command string when has_mmproj is true.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a regression caused by a recent change in llama.cpp that enabled Jinja templating by default. To restore the previous behavior for multimodal models, this PR introduces logic to explicitly pass the --no-jinja flag when a model has an mmproj_path. The changes are correctly implemented in the engine specification file, and the unit tests have been updated to reflect and validate this new conditional logic. The fix is straightforward and effectively resolves the issue.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've reviewed your changes and they look great!



@rhatdan
Member

rhatdan commented Feb 13, 2026

@engelmi @olliewalsh PTAL

Collaborator

@olliewalsh olliewalsh left a comment


LGTM

@olliewalsh
Collaborator

I expect the macOS e2e failure is caused by the CLI output changing in Python 3.14.3 (python/cpython#75949)

@olliewalsh olliewalsh merged commit 9a42a38 into containers:main Feb 13, 2026
58 of 61 checks passed