
Conversation

@mudler (Owner) commented Nov 2, 2025

Description

This PR makes llama.cpp's ctx_shift and cache-ram settings configurable through model options:

options:
- context_shift:false
- cache_ram:-1
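
For context, a minimal sketch of where these entries sit in a full LocalAI model definition. The model name, backend label, and weights file below are illustrative assumptions, not part of this PR, and the cache_ram comment assumes llama.cpp's convention that -1 lifts the RAM cap:

```yaml
# Minimal sketch of a LocalAI model YAML (placeholders marked as such).
name: my-llama            # hypothetical model name
backend: llama-cpp        # llama.cpp backend (label assumed)
parameters:
  model: my-model.gguf    # hypothetical weights file
options:
- context_shift:false     # disable context shifting (added by this PR)
- cache_ram:-1            # cache RAM limit; -1 assumed to mean "no limit" (added by this PR)
```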

Notes for Reviewers

Signed commits

  • Yes, I signed my commits.

netlify bot commented Nov 2, 2025

Deploy Preview for localai ready!

Latest commit: 915afa5
Latest deploy log: https://app.netlify.com/projects/localai/deploys/69075cec8af89c0008619c03
Deploy Preview: https://deploy-preview-7009--localai.netlify.app

mudler force-pushed the feat/llama-cpp-cache-ram-context_shift branch from e8bf655 to cc63f7b on November 2, 2025 09:15
Signed-off-by: Ettore Di Giacinto <[email protected]>
mudler merged commit 424acd6 into master on Nov 2, 2025 (37 checks passed)
mudler deleted the feat/llama-cpp-cache-ram-context_shift branch on November 2, 2025 16:33
mudler added the "enhancement" label on Nov 17, 2025