
Commit 81d6ed3

feat: support per-model overrides in llama.cpp load() (#5820)
* feat: support per-model overrides in llama.cpp load()

  Extend the `load()` method in the llama.cpp extension to accept an optional `overrideSettings` argument, allowing fine-grained per-model configuration. This enables users to override provider-level settings such as `ctx_size`, `chat_template`, `n_gpu_layers`, etc., when loading a specific model.

  Fixes: #5818 (Feature Request - Jan v0.6.6)

  Use cases enabled:
  - Different context sizes per model (e.g., 4K vs 32K)
  - Model-specific chat templates (ChatML, Alpaca, etc.)
  - Performance tuning (threads, GPU layers)
  - Better memory management per deployment

  Maintains full backward compatibility with the existing provider config.

* swap overrideSettings and isEmbedding argument
1 parent bc4fe52 commit 81d6ed3

File tree

1 file changed: +2 -1 lines changed
  • extensions/llamacpp-extension/src


extensions/llamacpp-extension/src/index.ts

Lines changed: 2 additions & 1 deletion
@@ -764,6 +764,7 @@ export default class llamacpp_extension extends AIEngine {
 
   override async load(
     modelId: string,
+    overrideSettings?: Partial<LlamacppConfig>,
     isEmbedding: boolean = false
   ): Promise<SessionInfo> {
     const sInfo = this.findSessionByModel(modelId)
@@ -778,7 +779,7 @@
       )
     }
     const args: string[] = []
-    const cfg = this.config
+    const cfg = { ...this.config, ...(overrideSettings ?? {}) }
     const [version, backend] = cfg.version_backend.split('/')
     if (!version || !backend) {
      throw new Error(
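
For readers skimming the diff, here is a minimal usage sketch of the extended signature. It assumes an `engine` handle to the llama.cpp extension instance and invented model IDs and values; only the argument order (`modelId`, `overrideSettings`, `isEmbedding`) and the example fields `ctx_size`, `chat_template`, and `n_gpu_layers` are taken from the commit above.

```ts
// Hypothetical handle to the loaded llama.cpp extension; how it is obtained
// depends on the Jan extension host and is not part of this commit.
declare const engine: llamacpp_extension

async function demo() {
  // Per-model override: large context window and more GPU layers for one model.
  const coder = await engine.load('model-a', {
    ctx_size: 32768,
    n_gpu_layers: 40,
  })

  // Different overrides for another model; whether chat_template takes a
  // template name or a full template string depends on LlamacppConfig.
  const chat = await engine.load('model-b', {
    ctx_size: 4096,
    chat_template: 'chatml',
  })

  // No overrides: the provider-level config is used unchanged (backward
  // compatible). After the argument swap, the embedding flag comes third.
  const embed = await engine.load('embedding-model', undefined, true)

  return [coder, chat, embed]
}
```

Because the override object is spread over `this.config`, any field left out of `overrideSettings` keeps its provider-level value, which is what preserves backward compatibility for existing callers.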
