UPSTREAM PR #17976: common : add llama-completion to completion-bash executables#541

Open
loci-dev wants to merge 1 commit into main from
upstream-PR17976-branch_ggml-org-cisc/llama-completion-completion-bash

Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#17976

@loci-review

loci-review bot commented Dec 12, 2025

Explore the complete analysis inside the Version Insights

Pull Request Review Summary: PR #541

Change Scope: Single-line addition to bash completion configuration
Files Modified: 1 (common/arg.cpp)
Performance Impact: None

Summary

This PR adds the llama-completion executable to the bash completion list in common_params_print_completion(). The change is purely a configuration update that enables shell autocompletion for the llama-completion tool. No performance-critical functions are modified. All core inference paths (llama_decode, llama_encode, llama_tokenize), tensor operations, and memory management functions remain unchanged. Power consumption analysis shows 0.0% change across all binaries. No impact on tokens per second.
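To illustrate the kind of change involved, the sketch below shows a minimal, hypothetical version of how a function like `common_params_print_completion()` might emit a bash `complete` line for a list of executables; the function name mirrors the PR, but the signature and implementation here are illustrative, not the actual llama.cpp code.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: build the bash `complete` registration line for a
// set of executables. The real common_params_print_completion() prints a
// full completion script; this only models the executable list the PR edits.
std::string build_complete_line(const std::vector<std::string> & executables) {
    std::ostringstream out;
    out << "complete -F _llama_completions";  // _llama_completions is an assumed function name
    for (const auto & exe : executables) {
        out << " " << exe;
    }
    out << "\n";
    return out.str();
}
```

The PR then amounts to appending `"llama-completion"` to the executable list passed to such a function, so that bash offers flag completion when the user types `llama-completion --<TAB>`.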


@loci-dev loci-dev force-pushed the main branch 24 times, most recently from 799183f to 26e8fe3 Compare December 16, 2025 07:11
@loci-dev loci-dev force-pushed the main branch 30 times, most recently from fef1737 to cc6b7b1 Compare December 21, 2025 04:21
