
UPSTREAM PR #17969: webui: Improve copy to clipboard with text attachments#537

Open
loci-dev wants to merge 6 commits into main from
upstream-PR17969-branch_allozaur-17834-copy-user-message-content-with-pasted-content-attachments

Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#17969

Close #17834

  • Implemented functionality that allows copying & pasting a user message including its pasted text
  • Added a setting that disables the separation of prompt text and attached text, copying/pasting everything as a single text string
  • Moved the copy utilities into a copy.ts file & supplemented them with unit tests
  • Reorganized the unit test locations in the webui project
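As a rough illustration of the behavior described above, a copy utility like this could build the clipboard payload from the prompt plus any text attachments. This is a hedged sketch only: the names `TextAttachment` and `buildClipboardText`, and the exact separator format, are illustrative assumptions and not the actual API in copy.ts.

```typescript
// Illustrative sketch, not the PR's actual implementation.
// `TextAttachment` and `buildClipboardText` are hypothetical names.

interface TextAttachment {
  name: string;
  content: string;
}

/**
 * Build the text to place on the clipboard for a user message.
 * When `separateAttachments` is false (mirroring the new setting that
 * disables separation), the prompt and all attached text are merged
 * into one plain string with no attachment markers.
 */
function buildClipboardText(
  prompt: string,
  attachments: TextAttachment[],
  separateAttachments: boolean
): string {
  if (attachments.length === 0) return prompt;

  const parts = attachments.map((a) =>
    // Hypothetical "[name]" marker format when attachments stay separate.
    separateAttachments ? `[${a.name}]\n${a.content}` : a.content
  );
  return [prompt, ...parts].join("\n\n");
}
```

A pure function like this is easy to cover with the unit tests mentioned above, while the actual write to `navigator.clipboard` stays in a thin browser-only wrapper.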

Demo

demo.mp4

@loci-review

loci-review bot commented Dec 12, 2025

Explore the complete analysis inside the Version Insights

Performance Analysis Summary - PR #537

Analysis Result

This PR introduces WebUI-only changes for clipboard handling of chat messages with text attachments. All modifications are in TypeScript/Svelte frontend code within tools/server/webui/. Zero C/C++ code changes were made to llama.cpp core libraries, GGML tensor operations, or inference pipelines.

Performance Impact: The binary analysis confirms zero change in power consumption across all llama.cpp binaries (libggml-base.so, libggml-cpu.so, libllama.so, llama-run, etc.). No functions within Performance-Critical Areas (matrix operations, attention mechanisms, model loading, tokenization, sampling) were modified. The changes are isolated to browser-side clipboard utilities that execute only on user copy/paste events.

Inference Impact: No impact on tokens per second. Functions responsible for tokenization and inference (llama_decode, llama_encode, llama_tokenize, ggml_mul_mat, llama_graph_compute) remain unchanged with identical response time and throughput metrics between versions.

@loci-dev loci-dev force-pushed the main branch 22 times, most recently from d582acc to 4b559d8 Compare December 15, 2025 13:21
@loci-dev loci-dev force-pushed the main branch 30 times, most recently from 4cdd94d to 8d9c8bb Compare December 20, 2025 12:13
