
UPSTREAM PR #17445: webui: Fix zero pasteLongTextToFileLen to disable conversion being overridden#291

Open
loci-dev wants to merge 2 commits into main from upstream-PR17445-branch_awasisto-master

Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#17445

Setting pasteLongTextToFileLen to 0 is supposed to turn the text-to-file conversion feature off, but it always gets replaced with 2500. This change fixes that behavior.

@loci-review

loci-review bot commented Nov 23, 2025

Explore the complete analysis in Version Insights

Pull Request #291 - Performance Analysis Summary

Overview

PR #291 is a WebUI bug fix that corrects JavaScript configuration handling for the pasteLongTextToFileLen parameter. The change modifies frontend code in ChatForm.svelte to properly handle zero values, allowing users to disable text-to-file conversion by setting the configuration to 0.

Performance Impact Assessment

No performance changes detected. The binary performance analysis shows 0% change across all 16 compiled binaries, with total power consumption remaining stable at approximately 1.52 mJ. This is expected because the PR modifies only frontend JavaScript code, not the C++ inference engine.

Affected Components:

  • Modified: tools/server/webui/src/lib/components/app/chat/ChatForm/ChatForm.svelte (JavaScript)
  • Rebuilt: tools/server/public/index.html.gz (compiled WebUI bundle)
  • No changes to: libllama.so, libggml.so, or any performance-critical C++ binaries

Core Function Analysis:

  • llama_decode: 0% change (Response Time: 44,338,963 ns, Throughput: 69 ns)
  • llama_encode: 0% change (Response Time: 11,150,656 ns)
  • llama_tokenize: 0% change (Response Time: 898,668 ns, Throughput: 22 ns)

Tokens per Second Impact: None. The inference pipeline functions (llama_decode, llama_encode, llama_tokenize) show identical performance metrics between versions.

Code Change Analysis

Bug Fix Details:

  • Before: Number(currentConfig.pasteLongTextToFileLen) || 2500 incorrectly treated 0 as falsy, replacing it with 2500
  • After: Explicit Number.isNaN() check preserves 0 as valid while defaulting invalid inputs to 2500
  • Impact: Fixes user-reported bug where disabling the feature was impossible

Correctness: The change is strictly more correct, handling edge cases properly without introducing regressions.
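The before/after behavior described above can be sketched as follows. This is an illustrative reconstruction, not the literal ChatForm.svelte code; the function names and the `DEFAULT_PASTE_LEN` constant are hypothetical, while `2500` and the `Number.isNaN()` check come from the review text.

```javascript
const DEFAULT_PASTE_LEN = 2500;

// Before the fix: `||` treats 0 as falsy, so a configured 0 is
// silently replaced by the default, making the feature impossible
// to disable.
function resolveBefore(raw) {
  return Number(raw) || DEFAULT_PASTE_LEN;
}

// After the fix: only a non-numeric input (NaN) falls back to the
// default; an explicit 0 is preserved as a valid "disabled" value.
function resolveAfter(raw) {
  const n = Number(raw);
  return Number.isNaN(n) ? DEFAULT_PASTE_LEN : n;
}

console.log(resolveBefore(0));    // 2500 — the reported bug
console.log(resolveAfter(0));     // 0 — conversion disabled
console.log(resolveAfter("abc")); // 2500 — invalid input still defaults
```

Note that `resolveAfter` also passes negative values through unchanged, which is why the review suggests adding input validation for negatives in a future iteration.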

Recommendations

Approve. This is a low-risk bug fix with no impact on inference performance. The change is well-scoped, addresses a legitimate issue, and maintains backward compatibility. Consider adding unit tests for configuration parsing edge cases and input validation for negative values in future iterations.

loci-dev force-pushed the main branch 28 times, most recently from 409b78f to b789b13 (November 27, 2025 00:34)
loci-dev force-pushed the main branch 15 times, most recently from 8c7587c to a2a0d0e (December 1, 2025 22:07)
…erridden

Zero pasteLongTextToFileLen should disable the conversion, but it was
overwritten with 2500.
@loci-review

loci-review bot commented Dec 2, 2025

Explore the complete analysis in Version Insights

Performance Analysis Summary - PR #291

Analysis Overview

This PR introduces a WebUI configuration bug fix in the ChatForm component. Performance analysis across all binaries shows zero measurable impact on inference performance metrics.

Condition Assessment: Condition 1 applies - no performance metric changes detected.

Summary

PR #291 fixes a JavaScript configuration handling bug in the WebUI layer where setting pasteLongTextToFileLen = 0 was incorrectly overridden to 2500. The change replaces the logical OR fallback with an explicit Number.isNaN() check. Performance analysis confirms 0% change in response time, throughput time, and power consumption across all 16 analyzed binaries, including libllama.so, libggml-cpu.so, and llama-run. No C++ inference code was modified, and tokens per second remains unaffected.
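To make the downstream effect of the preserved 0 concrete, here is a minimal, hypothetical sketch of how a paste handler might gate the text-to-file conversion on the resolved length. The function name and gating logic are assumptions for illustration; only the idea that 0 disables conversion comes from the PR.

```javascript
// Hypothetical gate (not the actual ChatForm.svelte code): a resolved
// length of 0 means the feature is off; otherwise, convert only when
// the pasted text exceeds the configured threshold.
function shouldConvertToFile(pastedText, pasteLongTextToFileLen) {
  return pasteLongTextToFileLen > 0 && pastedText.length > pasteLongTextToFileLen;
}

console.log(shouldConvertToFile("a".repeat(3000), 2500)); // true  — over threshold
console.log(shouldConvertToFile("a".repeat(3000), 0));    // false — feature disabled
```

With the old `|| 2500` fallback, the second call would never have seen 0, so long pastes were always converted regardless of the user's setting.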

loci-dev force-pushed the main branch 8 times, most recently from 4b0bde9 to 0aca875 (December 2, 2025 17:10)