
UPSTREAM PR #17452: webui: minor settings reorganization and add disable autoscroll option#293

Open
loci-dev wants to merge 3 commits into main from
upstream-PR17452-branch_ServeurpersoCom-display-preferences-autoscroll

Conversation

@loci-dev

Mirrored from ggml-org/llama.cpp#17452

Summary

Adds a "Display" settings section grouping all visualization preferences, and introduces a disableAutoScroll option to address #17292.

Changes

  • Created "Display" section consolidating 7 display-related settings
  • Added disableAutoScroll toggle to prevent automatic scrolling during message streaming
  • Maintains backward compatibility (auto-scroll enabled by default)
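The new setting can be pictured as one more entry in the WebUI's settings configuration. The following is a minimal sketch, assuming a simple option-descriptor shape; the interface, section, and field names here are illustrative and not the PR's exact code (only the `disableAutoScroll` key and its `false` default come from the PR description):

```typescript
// Hypothetical descriptor shape for a settings entry; the real
// settings-config.ts in the llama.cpp WebUI may differ.
interface SettingOption {
  key: string;
  label: string;
  type: 'checkbox';
  defaultValue: boolean;
}

// Sketch of the new "Display" section with the auto-scroll toggle.
const displaySection: { title: string; options: SettingOption[] } = {
  title: 'Display',
  options: [
    {
      key: 'disableAutoScroll',
      label: 'Disable automatic scrolling during streaming',
      type: 'checkbox',
      // false by default, so existing auto-scroll behavior is preserved.
      defaultValue: false,
    },
  ],
};
```

Defaulting to `false` is what makes the change backward compatible: users who never open the settings panel see no behavior change.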

Motivation

While I couldn't reproduce the reported issue, this option is valuable for several use cases:

  • Reading long responses without scroll interruption (issue #17292)
  • Developer workflow: focusing on specific sections, such as thinking/reasoning output that accumulates during inference (relevant for an MCP client currently in development)
  • Manual viewport control during multi-turn conversations

This is a familiar UX pattern that most chat interfaces nonetheless lack; it gives users control while keeping the default behavior unchanged.

The new "Display" section also improves settings organization: the "General" section was overloaded, and we had a "Reasoning" section with only one display-related option. Grouping all visualization preferences together creates better balance and discoverability.

Closes #17292

@loci-review

loci-review bot commented Nov 23, 2025

Explore the complete analysis inside the Version Insights

Performance Analysis Summary: PR #293

Assessment

No performance impact detected. This PR modifies only frontend WebUI components (Svelte/TypeScript) for UI settings reorganization and adds a user-configurable auto-scroll disable option. Zero changes to performance-critical C++ inference engine, model loading, tokenization, or backend computation paths.

Analysis Results

Performance Metrics

  • Response Time: 0% change across all functions
  • Throughput Time: 0% change across all functions
  • Power Consumption: 0% change across all 16 binaries (libllama.so, libggml-base.so, libggml-cpu.so, etc.)
  • Modified Functions: None in performance-critical paths

Core Function Status

All critical functions remain unmodified:

  • llama_decode: 44,338,824 ns response time (unchanged)
  • llama_model_load_from_file: 375,587,503 ns (unchanged)
  • llama_encode: 11,150,621 ns (unchanged)
  • llama_tokenize: 898,665 ns (unchanged)
  • llama_init_from_model: 55,794,960 ns (unchanged)

Tokens per second impact: None. No changes to llama_decode, llama_encode, or llama_tokenize response times.

Code Changes

Modified Files (Frontend Only):

  • ChatScreen.svelte: Added conditional guards for scroll behavior (61 additions, 33 deletions)
  • ChatSettings.svelte: Reorganized settings UI, created "Display" section
  • settings-config.ts: Added disableAutoScroll configuration (default: false)
  • index.html.gz: Updated compiled bundle

Implementation: Introduces opt-in feature to disable automatic scrolling during message streaming. Guards all scroll-related code paths with if (!disableAutoScroll) checks. Maintains backward compatibility with default behavior unchanged.
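The guard pattern described above can be sketched as follows. This is an illustrative reduction, not the actual ChatScreen.svelte code: the `config` object, `scrollToBottom` function, and the numeric stand-in for scroll position are all assumptions; only the `disableAutoScroll` flag and the early-return guard come from the PR.

```typescript
// Illustrative sketch of guarding scroll paths behind the user preference.
const config = { disableAutoScroll: false };

// Stand-in for the real DOM scroll position (e.g. element.scrollTop).
let scrolledTo = -1;

function scrollToBottom(contentHeight: number): void {
  // Opt-out guard: skip all auto-scroll work when the user disabled it.
  if (config.disableAutoScroll) return;
  scrolledTo = contentHeight;
}

// While streaming, each new token batch normally pins the viewport down...
scrollToBottom(1000);

// ...but once the option is enabled, the viewport stays where the user left it.
config.disableAutoScroll = true;
scrollToBottom(2000);
```

Because the guard returns before any scroll work happens, enabling the option also removes that scroll-event processing entirely, which is the "positive frontend performance characteristic" noted below.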

Flame Graph & CFG Analysis

Complete structural equivalence confirmed between versions. All 10 analyzed functions show identical CFG topology, instruction sequences, and call depths. No assembly-level differences detected beyond address offsets.

Code Review Findings

No critical issues. Changes are isolated to presentation layer with positive frontend performance characteristics (reduced scroll event processing when feature enabled). Clean implementation with consistent guard patterns and proper state management.

Recommendation: Approve. Zero impact on inference performance.

loci-dev force-pushed the main branch 24 times, most recently from 92ef8cd to 7dd50b8 on November 26, 2025.
loci-dev force-pushed the main branch 30 times, most recently from 14c82b3 to 1c3cc79 on December 2, 2025.
