Fix: test_vlm_offline_throughput output throughput #13279
Merged
hnyls2002 merged 1 commit into sgl-project:main on Nov 15, 2025
Conversation
sywangyi added a commit to sywangyi/sglang that referenced this pull request on Feb 27, 2026:
* port layernorm 3d
* apply layernorm
* support for bias
* fix
* intf fix
* add support for CPU
* fix tp=3/6 padding issue in encoder vision
* fix tp=3/6 padding issue in qwen3-omni
* refactor code
* add mrope
* change attention_mask shape to use flash attn
* add kernel apply_rotary_pos_emb_cpu
* replace nn.Linear with ReplicatedLinear
* enable torch.compile
* construct mask using query.dtype instead of bool on CPU
* add fast path for sparse attention
* fix double free segfault by wrong setting of BLOCK_M
* improve extend kernel performance for long context length
* update test_extend.py
* update comment
* fix topk softmax performance issue
* port optimization for image preprocessor in Qwen2VLImageProcessorFast
* apply optimization for image preprocessor
* update docker file
* optimize conv3d used in patch embedding
* resolve conflict
* apply optimized conv3d
* apply optimization for flash_attn_varlen_func (sgl-project#19)
* port optimization for flash_attn_varlen_func
* apply flash_attn_varlen_func
* remove contiguous before rope (sgl-project#20)
* Revert "resolve conflict" (this reverts commit 7622f6d)
* fix after rebase
* Update pyproject_cpu.toml
* Update xeon.Dockerfile
* minor fix after rebase
* rope: add support for bf16 sincos (sgl-project#102)
* format
* Update xeon.Dockerfile
* odd tp for cpu
* Apply linear_gelu_linear and fix numa memory bind (sgl-project#22)
* [CPU] Optimize small oc GEMM for Qwen3-next on CPU (sgl-project#12446)
* port linear_gelu_linear kernel
* apply linear_gelu_linear for TP=1
* fix numa memory bind
* apply parallel partition patch
* Revert "Fix: test_vlm_offline_throughput output throughput (sgl-project#13279)" (sgl-project#101) (this reverts commit 7ee3e36)
* fix input dtype mismatch issue
* apply optimized layernorm

Co-authored-by: Zheng, Beilei <[email protected]>
Co-authored-by: ZailiWang <[email protected]>
Co-authored-by: mingfeima <[email protected]>
Co-authored-by: jianan-gu <[email protected]>
Motivation
Around 10/23, we noticed a steep drop in the output throughput of test_vlm_offline_throughput, from ~18-19k tok/s to ~13k tok/s. This affects:
```
python -m sglang.bench_serving --backend sglang
```

This PR fixes the performance regression in test_vlm_offline_throughput introduced in commit 92009bd, identified through git bisection. The sglang backend (the default) was configured to use the formatted chat-template prompts instead of the raw text prompts, and the formatted prompts introduce longer, more complex token sequences that degrade output throughput. This was likely unintentional, as the original code used the raw text prompts for all backends.
tl;dr: Commit 92009bd added backend-specific prompt handling but excluded "sglang" from the list of backends that should use raw prompts.
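For illustration, here is roughly what chat-template formatting does to a prompt. This is a hypothetical sketch using a generic ChatML-style template; the actual template and special tokens depend on the model under benchmark:

```python
# Hypothetical illustration (not the actual bench_serving code): a chat
# template wraps the raw text in role markers and special tokens, so the
# tokenized prompt becomes longer and its token distribution changes.
raw_prompt = "Describe this image."

# Generic ChatML-style rendering; real templates vary by model.
formatted_prompt = (
    "<|im_start|>user\n"
    "Describe this image.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
```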
Modifications
Simply added sglang to the list of backends that should use the raw text prompt.
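A minimal sketch of the shape of the change, with hypothetical names (the backend list and selection logic in bench_serving may differ; only the addition of "sglang" reflects this PR):

```python
# Hypothetical sketch, not the exact bench_serving code: backends in this
# set receive the raw text prompt; all others get the chat-template prompt.
RAW_PROMPT_BACKENDS = {"vllm", "lmdeploy", "sglang"}  # "sglang" added by this PR

def select_prompt(backend: str, raw_prompt: str, formatted_prompt: str) -> str:
    # Before the fix, "sglang" was missing from the set, so the default
    # backend silently fell through to the formatted-prompt branch.
    if backend in RAW_PROMPT_BACKENDS:
        return raw_prompt
    return formatted_prompt
```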
Accuracy Tests
N/A.
Benchmarking and Profiling
Test ran:

```
python3 -m unittest test_bench_serving.TestBenchServing.test_vlm_offline_throughput
```

Before fix:
Run 1: Input token throughput (tok/s): 2793.03, Output token throughput (tok/s): 13944.74, Total token throughput (tok/s): 16737.77
Run 2: Input token throughput (tok/s): 2843.13, Output token throughput (tok/s): 14194.86, Total token throughput (tok/s): 17038.00
Run 3: Input token throughput (tok/s): 2843.55, Output token throughput (tok/s): 14196.95, Total token throughput (tok/s): 17040.50
After fix:
Run 1: Input token throughput (tok/s): 4103.24, Output token throughput (tok/s): 20486.21, Total token throughput (tok/s): 24589.45
Run 2: Input token throughput (tok/s): 4029.99, Output token throughput (tok/s): 20120.46, Total token throughput (tok/s): 24150.44
Run 3: Input token throughput (tok/s): 4077.38, Output token throughput (tok/s): 20357.06, Total token throughput (tok/s): 24434.44
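With the fix, output throughput recovers to ~20k tok/s, at or slightly above the pre-regression ~18-19k baseline noted in the motivation (the benchmark environment may differ from the runs behind the original baseline numbers).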
Checklist