
Fix: test_vlm_offline_throughput output throughput #13279

Merged
hnyls2002 merged 1 commit into sgl-project:main from dougyster:vlm-bisect
Nov 15, 2025

Conversation

@dougyster (Collaborator) commented Nov 14, 2025

Motivation

[Figure: VLM offline throughput output over time]

Around 10/23, we noticed a steep drop in the output throughput from test_vlm_offline_throughput as shown above (from ~18-19k to ~13k tok/s). This affects:

  1. CI/CD performance tests (test_vlm_offline_throughput)
  2. Users running benchmarks via python -m sglang.bench_serving --backend sglang

This PR fixes the performance regression in test_vlm_offline_throughput introduced in commit 92009bd, identified through git bisecting. The issue was that the sglang backend (the default) was configured to use chat-template-formatted prompts instead of raw text prompts; the formatted prompts introduce more complex token sequences that degrade output performance. This was likely unintentional, as the original code used raw text prompts for all backends.

TL;DR: commit 92009bd added backend-specific prompt handling but omitted "sglang" from the list of backends that should use raw prompts.
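
For reference, a minimal sketch of the bisect workflow (the good SHA placeholder and the manual good/bad judgment are illustrative, not the exact commands used):

```sh
# Sketch: bisect the throughput regression between a known-good
# commit (~18-19k tok/s) and a known-bad one (~13k tok/s).
git bisect start
git bisect bad HEAD                 # regression present here
git bisect good <last-known-good>   # placeholder for a pre-10/23 SHA
# At each candidate commit, rerun the benchmark and judge the number:
python3 -m unittest test_bench_serving.TestBenchServing.test_vlm_offline_throughput
git bisect good   # or `git bisect bad`, based on the measured output throughput
git bisect reset  # once the first bad commit (92009bd) is identified
```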

Modifications

Simply added sglang to the list of backends that should use the raw text prompt.
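
A minimal sketch of the shape of this logic in bench_serving's prompt selection (the list contents and names are illustrative, not the upstream code; the actual diff just adds "sglang" to the backend list):

```python
# Hypothetical sketch of prompt selection in sglang.bench_serving;
# the backend names here are illustrative assumptions.
RAW_PROMPT_BACKENDS = ["vllm", "lmdeploy", "sglang"]  # "sglang" was missing before this PR

def choose_prompt(backend: str, raw_prompt: str, chat_prompt: str) -> str:
    """Return the prompt a backend should be benchmarked with."""
    if backend in RAW_PROMPT_BACKENDS:
        # These backends receive the raw text prompt, restoring the
        # pre-92009bd behavior for the default "sglang" backend.
        return raw_prompt
    # Other backends keep the chat-template-formatted prompt
    # introduced by commit 92009bd.
    return chat_prompt
```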

Accuracy Tests

N/A.

Benchmarking and Profiling

Test ran: python3 -m unittest test_bench_serving.TestBenchServing.test_vlm_offline_throughput

Before fix:

Run 1: Input token throughput (tok/s): 2793.03, Output token throughput (tok/s): 13944.74, Total token throughput (tok/s): 16737.77
Run 2: Input token throughput (tok/s): 2843.13, Output token throughput (tok/s): 14194.86, Total token throughput (tok/s): 17038.00
Run 3: Input token throughput (tok/s): 2843.55, Output token throughput (tok/s): 14196.95, Total token throughput (tok/s): 17040.50

After fix:

Run 1: Input token throughput (tok/s): 4103.24, Output token throughput (tok/s): 20486.21, Total token throughput (tok/s): 24589.45
Run 2: Input token throughput (tok/s): 4029.99, Output token throughput (tok/s): 20120.46, Total token throughput (tok/s): 24150.44
Run 3: Input token throughput (tok/s): 4077.38, Output token throughput (tok/s): 20357.06, Total token throughput (tok/s): 24434.44

Checklist

dougyster marked this pull request as ready for review November 14, 2025 12:28
hnyls2002 merged commit 7ee3e36 into sgl-project:main Nov 15, 2025
44 of 49 checks passed
dougyster deleted the vlm-bisect branch November 16, 2025 20:51
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

4 participants