
[BugFix] Fixed the issue where ignore_eos was not working. #1286

Merged
Gaohan123 merged 3 commits into vllm-project:main from amy-why-3459:sampling on Feb 10, 2026

Conversation

amy-why-3459 (Contributor) commented Feb 9, 2026


Purpose

Fix #1242

Test Plan

vllm serve Qwen/Qwen3-Omni-30B-A3B-Instruct \
    --omni \
    --port 8091 \
    --stage-configs-path /vllm_omni/model_executor/stage_configs/qwen3_omni_moe_async_chunk.yaml

Test Result

vllm bench serve --omni --dataset-name random --port 28889 --max-concurrency 1 \
    --model Qwen/Qwen3-Omni-30B-A3B-Instruct \
    --endpoint /v1/chat/completions --backend openai-chat-omni \
    --request-rate 1 --num-prompts 1 --random-input-len 100 \
    --ignore-eos --percentile-metrics ttft,tpot,itl,e2el,audio_ttfp,audio_rtf \
    --random-output-len 100 --extra_body '{"modalities": ["text", "audio"]}'
{'type': 'request_level_metrics',
 'request_id': 'chatcmpl-bench-dff362ff-0',
 'e2e_time_ms': 5127.692222595215,
 'e2e_tpt': 10.817916081424503,
 'e2e_total_tokens': 474,
 'transfers_total_time_ms': 0.0,
 'transfers_total_bytes': 0,
 'stages': {0: {'stage_gen_time_ms': 26.72863006591797,
                'num_tokens_out': 100,
                'num_tokens_in': 108},
            1: {'stage_gen_time_ms': 27.85658836364746, 'num_tokens_out': 266},
            2: {'stage_gen_time_ms': 268.20921897888184, 'num_tokens_out': 0}}}
============ Serving Benchmark Result ============
Successful requests:                     1
Failed requests:                         0
Maximum request concurrency:             1
Request rate configured (RPS):           1.00
Benchmark duration (s):                  6.26
Request throughput (req/s):              0.16
Peak concurrent requests:                1.00
----------------End-to-end Latency----------------
Mean E2EL (ms):                          5255.59
Median E2EL (ms):                        5255.59
P99 E2EL (ms):                           5255.59
================== Text Result ===================
Total input tokens:                      100
Total generated tokens:                  5050
Output token throughput (tok/s):         807.04
Peak output token throughput (tok/s):    61.00
Peak concurrent requests:                1.00
Total Token throughput (tok/s):          823.02
---------------Time to First Token----------------
Mean TTFT (ms):                          105.67
Median TTFT (ms):                        105.67
P99 TTFT (ms):                           105.67
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          1.02
Median TPOT (ms):                        1.02
P99 TPOT (ms):                           1.02
---------------Inter-token Latency----------------
Mean ITL (ms):                           14.12
Median ITL (ms):                         12.64
P99 ITL (ms):                            119.90
================== Audio Result ==================
Total audio duration generated(s):       20.95
Total audio frames generated:            502695
Audio throughput(audio duration/s):      3.35
---------------Time to First Packet---------------
Mean AUDIO_TTFP (ms):                    947.13
Median AUDIO_TTFP (ms):                  947.13
P99 AUDIO_TTFP (ms):                     947.13
-----------------Real Time Factor-----------------
Mean AUDIO_RTF:                          0.25
Median AUDIO_RTF:                        0.25
P99 AUDIO_RTF:                           0.25
==================================================

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


Copilot AI (Contributor) left a comment:
Pull request overview

Fixes the sampling-parameter override behavior in OpenAI-compatible chat serving so that request-provided fields (notably ignore_eos) are applied when building SamplingParams for the comprehension stage (addresses #1242).

Changes:

  • Expand the allowlist of request fields (_OPENAI_SAMPLING_FIELDS) that can override YAML/default sampling params.
  • Ensure ignore_eos (and additional sampling-related fields) are included in the override set.
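The allowlist-override approach described above can be sketched as follows. This is a minimal illustration, not the actual vllm-omni implementation: the field names in `_OPENAI_SAMPLING_FIELDS`, the `SamplingParams` dataclass, and `apply_request_overrides` are all stand-ins for the real code.

```python
# Hypothetical sketch of allowlist-based sampling-parameter overrides.
# The dataclass below stands in for vLLM's SamplingParams; the field set
# approximates common OpenAI chat request fields.
from dataclasses import dataclass, replace

# Request fields allowed to override YAML/stage-config defaults.
_OPENAI_SAMPLING_FIELDS = (
    "temperature", "top_p", "max_tokens", "ignore_eos",
)


@dataclass
class SamplingParams:
    temperature: float = 1.0
    top_p: float = 1.0
    max_tokens: int = 16
    ignore_eos: bool = False


def apply_request_overrides(defaults: SamplingParams, request: dict) -> SamplingParams:
    """Return a copy of `defaults` with allowlisted, non-None request
    fields applied on top. Fields outside the allowlist are ignored,
    which is why a field missing from it (like ignore_eos before this
    fix) silently has no effect."""
    overrides = {
        field: request[field]
        for field in _OPENAI_SAMPLING_FIELDS
        if request.get(field) is not None
    }
    return replace(defaults, **overrides)


params = apply_request_overrides(
    SamplingParams(), {"ignore_eos": True, "max_tokens": 100}
)
print(params.ignore_eos, params.max_tokens)  # True 100
```

The key point is that an allowlist acts as a gate: if `ignore_eos` is absent from the tuple, the request value is dropped before `SamplingParams` is built, which matches the symptom reported in #1242.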


amy-why-3459 (Author) commented:
@tzhouam PTAL

@Gaohan123 Gaohan123 added this to the v0.16.0 milestone Feb 10, 2026
@Gaohan123 Gaohan123 added the "ready" label (to trigger buildkite CI) Feb 10, 2026
Signed-off-by: amy-why-3459 <wuhaiyan17@huawei.com>
Gaohan123 (Collaborator) left a comment:

LGTM. Thanks!

@Gaohan123 Gaohan123 merged commit ee624d0 into vllm-project:main Feb 10, 2026
7 checks passed
amy-why-3459 (Author) commented:
Further optimization: vllm-omni should reuse vLLM's sampling-parameter construction method rather than duplicating it.

YanickSchraner pushed a commit to YanickSchraner/vllm-omni that referenced this pull request Feb 20, 2026

Labels

ready label to trigger buildkite CI

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug]: The sampling parameter ignore_eos did not take effect in the request.

3 participants