
fix: tool_choice="required" falls back to tool_parser for non-JSON formats (streaming + non-streaming)#35936

Open
voipmonitor wants to merge 1 commit into vllm-project:main from voipmonitor:fix/tool-choice-required-xml-parser
Conversation

@voipmonitor
Contributor

@voipmonitor voipmonitor commented Mar 4, 2026

Summary

tool_choice="required" fails when the model produces non-JSON tool calls (e.g. Qwen3 XML format), even though a compatible --tool-call-parser (e.g. qwen3_coder) is configured. This affects both the non-streaming and streaming code paths.

Root cause

Both tool_choice="required" code paths bypass the configured tool_parser and use JSON-only parsing, while the tool_choice="auto" branches correctly use the tool parser.

Non-streaming path (engine/serving.py)

In _parse_tool_calls_from_content(), the code uses pydantic TypeAdapter JSON validation only. When the model produces XML-formatted tool calls (e.g. <tool_call>{"name": ...}</tool_call>), parsing fails with ValidationError. #36841 changed this block to silently suppress ValidationError (fixing the crash when content is None), but this means non-JSON tool calls are now silently dropped instead of being routed to the tool parser.
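The failure mode is easy to reproduce in isolation. The sketch below uses the stdlib json module as a stand-in for the pydantic TypeAdapter validation in _parse_tool_calls_from_content() (so JSONDecodeError plays the role of ValidationError):

```python
import json

# A plain JSON tool-call payload parses fine:
calls = json.loads('[{"name": "get_weather", "arguments": "{}"}]')

# The same payload wrapped in Qwen3-style XML tags is not valid JSON,
# so JSON-only parsing rejects it, and the tool call is dropped unless
# a tool parser gets a chance to handle it:
try:
    json.loads('<tool_call>{"name": "get_weather", "arguments": "{}"}</tool_call>')
    outcome = "parsed"
except json.JSONDecodeError:
    outcome = "rejected"
```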

Streaming path (chat_completion/serving.py)

Three issues in the streaming required branch:

  1. tool_parsers not initialized -- tool_parsers array is only created when tool_choice_auto is true. When tool_choice="required", it stays empty, so even if a parser is configured, it is never used.

  2. if/else prevents tool parsing when reasoning ends -- With MTP (multi-token prediction), the model can send the </think> reasoning-end token and the beginning of tool call content in the same chunk. The original if/else structure (reasoning processing vs. tool call processing) meant that the iteration that detected reasoning-end would skip tool call extraction entirely.

  3. No tool_parser fallback for non-JSON formats -- The required branch only has extract_tool_call_required_streaming() which expects JSON. Models using XML-style tool calls (like Qwen3 with qwen3_coder parser) fail silently.
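Issue 2 can be sketched with a toy streaming loop (simplified and hypothetical handler names, not vLLM's actual code):

```python
def handle_chunk_buggy(chunk, in_reasoning, on_reasoning, on_tools):
    """Original if/else structure: the iteration that consumes the
    reasoning-end token never reaches tool-call extraction."""
    if in_reasoning:
        in_reasoning = on_reasoning(chunk)  # may see "</think>" here
    else:
        on_tools(chunk)
    return in_reasoning

def handle_chunk_fixed(chunk, in_reasoning, on_reasoning, on_tools):
    """Two independent ifs: tool parsing also runs in the same
    iteration in which reasoning ends (the MTP case above)."""
    if in_reasoning:
        in_reasoning = on_reasoning(chunk)
    if not in_reasoning:
        on_tools(chunk)
    return in_reasoning
```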

Fix

Non-streaming (engine/serving.py)

Replace the contextlib.suppress(ValidationError) from #36841 with a try/except that preserves the crash-safety (content = content or "") while adding a fallback: when JSON parsing fails with ValidationError or JSONDecodeError, fall back to the configured tool_parser.extract_tool_calls() (when enable_auto_tools is true and a parser is available). Uses the new tool_parser_cls(tokenizer, request.tools) constructor from #38029.
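Roughly, the new non-streaming flow looks like this (a sketch under assumptions: json.JSONDecodeError stands in for pydantic's ValidationError, and the parser interface is reduced to the extract_tool_calls() call mentioned above):

```python
import json

def parse_required_tool_calls(content, tool_parser=None):
    content = content or ""  # keep the crash-safety fix from #36841
    try:
        # Fast path: the model emitted plain JSON tool calls.
        return json.loads(content)
    except json.JSONDecodeError:
        # Fallback: route non-JSON output (e.g. XML) through the
        # configured tool parser, mirroring the "auto" branch.
        if tool_parser is not None:
            result = tool_parser.extract_tool_calls(content)
            if result.tools_called:
                return result.tool_calls
        # No parser available or nothing extracted.
        return []
```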

Streaming (chat_completion/serving.py)

  1. Initialize tool_parsers for required too -- Create parser instances when tool_choice is "auto" OR "required".

  2. Separate if statements -- Change the if reasoning / else tool_parsing to two independent if blocks so both can execute in the same iteration when reasoning ends.

  3. Dual parser approach -- When a tool_parser is available, try it first (for XML/custom formats). If it has not detected tool calls yet, also try the JSON extract_tool_call_required_streaming() as fallback. This handles the non-deterministic output from MTP, where the same model may produce JSON in one request and XML in another.
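The dual-parser step can be sketched as follows (illustrative interfaces; vLLM's streaming parser API takes more arguments than shown here):

```python
def extract_required_delta(text, tool_parser, json_required_extract):
    """Try the configured tool parser first (XML/custom formats); if it
    has not produced a tool-call delta, fall back to the JSON-only
    required-streaming extractor. Covers MTP's non-deterministic
    choice between JSON and XML output for the same model."""
    if tool_parser is not None:
        delta = tool_parser.extract_tool_calls_streaming(text)
        if delta is not None and delta.tool_calls:
            return delta
    return json_required_extract(text)
```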

Note: content alongside tool_calls

When using the tool_parser fallback for required, the parser may emit partial text as delta.content before it detects the tool call pattern (e.g., qwen3_coder looks for XML <tool_call> tags but the model outputs JSON [{"name":...). This means some SSE chunks may contain both content and tool_calls. Clients consuming tool_choice="required" responses should prioritize tool_calls over content when both are present in the stream -- the content in that case is leaked parser buffer text, not a real text response.
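On the client side, the guidance above amounts to something like this (hypothetical chunk shapes, following OpenAI-style delta dicts):

```python
def consume_required_stream(deltas):
    """Collect tool calls from a tool_choice="required" stream,
    preferring tool_calls over content when a chunk carries both --
    the content in that case is leaked parser buffer text."""
    tool_calls, text = [], []
    for delta in deltas:
        if delta.get("tool_calls"):
            tool_calls.extend(delta["tool_calls"])
        elif delta.get("content"):
            text.append(delta["content"])
    # With tool_choice="required", a well-formed stream always yields
    # tool calls; stray text is only kept when none arrived.
    return tool_calls if tool_calls else "".join(text)
```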

Discussion point: Dual parser approach

Fix #3 is somewhat opinionated. We use it because with MTP (speculative decoding), the same model (Qwen3) non-deterministically produces either JSON or XML tool calls across different requests. The dual approach handles both formats reliably. However, if the vLLM team prefers a different strategy (e.g., always routing through the tool_parser for required), we are happy to adjust.

Testing

  • Unit tests added in tests/tool_use/test_tool_choice_required_fallback.py
  • Tested with Qwen3-Coder 32B + MTP (--tool-call-parser qwen3_coder --enable-auto-tool-choice):
    • 30/30 streaming requests with tool_choice="required" succeed
    • 10/10 agentic RAG pipeline queries (multi-turn tool calling) succeed
  • Zero impact on JSON-based models (GLM, Llama, Hermes) -- JSON path succeeds immediately, fallback never triggers

Files changed

  • vllm/entrypoints/openai/engine/serving.py -- Non-streaming fallback (replaces silent suppress from [Bugfix] Fix crash when tool_choice=required exceeds max_tokens #36841 with try/except + tool_parser fallback)
  • vllm/entrypoints/openai/chat_completion/serving.py -- Streaming fixes (3 changes)
  • tests/tool_use/test_tool_choice_required_fallback.py -- Unit tests (new)

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request effectively addresses a bug where tool_choice="required" would fail for non-JSON tool call formats. The fix, which involves adding a try...except block to fall back to the configured tool_parser, is logical and mirrors the behavior of tool_choice="auto". This is a good, low-risk solution. I have one suggestion to make the exception handling more specific and robust.

```python
for tool_call in tool_calls
    ]
)
except Exception:
```

Severity: high

Using except Exception: is too broad and can mask unexpected errors (e.g., KeyboardInterrupt). It's a best practice to catch more specific exceptions. In this case, TypeAdapter(...).validate_json() raises pydantic.ValidationError on failure. Catching this specific exception makes the code safer and the intent clearer. You will need to add from pydantic import ValidationError at the top of the file.

Suggested change:

```diff
-except Exception:
+except ValidationError:
```

@mergify

mergify bot commented Mar 4, 2026

Hi @voipmonitor, the pre-commit checks have failed. Please run:

```shell
uv pip install pre-commit
pre-commit install
pre-commit run --all-files
```

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

Tip

Is mypy or markdownlint failing?
mypy and markdownlint are run differently in CI. If the failure is related to either of these checks, please use the following commands to run them locally:
```shell
# For mypy (substitute "3.10" with the failing version if needed)
pre-commit run --hook-stage manual mypy-3.10
# For markdownlint
pre-commit run --hook-stage manual markdownlint
```

@voipmonitor voipmonitor force-pushed the fix/tool-choice-required-xml-parser branch from 2817ac6 to ac8975e Compare March 4, 2026 01:04
scottgl9 added a commit to scottgl9/vllm that referenced this pull request Mar 4, 2026
Cherry-pick upstream fixes for GB10 Spark (SM121):

- PR vllm-project#35568: Recognize SM121 as SM120 family for Marlin/CUTLASS FP8
  kernels (generate_kernels.py, ops.cu, scaled_mm*.cuh, marlin_utils.py)
- PR vllm-project#35675: Fix Qwen3.5 MTP fc layer weight shape mismatch with NVFP4
  by using ReplicatedLinear with quant_config=None
- PR vllm-project#35833: FP8 KV cache for Triton MLA decode on Blackwell — adds
  on-the-fly FP8 dequantization in Triton kernels
- PR vllm-project#35936: tool_choice="required" falls back to tool_parser for
  non-JSON (XML) tool calls from Qwen3 models

Local patches:
- Patch FlashInfer TRTLLM JIT to compile for SM12x
  (supported_major_versions=[10] → [10, 12])
- Skip VLLM_TEST_FORCE_FP8_MARLIN for NVFP4 MoE (not SM121-ready)
@voipmonitor voipmonitor changed the title fix: tool_choice="required" falls back to tool_parser for non-JSON formats fix: tool_choice="required" falls back to tool_parser for non-JSON formats (streaming + non-streaming) Mar 4, 2026
@mergify mergify bot added the tool-calling label Mar 4, 2026
mergify bot commented Mar 4, 2026: pre-commit checks failed again (same instructions as above).

@voipmonitor voipmonitor force-pushed the fix/tool-choice-required-xml-parser branch from 4fb8b2c to 50662ad Compare March 4, 2026 14:10
mergify bot commented Mar 4, 2026: pre-commit checks failed again (same instructions as above).

scottgl9 added a commit to scottgl9/vllm that referenced this pull request Mar 4, 2026 (same cherry-pick message as above)
scottgl9 added a commit to scottgl9/vllm that referenced this pull request Mar 5, 2026 (same cherry-pick message as above)
@voipmonitor voipmonitor force-pushed the fix/tool-choice-required-xml-parser branch from e57e4f9 to eed0ae7 Compare March 13, 2026 00:42
mergify bot commented Mar 13, 2026: pre-commit checks failed again (same instructions as above).

scottgl9 added a commit to scottgl9/vllm that referenced this pull request Mar 18, 2026 (same cherry-pick message as above)
ec-jt added a commit to ec-jt/vllm that referenced this pull request Mar 22, 2026
PR vllm-project#35675 equivalent (MTP fc layer fix)
Updated qwen3_5_mtp.py
Switched import from ColumnParallelLinear to ReplicatedLinear
Changed FC construction from self.fc = ColumnParallelLinear(...) to self.fc = ReplicatedLinear(...)
Removed TP-only args (gather_output, return_bias)
Set quant_config=None for this layer
Updated call site to unpack tuple: hidden_states, _ = self.fc(hidden_states)
PR vllm-project#35936 equivalent (tool_choice="required" fallback)
Updated engine/serving.py
Replaced JSON parse suppress-block at elif request.tool_choice == "required":
New flow:
First try TypeAdapter(...).validate_json(content)
On ValidationError or JSON decode error, fallback to configured tool parser when available
Convert parsed tool calls into FunctionCall(...) entries
Removed now-unused contextlib import

Signed-off-by: ec-jt <james.trappett@elementalcompute.com>
fix: tool_choice="required" falls back to tool_parser for non-JSON formats

When tool_choice="required" and the model produces non-JSON tool calls
(e.g. XML from Qwen3 with qwen3_coder parser), both non-streaming and
streaming paths now fall back to the configured tool_parser instead of
silently dropping tool calls or failing.

Non-streaming (engine/serving.py):
- Replace contextlib.suppress(ValidationError) from vllm-project#36841 with
  try/except that preserves crash-safety (content or "") while adding
  fallback to tool_parser.extract_tool_calls() for non-JSON formats.

Streaming (chat_completion/serving.py):
- Initialize tool_parsers for "required" (not just "auto").
- Use separate if blocks (not if/else) so tool parsing runs in the
  same iteration when reasoning ends (critical for MTP/speculative
  decoding where </think> and tool call arrive in one chunk).
- Dual parser: try tool_parser first (XML), fall back to JSON-only
  extract_tool_call_required_streaming() for non-deterministic MTP.

Signed-off-by: voipmonitor <festr@voipmonitor.org>
@voipmonitor voipmonitor force-pushed the fix/tool-choice-required-xml-parser branch from eed0ae7 to a269629 Compare March 31, 2026 13:35