
Conversation

@freeliuzc (Collaborator)

Motivation

💡 If this PR is a Cherry Pick, the PR title needs to follow the format: add the [Cherry-Pick] label at the very beginning and append the original PR ID at the end, for example [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. If there are no unit tests, please state the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets the release branch, make sure it has already been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings on January 6, 2026 at 03:23
paddle-bot bot commented Jan 6, 2026

Thanks for your contribution!

Copilot AI (Contributor) left a comment

Pull request overview

This cherry-pick PR aims to support multi-step MTP (Multi-Token Prediction) with CUDA Graph by modifying the capture process and fixing CUDA Graph compatibility issues.

  • Simplified CUDA Graph capture by removing separate draft model capture logic in gpu_model_runner.py
  • Modified the expected decode length calculation for MTP warmup
  • Enhanced _initialize_forward_meta to conditionally enable CUDA Graph based on substep during dummy runs
  • Fixed CUDA error 700 (an illegal memory access) by replacing paddle.clone with in-place copy_ in CUDA Graph mode, as sketched below
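
For context on the clone-versus-copy_ point, a minimal sketch of the failure mode under CUDA Graph. The buffer name and shape are hypothetical; only the paddle.clone and Tensor.copy_ calls come from the PR description, and CUDA error 700 corresponds to cudaErrorIllegalAddress.

```python
import paddle

# Hypothetical persistent buffer. CUDA Graph capture records raw device
# pointers, so every replay must touch this exact allocation.
buf = paddle.zeros([8], dtype="float32")

def step(new_values: paddle.Tensor) -> paddle.Tensor:
    # paddle.clone allocates a fresh tensor on every call; under CUDA Graph
    # capture/replay the recorded kernels still reference the old address,
    # which can surface as CUDA error 700 (illegal memory access):
    #     out = paddle.clone(new_values)  # unsafe while a graph is in play
    # In-place copy_ writes into the already-captured buffer instead, so
    # each replay reads and writes the same memory:
    buf.copy_(new_values, False)  # non-blocking in-place copy
    return buf
```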

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| fastdeploy/worker/gpu_model_runner.py | Removed complex draft model CUDA Graph capture logic, updated expected_decode_len calculation for MTP, simplified warmup logging |
| fastdeploy/spec_decode/mtp.py | Added parameters to _initialize_forward_meta for multi-step CUDA Graph support, replaced paddle.clone with copy_ to avoid CUDA error 700, added documentation about CUDA Graph capture requirements |
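
The mtp.py row mentions conditionally enabling CUDA Graph per substep during dummy runs. A minimal sketch of that shape, with a hypothetical class, attribute names, and gating condition (the PR's actual logic is not reproduced here):

```python
class MTPProposerSketch:
    """Illustrative stand-in for the MTP proposer, not the PR's class."""

    def __init__(self):
        # Minimal forward-meta holder with the one flag this sketch sets.
        self.forward_meta = type("Meta", (), {"step_use_cudagraph": False})()

    def _initialize_forward_meta(self, step_use_cudagraph=False, is_dummy_run=False, substep=0):
        # Hypothetical rule: during a dummy (warmup) run, gate CUDA Graph
        # per substep so each multi-step MTP substep is captured deliberately.
        self.forward_meta.step_use_cudagraph = step_use_cudagraph and (
            not is_dummy_run or substep == 0
        )
```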

```diff
  # Initialize forward meta data
- self._initialize_forward_meta(step_use_cudagraph=step_use_cudagraph)
+ self._initialize_forward_meta(
+     step_use_cudagraph=step_use_cudagraph, is_dummy_run=is_dummy_run, substep=substep
+ )
```

Copilot AI commented Jan 6, 2026


The variable 'is_dummy_run' is used but not defined in the _propose method. This will cause a NameError at runtime when _initialize_forward_meta is called. The _propose method signature only includes 'step_use_cudagraph' as a parameter, but 'is_dummy_run' is being passed to _initialize_forward_meta. You need to either add 'is_dummy_run' as a parameter to the _propose method or determine it from existing state/attributes.

Suggested change:

```diff
- step_use_cudagraph=step_use_cudagraph, is_dummy_run=is_dummy_run, substep=substep
+ step_use_cudagraph=step_use_cudagraph, substep=substep
```
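
The other way to resolve the NameError is to thread the flag through _propose itself; a sketch of that alternative with assumed defaults (the enclosing class and the method's full signature are not shown in this excerpt):

```python
def _propose(self, step_use_cudagraph: bool = False, is_dummy_run: bool = False, substep: int = 0):
    # Accepting is_dummy_run as a parameter (instead of reading an undefined
    # name) lets _initialize_forward_meta gate CUDA Graph per substep.
    self._initialize_forward_meta(
        step_use_cudagraph=step_use_cudagraph,
        is_dummy_run=is_dummy_run,
        substep=substep,
    )
```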

Comment on lines 1956 to +1957
```diff
  logger.info(
-     f"Warm up the Target model with the num_tokens:{capture_size}, expected_decode_len:{self.speculative_config.num_speculative_tokens}"
+     f"Warm up the model with the num_tokens:{capture_size}, expected_decode_len:{self.speculative_config.num_speculative_tokens}"
```

Copilot AI commented Jan 6, 2026


The log message is inconsistent with the actual expected_decode_len value being passed to _dummy_run. The log says 'expected_decode_len:{self.speculative_config.num_speculative_tokens}' but the actual parameter passed on line 1953 is 'self.speculative_config.num_speculative_tokens * 2 + 1'. The log message should reflect the actual value being used to avoid confusion during debugging.
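
One way to keep the log and the call in sync is to compute the value once and use it in both places; a sketch of a method-body fragment, assuming hypothetical _dummy_run keyword names (only the * 2 + 1 expression comes from the comment above):

```python
# Compute once so the log line and the warmup call cannot drift apart.
expected_decode_len = self.speculative_config.num_speculative_tokens * 2 + 1
logger.info(
    f"Warm up the model with the num_tokens:{capture_size}, "
    f"expected_decode_len:{expected_decode_len}"
)
self._dummy_run(num_tokens=capture_size, expected_decode_len=expected_decode_len)
```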

codecov-commenter commented Jan 6, 2026

Codecov Report

❌ Patch coverage is 85.71429% with 1 line in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (release/online/20251131@7aea651). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| fastdeploy/spec_decode/mtp.py | 85.71% | 1 Missing ⚠️ |
Additional details and impacted files
```text
@@                    Coverage Diff                     @@
##             release/online/20251131    #5897   +/-   ##
==========================================================
  Coverage                           ?   58.50%
==========================================================
  Files                              ?      320
  Lines                              ?    39181
  Branches                           ?     5909
==========================================================
  Hits                               ?    22923
  Misses                             ?    14425
  Partials                           ?     1833
```
| Flag | Coverage Δ |
| --- | --- |
| GPU | 58.50% <85.71%> (?) |

Flags with carried forward coverage won't be shown.


@gongshaotian (Collaborator) left a comment

LGTM

@Jiang-Jia-Jun merged commit 43dc335 into PaddlePaddle:release/online/20251131 on Jan 7, 2026
13 of 18 checks passed