
Conversation


@FocusLuo FocusLuo commented Jan 6, 2026

ERNIE-4.5-21B-A3B-Thinking needs to use DefaultModelLoaderV1 mode

reference command line:
ENABLE_V1_KVCACHE_SCHEDULER=1 FD_ENC_DEC_BLOCK_NUM=8 HPU_PERF_BREAKDOWN_SYNC_MODE=1 \
HPU_WARMUP_BUCKET=0 MAX_PREFILL_NUM=1 FD_ATTENTION_BACKEND=HPU_ATTN \
python -m fastdeploy.entrypoints.openai.api_server --model \
./models--baidu--ERNIE-4.5-21B-A3B-Thinking/snapshots/4341bb42644d5422859509fa25d41544c57181f8/ \
--port 8388 --engine-worker-queue-port 8302 --metrics-port 8301 \
--cache-queue-port 8303 --max-model-len 16384 --tensor-parallel-size 1 \
--load-choices "default_v1" --num-gpu-blocks-override 5000 --kv-cache-ratio 0.5 \
--max-num-seqs 128 --block-size 64 --no-enable-prefix-caching \
--graph-optimization-config '{"use_cudagraph":false}'

python bench_gsm8k.py --data-path ./test.jsonl --port 8388 --num-shots 5 --num-questions 1319 --parallel 1 --result-file test_accuracy_parallel_1.json

Motivation

Support the ERNIE-4.5-21B-A3B-Thinking model on Intel HPU.

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Enabled DefaultModelLoaderV1 on Intel HPU.
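
The change itself is a small extension of the v1 loader's platform whitelist in fastdeploy/model_executor/utils.py (the fragment is quoted in the review thread below). A minimal sketch of the shape of that check — the surrounding condition, the other platform predicates, and the _err_msg stub are assumptions for illustration, not copied from the merged diff:

    # Sketch only: is_maca(), is_intel_hpu(), and the message text come from
    # the review snippet below; the remaining predicates are assumed placeholders.
    def _err_msg(msg: str) -> None:  # stand-in for FastDeploy's error helper
        raise ValueError(msg)

    def check_v1_loader_platform(current_platform) -> None:
        if not (
            current_platform.is_cuda()          # assumed existing branch
            or current_platform.is_xpu()        # assumed existing branch
            or current_platform.is_iluvatar()   # assumed existing branch
            or current_platform.is_maca()
            or current_platform.is_intel_hpu()  # added by this PR
        ):
            _err_msg("v1loader currently only support backends gpu, xpu, iluvatar and maca")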

Usage or Command

ENABLE_V1_KVCACHE_SCHEDULER=1 FD_ENC_DEC_BLOCK_NUM=8 HPU_PERF_BREAKDOWN_SYNC_MODE=1 \
HPU_WARMUP_BUCKET=0 MAX_PREFILL_NUM=1 FD_ATTENTION_BACKEND=HPU_ATTN \
python -m fastdeploy.entrypoints.openai.api_server \
--model /mnt/disk3/ernie_opensource/hub/models--baidu--ERNIE-4.5-21B-A3B-Thinking/snapshots/4341bb42644d5422859509fa25d41544c57181f8/ \
--port 8388 --engine-worker-queue-port 8302 --metrics-port 8301 \
--cache-queue-port 8303 --max-model-len 16384 --tensor-parallel-size 1 \
--load-choices "default_v1" --num-gpu-blocks-override 5000 --kv-cache-ratio 0.5 \
--max-num-seqs 128 --block-size 64 --no-enable-prefix-caching \
--graph-optimization-config '{"use_cudagraph":false}'
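
Once the server is up, a quick smoke test is a single chat-completion request. A minimal sketch, assuming the standard OpenAI-compatible route on the port above; the "model" value is just a placeholder for a single-model deployment:

    # Smoke test against the server launched above (assumed route and
    # placeholder model name; adjust to your deployment).
    import requests

    resp = requests.post(
        "http://localhost:8388/v1/chat/completions",
        json={
            "model": "default",  # placeholder; single-model server
            "messages": [{"role": "user", "content": "What is 12 * 7?"}],
            "max_tokens": 64,
        },
        timeout=60,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])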

Accuracy Tests

On Intel HPU
{"task": "gsm8k", "backend": "paddlepaddle", "num_gpus": 1, "latency": 21223.108, "accuracy": 0.774, "num_requests": 1319, "other": {"num_questions": 1319, "parallel": 1}}
{"task": "gsm8k", "backend": "paddlepaddle", "num_gpus": 1, "latency": 3019.105, "accuracy": 0.754, "num_requests": 1319, "other": {"num_questions": 1319, "parallel": 16}}

On NV H20
ENABLE_V1_KVCACHE_SCHEDULER=1 python -m fastdeploy.entrypoints.openai.api_server \
--model ./models--baidu--ERNIE-4.5-21B-A3B-Thinking/snapshots/4341bb42644d5422859509fa25d41544c57181f8/ \
--port 8388 --engine-worker-queue-port 8302 --metrics-port 8301 \
--cache-queue-port 8303 --tensor-parallel-size 1 --max-model-len 16384 \
--max-num-seqs 128 --block-size 64 --kv-cache-ratio 0.5 \
--num-gpu-blocks-override 6000 --graph-optimization-config '{"use_cudagraph":false}' \
--no-enable-prefix-caching --load-choices "default_v1"
{"task": "gsm8k", "backend": "paddlepaddle", "num_gpus": 1, "latency": 18805.898, "accuracy": 0.763, "num_requests": 1319, "other": {"num_questions": 1319, "parallel": 1}}

Checklist

  • Add at least a tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.


paddle-bot bot commented Jan 6, 2026

Thanks for your contribution!

@paddle-bot paddle-bot bot added the contributor External developers label Jan 6, 2026

codecov-commenter commented Jan 6, 2026

Codecov Report

❌ Patch coverage is 0% with 1 line in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@adb91dc). Learn more about missing BASE report.

Files with missing lines             Patch %   Lines
fastdeploy/model_executor/utils.py   0.00%     1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5891   +/-   ##
==========================================
  Coverage           ?   66.59%           
==========================================
  Files              ?      347           
  Lines              ?    44467           
  Branches           ?     6835           
==========================================
  Hits               ?    29614           
  Misses             ?    12668           
  Partials           ?     2185           
Flag Coverage Δ
GPU 66.59% <0.00%> (?)

Flags with carried forward coverage won't be shown.



FocusLuo commented Jan 7, 2026

@LeoZhao-Intel @yanfeich @JianyuLi01 @feiwan1 @fmiao2372 could you help review this PR?

In fastdeploy/model_executor/utils.py:

        or current_platform.is_maca()
        or current_platform.is_intel_hpu()
    ):
        _err_msg("v1loader currently only support backends gpu, xpu, iluvatar and maca")


Suggest adding "intel hpu" to the _err_msg.

@FocusLuo (Contributor Author) replied:

Thanks for the review; updated and added.
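
After the update, the message plausibly lists the new backend as well — a sketch only; the exact wording in the merged commit may differ:

    # Post-review sketch; exact merged wording may differ.
    _err_msg("v1loader currently only support backends gpu, xpu, iluvatar, maca and intel hpu")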

ERNIE-4.5-21B-A3B-Thinking needs to use DefaultModelLoaderV1 mode

Signed-off-by: Luo, Focus <[email protected]>
@FocusLuo FocusLuo force-pushed the enable-ERNIE-4.5-21B-A3B-Thinking branch from fd36ec3 to c42a0b0 on January 7, 2026 07:28

@zoooo0820 zoooo0820 left a comment


LGTM

@Jiang-Jia-Jun Jiang-Jia-Jun merged commit 64f9105 into PaddlePaddle:develop Jan 7, 2026
25 of 31 checks passed
