Dynamic model class loading #101

Merged
merrymercy merged 2 commits into main from cody/model on Jan 25, 2024
Conversation

@comaniac
Contributor

This PR attempts to make the model loading process more systematic and self-contained. Currently, when adding a new model, we have to explicitly import the model entry class in the model_runner.load_model function. This introduces two drawbacks:

  1. model_runner.py cannot stay focused on changes to the model runner itself.
  2. Developers have to change two files when adding a new model. The change in model_runner.py is easy to miss and may result in merge conflicts.

sglang.srt does not follow the standard Python package structure (i.e., leveraging __init__.py to construct the module hierarchy), so it's not straightforward to use a decorator-based model registry. In this PR, I instead dynamically scan all model files under models and load their entry classes. The major drawback of this approach is that every model file has to define an EntryClass alias in order to be discovered. IMHO this should be relatively easy for developers to follow.
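The scanning approach described above could look roughly like this. This is a minimal sketch, not the actual sglang implementation; the function name `load_model_classes` and the registry shape are illustrative assumptions, while the `EntryClass` convention comes from the PR description:

```python
import importlib.util
from pathlib import Path


def load_model_classes(models_dir: Path) -> dict:
    """Scan every .py file under `models_dir` and collect its EntryClass.

    Each model file is expected to define an `EntryClass` alias pointing
    at its model entry class; files without one are silently skipped.
    (Sketch only -- names are illustrative, not sglang's real API.)
    """
    registry = {}
    for py_file in sorted(models_dir.glob("*.py")):
        if py_file.name == "__init__.py":
            continue
        # Import the file directly by path, without relying on __init__.py.
        spec = importlib.util.spec_from_file_location(py_file.stem, py_file)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        entry = getattr(module, "EntryClass", None)
        if entry is not None:
            registry[entry.__name__] = entry
    return registry
```

With this in place, model_runner.load_model could look up the class by name in the registry instead of importing each model explicitly, so adding a model only touches the new model file.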

Please share your thoughts.

@comaniac comaniac requested a review from merrymercy January 25, 2024 21:25
@merrymercy merrymercy merged commit 3a581e9 into main Jan 25, 2024
@merrymercy merrymercy deleted the cody/model branch January 25, 2024 23:29
timethink pushed a commit to timethink/sglang that referenced this pull request Mar 9, 2025
NorthmanPKU pushed a commit to NorthmanPKU/sglang that referenced this pull request May 16, 2025
zhuyijie88 pushed a commit to zhuyijie88/sglang that referenced this pull request Sep 4, 2025
sammysun0711 pushed a commit to sammysun0711/sglang that referenced this pull request Dec 19, 2025
Garrybest pushed a commit to Garrybest/sglang that referenced this pull request Jan 9, 2026
blzheng added a commit to blzheng/sglang that referenced this pull request Feb 6, 2026
sywangyi pushed a commit to sywangyi/sglang that referenced this pull request Feb 26, 2026
sywangyi added a commit to sywangyi/sglang that referenced this pull request Feb 27, 2026
khalil2ji3mp6 added a commit to khalil2ji3mp6/sglang that referenced this pull request Mar 14, 2026
2 participants