
[WIP] Support glm4.1v #8015

Closed
JustinTong0323 wants to merge 15 commits into sgl-project:main from JustinTong0323:new-model-glm4v

Conversation

@JustinTong0323 (Collaborator) commented Jul 14, 2025:

Motivation

Close #7993

Modifications

This pull request introduces support for the GLM-4V model within the sglang framework. It includes the necessary model files, configuration adjustments, and a new conversation template to facilitate interaction with the GLM-4V model. Additionally, it addresses potential issues with MRoPE section mismatches and enhances the flexibility of the get_rope_index function to accommodate various model configurations.

  • Model Support: Adds support for the GLM-4V model, including necessary configurations and model files.
  • Conversation Template: Registers a new conversation template specifically for GLM-4V, defining system messages, roles, and separators.
  • MRoPE Section Adjustment: Implements auto-correction for MRoPE section mismatches in rotary embeddings, ensuring compatibility with different model configurations.
  • Vision Start Token ID: Modifies the get_rope_index function to accept a list of vision start token IDs, accommodating models with multiple vision start tokens (see the sketch after this list).
  • Conditional Logic: Adds conditional logic to handle specific model types like 'glm4v' in the get_rope_index function.
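For illustration, a minimal sketch of the vision-start handling described in the last two items, assuming a Qwen2-VL-style get_rope_index; the helper name and token IDs are hypothetical, not this PR's exact code:

```python
import torch

def find_vision_starts(
    input_ids: torch.Tensor, vision_start_token_ids: list[int]
) -> torch.Tensor:
    """Locate every vision segment, whichever start token opens it.

    Hypothetical helper: accepting a list instead of a single ID lets
    models such as GLM-4V, which use distinct start tokens for images
    and videos, share one get_rope_index code path.
    """
    starts = torch.tensor(vision_start_token_ids, device=input_ids.device)
    mask = torch.isin(input_ids, starts)
    return mask.nonzero(as_tuple=True)[0]

# Illustrative token IDs, not GLM-4V's real vocabulary.
ids = torch.tensor([101, 9001, 5, 5, 9002, 7])
print(find_vision_starts(ids, [9001, 9002]))  # tensor([1, 4])
```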

Checklist

@gemini-code-assist (bot, Contributor) left a comment.

This comment was marked as outdated.

JustinTong0323 marked this pull request as draft July 14, 2025 08:55
JustinTong0323 added commits (each Signed-off-by: Xinyuan Tong <justinning0323@outlook.com>):

  • Clarifies that the image resizing method for GLM-4V is currently inherited, indicating it may be subject to future modification or replacement.
  • Increases the test case limits for vision OpenAI server tests and adds a new test for server C. Enhances assertions in the common test class to include checks for the term "graphic" in addition to existing keywords.
JustinTong0323 marked this pull request as ready for review July 14, 2025 10:35
@JustinTong0323 (Collaborator, Author) commented:

CI does not pass, caused by a transformers error:

```
  File "/usr/local/lib/python3.10/dist-packages/transformers/image_processing_utils_fast.py", line 614, in __call__
    return self.preprocess(images, *args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/glm4v/image_processing_glm4v_fast.py", line 310, in preprocess
    patches, image_grid_thw = self._preprocess(
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/glm4v/image_processing_glm4v_fast.py", line 176, in _preprocess
    grouped_images, grouped_images_index = group_images_by_shape(images)
TypeError: group_images_by_shape() missing 1 required positional argument: 'disable_grouping'
```

Solved in upstream:

@JustinTong0323 (Collaborator, Author) commented:

Update transformers version first:

@zRzRzRzRzRzRzR (Contributor) commented:

Thank you for your submission and for your efforts integrating GLM-4.1V into sglang.

Regarding the issue with transformers, it can be resolved by updating to version 4.53.2.

We appreciate your contribution and are currently reviewing the PR. Thanks again!

"DeepseekVL2ForCausalLM",
"Gemma3ForConditionalGeneration",
"Gemma3nForConditionalGeneration",
"Glm4vForConditionalGeneration",
A Collaborator commented:

nit: we should have some auto mechanism for detecting potential vlm arch, e.g. if it has a vision_config or vision_token
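A sketch of the suggested heuristic, assuming the HF config is available at registration time; the function and the attribute names it checks are illustrative:

```python
from transformers import AutoConfig

def looks_like_vlm(model_path: str) -> bool:
    """Treat an architecture as a VLM candidate if its config carries a
    vision tower or a vision token (real configs vary in spelling, so a
    production check would cover more attribute names)."""
    config = AutoConfig.from_pretrained(model_path)
    return hasattr(config, "vision_config") or hasattr(config, "vision_token_id")
```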

A Collaborator commented:

Or, more straightforward: add an attribute to the model class (a sketch follows).
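A sketch of that alternative; the attribute name is hypothetical:

```python
from torch import nn

class Glm4vForConditionalGeneration(nn.Module):
    # Hypothetical marker attribute: the registry would check
    # getattr(model_cls, "is_multimodal_model", False) instead of
    # maintaining a hard-coded list of architecture names.
    is_multimodal_model = True
```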

return embeddings


class Glm4vVisionRotaryEmbedding(nn.Module):
A Collaborator commented:

should we move this to rotary_embedding?
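For context, the module in question is of the kind sketched below, a minimal Qwen2-VL-style vision rotary embedding (illustrative, not this PR's exact implementation), which is why relocating it next to the shared rotary_embedding code is plausible:

```python
import torch
from torch import nn

class VisionRotaryEmbedding(nn.Module):
    """Minimal sketch of a Qwen2-VL-style vision rotary embedding."""

    def __init__(self, dim: int, theta: float = 10000.0) -> None:
        super().__init__()
        # Standard RoPE inverse frequencies over half the head dimension.
        inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float) / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    def forward(self, seqlen: int) -> torch.Tensor:
        # Rotation angle for every (position, frequency) pair.
        seq = torch.arange(seqlen, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        return torch.outer(seq, self.inv_freq)
```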

@@ -0,0 +1,541 @@
import logging
A Collaborator commented:

nit: no pad_input_ids in this model?

The PR author replied:

It is inherited from qwenvl.
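A sketch of what that inheritance looks like in spirit; the class names stand in for sglang's actual processor classes:

```python
class Qwen2VLImageProcessor:  # stand-in for sglang's Qwen-VL processor
    def pad_input_ids(self, input_ids, image_inputs):
        # In the real processor this expands each image placeholder into
        # the right number of image tokens and records their offsets.
        return input_ids

class Glm4vImageProcessor(Qwen2VLImageProcessor):
    # pad_input_ids is deliberately not overridden: the Qwen-VL behavior
    # is reused as-is, per the reply above.
    pass
```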

@lambertwjh commented:

I found a blocking bug regarding multimodal video input with the OpenAI /v1/chat/completions API:
Even when do_sample_frames: false (and all related parameter names) are set both at the root and in video_url, the server always returns this error for video input:
{"object":"error","message":"Frame sampling is enabled but no video metadata was found. Please pass in VideoMetadata object per each input video or set do_sample_frames=False","type":"BadRequestError"}
All keys (do_sample_frames, sample_frames, frame_sampling, etc.) are boolean and included, together with a minimal video_metadata dict, and verified in the server-side debug log.
The server still ignores these parameters and refuses to disable frame sampling for video, making it impossible to use video input via the OpenAI API.
For text/image input, everything works fine.
Please check whether the backend reads and respects these parameters for video input in this PR.
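For reference, a request body of the shape the report describes (the URL is a placeholder and the extra fields are the reporter's; this is not a claim that the server honors them):

```python
payload = {
    "model": "glm-4.1v",
    "do_sample_frames": False,  # set at the root, per the report
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this video."},
                {
                    "type": "video_url",
                    "video_url": {
                        "url": "https://example.com/clip.mp4",  # placeholder
                        "do_sample_frames": False,  # also set here, per the report
                    },
                },
            ],
        }
    ],
}
```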

@zRzRzRzRzRzRzR (Contributor) commented Jul 19, 2025:

do_sample_frames = True is a mandatory requirement we set for the transformers processors, as our model must sample frames according to the specified method to function.

video_token="<|begin_of_video|><|video|><|end_of_video|>",
)
)

A reviewer commented:

Thank you for your great contribution! However, I noticed that the template for GLM-4V differs significantly from Qwen2-VL's, including differences in special tokens. You can refer to the following links for details:

https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking/blob/main/chat_template.jinja

https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking/blob/main/config.json

Given this, should the code for registering the template be modified accordingly?
If there's any issue with my suggestion, please feel free to disregard it.
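For reference, a sketch of what an aligned registration might look like, assuming sglang's Conversation/register_conv_template API; the field values are drawn from GLM-4V's special tokens for illustration, not from this PR:

```python
from sglang.srt.conversation import (
    Conversation,
    SeparatorStyle,
    register_conv_template,
)

register_conv_template(
    Conversation(
        name="glm-4v",
        system_template="[gMASK]<sop><|system|>\n{system_message}",
        system_message="You are a helpful assistant.",
        roles=("<|user|>", "<|assistant|>"),
        sep_style=SeparatorStyle.NO_COLON_SINGLE,
        sep="",
        stop_str=["<|user|>", "<|endoftext|>", "<|observation|>"],
        image_token="<|begin_of_image|><|image|><|end_of_image|>",
        video_token="<|begin_of_video|><|video|><|end_of_video|>",
    )
)
```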

JustinTong0323 and others added 5 commits July 21, 2025 00:48 (each Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>)
@JustinTong0323 (Collaborator, Author) commented:

The tests cannot pass; I don't know why the CI passed... (PR Test / unit-test-backend-1-gpu (5) (pull_request))

@JustinTong0323 (Collaborator, Author) commented:

> do_sample_frames = True is a mandatory requirement we set for the transformers processors, as our model must sample frames according to the specified method to function.

Hi @zRzRzRzRzRzRzR, thanks for reviewing. I've solved this issue, but I've encountered some more strange bugs. Could I get in touch with you via Slack, WeChat, or any other contact method to ask for your help? I appreciate your help in advance!

JustinTong0323 and others added 2 commits August 3, 2025 21:11 (Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>)
@zRzRzRzRzRzRzR (Contributor) commented:

We have already communicated on WeChat. Once your image understanding is working properly, we can start modifying the video part.

@JustinTong0323 (Collaborator, Author) commented:

Moved to #8798.

@gemini-code-assist (bot, Contributor) commented:

Warning

You have reached your daily quota limit. Please wait up to 24 hours and I will start processing your requests again!

byjiang1996 mentioned this pull request Aug 7, 2025
JustinTong0323 deleted the new-model-glm4v branch August 7, 2025 20:29

Development

Successfully merging this pull request may close these issues:

  • [Feature] Any plan to support glm-4.1v-thinking

5 participants