Conversation
Summary of Changes
Hello @JustinTong0323, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces support for the GLM-4V model within the sglang framework. It includes the necessary model files, configuration adjustments, and a new conversation template to facilitate interaction with the GLM-4V model. Additionally, it addresses potential issues with MRoPE section mismatches and enhances the flexibility of the get_rope_index function to accommodate various model configurations.
Highlights
- Model Support: Adds support for the GLM-4V model, including necessary configurations and model files.
- Conversation Template: Registers a new conversation template specifically for GLM-4V, defining system messages, roles, and separators.
- MRoPE Section Adjustment: Implements auto-correction for MRoPE section mismatches in rotary embeddings, ensuring compatibility with different model configurations (a sketch of the idea follows this list).
- Vision Start Token ID: Modifies the `get_rope_index` function to accept a list of vision start token IDs, accommodating models with multiple vision start tokens.
- Conditional Logic: Adds conditional logic to handle specific model types like 'glm4v' in the `get_rope_index` function.
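To make the MRoPE auto-correction concrete, here is a minimal sketch. It assumes the usual invariant that the (temporal, height, width) sections must sum to half the head dimension, since rotary embeddings act on dimension pairs; the helper name and rescaling heuristic are illustrative, not the actual sglang code.

```python
# Hypothetical sketch of MRoPE section auto-correction; the helper name
# and rescaling heuristic are illustrative, not the actual sglang code.
from typing import List


def adjust_mrope_section(mrope_section: List[int], head_dim: int) -> List[int]:
    """Rescale the (t, h, w) MRoPE sections so they sum to head_dim // 2."""
    expected = head_dim // 2  # rotary embeddings operate on dimension pairs
    actual = sum(mrope_section)
    if actual == expected:
        return mrope_section
    if actual > 0 and expected % actual == 0:
        # Assume a uniform mismatch (e.g. sections written for the full
        # head_dim rather than head_dim // 2) and scale every section.
        factor = expected // actual
        return [s * factor for s in mrope_section]
    raise ValueError(f"mrope_section {mrope_section} does not fit head_dim {head_dim}")


# Example: sections that sum to 32 are scaled up to cover 64 pairs.
print(adjust_mrope_section([8, 12, 12], 128))  # -> [16, 24, 24]
```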
Clarifies that the image resizing method for GLM-4V is currently inherited, indicating it may be subject to future modification or replacement.
Increased the test case limits for vision OpenAI server tests and added a new test for server C. Enhanced assertions in the common test class to include checks for the term "graphic" in addition to existing keywords.
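For context, the broadened assertion might look roughly like this; the keyword list and helper name are assumptions, not the actual test code.

```python
# Illustrative sketch of the broadened keyword check; the keyword list and
# helper name are assumptions about the real test class.
def assert_mentions_image(response_text: str) -> None:
    keywords = ["image", "picture", "photo", "graphic"]  # "graphic" newly added
    assert any(word in response_text.lower() for word in keywords), (
        f"response does not describe the image: {response_text!r}"
    )
```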
CI does not pass, caused by a transformers error. Solved in upstream:

Update the transformers version first:

Thank you for your submission and for your efforts integrating GLM-4.1V into sglang. Regarding the issue with transformers, it can be resolved by updating to version 4.53.2. We appreciate your contribution and are currently reviewing the PR. Thanks again!
| "DeepseekVL2ForCausalLM", | ||
| "Gemma3ForConditionalGeneration", | ||
| "Gemma3nForConditionalGeneration", | ||
| "Glm4vForConditionalGeneration", |
nit: we should have some auto mechanism for detecting a potential VLM arch, e.g. if it has a vision_config or vision_token.
Or, more straightforwardly, add an attribute to the model class (a sketch of both ideas follows).
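A rough sketch of the auto-detection idea; the helper is hypothetical, and sglang's actual registry mechanism may differ.

```python
# Hypothetical sketch of the VLM auto-detection idea; not sglang's actual
# registry mechanism.
from transformers import AutoConfig


def looks_like_vlm(model_path: str) -> bool:
    """Flag a checkpoint as a likely VLM if its HF config carries a
    vision sub-config or an image token ID."""
    config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
    return (
        getattr(config, "vision_config", None) is not None
        or getattr(config, "image_token_id", None) is not None
    )
```

The attribute-based alternative would instead be a class-level flag on the model class, e.g. `is_multimodal = True`, that the registry checks.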
        return embeddings


class Glm4vVisionRotaryEmbedding(nn.Module):
should we move this to rotary_embedding?
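For reference, the module in question is essentially the standard 1D vision rotary embedding. A sketch of what a shared version in rotary_embedding could look like, modeled on the Qwen2-VL variant (treat the details as an approximation):

```python
# Sketch of a shared vision rotary embedding, modeled on the Qwen2-VL
# variant; details are an approximation, not the exact sglang code.
import torch
from torch import nn


class VisionRotaryEmbedding(nn.Module):
    """Shared 1D rotary embedding for ViT patch positions (sketch)."""

    def __init__(self, dim: int, theta: float = 10000.0) -> None:
        super().__init__()
        inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float) / dim))
        self.register_buffer("inv_freq", inv_freq, persistent=False)

    def forward(self, seqlen: int) -> torch.Tensor:
        seq = torch.arange(seqlen, device=self.inv_freq.device, dtype=self.inv_freq.dtype)
        # Outer product of positions and inverse frequencies gives the
        # rotation angles; callers take cos/sin of the result.
        return torch.outer(seq, self.inv_freq)
```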
@@ -0,0 +1,541 @@
import logging
nit: no pad_input_ids in this model?
inherit from qwenvl
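A minimal sketch of that suggestion; the class names are hypothetical stand-ins for sglang's actual processor classes.

```python
# Sketch of the "inherit from qwenvl" suggestion; class names are
# hypothetical stand-ins for sglang's actual processor classes.
class Qwen2VLMultimodalProcessor:  # stand-in for the existing base class
    def pad_input_ids(self, input_ids, mm_inputs):
        # Expands each image placeholder into the right number of per-patch
        # pad tokens so the sequence length matches the image features.
        ...


class Glm4vMultimodalProcessor(Qwen2VLMultimodalProcessor):
    # No override needed: GLM-4V reuses the Qwen-VL padding logic as-is.
    pass
```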
I found a blocking bug regarding multimodal video input with the OpenAI /v1/chat/completions API:

do_sample_frames = True is a mandatory requirement we set for transformers processors, as our model must sample frames according to the specified method to function.
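For illustration, a sketch of forcing frame sampling when calling the HF processor directly; the prompt and video input below are placeholders, and the exact kwargs depend on the transformers version.

```python
# Sketch of passing do_sample_frames to the HF processor; the prompt and
# video input are placeholders, and exact kwargs depend on the
# transformers version.
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("THUDM/GLM-4.1V-9B-Thinking")

prompt = "<|begin_of_video|><|video|><|end_of_video|>Describe the video."  # placeholder
video = "video.mp4"  # placeholder path; video loading details omitted

inputs = processor(
    text=[prompt],
    videos=[video],
    do_sample_frames=True,  # mandatory: the model must sample frames itself
    return_tensors="pt",
)
```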
python/sglang/srt/conversation.py
        video_token="<|begin_of_video|><|video|><|end_of_video|>",
    )
)
Thank you for your great contribution! However, I noticed that the template for GLM-4V differs significantly from that of Qwen2-VL, including differences in special tokens. You can refer to the following links for details:
https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking/blob/main/chat_template.jinja
https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking/blob/main/config.json
Given this, should the code for registering the template be modified accordingly?
If there's any issue with my suggestion, please feel free to disregard it.
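If the registration were aligned with the linked chat_template.jinja, it might look roughly like the sketch below (as a fragment of conversation.py). The system template, separator, and stop strings are my reading of that template and should be double-checked against it; only the image and video token strings come from this PR's diff.

```python
# Rough sketch of a GLM-4.1V-aligned template registration; the system
# template, separator, and stop strings are my reading of the linked
# chat_template.jinja and should be verified against it.
register_conv_template(
    Conversation(
        name="glm-4v",
        system_message="You are a helpful assistant.",
        system_template="[gMASK]<sop><|system|>\n{system_message}",
        roles=("<|user|>", "<|assistant|>"),
        sep="\n",
        stop_str=["<|user|>", "<|endoftext|>", "<|observation|>"],
        image_token="<|begin_of_image|><|image|><|end_of_image|>",
        video_token="<|begin_of_video|><|video|><|end_of_video|>",
    )
)
```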
Processor logic is from https://github.com/vllm-project/vllm/blob/32142b3c62277ac7cb941f2036270decb6b514f4/vllm/model_executor/models/glm4_1v.py#L1082, but it doesn't work well.
The tests cannot pass; I don't know why the CI has passed... (PR Test / unit-test-backend-1-gpu (5) (pull_request))
Hi @zRzRzRzRzRzRzR, thanks for reviewing. I've solved this issue, but I've encountered some more strange bugs. Could I get in touch with you via Slack, WeChat, or any other contact method to ask for your help? I appreciate your help in advance!
We have already communicated on WeChat. Once your image understanding is working properly, we can try to start modifying the video part.
move to #8798
Motivation
Close #7993
Modifications
This pull request introduces support for the GLM-4V model within the sglang framework. It includes the necessary model files, configuration adjustments, and a new conversation template to facilitate interaction with the GLM-4V model. Additionally, it addresses potential issues with MRoPE section mismatches and enhances the flexibility of the `get_rope_index` function to accommodate various model configurations.
- Modifies the `get_rope_index` function to accept a list of vision start token IDs, accommodating models with multiple vision start tokens (see the sketch below).
- Adds conditional logic to handle specific model types like 'glm4v' in the `get_rope_index` function.
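To make the signature change concrete, here is a small illustrative helper; it is not the actual `get_rope_index` body, just a sketch of the list-based lookup.

```python
# Illustrative sketch of the signature change: accept several vision start
# token IDs instead of one. Not the actual get_rope_index body.
from typing import List

import torch


def find_vision_starts(
    input_ids: torch.Tensor, vision_start_token_ids: List[int]
) -> torch.Tensor:
    """Return the positions of every vision-start token.

    Accepting a list (rather than a single ID) covers models such as
    GLM-4V that mark image and video segments with different tokens.
    """
    start_ids = torch.tensor(vision_start_token_ids, device=input_ids.device)
    mask = torch.isin(input_ids, start_ids)
    return torch.nonzero(mask, as_tuple=False).squeeze(-1)
```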