[Models] Replace all nn.Conv2d with vLLM's Conv2dLayer (#28842)
Changes from all commits
```diff
@@ -30,6 +30,7 @@
 from vllm.attention.layer import MultiHeadAttention
 from vllm.distributed import get_tensor_model_parallel_world_size
 from vllm.model_executor.layers.activation import get_act_fn
+from vllm.model_executor.layers.conv import Conv2dLayer
 from vllm.model_executor.layers.linear import (
     ColumnParallelLinear,
     QKVParallelLinear,
```

```diff
@@ -60,7 +61,7 @@ def __init__(self, config: Idefics2VisionConfig):
         self.embed_dim = config.hidden_size
         self.image_size = config.image_size
         self.patch_size = config.patch_size
-        self.patch_embedding = nn.Conv2d(
+        self.patch_embedding = Conv2dLayer(
             in_channels=config.num_channels,
             out_channels=self.embed_dim,
             kernel_size=self.patch_size,
```
**Comment on lines +64 to 67**

The new …
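Both call sites in this PR are ViT-style patch embeddings, where `kernel_size == stride == patch_size` so the convolution tiles the image into non-overlapping patches. The standard output-size arithmetic can be sketched as follows (an illustrative helper, independent of vLLM's actual `Conv2dLayer` code):

```python
def conv2d_output_size(size: int, kernel: int, stride: int = 1, padding: int = 0) -> int:
    """Output length along one spatial dimension of a 2D convolution."""
    return (size + 2 * padding - kernel) // stride + 1

# Patch embedding: kernel_size == stride == patch_size, so a 224x224 image
# with patch_size=14 is tiled into a 16x16 grid of patches.
patches_per_side = conv2d_output_size(224, kernel=14, stride=14)
print(patches_per_side)  # 16
```

Because the kernel and stride are equal, each input pixel belongs to exactly one patch, which is why a drop-in conv replacement here only needs to reproduce `nn.Conv2d`'s shape semantics.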
```diff
@@ -24,6 +24,7 @@
 from vllm.config.multimodal import BaseDummyOptions
 from vllm.distributed import get_tensor_model_parallel_world_size
 from vllm.logger import init_logger
+from vllm.model_executor.layers.conv import Conv2dLayer
 from vllm.model_executor.layers.linear import (
     ColumnParallelLinear,
     QKVParallelLinear,
```

```diff
@@ -204,7 +205,7 @@ def __init__(self, config: PretrainedConfig):
         self.image_size = config.image_size
         self.patch_size = config.patch_size

-        self.patch_embedding = nn.Conv2d(
+        self.patch_embedding = Conv2dLayer(
             in_channels=config.num_channels,
             out_channels=self.embed_dim,
             kernel_size=self.patch_size,
```
**Comment on lines +208 to 211** (Member)

Can you update the type annotation to account for this?
The addition of this validation check is crucial for correctness. `padding='same'` behavior is not well-defined for strided convolutions in all frameworks, and explicitly disallowing it prevents potential silent miscalculations or unexpected output dimensions. This improves the robustness of the `Conv2dLayer`.
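The check this comment describes can be sketched as a standalone guard (a hypothetical version; vLLM's actual implementation lives in `vllm/model_executor/layers/conv.py` and its exact signature may differ). PyTorch itself raises an error for `padding='same'` with any stride other than 1, and a conv wrapper would want to fail just as early:

```python
def validate_conv2d_padding(padding, stride) -> None:
    """Reject padding='same' combined with stride > 1.

    Sketch of the validation the review comment refers to, not vLLM's
    actual code. Accepts int or tuple strides, as conv layers usually do.
    """
    stride_vals = tuple(stride) if isinstance(stride, (tuple, list)) else (stride,)
    if padding == "same" and any(s > 1 for s in stride_vals):
        raise ValueError(
            "padding='same' is not supported for strided convolutions"
        )


validate_conv2d_padding("same", 1)   # ok: unit stride
validate_conv2d_padding(0, 2)        # ok: explicit padding with stride
try:
    validate_conv2d_padding("same", (2, 2))
except ValueError as e:
    print(e)  # padding='same' is not supported for strided convolutions
```

Raising at construction time turns a silent shape mismatch into an immediate, readable error.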