[Models] Replace all nn.Conv2d with vLLM's Conv2dLayer #28842
Changes from 2 commits
```diff
@@ -24,6 +24,7 @@
 from vllm.config.multimodal import BaseDummyOptions
 from vllm.distributed import get_tensor_model_parallel_world_size
 from vllm.logger import init_logger
+from vllm.model_executor.layers.conv import Conv2dLayer
 from vllm.model_executor.layers.linear import (
     ColumnParallelLinear,
     QKVParallelLinear,
@@ -204,7 +205,7 @@ def __init__(self, config: PretrainedConfig):
         self.image_size = config.image_size
         self.patch_size = config.patch_size

-        self.patch_embedding = nn.Conv2d(
+        self.patch_embedding = Conv2dLayer(
             in_channels=config.num_channels,
             out_channels=self.embed_dim,
             kernel_size=self.patch_size,
```
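For context, the review comment at the end of this thread describes Conv2dLayer as a thin wrapper that hands its padding argument straight to F.conv2d. A rough sketch of such a wrapper under that assumption (illustrative only, not vLLM's actual implementation, which presumably also integrates with vLLM's weight-loading machinery):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Conv2dLayerSketch(nn.Module):
    """Illustrative stand-in for vLLM's Conv2dLayer; names and details assumed."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int,
                 stride: int = 1, padding=0, bias: bool = True) -> None:
        super().__init__()
        self.stride = stride
        self.padding = padding
        self.weight = nn.Parameter(
            torch.empty(out_channels, in_channels, kernel_size, kernel_size))
        self.bias = nn.Parameter(torch.zeros(out_channels)) if bias else None
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # padding is handed to F.conv2d unchanged, without nn.Conv2d's
        # handling of string shortcuts (the point raised in the review below).
        return F.conv2d(x, self.weight, self.bias,
                        stride=self.stride, padding=self.padding)
```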
Comment on lines +208 to 211

**Member:** Can you update the type annotation to account for this?
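A minimal sketch of the kind of follow-up this seems to ask for, assuming the annotation in question types the attribute (or a related signature) as nn.Conv2d; the class and parameter names below are placeholders, not the actual model code:

```python
import torch.nn as nn

from vllm.model_executor.layers.conv import Conv2dLayer


class VisionPatchEmbedding(nn.Module):  # placeholder class name
    # Before: patch_embedding: nn.Conv2d
    patch_embedding: Conv2dLayer  # annotation updated to match the new layer type

    def __init__(self, num_channels: int, embed_dim: int, patch_size: int) -> None:
        super().__init__()
        # Only the arguments visible in the diff are taken as given here.
        self.patch_embedding = Conv2dLayer(
            in_channels=num_channels,
            out_channels=embed_dim,
            kernel_size=patch_size,
        )
```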
```diff
@@ -45,6 +45,7 @@
 from vllm.distributed import parallel_state
 from vllm.distributed import utils as dist_utils
 from vllm.model_executor.layers.activation import get_act_fn
+from vllm.model_executor.layers.conv import Conv2dLayer
 from vllm.model_executor.layers.linear import (
     ColumnParallelLinear,
     QKVParallelLinear,
@@ -419,7 +420,7 @@ def __init__(self, config: PretrainedConfig):
         self.image_size = config.image_size
         self.patch_size = config.patch_size

-        self.patch_embedding = nn.Conv2d(
+        self.patch_embedding = Conv2dLayer(
             in_channels=config.num_channels,
             out_channels=self.embed_dim,
             kernel_size=self.patch_size,
```
Comment on lines +423 to 426
The new Conv2dLayer wrapper forwards padding directly to F.conv2d and does not implement the "valid" shortcut that nn.Conv2d provided. Using the string here will cause a runtime failure when forward runs. Replace with the correct numeric padding (0) or add conversion logic.
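A rough sketch of the two remedies the comment proposes: passing the numeric equivalent at the call site, or converting string padding inside the wrapper. Only the arguments visible in the diff are taken as given; the helper name is hypothetical.

```python
# Option 1: fix at the call site. In nn.Conv2d, padding="valid" means no
# padding, so the numeric equivalent is 0.
self.patch_embedding = Conv2dLayer(
    in_channels=config.num_channels,
    out_channels=self.embed_dim,
    kernel_size=self.patch_size,
    padding=0,  # was padding="valid"
)


# Option 2: add conversion logic in the wrapper (hypothetical helper name).
def _resolve_string_padding(padding):
    """Map nn.Conv2d's string padding shortcuts to numeric padding."""
    if padding == "valid":
        return 0  # "valid" is simply zero padding
    if isinstance(padding, str):
        # "same" needs kernel- and dilation-aware logic, out of scope here.
        raise NotImplementedError(f"unsupported string padding: {padding!r}")
    return padding
```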