Commit e7a9260

Workaround for weird (CI) fails

Signed-off-by: Vadim Gimpelson <[email protected]>
Parent: 9a3883c

File tree: 1 file changed (+2 −1)

vllm/model_executor/layers/layernorm.py

Lines changed: 2 additions & 1 deletion
@@ -12,7 +12,6 @@
     rms_norm_batch_invariant,
     vllm_is_batch_invariant,
 )
-from vllm.model_executor.layers.fla.ops.layernorm_guard import rmsnorm_fn
 from vllm.platforms import current_platform
 from vllm.utils.torch_utils import direct_register_custom_op

@@ -460,6 +459,8 @@ def forward_native(
     def forward_cuda(
         self, x: torch.Tensor, z: torch.Tensor | None = None
     ) -> torch.Tensor:
+        from vllm.model_executor.layers.fla.ops.layernorm_guard import rmsnorm_fn
+
         return rmsnorm_fn(
             x,
             self.weight,
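
The workaround replaces a module-level import of rmsnorm_fn with a function-local (lazy) import inside forward_cuda, so the layernorm_guard module is only imported the first time that code path runs rather than whenever layernorm.py is loaded. Below is a minimal sketch of this deferred-import pattern; the class and names are illustrative assumptions, not vLLM's actual API, and the standard-library statistics module stands in for the fragile dependency.

# sketch_lazy_import.py -- illustrative only; RMSNormSketch is a hypothetical
# stand-in for the real layer, and `statistics` plays the role of the
# fragile dependency (layernorm_guard in the actual commit).


class RMSNormSketch:
    def __init__(self, weight: list[float]) -> None:
        self.weight = weight

    def forward(self, x: list[float]) -> list[float]:
        # Lazy import: resolved on the first call instead of at module load,
        # so merely importing this file never triggers the fragile import.
        # Python caches the module in sys.modules, so repeat calls are cheap.
        import statistics

        rms = statistics.fmean(v * v for v in x) ** 0.5
        return [v / (rms + 1e-6) * w for v, w in zip(x, self.weight)]


if __name__ == "__main__":
    layer = RMSNormSketch([1.0, 1.0, 1.0])
    print(layer.forward([1.0, 2.0, 3.0]))

After the first call, the per-call cost of the inner import is just a dictionary lookup in sys.modules; the trade-off is that any ImportError now surfaces on first use of forward_cuda instead of at startup, which is what makes the pattern useful when import itself is what fails in CI.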
