This repository was archived by the owner on Oct 11, 2024. It is now read-only.

Commit 0f5a490

youkaichao authored and Robert Shaw committed

[Core][Distributed] improve logging for init dist (vllm-project#4042)

1 parent 0356684 commit 0f5a490

File tree

1 file changed: +6 −0 lines changed


vllm/distributed/parallel_state.py

Lines changed: 6 additions & 0 deletions

@@ -8,6 +8,10 @@
 
 import torch
 
+from vllm.logger import init_logger
+
+logger = init_logger(__name__)
+
 # Tensor model parallel group that the current rank belongs to.
 _TENSOR_MODEL_PARALLEL_GROUP = None
 # Pipeline model parallel group that the current rank belongs to.
@@ -45,6 +49,8 @@ def init_distributed_environment(
     local_rank: int = -1,
     backend: str = "nccl",
 ):
+    logger.debug(f"{world_size=} {rank=} {local_rank=} "
+                 f"{distributed_init_method=} {backend=}")
     if not torch.distributed.is_initialized():
         assert distributed_init_method is not None, (
             "distributed_init_method must be provided when initializing "
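The added log line uses the f-string `=` debug specifier (available since Python 3.8), which expands `f"{name=}"` into `name=<repr of value>`, so each argument's name and value appear in the log without writing them twice. A minimal sketch of what the new `logger.debug` call would emit, using placeholder values (the variable values below are illustrative, not from the source):

```python
# The "=" specifier inside an f-string prints both the expression text and
# the repr() of its value, e.g. f"{rank=}" -> "rank=0".
world_size = 4
rank = 0
backend = "nccl"

# Same formatting style as the logger.debug call added in this commit.
msg = f"{world_size=} {rank=} {backend=}"
print(msg)  # world_size=4 rank=0 backend='nccl'
```

Note that strings are rendered via `repr()`, so `backend` appears quoted as `'nccl'`; this makes the distributed-init parameters unambiguous when reading debug logs.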

0 commit comments
