Commit bae1fa0

[worker] fix vllm sharding manager (#348)
1 parent 3271e6e commit bae1fa0

File tree

1 file changed, +1 −1 lines changed


verl/workers/sharding_manager/fsdp_vllm.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -137,7 +137,7 @@ def load_vllm_and_sync_weights(self):
         self.torch_random_states = torch.cuda.get_rng_state()
         torch.cuda.set_rng_state(self.gen_random_states)

-    def offload_vllm(self, exc_type, exc_value, traceback):
+    def offload_vllm(self):
         """Offload vllm engine."""
         assert self.loaded is True, "vllm engine has not been loaded"
         self.loaded = False
```
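The fix removes `__exit__`-style parameters (`exc_type`, `exc_value`, `traceback`) from a method that is called directly rather than as a context-manager exit hook. A minimal, hypothetical sketch of the corrected pattern (the class name and `load_vllm_and_sync_weights` body here are simplified stand-ins, not the actual verl implementation):

```python
# Hypothetical, simplified sketch of the pattern fixed in this commit.
# offload_vllm is an ordinary method invoked by the sharding manager,
# not an __exit__ hook, so it takes no exception-info parameters.
class ShardingManagerSketch:
    def __init__(self):
        self.loaded = False

    def load_vllm_and_sync_weights(self):
        """Mark the (hypothetical) vllm engine as loaded."""
        self.loaded = True

    def offload_vllm(self):
        """Offload vllm engine."""
        assert self.loaded is True, "vllm engine has not been loaded"
        self.loaded = False
```

With the old signature, any caller invoking `manager.offload_vllm()` with no arguments would raise a `TypeError` for the missing `exc_type`, `exc_value`, and `traceback` parameters; dropping them matches how the method is actually called.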
