Commit b39ff87

kAIto47802 authored and masoudhashemi committed
[worker] fix: Fix missing rollout_log_probs argument in policy loss functions (volcengine#3274)
### What does this PR do?

In the recent PR volcengine#2953, the file `workers/actor/dp_actor.py` was updated so that `rollout_log_probs` is passed to `policy_loss_fn`:
https://github.com/volcengine/verl/blob/38d23914ee512a125e00763fe3ddcc8df4319346/verl/workers/actor/dp_actor.py#L448-L456

In that PR, the "vanilla" policy loss function was modified to accept `rollout_log_probs` as an argument. However, the other policy loss functions (e.g., "gspo") were not updated accordingly, so setting `config.policy_loss.loss_mode` to one of these alternatives fails with an error such as:

```
TypeError: compute_policy_loss_gspo() got an unexpected keyword argument 'rollout_log_probs'
```

Therefore, in this PR, `rollout_log_probs` is also added as an argument to the other policy loss functions. (An illustrative call-site sketch follows this description.)

### Checklist Before Starting

- [x] Search for similar PRs. Paste at least one query link here: ...
- [x] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [x] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [x] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [x] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
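For context, the actor resolves the policy loss function by `loss_mode` and forwards `rollout_log_probs` as a keyword argument, which is why every registered loss must now accept it. The following is a minimal standalone sketch of that call pattern, not the exact `dp_actor.py` code: the tensor shapes and values are made up, and the `"gpg"` mode is used only because its `config=None` default suggests it runs without a populated config; with `"gspo"`, the mode in the error message above, the same call previously raised the `TypeError`.

```python
import torch

from verl.trainer.ppo.core_algos import get_policy_loss_fn

# Made-up toy tensors standing in for a real rollout batch.
batch_size, response_len = 4, 16
old_log_prob = torch.randn(batch_size, response_len)
log_prob = old_log_prob + 0.01 * torch.randn(batch_size, response_len)
advantages = torch.randn(batch_size, response_len)
response_mask = torch.ones(batch_size, response_len)
rollout_log_probs = old_log_prob.clone()

# Resolve the loss by loss_mode, as the actor does, then call it with
# rollout_log_probs; before this fix, non-"vanilla" modes rejected the keyword.
policy_loss_fn = get_policy_loss_fn("gpg")
pg_loss, pg_clipfrac, ppo_kl, pg_clipfrac_lower = policy_loss_fn(
    old_log_prob=old_log_prob,
    log_prob=log_prob,
    advantages=advantages,
    response_mask=response_mask,
    loss_agg_mode="token-mean",
    config=None,  # assumed sufficient for "gpg"; other modes need clip-ratio settings
    rollout_log_probs=rollout_log_probs,
)
print(pg_loss.item())
```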
1 parent bd0ee2b commit b39ff87

File tree

1 file changed: +15 additions, -2 deletions


verl/trainer/ppo/core_algos.py

Lines changed: 15 additions & 2 deletions
```diff
@@ -41,6 +41,7 @@
         torch.Tensor,  # response_mask
         str,  # loss_agg_mode
         Optional[DictConfig | AlgoConfig],  # config
+        torch.Tensor | None,  # rollout_log_probs
     ],
     tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor],
 ]
@@ -820,7 +821,7 @@ def compute_policy_loss_vanilla(
     response_mask: torch.Tensor,
     loss_agg_mode: str = "token-mean",
     config: Optional[DictConfig | AlgoConfig] = None,
-    rollout_log_probs=None,
+    rollout_log_probs: torch.Tensor | None = None,
 ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """
     Compute the clipped policy objective and related metrics for PPO.
@@ -909,6 +910,7 @@ def compute_policy_loss_gspo(
     response_mask: torch.Tensor,
     loss_agg_mode: str = "seq-mean-token-mean",
     config: Optional[DictConfig | ActorConfig] = None,
+    rollout_log_probs: torch.Tensor | None = None,
 ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """
     Compute the clipped policy objective and related metrics for GSPO.
@@ -967,7 +969,15 @@ def compute_policy_loss_gspo(
 
 
 @register_policy_loss("gpg")
-def compute_policy_loss_gpg(old_log_prob, log_prob, advantages, response_mask, loss_agg_mode="token-mean", config=None):
+def compute_policy_loss_gpg(
+    old_log_prob: torch.Tensor,
+    log_prob: torch.Tensor,
+    advantages: torch.Tensor,
+    response_mask: torch.Tensor,
+    loss_agg_mode: str = "token-mean",
+    config: Optional[DictConfig | AlgoConfig] = None,
+    rollout_log_probs: torch.Tensor | None = None,
+) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """Adapted from
     https://github.com/AMAP-ML/GPG/blob/main/VisualThinker-R1-Zero/src/open-r1-multimodal/src/open_r1/trainer/grpo_trainer.py#L495
     Args:
@@ -995,6 +1005,7 @@ def compute_policy_loss_clip_cov(
     response_mask: torch.Tensor,
     loss_agg_mode: str = "token-mean",
     config: Optional[DictConfig | AlgoConfig] = None,
+    rollout_log_probs: torch.Tensor | None = None,
 ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """
     Compute the clipped policy objective and related metrics for Clip-Cov.
@@ -1089,6 +1100,7 @@ def compute_policy_loss_kl_cov(
     response_mask: torch.Tensor,
     loss_agg_mode: str = "token-mean",
     config: Optional[DictConfig | AlgoConfig] = None,
+    rollout_log_probs: torch.Tensor | None = None,
 ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """
     Compute the clipped policy objective and related metrics for Clip-Cov.
@@ -1160,6 +1172,7 @@ def compute_policy_loss_geo_mean(
     response_mask: torch.Tensor,
     loss_agg_mode: str = "token-mean",
     config: Optional[DictConfig | AlgoConfig] = None,
+    rollout_log_probs: torch.Tensor | None = None,
 ) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
     """
     Compute the clipped policy objective and related metrics for GMPO.
```
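Because the `PolicyLossFn` callable protocol at the top of the diff now includes `rollout_log_probs`, any custom loss registered via `register_policy_loss` should accept the same keyword. Below is a minimal sketch of a conforming custom loss; the `"my_custom"` mode name and the plain importance-sampling surrogate are illustrative only and not part of this commit, and the imports assume the `agg_loss` and `register_policy_loss` helpers already defined in `core_algos.py`.

```python
from typing import Optional

import torch
from omegaconf import DictConfig

from verl.trainer.ppo.core_algos import agg_loss, register_policy_loss


@register_policy_loss("my_custom")  # hypothetical loss_mode name, for illustration only
def compute_policy_loss_my_custom(
    old_log_prob: torch.Tensor,
    log_prob: torch.Tensor,
    advantages: torch.Tensor,
    response_mask: torch.Tensor,
    loss_agg_mode: str = "token-mean",
    config: Optional[DictConfig] = None,  # the repo's protocol also allows AlgoConfig here
    rollout_log_probs: torch.Tensor | None = None,  # must be accepted even if unused
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
    # Plain importance-sampling surrogate, just to keep the sketch self-contained.
    ratio = torch.exp(log_prob - old_log_prob)
    pg_losses = -advantages * ratio
    pg_loss = agg_loss(loss_mat=pg_losses, loss_mask=response_mask, loss_agg_mode=loss_agg_mode)
    # The registry expects four tensors: the loss plus clip-fraction / KL style metrics.
    zero = torch.zeros_like(pg_loss)
    return pg_loss, zero, zero, zero
```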
