Conversation

@ISEEKYAN
Collaborator

What does this PR do?

This PR adds a set of optimizations for Megatron training, including:

  1. aggressive_empty_cache to avoid OOMs with the hybrid engine. Previously, the allocator cache could sometimes occupy as much as 30 GB, causing OOMs. (A minimal sketch of the idea follows this list.)
  2. Better sequence-packing pre/post-processing. Previously, pre- and post-processing for sequence packing incurred several device-to-host (d2h) synchronizations.
  3. Make override_ddp_config compatible with mbridge.

The optimized implementations replace the old ones; no options are needed to enable them.
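For readers who want a concrete mental model of item 1, below is a minimal sketch of aggressive cache clearing at a phase boundary (e.g., switching between rollout and training on the hybrid engine). The function body is an assumption for illustration only; the actual aggressive_empty_cache in this PR may be gated or structured differently.

```python
import gc
import torch

def aggressive_empty_cache_sketch(synchronize: bool = True) -> None:
    """Illustrative sketch only; not the verl implementation.

    Drops Python-level references via gc, then returns cached allocator
    blocks to the driver so the next phase starts from a small
    reserved-memory footprint.
    """
    gc.collect()
    if torch.cuda.is_available():
        if synchronize:
            torch.cuda.synchronize()   # make sure pending kernels are done
        torch.cuda.empty_cache()       # release unused cached allocator blocks
        torch.cuda.ipc_collect()       # reclaim memory held by CUDA IPC handles
```

Called at phase boundaries, a routine like this keeps the tens of gigabytes of stale allocator cache mentioned above from stacking on top of the next phase's allocations.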

Add concise overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

  • Read the Contribute Guide (https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
  • Apply pre-commit checks (https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always
  • Add / Update the documentation (https://github.com/volcengine/verl/tree/main/docs).
  • Add unit or end-to-end test(s) to the CI workflow (https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
  • Once your PR is ready for CI, send a message in the ci-request channel (https://verl-project.slack.com/archives/C091TCESWB1) in the verl Slack workspace (https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).

@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces optimizations for Megatron training, focusing on VRAM usage and sequence packing. Key changes include a new aggressive_empty_cache utility, optimized sequence packing pre/post-processing, and refactoring of override_ddp_config. A critical issue was identified in verl/utils/memory_utils.py regarding hardcoded GPU device indices, which has been flagged for immediate attention.

Comment on lines 41 to 42
before_reserved = device.memory_reserved(0)
before_allocated = device.memory_allocated(0)
Severity: high

Hardcoding device index 0 for memory statistics can lead to incorrect behavior in multi-GPU environments where the current device is not GPU 0. The memory functions should operate on the current device. The device.memory_reserved() and device.memory_allocated() functions default to the current device when no device index is provided. Please remove the hardcoded 0.

Suggested change:
- before_reserved = device.memory_reserved(0)
- before_allocated = device.memory_allocated(0)
+ before_reserved = device.memory_reserved()
+ before_allocated = device.memory_allocated()
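For context on this point, a minimal illustration (assumptions: a CUDA build of PyTorch and at least one visible GPU; this is not part of the PR's diff): the parameterless memory queries follow whichever device the process has selected, so per-rank statistics come out right once each rank calls set_device.

```python
import torch

# Illustrative sketch: after a rank selects its GPU, parameterless memory
# queries report that GPU rather than GPU 0.
if torch.cuda.is_available():
    local_rank = 0  # hypothetical; in practice this comes from the launcher env
    torch.cuda.set_device(local_rank)
    reserved = torch.cuda.memory_reserved()    # defaults to the current device
    allocated = torch.cuda.memory_allocated()  # defaults to the current device
    print(f"cuda:{torch.cuda.current_device()}: reserved={reserved} allocated={allocated}")
```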

Comment on lines 55 to 56
after_reserved = device.memory_reserved(0)
after_allocated = device.memory_allocated(0)
Severity: high

Similar to the previous point, the device index 0 is hardcoded here. This should be removed to ensure the function queries memory statistics for the current device.

Suggested change:
- after_reserved = device.memory_reserved(0)
- after_allocated = device.memory_allocated(0)
+ after_reserved = device.memory_reserved()
+ after_allocated = device.memory_allocated()

Comment on lines 88 to 93
"total_memory_gb": device.get_device_properties(0).total_memory / 1024**3,
"reserved_memory_gb": device.memory_reserved(0) / 1024**3,
"allocated_memory_gb": device.memory_allocated(0) / 1024**3,
"cached_memory_gb": (device.memory_reserved(0) - device.memory_allocated(0)) / 1024**3,
"max_memory_allocated_gb": device.max_memory_allocated(0) / 1024**3,
"max_memory_reserved_gb": device.max_memory_reserved(0) / 1024**3,
Severity: high

This function hardcodes the device index 0 for all memory statistics. This will report incorrect information on multi-GPU systems where the current process is not on device 0.

Most torch.cuda memory functions default to the current device if no index is provided. For get_device_properties, you need to explicitly pass the current device index, which you can get via device.current_device().

    device = get_torch_device()
    device_id = device.current_device()

    return {
        "total_memory_gb": device.get_device_properties(device_id).total_memory / 1024**3,
        "reserved_memory_gb": device.memory_reserved() / 1024**3,
        "allocated_memory_gb": device.memory_allocated() / 1024**3,
        "cached_memory_gb": (device.memory_reserved() - device.memory_allocated()) / 1024**3,
        "max_memory_allocated_gb": device.max_memory_allocated() / 1024**3,
        "max_memory_reserved_gb": device.max_memory_reserved() / 1024**3,
    }
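As a usage note on the statistics above (an illustrative sketch with placeholder names such as train_step, not a verl API): resetting the peak counters before a step makes max_memory_allocated reflect that step's peak on the current device rather than the lifetime peak of the process.

```python
import torch

def log_step_peak(train_step) -> None:
    # Illustrative sketch only. reset_peak_memory_stats() and
    # max_memory_allocated() default to the current device.
    torch.cuda.reset_peak_memory_stats()
    train_step()
    peak_gb = torch.cuda.max_memory_allocated() / 1024**3
    print(f"cuda:{torch.cuda.current_device()} step peak allocated: {peak_gb:.2f} GB")
```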

@vermouth1992 requested a review from ETOgaosion on July 22, 2025 at 07:23
@ETOgaosion merged commit dc8b507 into volcengine:main on Jul 25, 2025
53 of 54 checks passed
oseyosey pushed a commit to oseyosey/verl that referenced this pull request Jul 28, 2025
CurryRice233 added a commit to CurryRice233/verl that referenced this pull request Jul 28, 2025
* origin/mindspeed: (39 commits)
  [perf] feat: add optional role selection in discrete mode for NPU Profiler (volcengine#2750)
  [rollout] feat: remove chat scheduler (volcengine#2725)
  [trainer] refactor: Make sure to keep the type checking (volcengine#2634)
  [doc] style: change resize handle from gradient to plain color (volcengine#2746)
  [CI] feat: add `mypy` to pre-commit (volcengine#2614)
  [megatron] feat: a bunch of optimzation on vram, sequence packing (volcengine#2678)
  [docker] feat: upgrade to torch 2.7, sglang 0.4.8 (volcengine#2617)
  [doc] feat: add resizable sidebar and improve layout (volcengine#2577)
  [ci] fix: release ascend test time, fix one step off-policy CI (volcengine#2731)
  [recipe] chore: add retool training script (volcengine#2732)
  [ci] fix: checkpoint_convertor ci miss a hf model download (volcengine#2730)
  [doc] feat: Add agent-lightning in the list of "awesome works using verl (volcengine#2726)
  [tool] fix: geo3k create return str instead of tuple (volcengine#2714)
  [megatron] fix: resolve backward propagation error in megatron_actor due to shared logits tensor in-place modification (volcengine#2484)
  [misc] chore: bump main branch version to v0.5.0.dev (volcengine#2718)
  [sglang] fix: Adding strict naming sanity for sglang (volcengine#2719)
  [ray] feat: RayWorkerGroup support set worker env (volcengine#2685)
  [ci] test: add CriticWorker unit test, make some util CPU friendly (volcengine#2717)
  [cfg] refactor: add ActorConfig, EngineConfig, and ActorWorker unit test, refactor validation code (volcengine#2621)
  [misc] chore: bump version to v0.5.0 (volcengine#2716)
  ...
SumanthRH pushed a commit to SumanthRH/verl that referenced this pull request Jul 29, 2025
yellowbee686 pushed a commit to yellowbee686/verl that referenced this pull request Jul 31, 2025
Juniper1021 pushed a commit to Juniper1021/verl that referenced this pull request Aug 7, 2025
whatadayG pushed a commit to whatadayG/verl that referenced this pull request Sep 5, 2025