[megatron] feat: a bunch of optimzation on vram, sequence packing #2678
Code Review
This pull request introduces optimizations for Megatron training, focusing on VRAM usage and sequence packing. Key changes include a new aggressive_empty_cache utility, optimized sequence packing pre/post-processing, and refactoring of override_ddp_config. A critical issue was identified in verl/utils/memory_utils.py regarding hardcoded GPU device indices, which has been flagged for immediate attention.
verl/utils/memory_utils.py

```python
before_reserved = device.memory_reserved(0)
before_allocated = device.memory_allocated(0)
```

Hardcoding device index 0 for memory statistics can lead to incorrect behavior in multi-GPU environments where the current device is not GPU 0. The memory functions should operate on the current device: `device.memory_reserved()` and `device.memory_allocated()` default to the current device when no device index is provided. Please remove the hardcoded `0`:

```python
before_reserved = device.memory_reserved()
before_allocated = device.memory_allocated()
```
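Since every call site repeats the same fix, the pattern can be captured in one helper. Below is a minimal sketch; the `collect_memory_stats` name and the duck-typed `device` parameter are illustrative, not verl's actual API, and `device` stands in for a `torch.cuda`-like module:

```python
def collect_memory_stats(device):
    """Report memory stats for the *current* device, never a hardcoded index.

    `device` is any object exposing the torch.cuda-style API:
    current_device(), get_device_properties(i), memory_reserved(),
    memory_allocated(). Only get_device_properties needs an explicit
    index; the other calls default to the current device.
    """
    device_id = device.current_device()
    gib = 1024 ** 3
    reserved = device.memory_reserved()
    allocated = device.memory_allocated()
    return {
        "total_memory_gb": device.get_device_properties(device_id).total_memory / gib,
        "reserved_memory_gb": reserved / gib,
        "allocated_memory_gb": allocated / gib,
        "cached_memory_gb": (reserved - allocated) / gib,
    }
```

Because the helper only depends on the duck-typed interface, it behaves the same whichever GPU the current process is bound to.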
verl/utils/memory_utils.py

```python
after_reserved = device.memory_reserved(0)
after_allocated = device.memory_allocated(0)
```

Similar to the previous point, the device index 0 is hardcoded here. It should be removed so the function queries memory statistics for the current device:

```python
after_reserved = device.memory_reserved()
after_allocated = device.memory_allocated()
```
verl/utils/memory_utils.py

```python
"total_memory_gb": device.get_device_properties(0).total_memory / 1024**3,
"reserved_memory_gb": device.memory_reserved(0) / 1024**3,
"allocated_memory_gb": device.memory_allocated(0) / 1024**3,
"cached_memory_gb": (device.memory_reserved(0) - device.memory_allocated(0)) / 1024**3,
"max_memory_allocated_gb": device.max_memory_allocated(0) / 1024**3,
"max_memory_reserved_gb": device.max_memory_reserved(0) / 1024**3,
```

This function hardcodes device index 0 for all memory statistics, so it will report incorrect information on multi-GPU systems where the current process is not on device 0. Most `torch.cuda` memory functions default to the current device if no index is provided; for `get_device_properties`, explicitly pass the current device index, obtained via `device.current_device()`:
```python
device = get_torch_device()
device_id = device.current_device()
return {
    "total_memory_gb": device.get_device_properties(device_id).total_memory / 1024**3,
    "reserved_memory_gb": device.memory_reserved() / 1024**3,
    "allocated_memory_gb": device.memory_allocated() / 1024**3,
    "cached_memory_gb": (device.memory_reserved() - device.memory_allocated()) / 1024**3,
    "max_memory_allocated_gb": device.max_memory_allocated() / 1024**3,
    "max_memory_reserved_gb": device.max_memory_reserved() / 1024**3,
}
```

### What does this PR do?

Add a bunch of optimizations for Megatron training, including:

1. `aggressive_empty_cache` to avoid OOM on the hybrid engine. Before this change, the allocator cache could sometimes occupy as much as 30 GB, causing OOMs.
2. Better sequence packing pre/post-processing. Before this change, there were several device-to-host (d2h) syncs during the sequence packing pre/post-processing.
3. Make `override_ddp_config` compatible with mbridge.

The optimized implementations replace the old ones; no options are needed to enable them.

> Add **concise** overview of what this PR aims to achieve or accomplish. Reference related GitHub issues and PRs that help with the review.

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.
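Point 1's `aggressive_empty_cache` can be sketched roughly as below. This is an assumption-laden illustration of the general technique (garbage-collect first so dead tensors release their allocator blocks, then hand the cached blocks back to the driver), not verl's actual implementation; `device` stands in for a `torch.cuda`-like module:

```python
import gc

def aggressive_empty_cache(device):
    """Best-effort release of allocator-cached GPU memory (sketch).

    gc.collect() drops unreachable Python tensor objects so their blocks
    become free inside the caching allocator; empty_cache() then returns
    those unused blocks to the driver, and ipc_collect() (when available)
    reclaims IPC-shared segments as well.
    """
    gc.collect()
    device.empty_cache()
    if hasattr(device, "ipc_collect"):
        device.ipc_collect()
```

With real `torch.cuda`, a routine like this would typically run between the rollout and training phases of a hybrid engine, where blocks cached by one phase would otherwise crowd out the other.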
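Point 2's d2h-sync reduction can be illustrated with the `cu_seqlens` offsets that packed (unpadded) attention kernels consume. Building them one sequence at a time with `.item()` forces a device-to-host sync per sequence; computing the whole prefix sum in one shot avoids that. Plain Python lists stand in for GPU tensors here; this is a sketch of the technique, not verl's code:

```python
def cu_seqlens_from_lengths(lengths):
    """Exclusive prefix sum over sequence lengths: the start offset of
    each packed sequence, plus the total packed length at the end.

    On GPU this would be a single torch.cumsum kernel (with a prepended
    zero), issued with no host round-trip, instead of a per-sequence
    .item() loop that synchronizes on every iteration.
    """
    offsets = [0]
    for n in lengths:
        offsets.append(offsets[-1] + n)
    return offsets

# Three sequences of lengths 3, 5, 2 packed into one buffer:
# cu_seqlens_from_lengths([3, 5, 2]) -> [0, 3, 8, 10]
```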
### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
* origin/mindspeed: (39 commits)
  - [perf] feat: add optional role selection in discrete mode for NPU Profiler (volcengine#2750)
  - [rollout] feat: remove chat scheduler (volcengine#2725)
  - [trainer] refactor: Make sure to keep the type checking (volcengine#2634)
  - [doc] style: change resize handle from gradient to plain color (volcengine#2746)
  - [CI] feat: add `mypy` to pre-commit (volcengine#2614)
  - [megatron] feat: a bunch of optimzation on vram, sequence packing (volcengine#2678)
  - [docker] feat: upgrade to torch 2.7, sglang 0.4.8 (volcengine#2617)
  - [doc] feat: add resizable sidebar and improve layout (volcengine#2577)
  - [ci] fix: release ascend test time, fix one step off-policy CI (volcengine#2731)
  - [recipe] chore: add retool training script (volcengine#2732)
  - [ci] fix: checkpoint_convertor ci miss a hf model download (volcengine#2730)
  - [doc] feat: Add agent-lightning in the list of "awesome works using verl (volcengine#2726)
  - [tool] fix: geo3k create return str instead of tuple (volcengine#2714)
  - [megatron] fix: resolve backward propagation error in megatron_actor due to shared logits tensor in-place modification (volcengine#2484)
  - [misc] chore: bump main branch version to v0.5.0.dev (volcengine#2718)
  - [sglang] fix: Adding strict naming sanity for sglang (volcengine#2719)
  - [ray] feat: RayWorkerGroup support set worker env (volcengine#2685)
  - [ci] test: add CriticWorker unit test, make some util CPU friendly (volcengine#2717)
  - [cfg] refactor: add ActorConfig, EngineConfig, and ActorWorker unit test, refactor validation code (volcengine#2621)
  - [misc] chore: bump version to v0.5.0 (volcengine#2716)
  - ...