Conversation

@gursimar gursimar commented Dec 5, 2025

Description

This PR continues the MultiLoRA work (RFC #609, PR #621) and implements single-LoRA XCCL-based weight updates for the vLLM backend. Previously, vLLM supported only disk-based LoRA updates (PR #621); this change adds an in-memory XCCL broadcast flow that pushes LoRA tensors from the FSDP training process to vLLM workers and materializes an active LoRA adapter inside vLLM.

This is Milestone 1: single-LoRA updates over XCCL, following up on the disk-based LoRA updates introduced in PR #621.

High-level flow added

  • The FSDP side prepares a WeightUpdateMeta that includes a small peft_config describing the LoRA hyperparameters and target modules.

  • FSDP broadcasts parameter tensors via XCCL (existing distributed broadcast), but when use_lora is true we:

    • iterate only over LoRA trainable params instead of all parameters,
    • attach LoRA-specific metadata to the update meta,
    • trigger the vLLM-specific LoRA update endpoints.
  • The vLLM worker receives the broadcast LoRA tensors, normalizes their names, constructs a LoRAModel from them via the PEFTHelper and LoRAModel.from_lora_tensors(...) primitives, and registers/activates the adapter in vLLM's LoRA manager. Sketches of both sides of this flow follow below.
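
To make the FSDP side concrete, here is a minimal sketch, assuming a PEFT-style LoraConfig and a `lora_` substring filter for trainable adapter parameters; apart from WeightUpdateMeta.peft_config and use_lora (which this PR adds), the names are illustrative, not the actual implementation:

```python
import torch.distributed as dist

from areal.api.io_struct import WeightUpdateMeta  # gains `peft_config` in this PR


def prepare_lora_update_meta(lora_cfg) -> WeightUpdateMeta:
    """Build the update meta carrying the LoRA hyperparameters (sketch)."""
    peft_config = {
        "r": lora_cfg.r,                                 # LoRA rank
        "lora_alpha": lora_cfg.lora_alpha,               # scaling alpha
        "target_modules": list(lora_cfg.target_modules),
        "bias": lora_cfg.bias,
    }
    # Constructor arguments are illustrative; see areal/api/io_struct.py for the real fields.
    return WeightUpdateMeta(use_lora=True, peft_config=peft_config)


def broadcast_lora_params(model, group, src: int = 0) -> None:
    """Broadcast only the trainable LoRA tensors instead of the full model (sketch)."""
    for name, param in model.named_parameters():
        if "lora_" not in name:          # assumed naming convention for adapter params
            continue
        dist.broadcast(param.data, src=src, group=group)
```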
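And on the receive side, a rough outline of the adapter construction. The vLLM helpers named here are the ones referenced above, but their exact signatures and the registration call vary across vLLM versions; `_normalize_lora_name` is a hypothetical placeholder, and PEFTHelper.from_dict is an assumption about how the config is rebuilt:

```python
import torch
from vllm.lora.models import LoRAModel
from vllm.lora.peft_helper import PEFTHelper


def _normalize_lora_name(name: str) -> str:
    # Hypothetical normalization: strip a training-side prefix and the ".default"
    # segment that PEFT inserts, so names match the layout vLLM's loader expects.
    return name.replace("base_model.model.", "").replace(".default", "")


def build_lora_from_broadcast(received: dict[str, torch.Tensor],
                              peft_config: dict, lora_id: int = 1) -> LoRAModel:
    # Normalize broadcast parameter names (e.g. "<module>.lora_A.weight").
    tensors = {_normalize_lora_name(k): v for k, v in received.items()}

    # Rebuild the adapter description from the peft_config carried in the meta.
    peft_helper = PEFTHelper.from_dict(peft_config)

    # Materialize the adapter directly from in-memory tensors -- no disk roundtrip.
    lora_model = LoRAModel.from_lora_tensors(lora_id, tensors, peft_helper)

    # Registration/activation in vLLM's LoRA manager is elided here; the call is
    # version-specific (e.g. add_adapter / activate_adapter on the worker's manager).
    return lora_model
```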

Key user-visible capabilities

  • XCCL-based in-memory LoRA updates for vLLM (no disk roundtrip required).
  • Metadata includes the LoRA rank/alpha/target_modules/bias needed to correctly reconstruct the LoRA model in vLLM (an illustrative payload is shown below).
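
For illustration, the carried metadata is a small dict along these lines; key names are assumed to follow the PEFT LoraConfig convention and the values are examples only:

```python
# Illustrative peft_config payload carried in WeightUpdateMeta.
peft_config = {
    "r": 16,                                 # LoRA rank
    "lora_alpha": 32,                        # scaling alpha
    "target_modules": ["q_proj", "v_proj"],  # modules the adapter attaches to
    "bias": "none",                          # bias handling mode
}
```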

Backward-compatible behavior

  • The existing weight update flows (full-model updates and disk-based LoRA updates) remain intact. The new code path, sketched below, is taken only when meta.use_lora is true and the vLLM backend is used.
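
A minimal sketch of that dispatch guard; only meta.use_lora comes from this PR, the surrounding names are placeholders:

```python
# New path only for LoRA + vLLM; everything else falls through to the existing flow.
if meta.use_lora and backend == "vllm":
    update_weights_lora_xccl(meta)   # hypothetical name for the new XCCL LoRA path
else:
    update_weights(meta)             # existing full-model / disk-based path
```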

Files changed

  • areal/api/io_struct.py — add peft_config to WeightUpdateMeta.
  • areal/engine/fsdp_engine.py — populate peft_config; LoRA-only param iteration.
  • areal/engine/vllm_remote.py — include LoRA metadata fields in vLLM requests.
  • areal/thirdparty/vllm/areal_vllm_server.py — new LoRA request model and endpoints wiring.
  • areal/thirdparty/vllm/vllm_worker_extension.py — full LoRA XCCL receive & adapter creation implementation.
  • examples/lora/gsm8k_grpo_lora_vllm.py & .yaml — example wiring and config for weight_update_mode=xccl.

Running instructions

Start a local test as shown in the example:

python3 -m areal.launcher.local examples/lora/gsm8k_grpo_lora_vllm.py --config examples/lora/gsm8k_grpo_lora_vllm.yaml

Testing

  • Verified that XCCL-based updates match the performance of disk-based updates.
  • Verified that the non-LoRA pipeline works as expected.

Type of Change

  • Bug fix (non-breaking change that fixes an issue)
  • New feature (non-breaking change that adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not
    work as expected)
  • Documentation update
  • Code refactoring (no functional changes)
  • Performance improvement
  • Test coverage improvement

Checklist

  • I have read the Contributing Guide
  • I have run formatting tools (pre-commit or manual)
  • I have run relevant unit tests and they pass
  • I have added tests for new functionality
  • I have updated documentation if needed
  • My branch is up to date with main
  • This PR introduces breaking changes (if yes, fill out details below)
  • If this PR changes documentation, I have built and previewed it locally with
    jb build docs
  • No critical issues raised by AI reviewers (/gemini review)

Breaking Change Details (if applicable):


Need help? Check the Contributing Guide or ask in
GitHub Discussions!

@gemini-code-assist
Contributor

Summary of Changes

Hello @gursimar, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the LoRA weight update mechanism for the vLLM backend by introducing XCCL-based in-memory broadcasts. This change allows for direct and efficient transfer of LoRA tensors from FSDP training processes to vLLM workers, bypassing disk I/O and enabling quicker activation of LoRA adapters. It represents a crucial first milestone in the broader MultiLoRA initiative, laying the groundwork for more advanced LoRA management capabilities.

Highlights

  • XCCL-based LoRA Updates: Introduces XCCL-based in-memory weight updates for single LoRA functionality within the vLLM backend, eliminating the need for disk-based updates and enabling faster LoRA tensor broadcasts from FSDP training processes to vLLM workers.
  • LoRA Metadata Integration: Adds a peft_config field to WeightUpdateMeta to carry LoRA hyperparameters (rank, alpha, target modules, bias) and modifies the FSDP engine to populate this configuration during weight updates.
  • Optimized Parameter Iteration: The FSDP engine is updated to iterate only over LoRA trainable parameters when use_lora is enabled, improving efficiency for LoRA-specific updates.
  • New vLLM Endpoints and Logic: Implements new vLLM API endpoints (/areal_set_update_weight_meta_lora, /areal_update_weights_lora_xccl) and corresponding worker-side logic to handle the reception, reconstruction, and activation of LoRA adapters via XCCL broadcast (a sketch of the call sequence follows this list).
  • Backward Compatibility: Ensures that existing non-LoRA and disk-based LoRA weight update flows remain fully functional and backward compatible, with new code paths activated only when meta.use_lora is true and the vLLM backend is in use.
  • Example Configuration: Updates the gsm8k_grpo_lora_vllm example to demonstrate the new XCCL-based weight update mode, including a configuration change for the learning rate.
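
For orientation, a client-side call sequence against the two new endpoints might look like the sketch below; the payload shape and port are assumptions, so consult areal/thirdparty/vllm/areal_vllm_server.py for the actual request models:

```python
import requests

BASE = "http://localhost:8000"  # address of the areal vLLM server (assumed)

# 1) Send the LoRA update metadata (rank/alpha/target_modules/bias, etc.).
resp = requests.post(f"{BASE}/areal_set_update_weight_meta_lora", json={
    "peft_config": {"r": 16, "lora_alpha": 32,
                    "target_modules": ["q_proj", "v_proj"], "bias": "none"},
})
resp.raise_for_status()

# 2) Trigger the XCCL receive and adapter construction/activation on the workers.
resp = requests.post(f"{BASE}/areal_update_weights_lora_xccl", json={})
resp.raise_for_status()
```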

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR introduces XCCL-based weight updates for single LoRA models in the vLLM backend, which is a great feature. The implementation looks mostly correct, but I've found a critical bug in the example code and have some suggestions to improve maintainability and robustness.
My main concerns are:

  • A critical logic swap in the example file that will cause it to fail.
  • Heavy reliance on vLLM's private APIs, which is risky for future compatibility.
  • Some opportunities for refactoring to reduce code duplication and improve clarity.

Please see my detailed comments below.

@garrett4wade
Collaborator

@gursimar Hi, sorry for the late review, but could you update the PR and resolve the conflict first?
