
Conversation


@kfallah kfallah commented Sep 27, 2025

What does this PR do?

Currently, async vLLM with AgentWorkerLoop throws an error when `update_weights` is called with LoRA weights. This PR extends AgentWorkerLoop support to LoRA adapters.
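
A rough sketch of the weight-sync path this PR unblocks (names follow the verl call sites quoted later in this thread, such as fsdp_workers.rollout_mode and vLLMAsyncRollout.update_weights; the helper that collects per-tensor parameters is hypothetical and the exact signatures may differ between versions):

```python
# Hypothetical sketch: pushing LoRA weights from the FSDP actor into the async
# vLLM rollout engine. get_per_tensor_params is an assumed helper, not verl API.
async def rollout_mode(self):
    # Iterator of (name, tensor) pairs for the current actor weights (LoRA deltas
    # once the base model has already been synced).
    per_tensor_param = get_per_tensor_params(self.actor_module)  # assumed helper
    await self.rollout.update_weights(
        per_tensor_param,
        peft_config=self.peft_config,        # present only when lora_rank > 0
        base_sync_done=self.base_sync_done,  # skip re-sending the frozen base weights
    )
```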

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: ...
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

API and Usage Example

Demonstrate how the API changes if any, and provide usage example(s) if possible.

# Add code snippet or script demonstrating how to use this

Design & Code Changes

Demonstrate the high-level design if this PR is complex, and list the specific changes.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

@kfallah kfallah marked this pull request as ready for review September 27, 2025 22:47
@wuxibin89 wuxibin89 changed the title from [async vllm] Fix: Add LoRA Loading to Async vLLM to [rollout,vllm] fix: Add LoRA Loading to Async vLLM on Sep 28, 2025
@wuxibin89 wuxibin89 merged commit 39e531f into volcengine:main Sep 28, 2025
47 of 52 checks passed
@kfallah kfallah deleted the kion/async-rollout-lora branch October 2, 2025 04:12
cyy489 commented Oct 5, 2025

Hi, I'm running GRPO + vLLM + Qwen3-8B + tool_agent_loop training based on the old code and hit two consecutive errors.
The first one is:
"""
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/rollout/vllm_rollout/vllm_async_server.py", line 375, in wake_up
await asyncio.gather(*[worker.wake_up.remote() for worker in self.workers])
File "/opt/conda/lib/python3.10/asyncio/tasks.py", line 650, in _wrap_awaitable
return (yield from awaitable.__await__())
(TaskRunner pid=121936) ray.exceptions.RayTaskError(ValueError): ray::WorkerDict.wake_up() (pid=126513, ip=33.184.126.226, actor_id=bb1576945f7ac4a998bcbdcc01000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x7fc4a9b04ac0>)
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/ossfs/workspace/yuyichen_yyc/verl/verl/single_controller/ray/base.py", line 705, in async_func
return await getattr(self.worker_dict[key], name)(*args, **kwargs)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/single_controller/base/decorator.py", line 436, in async_inner
return await func(*args, **kwargs)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/fsdp_workers.py", line 1820, in wake_up
await self.rollout_mode()
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/fsdp_workers.py", line 640, in rollout_mode
await self.rollout.update_weights(per_tensor_param, peft_config=peft_config, base_sync_done=self.base_sync_done)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py", line 559, in update_weights
model.load_weights(weights)
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/qwen2.py", line 497, in load_weights
return loader.load_weights(weights)
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 291, in load_weights
autoloaded_weights = set(self._load_module("", self.module, weights))
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 240, in _load_module
for child_prefix, child_weights in self._groupby_prefix(weights):
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 129, in _groupby_prefix
for prefix, group in itertools.groupby(weights_by_parts,
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 126, in
weights_by_parts = ((weight_name.split(".", 1), weight_data)
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 288, in
weights = ((name, weight) for name, weight in weights
ValueError: too many values to unpack (expected 2)
"""
After I tried to modify rollout_mode in fsdp_workers.py, a second error appeared:
"""
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/rollout/vllm_rollout/vllm_async_server.py", line 375, in wake_up
await asyncio.gather(*[worker.wake_up.remote() for worker in self.workers])
File "/opt/conda/lib/python3.10/asyncio/tasks.py", line 650, in _wrap_awaitable
return (yield from awaitable.__await__())
ray.exceptions.RayTaskError(ValueError): ray::WorkerDict.wake_up() (pid=282082, ip=33.184.121.5, actor_id=96bedd7ce8d5dead3bb9b35901000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x7fa3dad95360>)
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/conda/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/ossfs/workspace/yuyichen_yyc/verl/verl/single_controller/ray/base.py", line 705, in async_func
return await getattr(self.worker_dict[key], name)(*args, **kwargs)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/single_controller/base/decorator.py", line 436, in async_inner
return await func(*args, **kwargs)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/fsdp_workers.py", line 1821, in wake_up
await self.rollout_mode()
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/fsdp_workers.py", line 641, in rollout_mode
await self.rollout.update_weights(per_tensor_param, peft_config=peft_config, base_sync_done=self.base_sync_done)
File "/ossfs/workspace/yuyichen_yyc/verl/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py", line 560, in update_weights
model.load_weights(weights)
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/qwen3.py", line 321, in load_weights
return loader.load_weights(weights)
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 294, in load_weights
autoloaded_weights = set(self._load_module("", self.module, weights))
File "/opt/conda/lib/python3.10/site-packages/vllm/model_executor/models/utils.py", line 277, in _load_module
raise ValueError(msg)
ValueError: There is no module or parameter named 'base_model' in Qwen3ForCausalLM
"""
I’m quite puzzled—could my training config be the culprit?
Any suggestions on how to fix these issues?

Thanks in advance!

Below is my exact training config:
"""
set -x

ulimit -n 65535

PROJECT_DIR="$(pwd)"
CONFIG_PATH="$PROJECT_DIR/examples/sglang_multiturn/config"

TRAIN_DATA="/data/verl_train_search10times.parquet"
VAL_DATA="/data/verl_test_search10times.parquet"

TOOL_CONFIG="$CONFIG_PATH/tool_config/codebase_search_tool_config.yaml"

export VLLM_ATTENTION_BACKEND=XFORMERS

nohup python3 -m verl.trainer.main_ppo \
    --config-path="$CONFIG_PATH" \
    --config-name='search_multiturn_grpo' \
    algorithm.adv_estimator=grpo \
    data.train_batch_size=32 \
    data.val_batch_size=16 \
    data.max_prompt_length=4096 \
    data.max_response_length=3000 \
    data.filter_overlong_prompts=True \
    data.truncation='error' \
    data.return_raw_chat=True \
    actor_rollout_ref.model.path="/models/Qwen3-8B" \
    actor_rollout_ref.actor.optim.lr=1e-6 \
    actor_rollout_ref.actor.optim.lr_warmup_steps_ratio=0.285 \
    actor_rollout_ref.model.use_remove_padding=True \
    actor_rollout_ref.actor.ppo_mini_batch_size=16 \
    actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=1 \
    actor_rollout_ref.actor.use_kl_loss=True \
    actor_rollout_ref.actor.kl_loss_coef=0.001 \
    actor_rollout_ref.actor.kl_loss_type=low_var_kl \
    actor_rollout_ref.actor.entropy_coeff=0 \
    actor_rollout_ref.model.enable_gradient_checkpointing=True \
    actor_rollout_ref.actor.fsdp_config.param_offload=True \
    actor_rollout_ref.actor.fsdp_config.optimizer_offload=True \
    actor_rollout_ref.actor.strategy=fsdp \
    actor_rollout_ref.rollout.max_model_len=15000 \
    actor_rollout_ref.rollout.log_prob_micro_batch_size=8 \
    actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
    actor_rollout_ref.rollout.name=vllm \
    actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
    actor_rollout_ref.rollout.n=4 \
    actor_rollout_ref.rollout.multi_turn.max_assistant_turns=3 \
    actor_rollout_ref.rollout.layered_summon=True \
    actor_rollout_ref.rollout.load_format=safetensors \
    actor_rollout_ref.rollout.mode=async \
    actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=1 \
    actor_rollout_ref.ref.fsdp_config.param_offload=True \
    actor_rollout_ref.rollout.multi_turn.format=hermes \
    actor_rollout_ref.model.lora_rank=32 \
    actor_rollout_ref.model.lora_alpha=16 \
    actor_rollout_ref.model.target_modules="all-linear" \
    actor_rollout_ref.model.use_shm=True \
    algorithm.use_kl_in_reward=False \
    trainer.critic_warmup=0 \
    trainer.val_before_train=False \
    trainer.logger='["console","wandb"]' \
    trainer.project_name='search_r1_like_async_rl' \
    trainer.experiment_name='qwen3-8b-instruct_function_rm-search-async-sgl-multi-w-searchtool-verify-n16' \
    trainer.n_gpus_per_node=8 \
    trainer.nnodes=1 \
    trainer.save_freq=100 \
    trainer.test_freq=50 \
    trainer.default_local_dir="/verl_checkpoints/qwen3-8B" \
    data.train_files="$TRAIN_DATA" \
    data.val_files="$VAL_DATA" \
    actor_rollout_ref.rollout.multi_turn.tool_config_path="$TOOL_CONFIG" \
    trainer.total_epochs=1 > nohup.out 2>&1 &
"""

masoudhashemi pushed a commit to masoudhashemi/verl that referenced this pull request Oct 19, 2025
vermouth1992 pushed a commit that referenced this pull request Oct 20, 2025
### What does this PR do?

> Add **concise** overview of what this PR aims to achieve or
accomplish. Reference related GitHub issues and PRs that help with the
review.

The previous #3639 addressed the **crashing issues** in `update_weights`
of `vLLMAsyncRollout`. However, experiments (see **Tests** below) reveal
an implicit **off-policy issue**: the rollout generation still uses the
**base model** instead of the updated **LoRA model**, resulting in
degraded performance. We traced this to a bug in
`vllm_async_server.vLLMHttpServerBase` causing a mismatch between LoRA
updates and rollout generation. Specifically:

* In `vLLMAsyncRollout`, `update_weights` correctly updates LoRA weights
from the FSDP actor to the rollout `AsyncLLM` engine. However, the
updated adapter is assigned a random `lora_name` and `lora_int_id`
(generated from `time.ns()`), which are not stored—making them hard to
reuse.

https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py#L595-L604

* During rollout generation, the newly added LoRA adapter is **never
used** due to two issues:

1. The `vllm_config` used to create `AsyncLLM` lacks a `LoRAConfig`
(e.g., `max_lora_rank`), so `AsyncLLM` is not prepared for LoRA-based
generation requests.
See
https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L299-L304
2. When calling `generate` in `vLLMHttpServerBase`, the request to
`self.engine` (the `AsyncLLM` instance) **omits any `LoRARequest`**,
meaning generation always uses the base model. See
https://github.com/volcengine/verl/blob/f209c6f656bb8444e1ecd641c1af04231a5a2dec/verl/workers/rollout/vllm_rollout/vllm_async_server.py#L360

#### Proposed Fixes in this PR

* Standardize and persist `VLLM_LORA_INT_ID` and `VLLM_LORA_NAME` across
the training process to consistently locate and apply updated LoRA
weights.
* Inject `LoRAConfig` during `AsyncLLM` initialization and ensure
`vLLMHttpServerBase` passes a proper `LoRARequest` (identified via
`VLLM_LORA_NAME`) during rollout generation (see the sketch after this list).
* Add utility methods to automatically validate and set `max_lora_rank`
in vLLM from `config.actor_rollout_ref.model.lora_rank`, addressing
issues like #3696
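
A minimal sketch of what the generation-side fix looks like, assuming vLLM's public `LoRAConfig` / `LoRARequest` types and the `lora_request` argument of `AsyncLLM.generate`; the fixed `VLLM_LORA_NAME` / `VLLM_LORA_INT_ID` constants follow this PR, while the wrapper function and the empty adapter path are illustrative assumptions rather than the exact verl code:

```python
from vllm.config import LoRAConfig
from vllm.lora.request import LoRARequest

# Fixed identifiers reused on every weight sync, instead of values derived from time.ns().
VLLM_LORA_NAME = "verl_lora"
VLLM_LORA_INT_ID = 1

# Fix (1): make the engine LoRA-capable at construction time; the rank should come
# from config.actor_rollout_ref.model.lora_rank.
lora_config = LoRAConfig(max_lora_rank=32, max_loras=1)


async def generate_with_lora(engine, prompt, sampling_params, request_id):
    """Route a rollout request through the persisted adapter (illustrative only)."""
    # Fix (2): without an explicit LoRARequest, AsyncLLM silently generates with the base model.
    lora_request = LoRARequest(VLLM_LORA_NAME, VLLM_LORA_INT_ID, "")  # path unused: weights are synced in-memory
    async for output in engine.generate(
        prompt, sampling_params, request_id, lora_request=lora_request
    ):
        yield output
```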

#### Remarks

Special thanks to @sanxing-chen for inspiring this fix with his prior
patches. Also his PR #3765 -- while also tackling an issue hurting LoRA
performance -- seems to be orthogonal to the issues addressed here.

### Checklist Before Starting

* [x] Search for similar PRs. Paste at least one query link here: #3639
#3765
* [x] Format the PR title as `[{modules}] {type}: {description}` (This
will be checked by the CI)

* `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`,
`trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`,
`ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`,
`env`, `tool`, `ckpt`, `doc`, `data`
* If this PR involves multiple modules, separate them with `,`, e.g.,
`[megatron, fsdp, doc]`
  * `{type}` ∈ {`feat`, `fix`, `refactor`, `chore`, `test`}
  * If this PR breaks any API, prepend `[BREAKING]` to the title.
  * Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that cannot be tested by CI (e.g., algorithm
implementation, new model support), validate with experiments and
include results such as training curves or evaluation metrics.

Controlled experiments based on
`examples/grpo_trainer/run_qwen2_5-3b_gsm8k_grpo_lora.sh` (see [adapted
script](https://gist.github.com/listar2000/43bb0e1d6f0d3c2503922ca2bfee0a6b))
**clearly demonstrate both the issue and the effectiveness of the fix**.

<img width="2528" height="1328" alt="kl-loss"
src="https://github.com/user-attachments/assets/008cdace-fc6d-459a-8493-8ddb440c57ec"
/>
<img width="2528" height="1328" alt="val-reward"
src="https://github.com/user-attachments/assets/aa2e13c7-25cc-41cd-a916-d98f134060e6"
/>

See the full [W&B training
log](https://wandb.ai/listar2000/verl-latest-lora).
Summary:

* **sync-lora-32** — baseline (synchronous mode).
* **async-lora-32-before-fix** — async LoRA on `main` branch, showing
degraded performance.
* **async-lora-32-no-remove** — ablation variant with fixes applied
**but without removing old LoRA adapters** between updates (showing the
importance of removal).
* **async-lora-32-after-fix** — full fix applied, achieving expected
improvement.


### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s)
if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the
specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review,
otherwise the reviewer might deprioritize this PR for review.

- [x] Read the [Contribute
Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [x] Apply [pre-commit
checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting):
`pre-commit install && pre-commit run --all-files --show-diff-on-failure
--color=always`
- [x] Add / Update [the
documentation](https://github.com/volcengine/verl/tree/main/docs). **Not
Applicable**
- [ ] Add unit or end-to-end test(s) to [the CI
workflow](https://github.com/volcengine/verl/tree/main/.github/workflows)
to cover all the code. If not feasible, explain why: **This PR can
hardly be covered by regular CI. I instead run concrete experiments with
GSM8K dataset.**
- [ ] Once your PR is ready for CI, send a message in [the `ci-request`
channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the
`verl` Slack
workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ).
(If not accessible, please try [the Feishu group
(飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
techkang pushed a commit to techkang/verl that referenced this pull request Oct 31, 2025
mtian8 pushed a commit to mtian8/verl that referenced this pull request Nov 1, 2025
wangboxiong320 pushed a commit to wangboxiong320/verl that referenced this pull request Nov 1, 2025
wangboxiong320 pushed a commit to wangboxiong320/verl that referenced this pull request Nov 1, 2025
NenoL2001 pushed a commit to NenoL2001/verl that referenced this pull request Nov 3, 2025
AlexJJ009 pushed a commit to AlexJJ009/verl that referenced this pull request Nov 5, 2025