Commit 22a1536

[ci] fix: try fix vllm test network issue (#3031)
### What does this PR do?

[ci] fix: try fix vllm test network issue

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
1 parent 83cfc76 commit 22a1536

2 files changed: +5 -2 lines changed


.github/workflows/vllm.yml

Lines changed: 2 additions & 1 deletion

@@ -76,7 +76,7 @@ permissions:
 jobs:
   vllm:
     runs-on: [L20x8]
-    timeout-minutes: 60 # Increase this timeout value as needed
+    timeout-minutes: 25 # Increase this timeout value as needed
     env:
       HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
       HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
@@ -100,6 +100,7 @@ jobs:
           huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct
           huggingface-cli download 'Qwen/Qwen2-7B-Instruct'
           huggingface-cli download 'deepseek-ai/deepseek-llm-7b-chat'
+          huggingface-cli download 'OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN' --local-dir $HOME/models/OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN
           export HF_HUB_OFFLINE=1
           # Disable requests to avoid network errors
       - name: Prepare gsm8k dataset
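
This workflow change pre-downloads the YaRN model into a local directory while the proxy is still configured, and the existing `export HF_HUB_OFFLINE=1` then keeps the rest of the job off the network, so a flaky connection to the Hugging Face Hub can no longer stall the test. The snippet below is a minimal Python sketch of the same pattern using `huggingface_hub.snapshot_download`; the target directory simply mirrors the `--local-dir` used above and the snippet is illustrative, not part of the PR:

```python
import os

from huggingface_hub import snapshot_download

# Fetch the model once, while the network/proxy is still available.
# The target directory mirrors the --local-dir used in the CI workflow.
local_dir = os.path.expanduser("~/models/OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN")
snapshot_download(
    repo_id="OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN",
    local_dir=local_dir,
)

# Everything after this point runs offline: an accidental Hub request
# fails fast instead of hanging on a flaky network.
os.environ["HF_HUB_OFFLINE"] = "1"
```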

tests/workers/rollout/rollout_vllm/test_vllm_model_rope_scaling.py

Lines changed: 3 additions & 1 deletion

@@ -13,6 +13,7 @@
 # limitations under the License.

 import gc
+import os

 import torch
 import torch.distributed
@@ -32,9 +33,10 @@ def test_vllm_rollout_with_yarn_position_embeddings():
     """

     local_rank, rank, world_size = initialize_global_process_group()
+    model_path = os.path.expanduser("~/models/OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN")
     config = OmegaConf.create(
         {
-            "model_path": "OldKingMeister/Qwen2.5-1.5B-Instruct-YaRN",
+            "model_path": model_path,
             "prompt_length": 35000,
             "response_length": 512,
             "dtype": "bfloat16",
