[CI] Add pytest markers in config files.#719

Merged
hsliuustc0106 merged 11 commits into vllm-project:main from congw729:ci/add_markers
Jan 14, 2026

Conversation

@congw729
Contributor

@congw729 congw729 commented Jan 9, 2026

Purpose

This PR adds pytest markers to the config files and introduces related tutorial docs.
All the helper functions are copied from vllm/tests/utils.py.

New markers:

```toml
[tool.pytest.ini_options]
markers = [
    # ci/cd required
    "core_model: Core model tests (run in each PR)",
    # function module markers
    "diffusion: Diffusion model tests",
    "omni: Omni model tests",
    "cache: Cache backend tests",
    "parallel: Parallelism/distributed tests",
    # platform markers
    "cpu: Tests that run on CPU",
    "gpu: Tests that run on GPU",
    "cuda: Tests that run on CUDA",
    "rocm: Tests that run on AMD/ROCm",
    "npu: Tests that run on NPU/Ascend",
    # specific compute-resource markers (auto-added)
    "H100: Tests that require H100 GPU",
    "L4: Tests that require L4 GPU",
    "MI325: Tests that require MI325 GPU (AMD/ROCm)",
    "A2: Tests that require A2 NPU",
    "A3: Tests that require A3 NPU",
    "distributed_cuda: Tests that require multiple cards on the CUDA platform",
    "distributed_rocm: Tests that require multiple cards on the ROCm platform",
    "distributed_npu: Tests that require multiple cards on the NPU platform",
    "skipif_cuda: Skip if the number of CUDA cards is less than required",
    "skipif_rocm: Skip if the number of ROCm cards is less than required",
    "skipif_npu: Skip if the number of NPU cards is less than required",
    # more detailed markers
    "slow: Slow tests (may be skipped in quick CI)",
    "benchmark: Benchmark tests",
]
```
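To illustrate how the auto-skip markers (`skipif_cuda` and friends) could be honored at collection time, here is a minimal sketch. This is not the PR's actual implementation; the hook wiring in the comment and the marker kwargs are assumptions, and only the plain skip-decision helper is shown as runnable code.

```python
def should_skip(required_cards: int, available_cards: int) -> bool:
    """Return True when the host has fewer cards than the test requires."""
    return available_cards < required_cards

# In conftest.py this could be wired up roughly like so (illustrative only):
#
# def pytest_collection_modifyitems(config, items):
#     available = cuda_device_count_stateless()  # helper from tests/utils.py
#     for item in items:
#         marker = item.get_closest_marker("skipif_cuda")
#         if marker and should_skip(marker.kwargs.get("num_cards", 1), available):
#             item.add_marker(pytest.mark.skip(reason="not enough CUDA cards"))
```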

Example usage:

Test run on CPU:

```python
@pytest.mark.core_model
@pytest.mark.cpu
def test_xx():
    ...
```

Tests run on GPU, same card count on every platform:

```python
from tests.utils import hardware_test

@pytest.mark.core_model
@pytest.mark.omni
@hardware_test(
    res={"cuda": "L4", "rocm": "MI325"},
    num_cards=2,
)
@pytest.mark.parametrize("omni_server", test_params, indirect=True)
def test_video_to_audio():
    ...
```

Tests run with per-platform card counts:

```python
from tests.utils import hardware_test

@pytest.mark.core_model
@pytest.mark.omni
@hardware_test(
    res={"cuda": "L4", "rocm": "MI325", "npu": "A2"},
    num_cards={"cuda": 2, "rocm": 2, "npu": 4},
)
@pytest.mark.parametrize("omni_server", test_params, indirect=True)
def test_video_to_audio():
    ...
```
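Independent of pytest, the core of a `hardware_test`-style decorator can be sketched in plain Python: normalize the per-platform card counts and attach them to the test function as metadata for a conftest hook to read later. This is an illustrative guess at the shape, not the helper actually shipped in tests/utils.py; the `_hardware_requirements` attribute name is made up here.

```python
def hardware_test(res, num_cards=1):
    """Attach hardware requirements to a test function (illustrative sketch).

    `res` maps platform -> required accelerator model, e.g.
    {"cuda": "L4", "rocm": "MI325"}. `num_cards` is either a single int
    applied to every platform, or a per-platform dict.
    """
    def cards_for(platform):
        if isinstance(num_cards, dict):
            return num_cards.get(platform, 1)
        return num_cards

    def decorator(func):
        # Store requirements as metadata; a collection hook could read this
        # and translate it into skip markers for the current host.
        func._hardware_requirements = {
            platform: {"model": model, "num_cards": cards_for(platform)}
            for platform, model in res.items()
        }
        return func

    return decorator

@hardware_test(res={"cuda": "L4", "rocm": "MI325", "npu": "A2"},
               num_cards={"cuda": 2, "rocm": 2, "npu": 4})
def test_video_to_audio():
    ...
```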

The updates to the existing test files will be completed in the upcoming PR #577 (still WIP).

Test Plan

No tests needed; this PR only adds CI configuration and docs.

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.

BEFORE SUBMITTING, PLEASE READ https://github.com/vllm-project/vllm-omni/blob/main/CONTRIBUTING.md (anything written below this line will be removed by GitHub Actions)


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 541a84ecd6


@congw729
Contributor Author

congw729 commented Jan 9, 2026

@hsliuustc0106 This PR is ready for review.

@hsliuustc0106 hsliuustc0106 added the ready (label to trigger buildkite CI), ROCm (PR related to AMD hardware), and NPU (PR related to Ascend NPU) labels on Jan 9, 2026
@hsliuustc0106
Collaborator

@gcanlin @tjtanaa PTAL

@hsliuustc0106
Collaborator

fix ci

Contributor

@gcanlin gcanlin left a comment


Thanks! After this PR is merged, I will refactor run_npu_test.sh. I'm thinking about how to implement a function on NPU similar to cuda_device_count_stateless. For now, NPU depends on run_npu_test.sh to allocate the cards, which is inconvenient.
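One possible route to an NPU analogue of cuda_device_count_stateless is to count devices in a throwaway subprocess, so the test runner's own process never initializes the accelerator runtime. A sketch of that idea follows; the `torch_npu` call shown in the comment is an unverified assumption, not a confirmed API.

```python
import subprocess
import sys

def device_count_stateless(snippet: str) -> int:
    """Run `snippet` (which must print an integer) in a fresh interpreter.

    Because the query runs in a child process, any runtime state it
    creates (CUDA/NPU context, cached device lists) dies with it.
    """
    out = subprocess.run([sys.executable, "-c", snippet],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

# For NPU this might look like (unverified assumption about torch_npu):
# npu_count = device_count_stateless(
#     "import torch_npu; print(torch_npu.npu.device_count())")
```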

Contributor

@gcanlin gcanlin left a comment


Overall the scalable design across hardware kinds LGTM. I don't have the bandwidth to test the NPU markers right now; I will test them when re-enabling the NPU CI. Thanks!

@congw729
Contributor Author

congw729 commented Jan 13, 2026

This PR is done and all tests passed. @hsliuustc0106

@congw729
Contributor Author

@DarkLight1337 @ZJY0516 @Isotr0py PTAL

Signed-off-by: Alicia <[email protected]>
@congw729
Contributor Author

@hsliuustc0106 All the comments have been resolved. Ready to be merged.

Collaborator

@hsliuustc0106 hsliuustc0106 left a comment


lgtm

@hsliuustc0106 hsliuustc0106 merged commit 1fe64e8 into vllm-project:main Jan 14, 2026
7 checks passed
erfgss pushed a commit to erfgss/vllm-omni that referenced this pull request Jan 19, 2026
with1015 pushed a commit to with1015/vllm-omni that referenced this pull request Jan 20, 2026
@congw729 congw729 deleted the ci/add_markers branch February 4, 2026 02:32