[DO NOT MERGE] Prefix stash siyuan #39
Closed
Conversation
Contributor (Author)
close it: no longer need it in the short term
tianyil1 pushed a commit to tianyil1/vllm that referenced this pull request on Jun 5, 2024
dtrifiro pushed a commit to dtrifiro/vllm that referenced this pull request on Jun 5, 2024
Dockerfile.ubi: use vllm tgis adapter
yukavio pushed a commit to yukavio/vllm that referenced this pull request on Jul 3, 2024
SUMMARY:
* update `TORCH_CUDA_ARCH_LIST` to match `magic_wand`
* update "test vllm" action to run tests serially
* add helper script to find *.py tests, run them serially, and output JUnit-formatted XML
TEST: working through changes manually on debug instance
Co-authored-by: andy-neuma <[email protected]>
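The commit message above mentions a helper script that finds *.py tests, runs them one at a time, and writes a JUnit-formatted XML report. That script is not shown in this thread; the following is a minimal sketch of the idea only, in which the file pattern, the choice of `sys.executable` as the runner, and the output path are all assumptions rather than details from the actual commit.

```python
import glob
import subprocess
import sys
import xml.etree.ElementTree as ET

def run_tests_serially(pattern="tests/test_*.py", out="results.xml"):
    """Run each matching test file one at a time; write a JUnit-style report.

    Hypothetical sketch: the real helper's pattern, runner command, and
    report layout may differ.
    """
    suite = ET.Element("testsuite", name="serial-tests")
    failures = 0
    for path in sorted(glob.glob(pattern)):
        case = ET.SubElement(suite, "testcase", name=path)
        # Run each file in its own process so tests cannot interfere.
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, text=True)
        if proc.returncode != 0:
            failures += 1
            fail = ET.SubElement(case, "failure",
                                 message=f"exit code {proc.returncode}")
            fail.text = (proc.stdout + proc.stderr)[-2000:]  # tail of the log
    suite.set("tests", str(len(suite)))
    suite.set("failures", str(failures))
    ET.ElementTree(suite).write(out)
    return failures
```

Running the files serially in separate processes trades speed for isolation, which matters when tests hold GPU memory or other global state that parallel runners would contend for.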
bigPYJ1151 pushed a commit to bigPYJ1151/vllm that referenced this pull request on Jul 30, 2024
…usion linear silu mul fusion (bf16)
Bounty-hunter pushed a commit to Bounty-hunter/vllm that referenced this pull request on Sep 25, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request on Nov 27, 2025
* refactor torch compile for fp8 and int
* refine for pre-commit
* fix
* update
Signed-off-by: Zhu, Zufang <[email protected]>
AndreasKaratzas pushed a commit to AndreasKaratzas/vllm that referenced this pull request on Dec 1, 2025
- Add comprehensive labels.yml with all priority, type, status, platform, hardware, component, model, and test labels
- Add AMD issue templates (800-860 series) for bug, ci-failure, documentation, feature, infrastructure, performance, and usage
- Add amd_project_automation.yml workflow to auto-assign issues:
  - amd-ci label -> AMD CI project (vllm-project#39)
  - amd label (w/o amd-ci) -> AMD project (vllm-project#38)
- Update issue_autolabel.yml to add 'amd' label when rocm detected
- Add sync_labels.yml workflow to sync labels from labels.yml
- Update config.yml with AMD ROCm discussions link
Labels routing:
- CI failures use 'amd-ci' label -> routes to project vllm-project#39
- All other AMD issues use 'amd' label -> routes to project vllm-project#38
AndreasKaratzas pushed a commit to AndreasKaratzas/vllm that referenced this pull request on Dec 1, 2025
Changes:
- Replace labels.yml with labels-amd.yml (AMD-specific only)
  - Pruned to only AMD labels that won't conflict with main
  - Uses skip-delete to preserve existing upstream labels
- Update sync_labels.yml:
  - Manual trigger only (workflow_dispatch) - safe for upstream
  - Restricted to AMD team members
  - Added dry-run option
  - Uses crazy-max/ghaction-github-labeler with skip-delete
- Enhance issue_autolabel.yml:
  - Add hardware labels (mi300x, mi325x, mi350-series)
  - Add ROCm version labels (rocm-6.x, rocm-7.x)
  - Add component labels (aiter-backend, rccl)
  - Add status labels (needs-profiling, has-workaround, upstream)
  - Add test labels (flaky-test, test:distributed, test:benchmark)
  - Add rocm-blocker detection
  - Extended CC configuration for AMD team
- Improve amd_project_automation.yml:
  - Support for both issues and PRs
  - Detect any AMD-related label for project assignment
  - Status mapping logic for project columns
  - Job summary output
Project routing:
- amd-ci label → AMD CI project (vllm-project#39)
- Any other AMD label → AMD project (vllm-project#38)
This is a temporary stash of all the in-progress work. I keep it as a PR for easy lookup.