Add PR and issue templates from vLLM project #8
Conversation
hsliuustc0106
commented
Oct 18, 2025
- Add PULL_REQUEST_TEMPLATE.md with vLLM-style PR template
- Add comprehensive issue templates:
  - Bug report template (400-bug-report.yml)
  - Feature request template (500-feature-request.yml)
  - New model support template (600-new-model.yml)
  - Documentation template (100-documentation.yml)
  - Installation template (200-installation.yml)
  - Performance discussion template (700-performance-discussion.yml)
  - Issue template configuration (config.yml)
- Templates adapted for vLLM-omni with multimodal context
- Force-added YAML files to override .gitignore exclusion
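The `config.yml` listed above controls GitHub's issue chooser rather than defining a form itself. A minimal sketch of what such a file typically looks like (the contact link name, URL, and `about` text below are illustrative assumptions, not taken from this PR):

```yaml
# .github/ISSUE_TEMPLATE/config.yml — sketch of an issue-chooser config.
# blank_issues_enabled: false forces users through one of the templates.
blank_issues_enabled: false
contact_links:
  # Hypothetical link; a real config would point at the project's actual
  # discussions page or support channel.
  - name: Questions and discussions
    url: https://github.com/hsliuustc0106/vllm-omni/discussions
    about: Ask usage questions here instead of opening an issue.
```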
Summary of Changes

Hello @hsliuustc0106, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request improves the contribution process for the vLLM-omni project by introducing structured templates for both pull requests and several types of issues. The goal is to streamline communication, ensure contributors provide complete information, and make development and maintenance more efficient by guiding users through clear, predefined forms for reporting bugs, requesting features, and proposing new model integrations.
Code Review
This pull request introduces a comprehensive set of issue and pull request templates, adapted from the vLLM project. The templates are well-structured and tailored for vllm-omni, covering bug reports, feature requests, documentation, installation, new model support, and performance discussions. This is a great addition that will significantly improve the quality and consistency of contributions and issue reporting.
I have one suggestion to improve the performance discussion issue template to prevent the submission of empty issues. Otherwise, the changes look excellent.
```yaml
- type: textarea
  attributes:
    label: Proposal to improve performance
    description: >
      How do you plan to improve vllm-omni's performance?
  validations:
    required: false
- type: textarea
  attributes:
    label: Report of performance regression
    description: >
      Please provide detailed description of performance comparison to
      confirm the regression. You may want to run the benchmark script at
      https://github.com/hsliuustc0106/vllm-omni/tree/main/tests/benchmarks .
  validations:
    required: false
- type: textarea
  attributes:
    label: Misc discussion on performance
    description: >
      Anything about the performance.
  validations:
    required: false
```
To prevent users from submitting empty performance discussion issues, it would be better to consolidate the three optional text areas for 'Proposal', 'Report', and 'Misc discussion' into a single, required text area. This ensures that every performance-related issue contains some descriptive content, improving the quality of issue reports. The user can be guided to provide details for one of the specific cases within the description of the single text area.
```yaml
- type: textarea
  attributes:
    label: ⚡ Performance Discussion
    description: |
      Please provide details about one of the following:
      - **Proposal to improve performance:** How do you plan to improve vllm-omni's performance?
      - **Report of performance regression:** Please provide a detailed description of the performance comparison to confirm the regression. You may want to run the benchmark script at https://github.com/hsliuustc0106/vllm-omni/tree/main/tests/benchmarks .
      - **Misc discussion on performance:** Anything else about performance.
  validations:
    required: true
```