Add PR and issue templates from vLLM project#8

Merged
hsliuustc0106 merged 1 commit into main from hsliu-dev-C on Oct 18, 2025
Conversation

@hsliuustc0106
Collaborator

  • Add PULL_REQUEST_TEMPLATE.md with vLLM-style PR template
  • Add comprehensive issue templates:
    • Bug report template (400-bug-report.yml)
    • Feature request template (500-feature-request.yml)
    • New model support template (600-new-model.yml)
    • Documentation template (100-documentation.yml)
    • Installation template (200-installation.yml)
    • Performance discussion template (700-performance-discussion.yml)
    • Issue template configuration (config.yml)
  • Templates adapted for vLLM-omni with multimodal context
  • Force added YAML files to override .gitignore exclusion

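The issue template configuration (config.yml) listed above is not shown in this PR page. As a rough sketch only (the contact link name and URL are assumptions, not taken from the diff), a GitHub `.github/ISSUE_TEMPLATE/config.yml` typically disables blank issues so contributors must pick one of the structured templates:

```yaml
# .github/ISSUE_TEMPLATE/config.yml
# Sketch only — the actual config.yml added in this PR is not visible here.
blank_issues_enabled: false          # force contributors to choose a template
contact_links:
  - name: vllm-omni usage questions  # hypothetical entry for illustration
    url: https://github.com/hsliuustc0106/vllm-omni/discussions
    about: Ask usage questions in GitHub Discussions instead of filing an issue.
```

GitHub lists issue forms alphabetically by filename in the "New issue" chooser, which is why the templates above carry numeric prefixes (100-, 200-, ..., 700-): the prefix fixes their display order.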
@gemini-code-assist

Summary of Changes

Hello @hsliuustc0106, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the contribution process for the vLLM-omni project by introducing structured templates for both pull requests and various types of issues. This initiative aims to streamline communication, ensure comprehensive information is provided by contributors, and ultimately improve the efficiency of development and maintenance by guiding users through clear, predefined forms for reporting bugs, requesting features, and proposing new model integrations.

Highlights

  • Pull Request Template: Introduced a new PULL_REQUEST_TEMPLATE.md to standardize PR submissions, including sections for purpose, test plan, and results, along with a checklist for effective descriptions.
  • Comprehensive Issue Templates: Added a suite of structured issue templates covering bug reports, feature requests, new model support, documentation issues, installation problems, and performance discussions.
  • vLLM-omni Adaptation: All new templates are specifically tailored for the vLLM-omni project, incorporating multimodal context and project-specific guidance to streamline contributions.
  • Force-Added Configuration Files: The new YAML issue template files and the PR template were explicitly added to the repository, ensuring they are tracked and available despite any .gitignore rules.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@hsliuustc0106 hsliuustc0106 merged commit c150346 into main Oct 18, 2025
1 check passed

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a comprehensive set of issue and pull request templates, adapted from the vLLM project. The templates are well-structured and tailored for vllm-omni, covering bug reports, feature requests, documentation, installation, new model support, and performance discussions. This is a great addition that will significantly improve the quality and consistency of contributions and issue reporting.

I have one suggestion to improve the performance discussion issue template to prevent the submission of empty issues. Otherwise, the changes look excellent.

Comment on lines +11 to +31 of 700-performance-discussion.yml
- type: textarea
  attributes:
    label: Proposal to improve performance
    description: >
      How do you plan to improve vllm-omni's performance?
  validations:
    required: false
- type: textarea
  attributes:
    label: Report of performance regression
    description: >
      Please provide detailed description of performance comparison to confirm the regression. You may want to run the benchmark script at https://github.com/hsliuustc0106/vllm-omni/tree/main/tests/benchmarks .
  validations:
    required: false
- type: textarea
  attributes:
    label: Misc discussion on performance
    description: >
      Anything about the performance.
  validations:
    required: false

Severity: medium

To prevent users from submitting empty performance discussion issues, it would be better to consolidate the three optional text areas for 'Proposal', 'Report', and 'Misc discussion' into a single, required text area. This ensures that every performance-related issue contains some descriptive content, improving the quality of issue reports. The user can be guided to provide details for one of the specific cases within the description of the single text area.

- type: textarea
  attributes:
    label: ⚡ Performance Discussion
    description: |
      Please provide details about one of the following:
      - **Proposal to improve performance:** How do you plan to improve vllm-omni's performance?
      - **Report of performance regression:** Please provide detailed description of performance comparison to confirm the regression. You may want to run the benchmark script at https://github.com/hsliuustc0106/vllm-omni/tree/main/tests/benchmarks .
      - **Misc discussion on performance:** Anything else about performance.
  validations:
    required: true

princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026
Add PR and issue templates from vLLM project