
Enable ChatQnA with vllm Arc support #771

Closed

gavinlichn wants to merge 2 commits into opea-project:main from gavinlichn:arc_vllm

Conversation

@gavinlichn
Contributor

Description

Enable ChatQnA with vLLM inference on an Intel Arc GPU.
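For context, a change of this shape typically adds a dedicated vLLM serving container that is handed the host's GPU render nodes. The sketch below is illustrative only, not the exact contents of this PR: the image name (opea/vllm-arc), the port mapping, and the model variable are assumptions, while `--device /dev/dri` is the standard way to expose Intel GPUs to containers and `--device xpu` is vLLM's XPU backend selector.

```bash
# Illustrative launch of a vLLM serving container on an Intel Arc GPU.
# Assumptions (not taken from this PR): the opea/vllm-arc image name,
# the 8008:80 port mapping, and the LLM_MODEL_ID variable.
docker run -d --name vllm-arc-service \
  --device /dev/dri:/dev/dri \
  -p 8008:80 \
  -e HF_TOKEN=${HF_TOKEN} \
  opea/vllm-arc:latest \
  --model ${LLM_MODEL_ID} --device xpu --host 0.0.0.0 --port 80
```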

Issues

n/a

Type of change

  • New feature (non-breaking change which adds new functionality)

Dependencies

opea-project/GenAIComps#641

Tests

n/a

Support vllm inference with Intel ARC GPU

Signed-off-by: Li Gang <[email protected]>
Co-authored-by: Chen, Hu1 <[email protected]>
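Although no tests are listed above, a service like this is usually smoke-tested against vLLM's OpenAI-compatible API. A minimal check, carrying over the assumed port mapping and model variable from the earlier sketch:

```bash
# Minimal smoke test against vLLM's OpenAI-compatible completions endpoint.
# Port 8008 and LLM_MODEL_ID are assumptions from the sketch above.
curl http://localhost:8008/v1/completions \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"${LLM_MODEL_ID}\", \"prompt\": \"What is OPEA?\", \"max_tokens\": 32}"
```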
@chensuyue
Collaborator

This PR seems to be out of date; shall we close it? @gavinlichn

@gavinlichn
Contributor Author

> This PR seems to be out of date; shall we close it? @gavinlichn

Let's close this PR for now and add vLLM Arc support later if required.

gavinlichn closed this on Nov 14, 2024