Add Dockerfile for build ROCm vLLM Docker image #1372
Merged: chensuyue merged 28 commits into opea-project:main from chyundunovDatamonsters:feature/Add_ROCm_vLLM_Dockerfile on Mar 12, 2025.
Commits (28):

aeff487  Build and upstream latest base image on push event (#1355) (chensuyue)
beec699  Add timeout param for DocSum and FaqGen to deal with long context (#1… (XinyaoWa)
a18a91e  Megaservice / orchestrator metric testing + fixes (#1348) (eero-t)
31b5120  update image push machine (#1361) (chensuyue)
e892090  Add Dockerfile for build ROCm vLLM Docker image
76446a9  Add Dockerfile for build ROCm vLLM Docker image
b37ad61  Merge remote-tracking branch 'origin/feature/Add_ROCm_vLLM_Dockerfile…
d8df913  Add Dockerfile for build ROCm vLLM Docker image
c032a3e  [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot])
f54c26c  Add Dockerfile for build ROCm vLLM Docker image
ac3b7c6  Merge remote-tracking branch 'origin/feature/Add_ROCm_vLLM_Dockerfile…
9d9cbb6  Add Dockerfile for build ROCm vLLM Docker image
42712b6  Add Dockerfile for build ROCm vLLM Docker image
a2929f1  Add Dockerfile for build ROCm vLLM Docker image
4677d1d  Add Dockerfile for build ROCm vLLM Docker image
5ce31c3  Add Dockerfile for build ROCm vLLM Docker image
94db1d2  Add Dockerfile for build ROCm vLLM Docker image
3d384a2  Add Dockerfile for build ROCm vLLM Docker image
b5eb8ad  Add Dockerfile for build ROCm vLLM Docker image
fb78fc3  Add Dockerfile for build ROCm vLLM Docker image
b5c0f9a  Add Dockerfile for build ROCm vLLM Docker image
9aea3fa  Add Dockerfile for build ROCm vLLM Docker image
f917cae  Add Dockerfile for build ROCm vLLM Docker image
14c68ff  Add Dockerfile for build ROCm vLLM Docker image
a6686ed  Merge branch 'main' into feature/Add_ROCm_vLLM_Dockerfile (chyundunovDatamonsters)
79ef45f  Add Dockerfile for build ROCm vLLM Docker image
a3a57f8  Merge remote-tracking branch 'origin/feature/Add_ROCm_vLLM_Dockerfile…
d82abc7  Add Dockerfile for build ROCm vLLM Docker image
comps/third_parties/vllm/src/Dockerfile.amd_gpu (path as referenced by the test script below; new file, 20 additions & 0 deletions):

# Copyright (c) 2024 Advanced Micro Devices, Inc.

# Refer to https://rocm.docs.amd.com/en/latest/how-to/rocm-for-ai/inference-optimization/index.html for further optimization

FROM rocm/vllm:rocm6.3.1_mi300_ubuntu22.04_py3.12_vllm_0.6.6

# Set the working directory
WORKDIR /workspace

# Expose the port used by the API server
EXPOSE 8011

# Set environment variables
ENV HUGGINGFACE_HUB_CACHE=/workspace
ENV VLLM_USE_TRITON_FLASH_ATTENTION=0
ENV PYTORCH_JIT=0

# Set the entrypoint to the api_server.py script
RUN cp /usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py /workspace/api_server.py
ENTRYPOINT ["python3", "/workspace/api_server.py"]
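Not part of the diff, but for orientation: a minimal sketch of how this image might be built and run standalone on a ROCm host. The tag, device mappings, and server arguments below are common conventions for ROCm containers and the vLLM OpenAI API server, not taken from this PR (the PR itself starts the container through docker compose, as the test script below shows).

    # Build the image from the directory containing Dockerfile.amd_gpu
    # (the tag is illustrative):
    docker build -t opea/vllm-rocm:latest -f Dockerfile.amd_gpu .

    # Run it with the AMD GPU device nodes mapped in; 8011 is the port the
    # Dockerfile exposes, and the model ID is only an example. Arguments
    # after the image name are passed to api_server.py via the ENTRYPOINT.
    docker run -d --name vllm-rocm-server \
      --device=/dev/kfd --device=/dev/dri --group-add video \
      -p 8011:8011 \
      -e HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN} \
      opea/vllm-rocm:latest \
      --model Intel/neural-chat-7b-v3-3 --port 8011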
tests/third_parties/test_third_parties_vllm_on_amd_gpu.sh (new file, 117 additions & 0 deletions):
#!/bin/bash
# Copyright (c) 2024 Advanced Micro Devices, Inc.

set -x

IMAGE_REPO=${IMAGE_REPO:-"opea"}
export REGISTRY=${IMAGE_REPO}
export TAG="comps"
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=${TAG}"

WORKPATH=$(dirname "$PWD")
WORKDIR=${WORKPATH}/../
export host_ip=$(hostname -I | awk '{print $1}')
LOG_PATH="$WORKPATH"
service_name="vllm-rocm-server"
docker_container_name="vllm-server"

function build_container() {
    cd $WORKPATH/comps/third_parties/vllm/src
    docker build --no-cache -t ${REGISTRY:-opea}/vllm-rocm:${TAG:-latest} \
        -f Dockerfile.amd_gpu \
        . \
        --build-arg https_proxy=$https_proxy \
        --build-arg http_proxy=$http_proxy
    if [ $? -ne 0 ]; then
        echo "vllm-rocm build failed"
        exit 1
    else
        echo "vllm-rocm build succeeded"
    fi
}

# Function to start the Docker container via docker compose
start_container() {
    export VLLM_SERVICE_PORT=28011
    export HUGGINGFACEHUB_API_TOKEN=${HF_TOKEN}
    export HF_CACHE_DIR="./data"
    export VLLM_LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
    export TENSOR_PARALLEL_SIZE=1

    cd $WORKPATH/comps/third_parties/vllm/deployment/docker_compose
    docker compose -f compose.yaml up ${service_name} -d > ${LOG_PATH}/start_services_with_compose.log

    # Check whether the service is fully ready (poll the logs up to 300 times,
    # 10 seconds apart)
    n=0
    until [[ "$n" -ge 300 ]]; do
        docker logs $docker_container_name > ${LOG_PATH}/$docker_container_name.log 2>&1
        n=$((n+1))
        if grep -q "Application startup complete" ${LOG_PATH}/$docker_container_name.log; then
            break
        fi
        sleep 10s
    done
}

# Function to test an API endpoint against an expected HTTP status code
function test_api_endpoint {
    local endpoint="$1"
    local expected_status="$2"

    # Make the HTTP request
    if test "$1" = "v1/completions"
    then
        local response=$(curl "http://${host_ip}:${VLLM_SERVICE_PORT}/$endpoint" \
            -H "Content-Type: application/json" \
            -d '{
                  "model": "Intel/neural-chat-7b-v3-3",
                  "prompt": "What is deep learning?",
                  "max_tokens": 300,
                  "temperature": 0.7
                }' \
            --write-out '%{http_code}' \
            --silent \
            --output /dev/null)
    else
        local response=$(curl "http://${host_ip}:${VLLM_SERVICE_PORT}/$endpoint" \
            --write-out '%{http_code}' \
            --silent \
            --output /dev/null)
    fi

    # Assert the response status code
    if [[ "$response" -eq "$expected_status" ]]; then
        echo "PASS: $endpoint returned expected status code: $expected_status"
    else
        echo "FAIL: $endpoint returned unexpected status code: $response (expected: $expected_status)"
        docker logs $docker_container_name
        exit 1
    fi
}

function stop_docker() {
    cd $WORKPATH/comps/third_parties/vllm/deployment/docker_compose
    docker compose -f compose.yaml down ${service_name} --remove-orphans
}

# Main function
main() {
    build_container
    start_container

    # Sleep to allow the container to start up fully
    sleep 10
    # Test the /v1/models API
    test_api_endpoint "v1/models" 200

    # Test the /v1/completions API
    test_api_endpoint "v1/completions" 200

    stop_docker
}

# Call main function
main
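For reference, a hypothetical invocation of this test, assuming an AMD GPU host with Docker and a valid Hugging Face token. The checkout path is illustrative; since the script resolves the repository root as the parent of the current directory, it is run from the tests/ directory:

    # Illustrative paths; HF_TOKEN is your own Hugging Face access token.
    cd GenAIComps/tests
    export HF_TOKEN=<your_huggingface_token>
    bash third_parties/test_third_parties_vllm_on_amd_gpu.sh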