@@ -13,7 +13,7 @@ Note: Please check the [FlashInfer installation doc](https://docs.flashinfer.ai/
 ## Method 2: From source
 ```
 # Use the last release branch
-git clone -b v0.4.0 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.0.post1 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
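A quick post-install sanity check (not part of this diff; it assumes the source install above succeeded in the current environment) is to confirm the package reports the pinned release:

```bash
# Should print the version of the checked-out release branch, e.g. 0.4.0.post1
python3 -c "import sglang; print(sglang.__version__)"
```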
@@ -26,7 +26,7 @@ Note: To AMD ROCm system with Instinct/MI GPUs, do following instead:
 
 ```
 # Use the last release branch
-git clone -b v0.4.0 https://github.com/sgl-project/sglang.git
+git clone -b v0.4.0.post1 https://github.com/sgl-project/sglang.git
 cd sglang
 
 pip install --upgrade pip
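Before launching on ROCm it can be worth confirming the Instinct/MI GPUs are visible to the runtime; a short check with the standard ROCm CLI (again, not part of this diff):

```bash
# Lists detected AMD GPUs; the MI accelerators should show up here
rocm-smi --showproductname
```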
@@ -51,7 +51,7 @@ docker run --gpus all \
 Note: For AMD ROCm systems with Instinct/MI GPUs, it is recommended to use `docker/Dockerfile.rocm` to build images; example usage below:
 
 ```bash
-docker build --build-arg SGL_BRANCH=v0.4.0 -t v0.4.0-rocm620 -f Dockerfile.rocm .
+docker build --build-arg SGL_BRANCH=v0.4.0.post1 -t v0.4.0.post1-rocm620 -f Dockerfile.rocm .
 
 alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/dri --ipc=host \
     --shm-size 16G --group-add video --cap-add=SYS_PTRACE --security-opt seccomp=unconfined \
@@ -60,11 +60,11 @@ alias drun='docker run -it --rm --network=host --device=/dev/kfd --device=/dev/d
 drun -p 30000:30000 \
     -v ~/.cache/huggingface:/root/.cache/huggingface \
     --env "HF_TOKEN=<secret>" \
-    v0.4.0-rocm620 \
+    v0.4.0.post1-rocm620 \
     python3 -m sglang.launch_server --model-path meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 30000
 
 # Until the flashinfer backend is available, --attention-backend triton --sampling-backend pytorch are set by default
-drun v0.4.0-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
+drun v0.4.0.post1-rocm620 python3 -m sglang.bench_one_batch --batch-size 32 --input 1024 --output 128 --model amd/Meta-Llama-3.1-8B-Instruct-FP8-KV --tp 8 --quantization fp8
 ```
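Once the server container above is up, it can be smoke-tested from the host. A minimal sketch using SGLang's native `/health` and `/generate` endpoints on the mapped port 30000:

```bash
# Liveness probe, then a tiny generation request
curl http://localhost:30000/health
curl http://localhost:30000/generate \
  -H "Content-Type: application/json" \
  -d '{"text": "The capital of France is", "sampling_params": {"max_new_tokens": 16}}'
```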
 
 ## Method 4: Using docker compose
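The compose instructions themselves are truncated in this hunk; as a hedged sketch only, bringing the stack up would look roughly like this (the `docker/compose.yaml` path is an assumption about the repo layout, not taken from this diff):

```bash
# Launch SGLang via the repo's compose file, then tail the server logs
docker compose -f docker/compose.yaml up -d
docker compose -f docker/compose.yaml logs -f
```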