2 changes: 1 addition & 1 deletion docs/source/developer_guide/contribution/testing.md
@@ -156,7 +156,7 @@ There are several principles to follow when writing unit tests:
```bash
# Run unit tests
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/Ascend/ascend-toolkit/latest/$(uname -m)-linux/devlib
-VLLM_USE_V1=1 TORCH_DEVICE_BACKEND_AUTOLOAD=0 pytest -sv tests/ut
+TORCH_DEVICE_BACKEND_AUTOLOAD=0 pytest -sv tests/ut
```

::::
1 change: 1 addition & 0 deletions docs/source/tutorials/index.md
@@ -6,6 +6,7 @@
single_npu
single_npu_multimodal
single_npu_audio
+single_npu_qwen3_embedding
multi_npu
multi_npu_moge
multi_npu_qwen3_moe
1 change: 0 additions & 1 deletion docs/source/tutorials/multi_node.md
@@ -108,7 +108,6 @@ export TP_SOCKET_IFNAME=$nic_name
export HCCL_SOCKET_IFNAME=$nic_name
export OMP_PROC_BIND=false
export OMP_NUM_THREADS=100
-export VLLM_USE_V1=1
export HCCL_BUFFSIZE=1024

# The w8a8 weight can be obtained from https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V3-W8A8
2 changes: 1 addition & 1 deletion docs/source/tutorials/multi_npu_quantization.md
@@ -1,6 +1,6 @@
# Multi-NPU (QwQ 32B W8A8)

-## Run docker container:
+## Run docker container
:::{note}
w8a8 quantization feature is supported by v0.8.4rc2 or higher
:::
3 changes: 0 additions & 3 deletions docs/source/tutorials/multi_npu_qwen3_moe.md
@@ -35,9 +35,6 @@ export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256

-# For vllm-ascend 0.9.2+, the V1 engine is enabled by default and no longer needs to be explicitly specified.
-export VLLM_USE_V1=1
```

### Online Inference on Multi-NPU
6 changes: 2 additions & 4 deletions docs/source/tutorials/single_node_300i.md
@@ -60,7 +60,6 @@ Run the following command to start the vLLM server:

```{code-block} bash
:substitutions:
-export VLLM_USE_V1=1
vllm serve Qwen/Qwen3-0.6B \
--tensor-parallel-size 1 \
--enforce-eager \
@@ -90,7 +89,6 @@ Run the following command to start the vLLM server:

```{code-block} bash
:substitutions:
-export VLLM_USE_V1=1
vllm serve Qwen/Qwen2.5-7B-Instruct \
--tensor-parallel-size 2 \
--enforce-eager \
@@ -129,7 +127,7 @@ Run the following command to start the vLLM server:
```{code-block} bash
:substitutions:

-VLLM_USE_V1=1 vllm serve /home/pangu-pro-moe-mode/ \
+vllm serve /home/pangu-pro-moe-mode/ \
--tensor-parallel-size 4 \
--enable-expert-parallel \
--dtype "float16" \
@@ -321,7 +319,7 @@ if __name__ == "__main__":

Run script:
```bash
-VLLM_USE_V1=1 python example.py
+python example.py
```

If you run this script successfully, you can see the info shown below:
8 changes: 2 additions & 6 deletions docs/source/tutorials/single_npu.md
@@ -50,8 +50,6 @@ Run the following script to execute offline inference on a single NPU:
import os
from vllm import LLM, SamplingParams

-os.environ["VLLM_USE_V1"] = "1"
-
prompts = [
"Hello, my name is",
"The future of AI is",
@@ -77,8 +75,6 @@ for output in outputs:
import os
from vllm import LLM, SamplingParams

-os.environ["VLLM_USE_V1"] = "1"
-
prompts = [
"Hello, my name is",
"The future of AI is",
@@ -133,7 +129,7 @@ docker run --rm \
-e VLLM_USE_MODELSCOPE=True \
-e PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256 \
-it $IMAGE \
-VLLM_USE_V1=1 vllm serve Qwen/Qwen3-8B --max_model_len 26240
+vllm serve Qwen/Qwen3-8B --max_model_len 26240
```
::::

@@ -158,7 +154,7 @@ docker run --rm \
-e VLLM_USE_MODELSCOPE=True \
-e PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256 \
-it $IMAGE \
-VLLM_USE_V1=1 vllm serve Qwen/Qwen3-8B --max_model_len 26240 --enforce-eager
+vllm serve Qwen/Qwen3-8B --max_model_len 26240 --enforce-eager
```
::::
:::::
3 changes: 0 additions & 3 deletions docs/source/tutorials/single_npu_audio.md
@@ -29,9 +29,6 @@ docker run --rm \
Setup environment variables:

```bash
-# Use vllm v1 engine
-export VLLM_USE_V1=1
-
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True

4 changes: 0 additions & 4 deletions docs/source/tutorials/single_npu_multimodal.md
@@ -29,9 +29,6 @@ docker run --rm \
Setup environment variables:

```bash
-# Use vllm v1 engine
-export VLLM_USE_V1=1
-
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True

@@ -143,7 +140,6 @@ docker run --rm \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
--e VLLM_USE_V1=1 \
-e VLLM_USE_MODELSCOPE=True \
-e PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256 \
-it $IMAGE \
99 changes: 99 additions & 0 deletions docs/source/tutorials/single_npu_qwen3_embedding.md
@@ -0,0 +1,99 @@
# Single NPU (Qwen3-Embedding-8B)

The Qwen3 Embedding series is the latest generation of Qwen models designed specifically for text embedding and ranking tasks. Built on the dense foundation models of the Qwen3 series, it provides text embedding and reranking models in a range of sizes (0.6B, 4B, and 8B). This guide describes how to run the model with vLLM Ascend. Note that only vLLM Ascend v0.9.2rc1 and later support this model.

## Run docker container

Take Qwen3-Embedding-8B model as an example, first run the docker container with the following command:

```{code-block} bash
:substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```

Setup environment variables:

```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference

```bash
vllm serve Qwen/Qwen3-Embedding-8B --task embed
```

Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/embeddings -H "Content-Type: application/json" -d '{
"model": "Qwen/Qwen3-Embedding-8B",
"messages": [
{"role": "user", "content": "Hello"}
]
}'
```
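
Because the server exposes an OpenAI-compatible API, you can also query the `/v1/embeddings` route from Python. The following is a minimal sketch, assuming the `openai` package is installed and the server started above is reachable at `http://localhost:8000`:

```python
# Minimal client sketch (assumes `pip install openai` and that the server
# started above is listening on http://localhost:8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Request embeddings for a batch of texts via the OpenAI-compatible endpoint.
response = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=["What is the capital of China?", "The capital of China is Beijing."],
)

# Each item in response.data carries one embedding vector.
for item in response.data:
    print(len(item.embedding), item.embedding[:4])
```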

### Offline Inference

```python
import torch
import vllm
from vllm import LLM

def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'


if __name__=="__main__":
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'

queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents

model = LLM(model="Qwen/Qwen3-Embedding-8B",
task="embed",
distributed_executor_backend="mp")

outputs = model.embed(input_texts)
embeddings = torch.tensor([o.outputs.embedding for o in outputs])
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
```

If you run this script successfully, you can see the info shown below:

```bash
Adding requests: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 282.22it/s]
Processed prompts: 0%| | 0/4 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s](VllmWorker rank=0 pid=4074750) ('Warning: torch.save with "_use_new_zipfile_serialization = False" is not recommended for npu tensor, which may bring unexpected errors and hopefully set "_use_new_zipfile_serialization = True"', 'if it is necessary to use this, please convert the npu tensor to cpu tensor for saving')
Processed prompts: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 31.95it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[[0.7477798461914062, 0.07548339664936066], [0.0886271521449089, 0.6311039924621582]]
```
6 changes: 0 additions & 6 deletions docs/source/user_guide/feature_guide/graph_mode.md
@@ -24,8 +24,6 @@ import os

from vllm import LLM

-os.environ["VLLM_USE_V1"] = "1"
-
model = LLM(model="Qwen/Qwen2-7B-Instruct")
outputs = model.generate("Hello, how are you?")
```
@@ -46,8 +44,6 @@ offline example:
import os
from vllm import LLM

-os.environ["VLLM_USE_V1"] = "1"
-
# TorchAirGraph currently only works without chunked-prefill
model = LLM(model="deepseek-ai/DeepSeek-R1-0528", additional_config={"torchair_graph_config": {"enabled": True},"ascend_scheduler_config": {"enabled": True,}})
outputs = model.generate("Hello, how are you?")
@@ -71,8 +67,6 @@ offline example:
import os
from vllm import LLM

-os.environ["VLLM_USE_V1"] = "1"
-
model = LLM(model="someother_model_weight", enforce_eager=True)
outputs = model.generate("Hello, how are you?")
```
2 changes: 0 additions & 2 deletions docs/source/user_guide/feature_guide/sleep_mode.md
@@ -40,7 +40,6 @@ The following is a simple example of how to use sleep mode.
from vllm.utils import GiB_bytes


-os.environ["VLLM_USE_V1"] = "1"
os.environ["VLLM_USE_MODELSCOPE"] = "True"
os.environ["VLLM_WORKER_MULTIPROC_METHOD"] = "spawn"

@@ -77,7 +76,6 @@ The following is a simple example of how to use sleep mode.

```bash
export VLLM_SERVER_DEV_MODE="1"
-export VLLM_USE_V1="1"
export VLLM_WORKER_MULTIPROC_METHOD="spawn"
export VLLM_USE_MODELSCOPE="True"
