# Single NPU (Qwen3-Embedding-8B)

The Qwen3 Embedding series is the latest generation of embedding models in the Qwen family, designed specifically for text embedding and ranking tasks. Built upon the dense foundation models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This guide describes how to run the model with vLLM Ascend. Note that the model is supported only by vLLM Ascend v0.9.2rc1 and later.

## Run docker container

Taking Qwen3-Embedding-8B as an example, first run the docker container with the following command:

```{code-block} bash
   :substitutions:
# Update the vllm-ascend image
export IMAGE=quay.io/ascend/vllm-ascend:|vllm_ascend_version|
docker run --rm \
--name vllm-ascend \
--device /dev/davinci0 \
--device /dev/davinci_manager \
--device /dev/devmm_svm \
--device /dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/:/usr/local/Ascend/driver/lib64/ \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /root/.cache:/root/.cache \
-p 8000:8000 \
-it $IMAGE bash
```

Set up the environment variables:

```bash
# Load model from ModelScope to speed up download
export VLLM_USE_MODELSCOPE=True

# Set `max_split_size_mb` to reduce memory fragmentation and avoid out of memory
export PYTORCH_NPU_ALLOC_CONF=max_split_size_mb:256
```

### Online Inference

```bash
vllm serve Qwen/Qwen3-Embedding-8B --task embed
```
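
The first startup may take some time, as the model weights are downloaded (from ModelScope, per the `VLLM_USE_MODELSCOPE` variable set above) and loaded onto the NPU.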

Once your server is started, you can query the model with input prompts:

```bash
curl http://localhost:8000/v1/embeddings -H "Content-Type: application/json" -d '{
  "model": "Qwen/Qwen3-Embedding-8B",
  "input": "Hello"
}'
```
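
Since the server exposes an OpenAI-compatible API, you can also query it from Python. The following is a minimal sketch using the official `openai` client; it assumes the `openai` package is installed, and the API key is a placeholder because the server does not enforce one by default:

```python
from openai import OpenAI

# Point the client at the local vLLM server; the API key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.embeddings.create(
    model="Qwen/Qwen3-Embedding-8B",
    input=["Hello"],
)
print(len(response.data[0].embedding))  # dimension of the returned embedding vector
```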

### Offline Inference

```python
import torch
from vllm import LLM

def get_detailed_instruct(task_description: str, query: str) -> str:
    return f'Instruct: {task_description}\nQuery:{query}'


if __name__ == "__main__":
    # Each query must come with a one-sentence instruction that describes the task
    task = 'Given a web search query, retrieve relevant passages that answer the query'

    queries = [
        get_detailed_instruct(task, 'What is the capital of China?'),
        get_detailed_instruct(task, 'Explain gravity')
    ]
    # No need to add instruction for retrieval documents
    documents = [
        "The capital of China is Beijing.",
        "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
    ]
    input_texts = queries + documents

    model = LLM(model="Qwen/Qwen3-Embedding-8B",
                task="embed",
                distributed_executor_backend="mp")

    outputs = model.embed(input_texts)
    # The returned embeddings are L2-normalized, so the matrix product below
    # yields query-document cosine similarities.
    embeddings = torch.tensor([o.outputs.embedding for o in outputs])
    scores = (embeddings[:2] @ embeddings[2:].T)
    print(scores.tolist())
```

If the script runs successfully, you will see output similar to the following:

```bash
Adding requests: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 282.22it/s]
Processed prompts:   0%|          | 0/4 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s](VllmWorker rank=0 pid=4074750) ('Warning: torch.save with "_use_new_zipfile_serialization = False" is not recommended for npu tensor, which may bring unexpected errors and hopefully set "_use_new_zipfile_serialization = True"', 'if it is necessary to use this, please convert the npu tensor to cpu tensor for saving')
Processed prompts: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 31.95it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s]
[[0.7477798461914062, 0.07548339664936066], [0.0886271521449089, 0.6311039924621582]]
```
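
In the printed score matrix, rows correspond to the two queries and columns to the two documents; the large diagonal values (about 0.75 and 0.63) show that each query scores highest against its matching document.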