If you are testing with multiple nodes, adjust ``--nproc-per-node`` and ``--nnodes`` according to your setup and set ``MASTER_ADDR`` to the correct IP address of the master node, reachable from all nodes. Then, run:
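.. code-block:: console

   $ # a sketch only: the node and GPU counts are assumptions, and test.py stands in for your own test script
   $ torchrun --nnodes 2 --nproc-per-node=2 --rdzv_backend=c10d --rdzv_endpoint=$MASTER_ADDR test.py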
LLM inference is a fast-evolving field, and the latest code may contain bug fixes, performance improvements, and new features that have not been released yet. To allow users to try the latest code without waiting for the next release, vLLM provides wheels for Linux running on an x86 platform with CUDA 12 for every commit since ``v0.5.3``. You can download and install it with the following command:

.. code-block:: console

   $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl

If you want to access the wheels for previous commits, you can specify the commit hash in the URL:

.. code-block:: console

   $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
   $ pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/${VLLM_COMMIT}/vllm-1.0.0.dev-cp38-abi3-manylinux1_x86_64.whl
Note that the wheels are built with Python 3.8 ABI (see `PEP 425 <https://peps.python.org/pep-0425/>`_ for more details about ABI), so **they are compatible with Python 3.8 and later**. The version string in the wheel file name (``1.0.0.dev``) is just a placeholder to have a unified URL for the wheels. The actual versions of wheels are contained in the wheel metadata.
Another way to access the latest code is to use the docker images:
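.. code-block:: console

   $ # the registry path below is an assumption based on vLLM's CI naming and may change over time
   $ export VLLM_COMMIT=33f460b17a54acb3b6cc0b03f4a17876cff5eafd # use full commit hash from the main branch
   $ docker pull public.ecr.aws/q9t5s3a7/vllm-ci-test-repo:${VLLM_COMMIT}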
These Docker images are used for CI and testing only, and they are not intended for production use. They expire after several days.
The latest code can contain bugs and may not be stable. Please use it with caution.
.. _build_from_source:
Build from source
=================
.. _python-only-build:
Python-only build (without compilation)
---------------------------------------
If you only need to change Python code, you can simply build vLLM without compilation.
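As a sketch, assuming you have already installed a recent vLLM wheel as described above, development mode is entered by running the helper script from the root of a vLLM source checkout:

.. code-block:: console

   $ python python_only_dev.py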
Once you have finished editing or want to install another vLLM wheel, you should exit the development environment using the same script with the ``--quit-dev`` flag:

.. code-block:: console

   $ python python_only_dev.py --quit-dev
The ``--quit-dev`` flag will:
* Remove the symbolic link from the current directory to the vLLM package.
* Restore the original vLLM package from the backup.
If you update the vLLM wheel and rebuild from source to make further edits, you will need to repeat the `Python-only build <#python-only-build>`_ steps.
.. note::
   Your source code may have a different commit ID than the latest vLLM wheel, which can lead to unknown errors.
   It is recommended to use the same commit ID for the source code as the vLLM wheel you have installed. Please refer to `the section above <#install-the-latest-code>`_ for instructions on how to install a specified wheel.
Full build (with compilation)
-----------------------------
If you want to modify C++ or CUDA code, you'll need to build vLLM from source. This can take several minutes:
.. code-block:: console

   $ git clone https://github.com/vllm-project/vllm.git
   $ cd vllm
   $ pip install -e .
Use an existing PyTorch installation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are scenarios where the PyTorch dependency cannot be easily installed via pip, e.g.:
* Building vLLM with PyTorch nightly or a custom PyTorch build.
To build vLLM using an existing PyTorch installation:
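.. code-block:: console

   $ # a sketch: the helper script and flags below are assumptions, check the repository for the exact steps
   $ git clone https://github.com/vllm-project/vllm.git
   $ cd vllm
   $ python use_existing_torch.py
   $ pip install -r requirements-build.txt
   $ pip install -e . --no-build-isolation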
Troubleshooting
~~~~~~~~~~~~~~~
To avoid overloading your system, you can limit the number of compilation jobs run simultaneously via the environment variable ``MAX_JOBS``. For example:
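.. code-block:: console

   $ # the job count here is illustrative; pick one that suits your machine
   $ export MAX_JOBS=6
   $ pip install -e .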
Here is a sanity check to verify that the CUDA Toolkit is correctly installed:
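.. code-block:: console

   $ # both commands should print the same CUDA version
   $ nvcc --version
   $ ${CUDA_HOME}/bin/nvcc --version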
Unsupported OS build
--------------------
vLLM can fully run only on Linux, but for development purposes you can still build it on other systems (for example, macOS), allowing for imports and a more convenient development environment. The binaries will not be compiled and will not work on non-Linux systems.
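One way to do this (a sketch; the ``VLLM_TARGET_DEVICE=empty`` setting is an assumption about vLLM's build configuration) is to disable the target device before installing:

.. code-block:: console

   $ export VLLM_TARGET_DEVICE=empty
   $ pip install -e .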