
Commit fdd1119

Add documentation for using uv with PyTorch
1 parent a0562d1 commit fdd1119

3 files changed

Lines changed: 332 additions & 0 deletions

File tree

docs/guides/index.md

Lines changed: 1 addition & 0 deletions
@@ -15,6 +15,7 @@ Learn how to integrate uv with other software:
 - [Using in GitHub Actions](./integration/github.md)
 - [Using in GitLab CI/CD](./integration/gitlab.md)
 - [Using with alternative package indexes](./integration/alternative-indexes.md)
+- [Installing PyTorch](./integration/pytorch.md)
 - [Building a FastAPI application](./integration/fastapi.md)

 Or, explore the [concept documentation](../concepts/index.md) for a comprehensive breakdown of each

docs/guides/integration/pytorch.md

Lines changed: 330 additions & 0 deletions
@@ -0,0 +1,330 @@
# Using uv with PyTorch

The [PyTorch](https://pytorch.org/) ecosystem is a popular choice for deep learning research and
development. You can use uv to manage PyTorch projects and PyTorch dependencies across different
Python versions and environments, even controlling for the choice of accelerator (e.g., CPU-only vs.
CUDA).

## Installing PyTorch

From a packaging perspective, PyTorch has a few uncommon characteristics:

- Many PyTorch wheels are hosted on a dedicated index, rather than the Python Package Index (PyPI).
  As such, installing PyTorch often requires configuring a project to use the PyTorch index.
- PyTorch includes distinct builds for each accelerator (e.g., CPU-only, CUDA). Since there's no
  standardized mechanism for specifying these accelerators when publishing or installing, PyTorch
  encodes them in the local version specifier. As such, PyTorch versions will often look like
  `2.5.1+cpu`, `2.5.1+cu121`, etc.
- Builds for different accelerators are published to different indexes. For example, the `+cpu`
  builds are published on https://download.pytorch.org/whl/cpu, while the `+cu121` builds are
  published on https://download.pytorch.org/whl/cu121.

As such, the necessary packaging configuration will vary depending on both the platforms you need to
support and the accelerators you want to enable.

To start, consider the following (default) configuration, which would be generated by running
`uv init --python 3.12` followed by `uv add torch torchvision`.

In this case, PyTorch would be installed from PyPI, which hosts CPU-only wheels for Windows and
macOS, and GPU-accelerated wheels on Linux (targeting CUDA 12.4):

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]
```

!!! tip "Supported Python versions"

    At time of writing, PyTorch does not yet publish wheels for Python 3.13; as such, projects with
    `requires-python = ">=3.13"` may fail to resolve. See the
    [compatibility matrix](https://github.com/pytorch/pytorch/blob/main/RELEASE.md#release-compatibility-matrix).

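If you want to fail fast with a clearer message than a resolution error, a hypothetical stdlib-only guard (ours, not part of uv or PyTorch) can encode this constraint:

```python
import sys

def torch_wheels_available(version: tuple = tuple(sys.version_info[:2])) -> bool:
    """Return True if PyTorch publishes wheels for this interpreter (as of writing)."""
    # Assumption: wheels exist up to and including Python 3.12.
    return version < (3, 13)

print(torch_wheels_available((3, 12)))  # True
print(torch_wheels_available((3, 13)))  # False
```
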
This is a valid configuration for projects that want to use CPU builds on Windows and macOS, and
CUDA-enabled builds on Linux. However, if you need to support different platforms or accelerators,
you'll need to configure the project accordingly.

## Using a PyTorch index

In some cases, you may want to use a specific PyTorch variant across all platforms. For example, you
may want to use the CPU-only builds on Linux too.

In such cases, the first step is to add the relevant PyTorch index to your `pyproject.toml`:

=== "CPU-only"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cpu"
    url = "https://download.pytorch.org/whl/cpu"
    explicit = true
    ```

=== "CUDA 11.8"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu118"
    url = "https://download.pytorch.org/whl/cu118"
    explicit = true
    ```

=== "CUDA 12.1"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu121"
    url = "https://download.pytorch.org/whl/cu121"
    explicit = true
    ```

=== "CUDA 12.4"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-cu124"
    url = "https://download.pytorch.org/whl/cu124"
    explicit = true
    ```

=== "ROCm6"

    ```toml
    [[tool.uv.index]]
    name = "pytorch-rocm"
    url = "https://download.pytorch.org/whl/rocm6.2"
    explicit = true
    ```

We recommend the use of `explicit = true` to ensure that the index is _only_ used for `torch`,
`torchvision`, and other PyTorch-related packages, as opposed to generic dependencies like `jinja2`,
which should continue to be sourced from the default index (PyPI).

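All of these index URLs follow one pattern: `https://download.pytorch.org/whl/` followed by an accelerator label. A small, hypothetical helper (ours, not part of uv) makes the scheme explicit:

```python
PYTORCH_INDEX_BASE = "https://download.pytorch.org/whl"

def pytorch_index_url(accelerator: str) -> str:
    """Build the PyTorch index URL for an accelerator label, e.g. "cpu" or "cu124"."""
    return f"{PYTORCH_INDEX_BASE}/{accelerator}"

print(pytorch_index_url("cpu"))      # https://download.pytorch.org/whl/cpu
print(pytorch_index_url("cu124"))    # https://download.pytorch.org/whl/cu124
print(pytorch_index_url("rocm6.2"))  # https://download.pytorch.org/whl/rocm6.2
```
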
Next, we'll update the `pyproject.toml` to point `torch` and `torchvision` to the desired index:

=== "CPU-only"

    PyTorch doesn't publish CPU-only builds for macOS, since macOS builds are always considered
    CPU-only. As such, we gate on `platform_system` to instruct uv to ignore the PyTorch index when
    resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
        { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
        { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 11.8"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct
    uv to ignore the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
        { index = "pytorch-cu118", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
        { index = "pytorch-cu118", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 12.1"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct
    uv to ignore the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
        { index = "pytorch-cu121", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
        { index = "pytorch-cu121", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "CUDA 12.4"

    PyTorch doesn't publish CUDA builds for macOS. As such, we gate on `platform_system` to instruct
    uv to ignore the PyTorch index when resolving for macOS.

    ```toml
    [tool.uv.sources]
    torch = [
        { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" },
    ]
    torchvision = [
        { index = "pytorch-cu124", marker = "platform_system != 'Darwin'" },
    ]
    ```

=== "ROCm6"

    PyTorch doesn't publish ROCm6 builds for macOS or Windows. As such, we gate on `platform_system`
    to instruct uv to ignore the PyTorch index when resolving for those platforms.

    ```toml
    [tool.uv.sources]
    torch = [
        { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
    ]
    torchvision = [
        { index = "pytorch-rocm", marker = "platform_system == 'Linux'" },
    ]
    ```

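The gating in each tab reduces to the same predicate. The `platform_system` environment marker corresponds to Python's `platform.system()`, which returns `"Darwin"` on macOS; a hypothetical stdlib-only sketch (the function is ours, not uv's):

```python
import platform

def use_pytorch_index(system: str) -> bool:
    """Mirror the marker above: use the PyTorch index everywhere except macOS,
    where wheels are resolved from PyPI instead."""
    return system != "Darwin"

print(use_pytorch_index(platform.system()))
print(use_pytorch_index("Darwin"))  # False: fall back to PyPI on macOS
```
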
As a complete example, the following project would use PyTorch's CPU-only builds on all platforms:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]
torchvision = [
    { index = "pytorch-cpu", marker = "platform_system != 'Darwin'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true
```

## Configuring accelerators with environment markers

In some cases, you may want to use CPU-only builds in one environment (e.g., macOS and Windows), and
CUDA-enabled builds in another (e.g., Linux).

With `tool.uv.sources`, you can use environment markers to specify the desired index for each
platform. For example, the following configuration would use PyTorch's CPU-only builds on Windows
(and macOS, by way of falling back to PyPI), and CUDA-enabled builds on Linux:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
    { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]
torchvision = [
    { index = "pytorch-cpu", marker = "platform_system == 'Windows'" },
    { index = "pytorch-cu124", marker = "platform_system == 'Linux'" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

## Configuring accelerators with optional dependencies

In some cases, you may want to toggle between CPU-only and CUDA-enabled builds via a user-provided
extra (e.g., `uv sync --extra cpu` vs. `uv sync --extra cu124`).

With `tool.uv.sources`, you can use extra markers to specify the desired index for each enabled
extra. For example, the following configuration would use PyTorch's CPU-only builds for
`uv sync --extra cpu` and CUDA-enabled builds for `uv sync --extra cu124`:

```toml
[project]
name = "project"
version = "0.1.0"
requires-python = ">=3.12.0"
dependencies = []

[project.optional-dependencies]
cpu = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]
cu124 = [
    "torch>=2.5.1",
    "torchvision>=0.20.1",
]

[tool.uv]
conflicts = [
    [
        { extra = "cpu" },
        { extra = "cu124" },
    ],
]

[tool.uv.sources]
torch = [
    { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
    { index = "pytorch-cu124", extra = "cu124" },
]
torchvision = [
    { index = "pytorch-cpu", extra = "cpu", marker = "platform_system != 'Darwin'" },
    { index = "pytorch-cu124", extra = "cu124" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true
```

!!! note

    Since GPU-accelerated builds aren't available on macOS, the above configuration will continue to
    use the CPU-only builds on macOS via the `platform_system != 'Darwin'` marker, regardless of the
    extra provided.

## The `uv pip` interface

While the above examples are focused on uv's project interface (`uv lock`, `uv sync`, `uv run`,
etc.), PyTorch can also be installed via the `uv pip` interface.

PyTorch itself offers a [dedicated interface](https://pytorch.org/get-started/locally/) to determine
the appropriate pip command to run for a given target configuration. For example, you can install
stable, CPU-only PyTorch on Linux with:

```shell
$ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

To use the same workflow with uv, replace `pip3` with `uv pip`:

```shell
$ uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
```

mkdocs.template.yml

Lines changed: 1 addition & 0 deletions
@@ -115,6 +115,7 @@ nav:
 - GitHub Actions: guides/integration/github.md
 - GitLab CI/CD: guides/integration/gitlab.md
 - Pre-commit: guides/integration/pre-commit.md
+- PyTorch: guides/integration/pytorch.md
 - FastAPI: guides/integration/fastapi.md
 - Alternative indexes: guides/integration/alternative-indexes.md
 - Dependency bots: guides/integration/dependency-bots.md
