This will add the device debug symbols for this object file in `libcuopt.so`. You can then use `cuda-gdb` to debug into the kernels in that source file.
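As a sketch, a debugging session might look like the following (the script name `solve_model.py`, the source file, and the line number are hypothetical placeholders):

```shell
# Launch the application under NVIDIA's CUDA debugger.
cuda-gdb --args python solve_model.py

# Then, at the (cuda-gdb) prompt:
#   break my_kernel.cu:42     # breakpoint inside the device code
#   run
#   info cuda kernels         # list kernels currently resident on the device
```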
## Adding dependencies

Please refer to the [dependencies.yaml](dependencies.yaml) file for details on how to add new dependencies.

Add any new dependencies to the `dependencies.yaml` file; it takes care of conda, requirements (pip-based dependencies), and pyproject files.

Please do not add dependencies directly to the `environment.yaml` files under the `conda/environments` directory or to the `pyproject.toml` files under the `python` directories.
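For illustration only, a new entry in `dependencies.yaml` looks roughly like the following sketch (the section name `run_python_cuopt` and the `numpy` pin are hypothetical; the file follows the `rapids-dependency-file-generator` schema, which generates the conda, requirements, and pyproject outputs from one source):

```yaml
dependencies:
  run_python_cuopt:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - numpy>=1.23
```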
## Code Formatting
### Using pre-commit hooks
You can skip these checks with `git commit --no-verify` or with the short version `git commit -n`.
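For reference, the hooks are set up and exercised with the `pre-commit` tool itself (a sketch; this assumes the repository ships a `.pre-commit-config.yaml`, and the install commands must be run inside the clone):

```shell
pip install pre-commit        # one-time setup of the pre-commit tool
pre-commit install            # enable the hooks for this clone
pre-commit run --all-files    # run every hook against the whole tree
```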
(d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
The core engine is written in C++ and is wrapped by a C API, a Python API, and a Server API.
For the latest stable version ensure you are on the `main` branch.
- C API support
  - Linear Programming (LP)
  - Mixed Integer Linear Programming (MILP)
- C++ API support
  - cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time. We anticipate that the C++ API will change significantly in the future. Use it at your own risk.
- Python support
  - Routing (TSP, VRP, and PDP)
  - Linear Programming (LP) and Mixed Integer Linear Programming (MILP)
  - cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time. We suggest using the cuOpt server to access cuOpt via Python. We anticipate that the Python API will change significantly in the future. Use it at your own risk.
- Server support
  - Linear Programming (LP)
  - Mixed Integer Linear Programming (MILP)
  - Routing (TSP, VRP, and PDP)

## Installation

* NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows)
* Volta architecture or better (Compute Capability >= 7.0)

### Python requirements

* Python >= 3.10, <= 3.12

### OS requirements

* Linux is supported natively; Windows is supported via WSL2
* x86_64 (64-bit)
* aarch64 (64-bit)

Note: WSL2 is tested for running cuOpt, but not for building it.

More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html).

### Pip

Pip wheels are easy to install and easy to configure. Users with existing pip-based workflows can use pip to install cuOpt.

cuOpt can be installed via `pip` from the NVIDIA Python Package Index. Be sure to select the appropriate cuOpt package depending on the major version of CUDA available in your environment:
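For example, in a CUDA 12 environment the install might look like the following sketch (the package name `cuopt-cu12` follows the `-cuXX` suffix convention used by NVIDIA wheels and is an assumption; check the cuOpt documentation for the exact package names):

```shell
# Install the CUDA 12 build of cuOpt from the NVIDIA Python Package Index.
pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12
```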
We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD of our latest development branch.

Note: cuOpt is supported only on Linux, and with Python versions 3.10 and later.

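For example, a nightly build could be installed into an existing conda environment roughly as follows (the package name `cuopt` and the extra channels are assumptions; check the Anaconda page linked above for the published package names):

```shell
# Pull the nightly cuOpt package from the rapidsai-nightly channel.
conda install -c rapidsai-nightly -c conda-forge -c nvidia cuopt
```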
### Container

Users can pull the cuOpt container from the NVIDIA container registry.

```bash
docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
```

More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).

The container is a good fit for quick testing or research, and for users who want to plug cuOpt in as a service in their workflow with minimal setup. Note that users are responsible for building security layers around the service to safeguard it from untrusted clients.
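As a sketch, the pulled image can then be started as a local service; the port mapping below is an assumption (see the linked quick-start for the documented invocation), and `--gpus all` requires the NVIDIA Container Toolkit:

```shell
# Run the cuOpt container with GPU access, removing it on exit.
docker run --gpus all -it --rm -p 5000:5000 nvidia/cuopt:25.5.0-cuda12.8-py312
```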
## Build from Source and Test

Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). This is helpful for users who want to add new features or fix bugs in cuOpt, and for users who want to customize cuOpt for use cases that require changes to its source code.

## Contributing Guide
Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
- [Test cuOpt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Example notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/).
- [Test cuOpt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): Example notebooks can be opened in Google Colab. Please note that you need to choose `GPU` as the `Runtime` in order to run the notebooks.