
Commit 9f28899

Merge pull request NVIDIA#43 from rgsl888prabhu/branch-25.08-merge-25.05
Branch 25.08 merge 25.05
2 parents: 1aa8a16 + 64b1eda

60 files changed

Lines changed: 1160 additions & 1468 deletions


.github/workflows/build.yaml

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ jobs:
       script: ci/build_wheel_libcuopt.sh
       package-name: libcuopt
       package-type: cpp
-      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
+      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER == "3.12"))
   wheel-build-cuopt:
     needs: [wheel-build-cuopt-mps-parser, wheel-build-libcuopt]
     secrets: inherit
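The `matrix_filter` value in these workflows is a jq expression that prunes the CI build matrix produced by the shared rapidsai workflows. As a rough illustration of its `map(select(...))` semantics, the same filter can be sketched in Python; the matrix entries below are hypothetical examples, not the real CI matrix:

```python
# Hypothetical CI matrix entries; the real matrix is defined by the
# shared rapidsai workflows, not in this repository.
matrix = [
    {"CUDA_VER": "11.8.0", "PY_VER": "3.12"},
    {"CUDA_VER": "12.8.0", "PY_VER": "3.10"},
    {"CUDA_VER": "12.8.0", "PY_VER": "3.13"},
    {"CUDA_VER": "12.8.0", "PY_VER": "3.12"},
]

# Python equivalent of the jq expression
# map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13")):
kept = [
    entry for entry in matrix
    if entry["CUDA_VER"].startswith("12") and entry["PY_VER"] != "3.13"
]

for entry in kept:
    print(entry)  # only CUDA 12.x entries, excluding Python 3.13
```

The change in build.yaml tightens this further: `== "3.12"` keeps a single Python version rather than everything except 3.13.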

.github/workflows/test.yaml

Lines changed: 4 additions & 12 deletions
@@ -25,19 +25,8 @@ jobs:
       branch: ${{ inputs.branch }}
       date: ${{ inputs.date }}
       sha: ${{ inputs.sha }}
+      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
       script: ci/test_cpp.sh
-  conda-cpp-memcheck-tests:
-    secrets: inherit
-    uses: rapidsai/shared-workflows/.github/workflows/custom-job.yaml@branch-25.06
-    with:
-      build_type: ${{ inputs.build_type }}
-      branch: ${{ inputs.branch }}
-      date: ${{ inputs.date }}
-      sha: ${{ inputs.sha }}
-      node_type: "gpu-l4-latest-1"
-      arch: "amd64"
-      container_image: "rapidsai/ci-conda:cuda11.8.0-ubuntu22.04-py3.10"
-      run_script: "ci/test_cpp_memcheck.sh"
   conda-python-tests:
     secrets: inherit
     uses: rapidsai/shared-workflows/.github/workflows/conda-python-tests.yaml@branch-25.06
@@ -46,6 +35,7 @@ jobs:
       branch: ${{ inputs.branch }}
       date: ${{ inputs.date }}
       sha: ${{ inputs.sha }}
+      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
       script: ci/test_python.sh
   wheel-tests-cuopt:
     secrets: inherit
@@ -55,6 +45,7 @@ jobs:
       branch: ${{ inputs.branch }}
       date: ${{ inputs.date }}
       sha: ${{ inputs.sha }}
+      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
       script: ci/test_wheel_cuopt.sh
   wheel-tests-cuopt-server:
     secrets: inherit
@@ -64,4 +55,5 @@ jobs:
       branch: ${{ inputs.branch }}
       date: ${{ inputs.date }}
       sha: ${{ inputs.sha }}
+      matrix_filter: map(select((.CUDA_VER | startswith("12")) and .PY_VER != "3.13"))
       script: ci/test_wheel_cuopt_server.sh

.gitignore

Lines changed: 1 addition & 0 deletions
@@ -59,6 +59,7 @@ error_log.txt
 docs/cuopt/source/cuopt-c/lp-milp/cuopt-cli-help.txt
 docs/cuopt/source/cuopt-server/client-api/sh-cli-help.txt
 docs/cuopt/source/cuopt-server/server-api/server-cli-help.txt
+docs/cuopt/source/cuopt-cli/cuopt-cli-help.txt
 docs/cuopt/source/cuopt_spec.yaml
 python/cuopt_self_hosted/cuopt_sh_client/tests/utils/certs/*.key
 docs/cuopt/build

CONTRIBUTING.md

Lines changed: 24 additions & 7 deletions
@@ -71,17 +71,29 @@ for a minimal build of NVIDIA cuOpt without using conda are also listed below.
 
 Compilers:
 
-* `gcc` version 11.4+
-* `nvcc` version 11.8+
-* `cmake` version 3.29.6+
+These will be installed while creating the Conda environment:
+
+* `gcc` version 13.0+
+* `nvcc` version 12.8+
+* `cmake` version 3.30.4+
 
 CUDA/GPU Runtime:
 
-* CUDA 11.4+
+* CUDA 12.8
 * Volta architecture or better ([Compute Capability](https://docs.nvidia.com/deploy/cuda-compatibility/) >=7.0)
 
-You can obtain CUDA from
-[https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads).
+Python:
+
+* Python >=3.10.x, <=3.12.x
+
+OS:
+
+* Only Linux is supported
+
+Architecture:
+
+* x86_64 (64-bit)
+* aarch64 (64-bit)
 
 ### Build NVIDIA cuOpt from source
 
@@ -219,6 +231,12 @@ set_source_files_properties(src/routing/data_model_view.cu PROPERTIES COMPILE_OP
 This will add the device debug symbols for this object file in `libcuopt.so`. You can then use
 `cuda-dbg` to debug into the kernels in that source file.
 
+## Adding dependencies
+
+Please refer to the [dependencies.yaml](dependencies.yaml) file for details on how to add new dependencies.
+Add any new dependencies in the `dependencies.yaml` file; it takes care of conda, requirements (pip-based dependencies), and pyproject.
+Please do not add dependencies directly to the environment.yaml files under the `conda/environments` directory or to the pyproject.toml files under the `python` directories.
+
 ## Code Formatting
 
 ### Using pre-commit hooks
@@ -303,6 +321,5 @@ You can skip these checks with `git commit --no-verify` or with the short versio
 
 (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved.
 ```
-
 
 
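The "Adding dependencies" guidance above can be illustrated with a sketch of a `dependencies.yaml` entry in the RAPIDS dependency-file-generator style. The group name and package below are hypothetical, and the real file's keys and includes may differ:

```yaml
# Hypothetical fragment in the dependencies.yaml convention; group and
# package names here are illustrative, not taken from the real file.
files:
  all:
    output: [conda]          # conda env files; requirements/pyproject
    includes:                # lists are generated from the same groups
      - my_new_deps
dependencies:
  my_new_deps:
    common:
      - output_types: [conda, requirements, pyproject]
        packages:
          - example-pkg>=1.0   # hypothetical new dependency
```

A generator tool then renders the conda environment files and the pip/pyproject dependency lists from this single source, which is why editing those generated files directly is discouraged.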

README.md

Lines changed: 68 additions & 14 deletions
@@ -1,22 +1,31 @@
 # cuOpt - GPU accelerated Optimization Engine
 
-NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer programming (MIP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
+[![Build Status](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml/badge.svg)](https://github.com/NVIDIA/cuopt/actions/workflows/build.yaml)
 
-For the latest stable version ensure you are on the `main` branch.
-
-## Build from Source
+NVIDIA® cuOpt™ is a GPU-accelerated optimization engine that excels in mixed integer linear programming (MILP), linear programming (LP), and vehicle routing problems (VRP). It enables near real-time solutions for large-scale challenges with millions of variables and constraints, offering
+easy integration into existing solvers and seamless deployment across hybrid and multi-cloud environments.
 
-Please see our [guide for building cuOpt from source](CONTRIBUTING.md#build-nvidia-cuopt-from-source)
+The core engine is written in C++ and is wrapped in a C API, a Python API, and a Server API.
 
-## Contributing Guide
+For the latest stable version ensure you are on the `main` branch.
 
-Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
+## Supported APIs
 
-## Resources
+cuOpt supports the following APIs:
 
-- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
-- [libcuopt (C++/CUDA) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/introduction.html)
-- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
+- C API support
+  - Linear Programming (LP)
+  - Mixed Integer Linear Programming (MILP)
+- C++ API support
+  - cuOpt is written in C++ and includes a native C++ API. However, we do not provide documentation for the C++ API at this time, and we anticipate that it will change significantly in the future. Use it at your own risk.
+- Python support
+  - Routing (TSP, VRP, and PDP)
+  - Linear Programming (LP) and Mixed Integer Linear Programming (MILP)
+  - cuOpt includes a Python API that is used as the backend of the cuOpt server. However, we do not provide documentation for the Python API at this time, and we anticipate that it will change significantly in the future; we suggest using the cuOpt server to access cuOpt via Python. Use it at your own risk.
+- Server support
+  - Linear Programming (LP)
+  - Mixed Integer Linear Programming (MILP)
+  - Routing (TSP, VRP, and PDP)
 
 ## Installation
 
@@ -26,30 +35,75 @@ Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to con
 * NVIDIA driver >= 525.60.13 (Linux) and >= 527.41 (Windows)
 * Volta architecture or better (Compute Capability >=7.0)
 
+### Python requirements
+
+* Python >=3.10.x, <=3.12.x
+
+### OS requirements
+
+* Linux is supported natively; Windows is supported via WSL2
+* x86_64 (64-bit)
+* aarch64 (64-bit)
+
+Note: WSL2 is tested for running cuOpt, but not for building it.
+
+More details on system requirements can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/system-requirements.html).
+
 ### Pip
 
+Pip wheels are easy to install and configure. Users with existing pip-based workflows can use pip to install cuOpt.
+
 cuOpt can be installed via `pip` from the NVIDIA Python Package Index.
 Be sure to select the appropriate cuOpt package depending
 on the major version of CUDA available in your environment:
 
 For CUDA 12.x:
 
 ```bash
-pip install --extra-index-url=https://pypi.nvidia.com cuopt-cu12
+pip install --extra-index-url=https://pypi.nvidia.com cuopt-server-cu12==25.5 cuopt-sh-client==25.5 nvidia-cuda-runtime-cu12==12.8.*
 ```
 
 ### Conda
 
 cuOpt can be installed with conda (via [miniforge](https://github.com/conda-forge/miniforge)) from the `nvidia` channel:
 
+All other dependencies are installed automatically when cuopt-server and cuopt-sh-client are installed.
+
+Users accustomed to conda-environment-based workflows benefit from the readily available conda packages for cuOpt.
 
 For CUDA 12.x:
 ```bash
 conda install -c rapidsai -c conda-forge -c nvidia \
-    cuopt=25.05 python=3.12 cuda-version=12.8
+    cuopt-server=25.05 cuopt-sh-client=25.05 python=3.12 cuda-version=12.8
 ```
 
 We also provide [nightly Conda packages](https://anaconda.org/rapidsai-nightly) built from the HEAD
 of our latest development branch.
 
-Note: cuOpt is supported only on Linux, and with Python versions 3.10 and later.
+### Container
+
+Users can pull the cuOpt container from the NVIDIA container registry.
+
+```bash
+docker pull nvidia/cuopt:25.5.0-cuda12.8-py312
+```
+More information about the cuOpt container can be found [here](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/quick-start.html#container-from-docker-hub).
+
+The container suits users trying cuOpt for quick testing or research, as well as users planning to plug cuOpt into their workflow as a service. Such users must, however, build security layers around the service to protect it from untrusted users.
+
+## Build from Source and Test
+
+Please see our [guide for building cuOpt from source](CONTRIBUTING.md#setting-up-your-build-environment). This is helpful for users who want to add new features or fix bugs in cuOpt, or to customize cuOpt for use cases that require changes to the cuOpt source code.
+
+## Contributing Guide
+
+Review the [CONTRIBUTING.md](CONTRIBUTING.md) file for information on how to contribute code and issues to the project.
+
+## Resources
+
+- [libcuopt (C) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-c/index.html)
+- [cuopt (Python) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-python/index.html)
+- [cuopt (Server) documentation](https://docs.nvidia.com/cuopt/user-guide/latest/cuopt-server/index.html)
+- [Examples and Notebooks](https://github.com/NVIDIA/cuopt-examples)
+- [Test cuOpt with NVIDIA Launchable](https://brev.nvidia.com/launchable/deploy?launchableID=env-2qIG6yjGKDtdMSjXHcuZX12mDNJ): Example notebooks are pulled and hosted on [NVIDIA Launchable](https://docs.nvidia.com/brev/latest/).
+- [Test cuOpt on Google Colab](https://colab.research.google.com/github/nvidia/cuopt-examples/): Example notebooks can be opened in Google Colab. Note that you must select `GPU` as the `Runtime` to run the notebooks.
