
Use GCC 13 in CUDA 12 conda builds. #6221

Merged
AyodeAwe merged 8 commits into rapidsai:branch-25.02 from bdice:use-gcc-13-with-cuda-12-conda-builds
Jan 17, 2025

Conversation

@bdice
Contributor

@bdice bdice commented Jan 13, 2025

Description

conda-forge is using GCC 13 for CUDA 12 builds. This PR updates CUDA 12 conda builds to use GCC 13, for alignment.

These PRs should be merged in a specific order, see rapidsai/build-planning#129 for details.

@bdice bdice added non-breaking Non-breaking change improvement Improvement / enhancement to an existing function labels Jan 13, 2025
@copy-pr-bot

copy-pr-bot Bot commented Jan 13, 2025

Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.


@bdice bdice marked this pull request as ready for review January 13, 2025 18:52
@bdice bdice requested review from a team as code owners January 13, 2025 18:52
@bdice bdice added the DO NOT MERGE Hold off on merging; see PR for details label Jan 13, 2025
@bdice bdice self-assigned this Jan 13, 2025
@jameslamb jameslamb removed the request for review from msarahan January 13, 2025 19:58
@jakirkham
Member

Seeing the following error on CI:

2025-01-13T19:13:00.8954431Z     inlined from 'static cudaError_t cub::CUB_200700_700_750_800_860_900_NS::DeviceReduce::TransformReduce(void*, size_t&, InputIteratorT, OutputIteratorT, NumItemsT, ReductionOpT, TransformOpT, T, cudaStream_t) [with InputIteratorT = int*; OutputIteratorT = int*; ReductionOpT = thrust::plus<int>; TransformOpT = cuda::__4::__detail::__return_type_wrapper<bool, __nv_dl_wrapper_t<__nv_dl_trailing_return_tag<void (ML::HDBSCAN::Common::CondensedHierarchy<int, float>::*)(int*, int*, float*, int*, int), &ML::HDBSCAN::Common::CondensedHierarchy<int, float>::condense, bool, 1> > >; T = int; NumItemsT = int]' at $SRC_DIR/cpp/build/_deps/cccl-src/cub/cub/cmake/../../cub/device/device_reduce.cuh:1000:143:
2025-01-13T19:13:00.8957470Z $SRC_DIR/cpp/build/_deps/cccl-src/thrust/thrust/cmake/../../thrust/system/cuda/detail/core/triple_chevron_launch.h:143:19: error: 'dispatch' may be used uninitialized [-Werror=maybe-uninitialized]
2025-01-13T19:13:00.8958686Z   143 |     NV_IF_TARGET(NV_IS_HOST, (return doit_host(k, args...);), (return doit_device(k, args...);));
2025-01-13T19:13:00.8959168Z       |          ~~~~~~~~~^~~~~~~~~~~~

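For context, GCC 13 is stricter about this warning class, and under `-Werror` it fails builds of host-compiled Thrust/CUB code. A common workaround is to relax the diagnostic for the affected target. This is a minimal, hypothetical CMake sketch, not the actual change made in this PR; the target name `cuml_objs` is illustrative:

```cmake
# Hypothetical sketch: silence GCC 13's -Wmaybe-uninitialized for a target
# whose Thrust/CUB instantiations trip the warning under -Werror.
# The target name "cuml_objs" is illustrative, not taken from this PR.
if(CMAKE_CXX_COMPILER_ID STREQUAL "GNU"
   AND CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 13)
  target_compile_options(cuml_objs PRIVATE
    # Host-compiled C++ translation units:
    $<$<COMPILE_LANGUAGE:CXX>:-Wno-maybe-uninitialized>
    # nvcc-compiled units need the flag forwarded to the host compiler:
    $<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=-Wno-maybe-uninitialized>)
endif()
```

Scoping the suppression with `target_compile_options(... PRIVATE ...)` and a compiler-version check keeps the warning active for other compilers and for consumers of the target.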
@jakirkham
Member

Seeing some failures related to cuDF's __dataframe__ deprecation: rapidsai/cudf#17736

Will follow up offline

@bdice
Contributor Author

bdice commented Jan 16, 2025

I am trying to address the __dataframe__ issues in #6229, but there are some problems yet to solve. I would consider admin-merging this PR since the builds appear to be working. @dantegd Would you be okay with that? We are down to just one other build failure in cuvs to address, then I want to make this change across all of RAPIDS at once.

@bdice bdice removed the DO NOT MERGE Hold off on merging; see PR for details label Jan 17, 2025
@AyodeAwe AyodeAwe merged commit d95cae5 into rapidsai:branch-25.02 Jan 17, 2025

Labels

CMake, conda, conda issue, CUDA/C++, improvement (Improvement / enhancement to an existing function), non-breaking (Non-breaking change)
