
Bypass data allocation to GPU given precomputed knn in UMAP#7566

Merged
rapids-bot[bot] merged 8 commits into rapidsai:main from jinsolp:bypass-data-gpu-alloc-umap on Dec 17, 2025

Conversation

@jinsolp (Contributor) commented Dec 4, 2025

Closes #7132

This PR ensures that we don't default to copying the input data to device memory when a precomputed knn graph is given.
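To make the use case concrete, here is a minimal host-only sketch of building the `(indices, distances)` pair that cuML's UMAP accepts via its `precomputed_knn` argument (the tuple form and shapes are assumptions based on cuML's documented API; the brute-force knn below uses NumPy only, so the feature matrix never leaves the host):

```python
import numpy as np

def brute_force_knn(X, n_neighbors):
    """Exact knn on the host with NumPy only.

    Returns (indices, distances), each of shape (n_samples, n_neighbors),
    i.e. the tuple form that cuML UMAP's `precomputed_knn` argument is
    documented to accept (assumption; check the cuML docs for your version).
    """
    # Pairwise squared Euclidean distances via the expansion
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(d2, 0.0, out=d2)  # clamp tiny negatives from rounding
    order = np.argsort(d2, axis=1)[:, :n_neighbors]
    dists = np.sqrt(np.take_along_axis(d2, order, axis=1))
    return order.astype(np.int64), dists.astype(np.float32)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8)).astype(np.float32)
indices, distances = brute_force_knn(X, n_neighbors=15)

# With this PR, passing host-resident X alongside the precomputed graph,
# e.g. cuml.manifold.UMAP(precomputed_knn=(indices, distances)).fit(X),
# no longer eagerly copies X to device memory (hedged: behavior as
# described in this PR, for cuML builds that include it).
```

Since each row's nearest neighbor is the point itself, `indices[:, 0]` is the identity and `distances[:, 0]` is (numerically) zero, which is a quick sanity check on any precomputed graph before handing it to UMAP.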

@jinsolp jinsolp self-assigned this Dec 4, 2025
@jinsolp jinsolp requested a review from a team as a code owner December 4, 2025 20:08
@jinsolp jinsolp added the improvement Improvement / enhancement to an existing function label Dec 4, 2025
@jinsolp jinsolp requested a review from divyegala December 4, 2025 20:08
@jinsolp jinsolp added the non-breaking Non-breaking change label Dec 4, 2025
@github-actions bot added the Cython / Python Cython or Python issue label Dec 4, 2025
@jinsolp jinsolp changed the title from "Bypass data gpu alloc umap" to "Bypass data allocation to GPU given precomputed knn in UMAP" Dec 4, 2025
@jcrist (Member) left a comment


Small nit, otherwise looks good to me!

For testing, do we have tests for passing X on both host and device when using a precomputed knn? If not, can we add one just to ensure full coverage here?

Comment thread on python/cuml/cuml/manifold/umap/umap.pyx (outdated)
Co-authored-by: Jim Crist-Harif <jcristharif@gmail.com>
@copy-pr-bot (bot) commented Dec 5, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@jinsolp jinsolp force-pushed the bypass-data-gpu-alloc-umap branch from 447b15b to 35fc0a3 on December 5, 2025 01:01
@jinsolp (Contributor, Author) commented Dec 5, 2025

Thanks @jcrist, added as part of an existing test!

@jinsolp (Contributor, Author) commented Dec 17, 2025

@jcrist this is ready for another round of reviews 🙂

@jcrist (Member) commented Dec 17, 2025

/merge

@rapids-bot bot merged commit 7903079 into rapidsai:main on Dec 17, 2025
100 of 102 checks passed
mani-builds pushed a commit to mani-builds/cuml that referenced this pull request Jan 11, 2026
…#7566)

Closes rapidsai#7132

This PR ensures that we don't default to copying the input data to device memory when a precomputed knn graph is given.

Authors:
  - Jinsol Park (https://github.com/jinsolp)

Approvers:
  - Jim Crist-Harif (https://github.com/jcrist)

URL: rapidsai#7566

Labels

Cython / Python (Cython or Python issue); improvement (Improvement / enhancement to an existing function); non-breaking (Non-breaking change)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Bypass input_to_cuml_array in UMAP when given precomputed_knn or knn_graph

2 participants