5 changes: 4 additions & 1 deletion python/cuml/cuml/manifold/umap.pyx
@@ -273,13 +273,16 @@ class UMAP(Base,
def on_train_end(self, embeddings):
print(embeddings.copy_to_host())

handle : cuml.Handle
handle : cuml.Handle or pylibraft.common.DeviceResourcesSNMG
Specifies the cuml.handle that holds internal CUDA state for
computations in this model. Most importantly, this specifies the CUDA
stream that will be used for the model's computations, so users can
run different models concurrently in different streams by creating
handles in several streams.
If it is None, a new one is created.
Using `pylibraft.common.DeviceResourcesSNMG` as the handle will run batched knn graph
building using multiple GPUs. This is only valid when `build_algo="nn_descent"` and
`nnd_n_clusters > 1`.
verbose : int or boolean, default=False
Sets logging level. It must be one of `cuml.common.logger.level_*`.
See :ref:`verbosity-levels` for more info.
6 changes: 5 additions & 1 deletion python/cuml/cuml/tests/test_base.py
@@ -143,7 +143,11 @@ def get_param_doc(param_doc_obj, name: str):
found_doc.type == base_item_doc.type
), "Docstring mismatch for {}".format(name)

assert " ".join(found_doc.desc) == " ".join(base_item_doc.desc)
if not (
found_doc.type == "cuml.Handle"
and klass == cuml.manifold.umap.UMAP
):
assert " ".join(found_doc.desc) == " ".join(base_item_doc.desc)


@pytest.mark.parametrize("child_class", list(all_base_children.keys()))
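The test change above exempts exactly one case from the docstring-description comparison: a parameter whose documented type is `cuml.Handle` on the `UMAP` class, since UMAP now extends the shared `handle` doc with the `DeviceResourcesSNMG` note. A minimal sketch of that guard, using a hypothetical helper name and plain strings in place of the real `klass` object:

```python
def should_check_desc(found_type: str, klass_name: str) -> bool:
    """Return True when the parameter description must match the Base
    docstring verbatim. The single exemption mirrors the test above:
    UMAP's `handle` parameter carries extra multi-GPU documentation."""
    return not (found_type == "cuml.Handle" and klass_name == "UMAP")

# UMAP's handle doc diverges from Base, so the strict check is skipped.
print(should_check_desc("cuml.Handle", "UMAP"))  # False
# Every other class/parameter combination is still compared strictly.
print(should_check_desc("cuml.Handle", "PCA"))   # True
print(should_check_desc("int", "UMAP"))          # True
```

This keeps the estimator docstring consistency test intact for every other estimator while letting UMAP document its extended handle behavior.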