RAPIDS offers support for Graph Neural Networks (GNNs). Several components of the RAPIDS ecosystem fit into a typical GNN framework, as shown below. An overview of GNNs and how they are used can be found in this excellent blog.
RAPIDS GNN components accelerate other industry GNN projects. Because of the skewed degree distribution of nodes, memory is the main bottleneck for large-scale graphs. To address this, sampling operations form the backbone of GNN training: instead of aggregating over every neighbor, each mini-batch works on a sampled subgraph. However, the sampling methods provided by other libraries are not optimized for the full GNN training pipeline; the main limit on performance is moving data between hosts and devices. cuGraph provides an end-to-end solution, from data loading to training, entirely on GPUs.
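To make the sampling step concrete, here is a minimal CPU sketch of uniform neighbor sampling, the core operation mini-batch GNN training relies on. This is illustrative only, not cuGraph's implementation; the graph, seed nodes, and fan-out values are arbitrary choices for the example.

```python
# Illustrative sketch of uniform neighbor sampling (NOT the cuGraph
# implementation): for each hop, keep at most `fanout` randomly chosen
# neighbors of the current frontier, producing one edge list per hop.
import random

def neighbor_sample(adj, seeds, fanouts, rng):
    """Sample a multi-hop subgraph around `seeds`.

    adj     : dict mapping node -> list of neighbor nodes
    seeds   : starting nodes (the mini-batch)
    fanouts : max neighbors to keep per node, one entry per hop
    rng     : random.Random instance, for reproducibility
    Returns one list of (src, dst) edges per hop.
    """
    layers = []
    frontier = list(seeds)
    for fanout in fanouts:
        sampled_edges = []
        next_frontier = set()
        for v in frontier:
            nbrs = adj.get(v, [])
            picked = nbrs if len(nbrs) <= fanout else rng.sample(nbrs, fanout)
            for u in picked:
                sampled_edges.append((v, u))
                next_frontier.add(u)
        layers.append(sampled_edges)
        frontier = list(next_frontier)
    return layers

# Toy undirected graph stored as an adjacency dict.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
rng = random.Random(42)
layers = neighbor_sample(adj, seeds=[0], fanouts=[2, 2], rng=rng)
```

On a GPU, cuGraph performs this kind of sampling in parallel over large frontiers and keeps the sampled subgraph on the device, avoiding the host-device transfers described above.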
cuGraph is compatible with PyTorch Geometric (PyG): a cuGraph object can be converted to and from a PyG object, letting PyG users access cuGraph's efficient data loader and graph operation (such as sampling) implementations while keeping their PyG models unchanged. This yields a considerable speedup over the original PyG implementation.
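As a minimal sketch of the conversion workflow, the snippet below uses PyG's `to_cugraph`/`from_cugraph` utilities (available in recent `torch_geometric` releases). It assumes a GPU environment with `torch`, `torch_geometric`, and `cugraph` installed; the edge list is an arbitrary example.

```python
# Sketch of moving a graph between PyG and cuGraph (assumes a CUDA
# device and that torch, torch_geometric, and cugraph are installed).
import torch
from torch_geometric.utils import to_cugraph, from_cugraph

# A small directed edge list in PyG's COO format, placed on the GPU.
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 0]], device='cuda')

# PyG -> cuGraph: run cuGraph's GPU algorithms and sampling.
G = to_cugraph(edge_index)

# cuGraph -> PyG: bring the edge list back so existing PyG model
# code can consume it unchanged.
edge_index2, edge_weight = from_cugraph(G)
```

The round trip keeps the data on the device throughout, which is what avoids the host-device transfer cost discussed above.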


