
ANN_BENCH: integrate NVTX statistics#1529

Merged
rapids-bot[bot] merged 11 commits into rapidsai:main from achirkin:fea-nvtx-stats
Nov 13, 2025

Conversation

@achirkin
Contributor

@achirkin achirkin commented Nov 11, 2025

Add aggregate reporting of NVTX ranges to the output of the benchmark executable.

Usage

```bash
# Measure the CPU and GPU runtime of all NVTX ranges
nsys launch --trace=cuda,nvtx <ANN_BENCH with arguments>
# Measure only the CPU runtime of all NVTX ranges
nsys launch --trace=nvtx <ANN_BENCH with arguments>
# Do not measure/report any NVTX ranges
<ANN_BENCH with arguments>
# Do not measure/report any NVTX ranges within the benchmark, but use nsys profiling as usual
nsys profile ... <ANN_BENCH with arguments>
```

Implementation

The PR adds a single module nvtx_stats.hpp to the benchmark executable; there are no changes to the library itself.
The program leverages the NVIDIA Nsight Systems CLI to collect and export NVTX statistics, then uses the SQLite API to aggregate them into the benchmark state:

  1. Detect whether the program was run via nsys launch; if so, call nsys start / nsys stop around the benchmark loop; otherwise do nothing.
  2. If a report is generated, read it and query all NVTX events and the GPU correlation data using SQLite.
  3. Aggregate the NVTX events by their short names (dropping arguments to reduce the number of columns).
  4. Add them to the benchmark performance counters with the same averaging strategy as the global CPU/GPU runtime.
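To illustrate steps 2 and 3, here is a minimal sketch of aggregating NVTX ranges by their short names with SQLite. Note the table layout and range-name format below are illustrative stand-ins, not the actual schema of an nsys SQLite export (the real implementation also joins GPU correlation data):

```python
# Sketch of steps 2-3: aggregate NVTX ranges by short name via SQLite.
# The table/column names here are toy stand-ins for the nsys export schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE nvtx_events (text TEXT, start_ns INTEGER, end_ns INTEGER)")
con.executemany(
    "INSERT INTO nvtx_events VALUES (?, ?, ?)",
    [
        ("cagra::build(n_rows=1000)", 0, 50),
        ("cagra::build(n_rows=2000)", 60, 170),
        ("hnsw::serialize", 180, 200),
    ],
)
# Strip everything from the first '(' so ranges that differ only in their
# arguments aggregate under one short name (fewer counter columns).
rows = con.execute(
    """
    SELECT CASE WHEN instr(text, '(') > 0
                THEN substr(text, 1, instr(text, '(') - 1)
                ELSE text END AS short_name,
           SUM(end_ns - start_ns) AS total_ns,
           COUNT(*) AS calls
    FROM nvtx_events
    GROUP BY short_name
    ORDER BY total_ns DESC
    """
).fetchall()
for name, total, calls in rows:
    print(f"{name}: {total} ns over {calls} call(s)")
```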

Performance cost

If the benchmark is not run using nsys launch, the new functionality adds virtually zero overhead.
Otherwise, there are two sources of overhead:

  1. The usual nsys profiling overheads (minimized by disabling unused collection features via the nsys start CLI internally). This affects the reported performance the same way normal nsys profiling does (especially if CUDA tracing is enabled).
  2. One or more data collection/export events per benchmark case. These add some extra wall-clock time to the benchmark run, but do not affect the counters (they are not part of the benchmark loop).
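The near-zero cost in the unprofiled case comes from step 1's gating: if the process was not launched under nsys, the profiling calls are skipped entirely. A minimal sketch of that gating logic, where the environment-variable name and the bare `nsys start`/`nsys stop` invocations are assumptions for illustration (the actual detection mechanism and CLI flags in nvtx_stats.hpp may differ):

```python
# Sketch of step 1: only wrap the benchmark loop in nsys start/stop when
# the process appears to run under `nsys launch`. NSYS_PROFILING_SESSION_ID
# is an assumed marker variable, not a documented nsys contract.
import os
import shutil
import subprocess


def profiled(run_benchmark):
    under_nsys = (
        "NSYS_PROFILING_SESSION_ID" in os.environ
        and shutil.which("nsys") is not None
    )
    if under_nsys:
        subprocess.run(["nsys", "start"], check=True)
    try:
        run_benchmark()  # the benchmark loop runs unchanged either way
    finally:
        if under_nsys:
            subprocess.run(["nsys", "stop"], check=True)
```

When the marker is absent, `profiled` reduces to a plain function call, which is why the unprofiled path costs essentially nothing.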

Closes #1367

@achirkin achirkin requested review from a team as code owners November 11, 2025 09:56
@achirkin achirkin self-assigned this Nov 11, 2025
@achirkin achirkin added feature request New feature or request non-breaking Introduces a non-breaking change labels Nov 11, 2025
@achirkin achirkin moved this from Todo to In Progress in Unstructured Data Processing Nov 11, 2025
@achirkin
Contributor Author

achirkin commented Nov 11, 2025

As an example, I've built CAGRA-for-HNSW on a wiki-1M dataset tracing only CPU events, exported a CSV, and generated the breakdown bar plot using Google Sheets:

```bash
nsys launch --trace=nvtx ./cpp/build/bench/ann/ANN_BENCH \
  --build --force \
  --benchmark_min_time=20s \
  --benchmark_min_warmup_time=0.1 \
  --benchmark_counters_tabular \
  --benchmark_out_format=csv \
  --benchmark_out=cagra-hnsw-build.csv \
  --benchmark_filter=cuvs_cagra_hnsw.* \
  --override_kv=dataset_memory_type:\"host\" \
  wiki-1M-cagra-hnsw.json
```
[Figure: CAGRA HNSW build time breakdown]
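For a quick look without a spreadsheet, the exported CSV can also be summarized directly. A minimal sketch, assuming the NVTX ranges show up as extra counter columns in the Google Benchmark CSV output (the column names and values below are made up for illustration):

```python
# Sketch: average the per-range counter columns from a benchmark CSV export.
# The CSV content is a toy stand-in; real exports carry more columns and
# range names depending on the run.
import csv
import io

csv_text = """name,real_time,cagra::build,hnsw::serialize
cuvs_cagra_hnsw/0,120.5,95.2,10.1
cuvs_cagra_hnsw/1,130.0,101.7,11.4
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
# Treat every column except the standard ones as an NVTX counter.
counters = [c for c in rows[0] if c not in ("name", "real_time")]
for c in counters:
    avg = sum(float(r[c]) for r in rows) / len(rows)
    print(f"{c}: avg {avg:.2f}")
```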

Contributor

@tfeher tfeher left a comment


Thanks Artem for the PR! This is neat; it will be very useful for understanding in more detail how time is spent during benchmarks. The PR looks good to me.

@vyasr
Contributor

vyasr commented Nov 12, 2025

This seems like it could be quite useful for other RAPIDS libraries too. I wonder if we could put this somewhere that's easy to reuse in others too, WDYT?

@achirkin
Contributor Author

In theory, yes: it's rather modular and pluggable into any executable. I'm just not sure where one would keep the code, short of copy-pasting it.

@achirkin
Contributor Author

/merge

@rapids-bot rapids-bot Bot merged commit 39ef11d into rapidsai:main Nov 13, 2025
84 checks passed
@github-project-automation github-project-automation Bot moved this from In Progress to Done in Unstructured Data Processing Nov 13, 2025
enp1s0 pushed a commit to enp1s0/cuvs that referenced this pull request Nov 16, 2025

Authors:
  - Artem M. Chirkin (https://github.com/achirkin)

Approvers:
  - Tamas Bela Feher (https://github.com/tfeher)

URL: rapidsai#1529

Labels

feature request New feature or request non-breaking Introduces a non-breaking change

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[FEA] cuvs-bench needs to measure index conversion time

3 participants