UPSTREAM PR #18537: mmq.cu: tune mmq/wmma switching for RDNA#783

Open
loci-dev wants to merge 3 commits into main from upstream-PR18537-branch_Beinsezii-beinsezii/rocm_mmq_tune

Conversation


@loci-dev commented Jan 2, 2026

Mirrored from ggml-org/llama.cpp#18537

Continuing from #18442, I applied the same benchmarking approach as #14949 and #18202 to try to minimize bad cases on RDNA while keeping the logic simple.

The TL;DR: averaged over all models from https://huggingface.co/Beinsezii/mmq_test across a variety of µbatch sizes, I get

| mmq% | blas% | tuned% |
| ---- | ----- | ------ |
| 95.9 | 85.7  | 98.8   |

where 100% is the theoretical maximum reached by optimally choosing mmq or rocblas in each case.
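For concreteness, this is my reading of how those columns are derived (the actual scripts are on huggingface; this standalone C++ snippet is just an illustration): each measured rate is normalized against the per-case best of mmq_ts and blas_ts. Plugging in the MOE/32e.gguf row from the outlier table below reproduces its columns.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    // Tokens/sec from the MOE/32e.gguf row in the outlier table below.
    const double mmq_ts = 1736.72, blas_ts = 1913.88, tuned_ts = 1744.08;
    // 100% corresponds to picking the faster of mmq/rocblas for this case.
    const double best = std::max(mmq_ts, blas_ts);
    std::printf("mmq%%     = %.2f\n", 100.0 * mmq_ts   / best);         // 90.74
    std::printf("blas%%    = %.2f\n", 100.0 * blas_ts  / best);         // 100.00
    std::printf("tuned%%   = %.2f\n", 100.0 * tuned_ts / best);         // 91.13
    std::printf("mmq/blas = %.2f\n", 100.0 * (mmq_ts / blas_ts - 1.0)); // -9.26
}
```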

Excluding the 1B model, which is noisy, there are exactly two outliers:

| model_filename | model_n_params | n_ubatch | n_prompt | mmq_ts | blas_ts | tuned_ts | mmq/blas | mmq% | blas% | tuned% |
| -------------- | -------------- | -------- | -------- | ------- | ------- | -------- | -------- | ----- | ----- | ------ |
| MOE/32e.gguf   | 20914757184    | 128      | 2048     | 1736.72 | 1913.88 | 1744.08  | -9.26    | 90.74 | 100.0 | 91.13  |
| 14B/Q6_K.gguf  | 14768307200    | 128      | 2048     | 829.64  | 972.01  | 829.41   | -14.65   | 85.35 | 100.0 | 85.33  |

Both are at bs=128, and both quickly flip below 128. If you wanted to fudge this case, you could probably do something like

```diff
diff --git a/ggml/src/ggml-cuda/mmq.cu b/ggml/src/ggml-cuda/mmq.cu
index ccb9ebed5..b7c1b7dc2 100644
--- a/ggml/src/ggml-cuda/mmq.cu
+++ b/ggml/src/ggml-cuda/mmq.cu
@@ -344,6 +344,7 @@ bool ggml_cuda_should_use_mmq(enum ggml_type type, int cc, int64_t ne11, int64_t
             // These quants are really bad on MMQ
             case GGML_TYPE_Q2_K:
             case GGML_TYPE_Q6_K:
+                return ne11 < 128;  // ==128 specifically is much better on rocblas
             // These quants are usually worse but not always
             case GGML_TYPE_IQ2_XS:
             case GGML_TYPE_IQ2_S:
```

but that might be considered splitting hairs.

100% of my testing was on GFX11 with ROCm 7.1.1 and the compile flags

```
GGML_CUDA_FA_ALL_QUANTS=ON
GGML_HIP=ON
GGML_HIP_GRAPHS=ON
```

with mmq/cublas forced as appropriate for measuring.
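For anyone reproducing the forced runs, one way to pin the matrix-multiply path is ggml's force options, shown here for illustration (not necessarily the exact mechanism used above):

```
GGML_CUDA_FORCE_MMQ=ON     # always take the MMQ kernels
GGML_CUDA_FORCE_CUBLAS=ON  # always take the BLAS path (rocBLAS under HIP)
```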

I do not own any RDNA3.5 or RDNA4 hardware. I'm assuming RDNA3.5 will behave much the same, but since RDNA4 has some implementation differences in MMQ, it may be worth someone re-measuring in the future.

The raw data for every combination of model / batch / backend can be viewed in measurements.csv, which was generated by the scripts on huggingface.
Unlike the other PRs, I've made MMQ the baseline, as it seems to handle most cases better.
In general I put little weight on the 1B results, as they're extremely noisy even with HIP graphs. For cases that were a wash across µbatch sizes, like Q4_K and Q5_K, I simply preferred MMQ.
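To summarize the resulting policy in code, here's a rough C++ sketch of the shape the heuristic takes. The stand-in enum keeps it self-contained, and the case grouping is illustrative (it folds in the Q2_K/Q6_K fudge from the diff above); the authoritative version is the switch in ggml_cuda_should_use_mmq in mmq.cu.

```cpp
#include <cstdint>

// Stand-in for ggml's quant type enum so the sketch compiles outside ggml.
enum ggml_type_sketch { Q2_K, Q4_K, Q5_K, Q6_K };

// Illustrative only: MMQ is the baseline; quants that benchmark badly on
// MMQ hand off to rocblas once the µbatch size (ne11) gets large enough.
static bool should_use_mmq_sketch(ggml_type_sketch type, int64_t ne11) {
    switch (type) {
        case Q2_K:
        case Q6_K:
            return ne11 < 128; // at ne11 == 128 rocblas measured clearly faster
        case Q4_K:
        case Q5_K:             // a wash across µbatch sizes -> keep MMQ
        default:
            return true;       // baseline: prefer MMQ
    }
}
```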


loci-review bot commented Jan 2, 2026

Explore the complete analysis inside the Version Insights

Perfect! I've retrieved the summary report for your project. Here are the key findings:

Performance Summary Report

Key Findings:

No Significant Performance Regressions Detected

The analysis shows that no modified functions were found with performance changes greater than 2%. This indicates:

  1. Response Time: All modified functions maintained stable response times (within ±2% threshold)
  2. Throughput Time: All modified functions maintained stable throughput times (within ±2% threshold)

Conclusion:

This pull request has minimal to no performance impact on the llama.cpp codebase. The changes maintain performance stability, which is a positive result indicating that your code modifications don't introduce any significant performance regressions.

Would you like more detailed information about specific functions or any other aspects of this performance comparison?

@loci-dev force-pushed the main branch 25 times, most recently from 118039a to cd122e2 on January 6, 2026 at 07:13
@loci-dev force-pushed the main branch 30 times, most recently from 4f98d83 to 3210e33 on January 12, 2026 at 15:11
