fix: turbo4 SET_ROWS, tail-block truncation, constant coupling, stack overflow (Issue #29)#4
Conversation
…ling (Issue ggml-org#29) Three bugs from the block-size-32 refactor: 1. kernel_set_rows_turbo hardcoded turbo3 packing for turbo4 — split into separate kernel_set_rows_turbo3 and kernel_set_rows_turbo4 kernels. turbo4 now correctly does 3-bit PolarQuant + QJL residual correction. 2. Integer division in n_groups = nk0 / blocks_per_group silently dropped tail blocks for non-128-aligned head dims (e.g. dk=192). Added ceiling division with tail-group bounds checking in turbo3, and GGML_ASSERT in WHT dispatch to catch non-128-aligned tensors. 3. TURBO_D constant was semantically coupled to QK_TURBO4 — replaced with TURBO_ROT_DIM (= QK_TURBO3_GROUP) and added static_assert that QK_TURBO4 == QK_TURBO3_GROUP to guard against future drift. Closes ggml-org#29 Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
…stack turbo_init_rotation() allocated a 128x128 float array (64KB) on the stack to generate the random Gaussian matrix, then memcpy'd it to the static turbo_rotation[]. llama.cpp worker threads have reduced stack sizes, causing segfault on first turbo4 quantize call. Fix: generate directly into the static turbo_rotation[] array, eliminating the intermediate stack allocation entirely. The Gram-Schmidt QR decomposition already runs in-place on turbo_rotation[]. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]>
RTX 3080 Ti (Ampere, SM 8.6) Benchmark ResultsFirst published Ampere numbers for TurboQuant. Tested on spiritbuun's CUDA fork with the PR #1 pool size fix applied. Model: Qwen3 8B Q4_K_M | GPU: EVGA RTX 3080 Ti 12GB | Flash attn: on | Full offload: ngl=99
Notes
|
Complete experiment log: #1 4-mag LUT: 15.1 at 8K (BEST, +38%) #2 Batched extract: 13.7 (+25%) #3 Inline FA block: 13.5 (I-cache pressure) #4 Deferred norm: 12.9 (loses ILP) ggml-org#5 2-pair half2: 12.0 (ternary overhead) ggml-org#6 Select chain: 11.9 (branches kill) ggml-org#7 Bit-arithmetic: 11.6 (ALU too heavy) ggml-org#8 FMA branchless: 11.4 (ALU still too heavy) ggml-org#9 Named-reg ternary: 10.3 (branches worst) ggml-org#10 Main (8-LUT): 10.95 (baseline) ggml-org#11 Non-vec FA: 10.2 (wrong kernel) Ceiling: 24.5 (no dequant) Apple8 hardware truth: 1 divergent constant read < 7 ALU ops (even with fma) Branches cost MORE than divergent constant reads Array indexing ALWAYS spills on Metal 4 constant addresses is the sweet spot The 4-mag LUT is the dequant-level ceiling on Apple Silicon. Co-Authored-By: Claude Opus 4.6 (1M context) <[email protected]> Co-Authored-By: [email protected]
…nto feature/turboquant-kv-cache
|
thanks for tackling all three bugs from ggml-org#29 plus the stack overflow catch. the turbo4 kernel split and tail-block ceiling division look right to me. will pull this down and test locally on M5 Max this afternoon to verify no regressions on turbo3 (PPL + NIAH + speed) before merging. the Ampere numbers from @spiritbuun fork are great to have too. |
Test Results — M5 Max 128GB, Qwen3.5-35B-A3B Q8_0Build: Clean compile, no warnings. turbo3 (regression check)
turbo4
Code review
RecommendationGood to merge — turbo3 has zero regression risk and the fixes are correct. However, turbo4 needs more testing before recommending it for use. The elevated PPL and missing prefill kernels are pre-existing issues, not introduced by this PR. Recommendation: stay on turbo3 for now. turbo4 end-to-end validation (prefill FA kernels, quality investigation) tracked separately. |
|
| State | turbo3 PPL (c=512, 8ch) |
|---|---|
Pre-merge (7d1bd95) |
6.1756 ✅ |
Post-merge (065ef53) |
181.5955 ❌ |
Post-revert (a52586e) |
6.1756 ✅ |
The kernel split from the shared kernel_set_rows_turbo template into separate kernel_set_rows_turbo3 / kernel_set_rows_turbo4 functions introduced a regression in the turbo3 quantization path. The q8_0 baseline is unaffected (5.46-6.03 across all context lengths), confirming this is turbo3-specific.
Merge reverted in a52586e. The turbo4 fixes and stack overflow fix are good — the issue is specifically in the turbo3 kernel split. Likely a subtle difference in the template parameters or quantization logic when moving from the shared template to the dedicated function.
Happy to help debug the turbo3 path if you want to resubmit.
Summary
Fixes all three bugs from Issue ggml-org#29, plus a bonus stack overflow fix found during testing.
kernel_set_rows_turbohardcoded turbo3 packing for turbo4 — split into separatekernel_set_rows_turbo3andkernel_set_rows_turbo4kernels. turbo4 now correctly does 3-bit PolarQuant + QJL residual correction.n_groups = nk0 / blocks_per_groupsilently dropped tail blocks for non-128-aligned head dims. Added ceiling division with tail-group bounds checking, andGGML_ASSERTin WHT dispatch.TURBO_Dwas semantically coupled toQK_TURBO4— replaced withTURBO_ROT_DIM(=QK_TURBO3_GROUP) and addedstatic_assertguard.turbo_init_rotation()allocated a 64KBfloat G[128*128]on the stack, causing segfault on llama.cpp worker threads. Eliminated by generating directly into the staticturbo_rotation[]array.Test plan
gcc -fsyntax-only)SET_ROWShas no turbo dispatch — turbo types only work on Metal/CUDA backendsAmpere benchmark data (spiritbuun's CUDA fork, RTX 3080 Ti 12GB)
First published Ampere numbers for TurboQuant. Qwen3 8B Q4_K_M:
Note: spiritbuun's fork needed PR #1's pool size fix (
+2inllama-kv-cache.cpp) to avoid crash on turbo cache init.Closes ggml-org#29
🤖 Generated with Claude Code