UPSTREAM PR #19109: llama : disable Direct IO by default
LOCI Review / Performance Per Binary #1040
succeeded · Jan 26, 2026 in 0s
Performance varied across binaries; overall acceptable.
2 binaries improved · 12 binaries unchanged · 1 binary stable (within threshold) · 0 binaries degraded (beyond threshold)
| Binary | Δ % Response | Δ % Throughput | Performance (based on response time) |
|---|---|---|---|
| build.bin.libggml-base.so | 0 | 0 | unchanged |
| build.bin.libggml-cpu.so | 0 | 0 | unchanged |
| build.bin.libggml.so | 0 | 0 | unchanged |
| build.bin.libllama.so | -0.04 | 0.01 | improved |
| build.bin.libmtmd.so | 0 | 0 | unchanged |
| build.bin.llama-bench | 0 | 0 | unchanged |
| build.bin.llama-cvector-generator | 0.26 | -0.02 | stable |
| build.bin.llama-gemma3-cli | 0 | 0 | unchanged |
| build.bin.llama-gguf-split | 0 | 0 | unchanged |
| build.bin.llama-llava-cli | 0 | 0 | unchanged |
| build.bin.llama-minicpmv-cli | 0 | 0 | unchanged |
| build.bin.llama-quantize | 0 | 0 | unchanged |
| build.bin.llama-qwen2vl-cli | 0 | 0 | unchanged |
| build.bin.llama-tokenize | 0 | 0 | unchanged |
| build.bin.llama-tts | -0.23 | -0.13 | improved |
Performance threshold: 30%
Default configuration used.
Note: Performance status is evaluated from Δ % Response only; Δ % Throughput is shown for reference.
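For clarity, here is a minimal sketch of the status rule the table implies, assuming the label depends only on Δ % Response: exactly zero is "unchanged", any reduction in response time is "improved", an increase within the 30% threshold is "stable", and anything beyond it is "degraded". The function name and exact boundaries are illustrative assumptions, not the tool's actual implementation.

```python
# Hypothetical sketch of the per-binary status rule implied by this report.
# Assumption: status is derived from delta_response_pct (percent change in
# response time) alone; the 30% threshold separates "stable" from "degraded".

THRESHOLD_PCT = 30.0  # "Performance threshold: 30%" from the report footer


def classify(delta_response_pct: float) -> str:
    """Map a Δ % Response value to the status labels used in the table."""
    if delta_response_pct == 0.0:
        return "unchanged"   # e.g. build.bin.llama-bench: 0 -> unchanged
    if delta_response_pct < 0.0:
        return "improved"    # e.g. build.bin.llama-tts: -0.23 -> improved
    if delta_response_pct <= THRESHOLD_PCT:
        return "stable"      # e.g. llama-cvector-generator: 0.26 -> stable
    return "degraded"        # no binary crossed the threshold in this run


if __name__ == "__main__":
    # Spot-check against three rows from the table above.
    for name, delta in [("build.bin.libllama.so", -0.04),
                        ("build.bin.llama-cvector-generator", 0.26),
                        ("build.bin.llama-bench", 0.0)]:
        print(f"{name}: {classify(delta)}")
```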
Explore the complete analysis in the Version Insights.
Open the Pull Request linked to this check run.