UPSTREAM PR #18151: ggml-hexagon: gelu optimization #609
Mirrored from ggml-org/llama.cpp#18151
Following the discussion regarding the refactor idea here, I have re-evaluated the approach. Upon reviewing the current implementations for activation, unary, and binary functions, I observed that the existing code heavily relies on L2 prefetching while under-utilizing the VTCM and DMA.
While L2 prefetching offers some benefits, it is limited by two main factors:
This PR optimizes the code (using GELU as the initial implementation) by shifting the workload to make heavy use of the VTCM and DMA, thereby freeing up the L2 cache for instruction and data traffic.
Optimization Strategy
Instead of relying on L2 prefetching, this implementation employs a DMA ping-pong (double) buffering approach: while the compute kernel processes the chunk resident in one VTCM buffer, the DMA engine fetches the next chunk into the other, as sketched below.
This overlaps computation with memory transfer and results in significantly higher throughput.
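A minimal sketch of the ping-pong idea follows. The helpers `dma_start_read`, `dma_start_write`, `dma_wait`, `dma_wait_all`, and the per-chunk kernel `gelu_f32` are hypothetical placeholders chosen for illustration; they are not the actual ggml-hexagon or Hexagon SDK names, and the chunk size is illustrative.

```c
#include <stddef.h>

/* Hypothetical helpers, for illustration only (not the real API):
 * dma_start_* queue an asynchronous DDR<->VTCM transfer,
 * dma_wait() blocks until the transfer targeting that buffer completes. */
void dma_start_read (void *vtcm_dst, const void *ddr_src, size_t bytes);
void dma_start_write(void *ddr_dst, const void *vtcm_src, size_t bytes);
void dma_wait(void *vtcm_buf);
void dma_wait_all(void);
void gelu_f32(const float *src, float *dst, size_t n);  /* per-chunk kernel */

enum { CHUNK_BYTES = 4096 };  /* illustrative VTCM chunk size */

static void gelu_pingpong(const float *ddr_src, float *ddr_dst, size_t n,
                          float *vtcm_in[2], float *vtcm_out[2]) {
    const size_t chunk   = CHUNK_BYTES / sizeof(float);
    const size_t nchunks = (n + chunk - 1) / chunk;

    /* prime the pipeline: fetch chunk 0 into the "ping" buffer */
    size_t first = n < chunk ? n : chunk;
    dma_start_read(vtcm_in[0], ddr_src, first * sizeof(float));

    for (size_t i = 0; i < nchunks; i++) {
        const int cur = (int)(i & 1), nxt = cur ^ 1;

        /* kick off the next fetch while the current chunk is processed */
        if (i + 1 < nchunks) {
            size_t next_cnt = n - (i + 1) * chunk;
            if (next_cnt > chunk) next_cnt = chunk;
            dma_start_read(vtcm_in[nxt], ddr_src + (i + 1) * chunk,
                           next_cnt * sizeof(float));
        }

        dma_wait(vtcm_in[cur]);                  /* chunk i is now in VTCM */
        if (i >= 2) dma_wait(vtcm_out[cur]);     /* its previous write-back is done */

        size_t cnt = n - i * chunk;
        if (cnt > chunk) cnt = chunk;
        gelu_f32(vtcm_in[cur], vtcm_out[cur], cnt);  /* compute on VTCM data */

        /* write the result back to DDR, also asynchronously */
        dma_start_write(ddr_dst + i * chunk, vtcm_out[cur], cnt * sizeof(float));
    }
    dma_wait_all();  /* drain outstanding write-backs */
}
```

With two input and two output VTCM buffers, the DMA engine is kept busy fetching chunk i+1 and draining chunk i-1 while the vector units work on chunk i.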
Performance Benchmarks
The performance improvements are significant. Below is a comparison between the existing implementation and the new DMA/VTCM approach:
NOTE: I used GELU as an example, but this approach can easily be extended to other operations.
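For context, here is a scalar reference of the op used as the example. ggml's GELU uses the standard tanh approximation; the HVX path presumably evaluates the same function, just vectorized over VTCM-resident chunks.

```c
#include <math.h>

/* Scalar reference of the tanh-approximation GELU:
 * gelu(x) = 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))) */
static inline float gelu_ref_f32(float x) {
    const float sqrt_2_over_pi = 0.7978845608028654f;
    const float coef_a         = 0.044715f;
    return 0.5f * x * (1.0f + tanhf(sqrt_2_over_pi * (x + coef_a * x * x * x)));
}
```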
Unaligned Load Resolution:
This approach also inherently solves the unaligned-load issues encountered previously. Since data is fetched from DDR via DMA, the DMA engine stores it at aligned addresses within the VTCM, even if the source data in DDR is unaligned.
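A short sketch of that point, using the same hypothetical `dma_start_read`/`dma_wait` helpers as above plus a hypothetical VTCM allocator (names are placeholders): because the VTCM destination is allocated at a vector-aligned boundary, the compute kernel only ever issues aligned HVX loads, regardless of how the DDR source is aligned.

```c
#include <stddef.h>
#include <stdint.h>

#define HVX_VEC_BYTES 128  /* HVX vector width in bytes */

/* Hypothetical helpers, illustration only (not the real API). */
void *vtcm_alloc_aligned(size_t bytes, size_t align);
void  dma_start_read(void *vtcm_dst, const void *ddr_src, size_t bytes);
void  dma_wait(void *vtcm_buf);

/* Fetch one chunk from an arbitrarily aligned DDR address into a 128-byte
 * aligned VTCM buffer. The DMA engine absorbs the source misalignment, so
 * the kernel needs no separate unaligned-load path. */
const uint8_t *fetch_chunk_aligned(const uint8_t *ddr_src, size_t bytes) {
    uint8_t *vtcm_dst = vtcm_alloc_aligned(bytes, HVX_VEC_BYTES);
    dma_start_read(vtcm_dst, ddr_src, bytes);
    dma_wait(vtcm_dst);
    return vtcm_dst;
}
```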
@max-krasnyansky