[WIP] Optimize TritonAttention with cache load #9778
yuan-luo wants to merge 3 commits into sgl-project:main
Conversation
Summary of Changes
Hello @yuan-luo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request focuses on optimizing the TritonAttention kernel by leveraging Triton's memory caching mechanism. The changes aim to improve performance by enabling efficient data loading and better utilization of GPU caches, resulting in notable speed improvements in attention computations.
Highlights
- Performance Enhancement: Introduced ".cg" cache modifier to tl.load operations within the Triton attention kernel, optimizing memory access and data loading for improved GPU cache utilization, leading to significant speedups (17-20% for Triton Attention without window_size).
- Kernel Configuration Adjustment: Increased the num_stages parameter from 1 to 2 in the extend_attention_fwd function, potentially enhancing pipeline efficiency for the Triton kernel.
- Test Infrastructure Update: Modified test/srt/test_swa_unittest.py to reflect changes in the SWARadixCache import path and updated the SWATokenToKVPoolAllocator constructor call.
Code Review
This pull request introduces a significant performance optimization to the Triton attention kernel by enabling caching for key/value loads from the buffer, which is well-supported by the provided benchmark results. The change to increase num_stages is also a good complementary optimization. My review includes suggestions to extend this caching strategy to other tensor loads within the kernel, which could potentially yield further performance gains.
```diff
  grid = (batch_size, head_num, triton.cdiv(max_len_extend, BLOCK_M))
- num_stages = 1
+ num_stages = 2
```
QQ why did you change the num_stages?
Increasing the kernel's pipeline stages reduces the effective time per iteration. With a larger num_stages, different parts of the window computation can proceed in parallel, increasing throughput: it gives Triton's software pipeliner more room to rearrange work, so loading the next column block can overlap with computing the current column block.
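As a rough illustration (a minimal standalone reduction kernel, not this PR's `_fwd_kernel`; all names and shapes here are made up), `num_stages` is a launch-time meta-parameter that controls how deeply Triton pipelines the loop:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def row_sum_kernel(x_ptr, out_ptr, n_cols, BLOCK: tl.constexpr):
    row = tl.program_id(0)
    acc = tl.zeros((BLOCK,), dtype=tl.float32)
    # With num_stages=2 the pipeliner can prefetch the next block of this
    # loop while the current block is still being accumulated.
    for start in range(0, n_cols, BLOCK):
        offs = start + tl.arange(0, BLOCK)
        acc += tl.load(x_ptr + row * n_cols + offs, mask=offs < n_cols, other=0.0)
    tl.store(out_ptr + row, tl.sum(acc, axis=0))


x = torch.randn(8, 4096, device="cuda")
out = torch.empty(8, device="cuda")
# num_stages is passed at launch time, not as a kernel argument.
row_sum_kernel[(x.shape[0],)](x, out, x.shape[1], BLOCK=256, num_stages=2)
```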
This related CI test failed: https://github.com/sgl-project/sglang/actions/runs/17392068796/job/49367795438?pr=9778#step:5:545
Motivation
In Triton, cache_modifier specifies the cache policy used when loading from memory. .cg ("cache at global level") keeps loaded data in the L2 cache while bypassing L1, which suits streaming reads with little short-term reuse. This PR applies it to the kernel's buffer loads so that the loaded data makes better use of the GPU's cache hierarchy, improving performance.
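As a minimal sketch (an illustrative copy kernel, not the actual attention kernel touched by this PR), this is how a Triton load opts into the .cg policy via cache_modifier:

```python
import torch
import triton
import triton.language as tl


@triton.jit
def copy_kernel(src_ptr, dst_ptr, n_elements, BLOCK: tl.constexpr):
    pid = tl.program_id(0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    # ".cg" caches this load at the global (L2) level and bypasses L1,
    # which fits streaming KV-buffer reads that have little reuse.
    x = tl.load(src_ptr + offs, mask=mask, cache_modifier=".cg")
    tl.store(dst_ptr + offs, x, mask=mask)


src = torch.randn(1 << 20, device="cuda")
dst = torch.empty_like(src)
grid = (triton.cdiv(src.numel(), 1024),)
copy_kernel[grid](src, dst, src.numel(), BLOCK=1024)
```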
As a side effect, this PR also resolves a unit-test regression in the Triton attention SWA test that was introduced by an earlier code refactor.
The benchmark results show that Triton Attention without window_size set (WINDOW_SIZE=-1 in the table below) gains a 17-20% speedup. With 4k input and 1.5k output, end-to-end TTFT is reduced by 3.6%.
Modifications
Accuracy Tests
Benchmarking and Profiling
With 4k input and 1.5k output, the Triton backend's end-to-end TTFT is reduced by 3.6%.
$ python3 -m sglang.bench_serving --backend sglang --dataset-name random --num-prompts 100 --random-input-len 4000 --random-output-len 1500 --random-range-ratio 1
Checklist