
Add sm90 guard to fence.acquire #2535

Merged
yongwww merged 1 commit into flashinfer-ai:main from jhalabi-nv:jhalabi/sm90_fix on Feb 12, 2026

Conversation

@jhalabi-nv (Contributor) commented Feb 10, 2026

📌 Description

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

The sm90 guard is needed to build flashinfer-jit-cache with later versions of the CUDA 13.x toolkit.

Summary by CodeRabbit

  • Bug Fixes
    • Improved memory synchronization in Mixture of Experts (MoE) communication kernels to ensure visibility of model state across execution steps, reducing rare correctness and stability issues during inference.
    • Added architecture-aware synchronization so MoE workloads behave consistently and reliably across different GPU generations, improving performance predictability.

@gemini-code-assist (Contributor)

Summary of Changes

Hello @jhalabi-nv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a compatibility issue with newer CUDA 13.x toolkits by modifying a CUDA kernel to conditionally use the appropriate memory fence instruction. The change ensures that flashinfer-jit-cache can be successfully built across different CUDA architectures, specifically by guarding the fence.acquire.sys instruction for sm90 and newer, while falling back to __threadfence_system() for older architectures.

Highlights

  • CUDA Architecture Compatibility: Introduced a conditional compilation guard for the fence.acquire.sys instruction, targeting sm90 (compute capability 9.0) and newer.
  • CUDA Toolkit Support: Ensured compatibility with later versions of the CUDA 13.x toolkit, specifically for building flashinfer-jit-cache.
  • Memory Fence Implementation: Replaced the unconditional asm volatile("fence.acquire.sys;"); with __threadfence_system(); for CUDA architectures older than sm90.
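
The guard described above follows the standard `__CUDA_ARCH__` conditional-compilation pattern. Below is a minimal sketch of that pattern; the kernel name, parameter, and surrounding code are illustrative placeholders, not the actual FlashInfer source:

```cuda
#include <cuda_runtime.h>

// Illustrative kernel: after an in-kernel synchronization, make writes
// from other SMs (and peer devices) visible before reading shared state.
__global__ void combineKernelSketch(int* completion_flags) {
  // ... wait on completion_flags, __syncthreads(), etc. ...

#if defined(__CUDA_ARCH__) && (__CUDA_ARCH__ >= 900)
  // sm90+: emit the acquire-only system-scope fence directly in PTX.
  asm volatile("fence.acquire.sys;");
#else
  // Pre-sm90 fallback: a full system-scope fence, which is stronger
  // than acquire-only but supported on all architectures.
  __threadfence_system();
#endif

  // ... read data produced on other SMs / devices ...
}
```

Because `__threadfence_system()` provides at least the ordering of an acquire fence at system scope, the fallback remains correct on older parts; the guard only avoids emitting a PTX instruction that newer toolkits reject for pre-sm90 compilation targets.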


Changelog
  • csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
    • Added a preprocessor directive to conditionally compile the fence.acquire.sys assembly instruction only for CUDA architectures with compute capability 9.0 (sm90) or higher.
    • Implemented __threadfence_system() as a fallback for CUDA architectures older than sm90 to ensure proper memory synchronization.

coderabbitai bot (Contributor) commented Feb 10, 2026

📝 Walkthrough

Inserted an architecture-conditional system memory fence into moeA2ACombineKernel: on SM90+ emit fence.acquire.sys, otherwise call __threadfence_system, placed after the in-kernel synchronization to provide cross-SM memory visibility.

Changes

Cohort: MOE A2A Kernel Synchronization
File(s): csrc/nv_internal/tensorrt_llm/kernels/communicationKernels/moeAlltoAllKernels.cu
Summary: Add an architecture-guarded system memory fence after the in-kernel sync in moeA2ACombineKernel: use fence.acquire.sys for compute capability >= 9.0, __threadfence_system otherwise.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

  • Add sm90 guard to fence ptx #2439 — Adds SM90-guarded system-release fence in the same CUDA source file for moeA2ADispatchKernel, addressing similar visibility guarantees.

Suggested labels

run-ci

Suggested reviewers

  • djmmoss
  • wenscarl
  • nv-yunzheq
  • yongwww

Poem

🐰 I hopped through kernels, quiet and spry,
Placed a tiny fence beneath the sky,
SM90 hums, older chips comply,
Memory waves now pass on by,
Hop, sync, and watch the bits fly ✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 warning

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Title check ✅ — The title clearly and specifically refers to adding an sm90 architecture guard to fence.acquire, which matches the primary change described in the summary.
  • Description check ✅ — The description follows the template structure with pre-commit checks and tests marked complete, and includes reviewer notes explaining the purpose of the change.



No actionable comments were generated in the recent review. 🎉


@gemini-code-assist bot (Contributor) left a comment
Code Review

This pull request introduces a necessary compatibility fix by guarding the fence.acquire.sys instruction with a check for SM90+ architectures in moeA2ACombineKernel. For older architectures, it correctly falls back to __threadfence_system(). This change ensures that the code can be compiled with newer CUDA toolkits for a wider range of GPU architectures. The implementation is correct and follows existing patterns in the codebase for handling architecture-specific features.

While the change itself is correct, I'd like to point out that moeA2ADispatchKernel might use a similar synchronization pattern, as both MoeA2ADispatchParams and MoeA2ACombineParams contain completion_flags. It would be worth verifying if moeA2ADispatchKernel also contains an unguarded fence.acquire.sys that needs a similar fix to ensure complete compatibility.

@aleozlx (Collaborator) left a comment

same reason as #2439
lgtm

@yongwww (Member) commented Feb 11, 2026

@jhalabi-nv please rebase the PR onto the latest main to kick off CI.

@jhalabi-nv (Contributor, Author)

/bot run

@flashinfer-bot (Collaborator)

@jhalabi-nv is not authorized to trigger this CI job. cc: @yzh119, @sricketts, @yongwww

@jhalabi-nv (Contributor, Author)

@yongwww , can you trigger the CI for me? I've rebased on main

@yongwww (Member) commented Feb 12, 2026

/bot run

@flashinfer-bot (Collaborator)

GitLab MR !312 has been created, and the CI pipeline #43835611 is currently running. I'll report back once the pipeline job completes.

yongwww added the run-ci label on Feb 12, 2026
@yongwww (Member) commented Feb 12, 2026

> @yongwww , can you trigger the CI for me? I've rebased on main

triggered, and have added you to the allowed list.

@flashinfer-bot (Collaborator)

[FAILED] Pipeline #43835611: 14/20 passed

yongwww merged commit 579435f into flashinfer-ai:main on Feb 12, 2026
39 of 44 checks passed