
Conversation


@fegin fegin commented Dec 5, 2025

Stack from ghstack (oldest at bottom):

PyTorch can now support torch.compile inside the SAC region even if torch.compile is not used to wrap SAC. This PR removes the workaround that was previously needed to make torch.compile work with FlexAttention.
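
As a hedged illustration (not code from this PR), the pattern this enables looks roughly like the following: flex_attention is wrapped in torch.compile inside a selective activation checkpointing (SAC) region, while the checkpoint call itself stays uncompiled. The policy function and tensor shapes are illustrative assumptions.

import torch
from torch.nn.attention.flex_attention import flex_attention
from torch.utils.checkpoint import (
    checkpoint,
    create_selective_checkpoint_contexts,
    CheckpointPolicy,
)

# FlexAttention is compiled; the surrounding SAC region is not.
compiled_flex = torch.compile(flex_attention)

def policy_fn(ctx, op, *args, **kwargs):
    # Illustrative SAC policy: recompute everything during backward.
    return CheckpointPolicy.PREFER_RECOMPUTE

def attention_block(q, k, v):
    return compiled_flex(q, k, v)

q = k = v = torch.randn(2, 8, 128, 64, device="cuda", requires_grad=True)
out = checkpoint(
    attention_block,
    q, k, v,
    use_reentrant=False,
    context_fn=lambda: create_selective_checkpoint_contexts(policy_fn),
)
out.sum().backward()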

fegin added 2 commits December 5, 2025 12:42
[ghstack-poisoned]
[ghstack-poisoned]
fegin added a commit that referenced this pull request Dec 5, 2025
PyTorch can now support torch.compile inside the SAC region even if torch.compile is not used to wrap SAC.


ghstack-source-id: d3ab0c6
Pull-Request: #2118
@meta-cla meta-cla bot added the CLA Signed label Dec 5, 2025
@fegin fegin requested a review from soulitzer December 5, 2025 20:49
@tianyu-l tianyu-l left a comment

LGTM. I wonder if you have verified it "works", i.e. it doesn't invalidate SAC / compile anymore?

[activation_checkpoint]
mode = "selective" # ["none", "selective", "full"]
- selective_ac_option = '2' # 'int' = ac every positive int layer or 'op', ac based on ops policy
+ selective_ac_option = 'op' # 'int' = ac every positive int layer or 'op', ac based on ops policy
Contributor

Is this intended? I think it's OK to change this but want to confirm.

Contributor Author

Oops, I accidentally committed this.


fegin commented Dec 5, 2025

Yes, I verified it works with llama3 and llama4. FlexAttention is compiled within SAC.


@soulitzer soulitzer left a comment


Nice!

options={
"wrap_inductor_compiled_regions": True,
"max_autotune": True,
"coordinate_descent_tuning": True,
Contributor


Noob question: is this coordinate_descent_tuning also part of the "mode=max-autotune-no-cudagraphs" -> "options={...}" change?

Contributor


Forgot to ask: what's the context of this change to "options={}"?


@fegin fegin Dec 8, 2025


Yes, https://github.com/pytorch/pytorch/blob/cf7bab873fa55051e1806f8db0c3f90dea452ac5/torch/_inductor/__init__.py#L361

We cannot pass mode and options at the same time; torch.compile forbids it. According to the code linked above, max-autotune-no-cudagraphs is equivalent to these three options.
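
To make that concrete, here is a small sketch. The expansion of "max-autotune-no-cudagraphs" into max_autotune and coordinate_descent_tuning follows the linked torch/_inductor/__init__.py; wrap_inductor_compiled_regions comes from this PR's diff and assumes a recent PyTorch nightly. The printed error text is indicative only.

import torch

def fn(x):
    return torch.nn.functional.relu(x) * 2

# torch.compile rejects passing mode and options together.
try:
    torch.compile(fn, mode="max-autotune-no-cudagraphs", options={"max_autotune": True})
except RuntimeError:
    print("mode and options cannot both be passed to torch.compile")

# Spelling out the options that the mode maps to, plus the extra flag from
# this PR's diff, keeps the same behavior while allowing additional options.
compiled_fn = torch.compile(
    fn,
    options={
        "wrap_inductor_compiled_regions": True,  # from this PR's diff
        "max_autotune": True,
        "coordinate_descent_tuning": True,
    },
)
print(compiled_fn(torch.randn(4)))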

fegin added a commit that referenced this pull request Dec 8, 2025
PyTorch can now support torch.compile inside the SAC region even if torch.compile is not used to wrap SAC.


ghstack-source-id: d3ab0c6
Pull-Request: #2118
fegin added a commit that referenced this pull request Dec 8, 2025
PyTorch can now support torch.compile inside the SAC region even if torch.compile is not used to wrap SAC.

ghstack-source-id: d3ab0c6
Pull-Request: #2118
fegin added 2 commits December 8, 2025 12:06
[ghstack-poisoned]
[ghstack-poisoned]
fegin added a commit that referenced this pull request Dec 8, 2025
PyTorch can now support torch.compile inside the SAC region even if torch.compile is not used to wrap SAC.

ghstack-source-id: 3058de9
Pull-Request: #2118
@fegin fegin changed the base branch from gh/fegin/49/base to main December 8, 2025 21:02
@fegin fegin merged commit 575674a into main Dec 8, 2025
9 checks passed
@tianyu-l tianyu-l deleted the gh/fegin/49/head branch December 8, 2025 21:15