
Fix LPFormer attention dimensions #10680

Open
BWAAEEEK wants to merge 2 commits into pyg-team:master from BWAAEEEK:fix-lpformer-attention-dimensions

Conversation


@BWAAEEEK BWAAEEEK commented Apr 29, 2026

This PR fixes the LPFormer attention layer sizes for multi-layer and multi-head configurations.

The original implementation referenced an undefined self.num_layers while constructing the transformer layers. Fixing that reference alone was not enough: multi-layer configurations still produced mismatched pairwise feature dimensions, because the final attention layer always emitted 2 * hidden_channels features.

In addition, the multi-head attention output normalization was sized with out_channels instead of num_heads * out_channels.
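
The fix comes down to dimension bookkeeping across the stacked attention layers. The snippet below is a minimal sketch of that pattern, not the actual LPFormer code: the GATv2Conv-based stack and the class name are assumptions used purely for illustration, while hidden_channels, num_transformer_layers, and num_heads follow the names used above. With concatenated heads, each layer's input size and its LayerNorm must include the num_heads factor.

```python
# Minimal sketch of the corrected dimension bookkeeping; this is NOT the
# actual LPFormer code. The GATv2Conv-based stack and the class name are
# assumptions used purely for illustration.
import torch
from torch import nn

from torch_geometric.nn import GATv2Conv


class PairwiseAttentionStack(nn.Module):
    def __init__(self, hidden_channels: int, num_transformer_layers: int,
                 num_heads: int):
        super().__init__()
        self.layers = nn.ModuleList()
        self.norms = nn.ModuleList()
        in_channels = hidden_channels
        for _ in range(num_transformer_layers):
            # With concatenated heads, each layer emits
            # num_heads * hidden_channels features, so both the next layer's
            # input size and the LayerNorm size must include the head factor.
            self.layers.append(
                GATv2Conv(in_channels, hidden_channels, heads=num_heads))
            self.norms.append(nn.LayerNorm(num_heads * hidden_channels))
            in_channels = num_heads * hidden_channels

    def forward(self, x: torch.Tensor,
                edge_index: torch.Tensor) -> torch.Tensor:
        for conv, norm in zip(self.layers, self.norms):
            x = norm(conv(x, edge_index))
        return x
```

With this bookkeeping, the final output width is num_heads * hidden_channels for any layer count, so the downstream pairwise feature construction sees a consistent size.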

Change Summary

This PR changes how the LPFormer attention layers are constructed so that their input and output dimensions match the pairwise (edge) feature dimensions, and adds regression coverage for num_transformer_layers in {1, 2} and num_heads in {1, 2}; a schematic version of that parametrization is sketched below.

Tested with:

pytest test/nn/models/test_lpformer.py
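
The test below is a schematic stand-in for that parametrization, not the contents of test/nn/models/test_lpformer.py: it drives the same GATv2Conv-based stand-in stack as the sketch above instead of LPFormer itself, so the dimension check stays self-contained.

```python
# Schematic version of the regression coverage described above; the actual
# tests in test/nn/models/test_lpformer.py exercise LPFormer itself rather
# than this stand-in stack.
import pytest
import torch
from torch import nn

from torch_geometric.nn import GATv2Conv


@pytest.mark.parametrize('num_transformer_layers', [1, 2])
@pytest.mark.parametrize('num_heads', [1, 2])
def test_attention_output_dimensions(num_transformer_layers, num_heads):
    hidden_channels = 16
    x = torch.randn(8, hidden_channels)
    edge_index = torch.randint(0, 8, (2, 32))

    # Build a stand-in attention stack with the corrected bookkeeping: each
    # layer's input and LayerNorm size track the concatenated head dimension
    # of the previous layer.
    in_channels = hidden_channels
    for _ in range(num_transformer_layers):
        conv = GATv2Conv(in_channels, hidden_channels, heads=num_heads)
        norm = nn.LayerNorm(num_heads * hidden_channels)
        x = norm(conv(x, edge_index))
        in_channels = num_heads * hidden_channels

    # Regardless of layer or head count, the output width matches what the
    # downstream pairwise feature construction expects.
    assert x.size(-1) == num_heads * hidden_channels
```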
