
[BUG] Achieve ROCKET GPU kernel and feature parity using CPU kernel generation#3227

Open
Adityakushwaha2006 wants to merge 3 commits into aeon-toolkit:main from Adityakushwaha2006:GPU/rocket-parity-fix

Conversation

@Adityakushwaha2006
Contributor

Reference Issues/PRs

Related to #1248

What does this implement/fix? Explain your changes.

This PR implements kernel and feature parity between CPU and GPU ROCKET implementations by reusing the CPU's kernel generation function while maintaining GPU acceleration for transform operations.

Changes:

  • Import CPU's kernel generation function for identical kernel parameters
  • Add conversion method to transform sparse channel indexing to dense format compatible with TensorFlow convolutions
  • Update CPU-GPU parity test to use decimal=4 threshold (previously xfail, now passes)
  • Remove GPU-specific kernel generation parameters, as they are now derived from the CPU logic

Results:

  • Kernel parity: 100% exact match (identical weights, biases, dilations, channel selections)
  • Feature parity: features match within 1e-4 (0.0001) precision; most datasets match even more closely, on the order of 1e-5 to 1e-7
  • Tested on both univariate and multivariate datasets

Key insight:
The sparse-to-dense conversion places the CPU's selected-channel weights at the correct positions in a dense kernel, with zeros for the non-selected channels. Since zero weights contribute nothing to a convolution, this achieves mathematical equivalence while using standard TensorFlow operations.
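The conversion can be sketched as follows. This is an illustrative reconstruction, not aeon's actual code: the function name and signature are hypothetical. ROCKET's CPU kernels store weights only for a random subset of channels plus the indices of those channels; a dense TensorFlow-style kernel needs a weight for every input channel, so non-selected channels are filled with zeros.

```python
import numpy as np

def sparse_to_dense_kernel(weights, channel_indices, n_channels):
    """Place per-selected-channel weights into a dense (length, n_channels) kernel.

    Hypothetical helper illustrating the idea; not aeon's actual API.
    """
    length = weights.shape[1]
    dense = np.zeros((length, n_channels), dtype=np.float32)
    for row, ch in zip(weights, channel_indices):
        dense[:, ch] = row  # weights for one selected channel
    return dense

# One kernel of length 3 that selects only channel 2 of a 4-channel series.
weights = np.array([[0.5, -1.0, 0.5]], dtype=np.float32)
dense = sparse_to_dense_kernel(weights, [2], n_channels=4)

# Zero weights contribute nothing to the convolution sum, so the dense
# kernel gives the same output as convolving the selected channel alone.
X = np.arange(4 * 6, dtype=np.float32).reshape(4, 6)  # (channels, timepoints)
t = 1
dense_out = sum(X[c, t:t + 3] @ dense[:, c] for c in range(4))
sparse_out = X[2, t:t + 3] @ weights[0]
assert np.isclose(dense_out, sparse_out)
```

The final assertion is the whole point: summing over all channels of the dense kernel reduces to the sparse computation, because every non-selected column is zero.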

Does your contribution introduce a new dependency? If yes, which one?

No.

Any other comments?

None.

PR checklist

For all contributions
  • I've added myself to the list of contributors. Alternatively, you can use the @all-contributors bot to do this for you after the PR has been merged.
  • The PR title starts with either [ENH], [MNT], [DOC], [BUG], [REF], [DEP] or [GOV] indicating whether the PR topic is related to enhancement, maintenance, documentation, bugs, refactoring, deprecation or governance.
For new estimators and functions
  • I've added the estimator/function to the online API documentation.
  • (OPTIONAL) I've added myself as a __maintainer__ at the top of relevant files and want to be contacted regarding its maintenance. Unmaintained files may be removed. This is for the full file, and you should not add yourself if you are just making minor changes or do not want to help maintain its contents.
For developers with write access
  • (OPTIONAL) I've updated aeon's CODEOWNERS to receive notifications about future changes to these files.

…l parity along with feature divergence <1e-4
@aeon-actions-bot aeon-actions-bot bot added bug Something isn't working transformations Transformations package labels Jan 8, 2026
@aeon-actions-bot
Contributor

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ bug ].
I have added the following labels to this PR based on the changes made: [ transformations ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Discord channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • Run pre-commit checks for all files
  • Run mypy typecheck tests
  • Run all pytest tests and configurations
  • Run all notebook example tests
  • Run numba-disabled codecov tests
  • Stop automatic pre-commit fixes (always disabled for drafts)
  • Disable numba cache loading
  • Regenerate expected results for testing
  • Push an empty commit to re-run CI checks

Member

@MatthewMiddlehurst MatthewMiddlehurst left a comment


Good in principle that it is now equal but not sure about some bits.

This seems a bit hacky with the _convert_cpu_kernels_to_gpu_format function rather than just incorporating the output. Removal of parameters would require deprecation most likely. Good if @hadifawaz1999 could review.

Comment on lines 140 to +159
@@ -152,4 +155,5 @@ def test_rocket_cpu_gpu(n_channels):

X_transform_cpu = rocket_cpu.transform(X)
X_transform_gpu = rocket_gpu.transform(X)
-    assert_array_almost_equal(X_transform_cpu, X_transform_gpu, decimal=8)
+    # Set decimal threshold here
+    assert_array_almost_equal(X_transform_cpu, X_transform_gpu, decimal=4)
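For context on what the threshold change means: numpy's assert_array_almost_equal with decimal=d passes when abs(actual - desired) < 1.5 * 10**-d, so decimal=4 tolerates elementwise differences just under 1.5e-4, matching the reported feature divergence. A small self-contained check:

```python
import numpy as np
from numpy.testing import assert_array_almost_equal

# decimal=d passes when abs(actual - desired) < 1.5 * 10**-d,
# so decimal=4 accepts differences just under 1.5e-4.
a = np.array([1.0, 2.0])
b = a + 9e-5  # difference well within the decimal=4 tolerance
assert_array_almost_equal(a, b, decimal=4)  # passes

# The original decimal=8 threshold rejects the same pair.
raised = False
try:
    assert_array_almost_equal(a, b, decimal=8)
except AssertionError:
    raised = True
assert raised
```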
Member


Why the changes here? Not against the decimal changes but interested in hearing why. Docs changes seem unnecessary.

Comment on lines +40 to +44
Notes
-----
This GPU implementation uses the CPU's kernel generation logic
(from `_rocket._generate_kernels`) to ensure exact kernel parity
when using the same random seed.
Member


Would a user need to know this? I can see noting a difference in results but this seems a bit unnecessary.

Comment on lines +221 to +222
# Transpose and convert to float32 for TensorFlow compatibility
X = X.transpose(0, 2, 1).astype(np.float32)
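A likely rationale for the quoted lines (an assumption on my part, not stated in the thread): aeon stores collections channels-first as (n_cases, n_channels, n_timepoints), while TensorFlow's 1D convolutions expect channels-last (batch, steps, channels) and compute in float32 by default. A minimal numpy illustration of the shape and dtype change:

```python
import numpy as np

# aeon collections are channels-first: (n_cases, n_channels, n_timepoints).
X = np.zeros((8, 3, 100), dtype=np.float64)

# TensorFlow conv1d expects channels-last (batch, steps, channels) and
# float32, hence the transpose + cast in the quoted line.
X_tf = X.transpose(0, 2, 1).astype(np.float32)
assert X_tf.shape == (8, 100, 3)
assert X_tf.dtype == np.float32
```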
Member


why?


Labels

bug Something isn't working transformations Transformations package


3 participants