
Conversation

Contributor

@ethanglaser ethanglaser commented Nov 18, 2025

Description

Replacement for #2764, but without modifications to internal dpctl tensor handling. Removes to-be-deprecated dpctl tensor usage in examples and tests, replacing it with dpnp arrays where appropriate.
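An illustrative sketch (not code from this PR) of the kind of replacement described: swapping `dpctl.tensor` array creation for its dpnp equivalent. The helper name and the fallback are hypothetical; the `sycl_queue` keyword follows the documented dpnp/dpctl array APIs.

```python
import numpy as np


def to_device_array(X, queue=None):
    """Return X as a device array, preferring dpnp over dpctl.tensor.

    Hypothetical helper: falls back to NumPy when no SYCL stack is
    installed, so the sketch stays runnable on plain CPU environments.
    """
    try:
        import dpnp  # replaces `import dpctl.tensor as dpt`

        # was: dpt.asarray(X, sycl_queue=queue)
        return dpnp.asarray(X, sycl_queue=queue)
    except ImportError:
        return np.asarray(X)
```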


Checklist:

Completeness and readability

  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with updates and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

@ethanglaser ethanglaser changed the title Dev/eglaser dcptl rm pt1 Initial dpctl tensor removal Nov 18, 2025
@ethanglaser
Contributor Author

/intelci: run

codecov bot commented Nov 20, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.

| Flag   | Coverage   | Δ         |
|--------|------------|-----------|
| azure  | 80.34% <ø> | -0.15% ⬇️ |
| github | 81.95% <ø> | ?         |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage   | Δ   |
|--------------------------|------------|-----|
| onedal/dummy/dummy.py    | 95.65% <ø> | (ø) |

... and 6 files with indirect coverage changes


# ==============================================================================

- # sklearnex IncrementalPCA example for GPU offloading with DPCtl usm ndarray:
+ # sklearnex IncrementalPCA example for GPU offloading with DPNP ndarray:
Contributor


Does it make sense to have examples that offload to GPU through DPNP arrays if there's already support for array API that can avoid transferring data back and forth with a one-liner change?
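A hedged sketch of the "one-liner" array API path mentioned above: with array API dispatch enabled, the estimator consumes a dpnp array directly, so data need not round-trip through the host. The import paths and the availability of `array_api_dispatch` in sklearnex's `config_context` are assumptions based on the scikit-learn/sklearnex documentation, not code from this PR.

```python
def fit_pca_via_array_api(X_device, n_components=2):
    """Fit PCA on a device array with array API dispatch enabled.

    Hypothetical sketch: X_device is assumed to be a dpnp array already
    resident on the target device, so no host transfer is needed.
    """
    from sklearnex import config_context          # assumed config wrapper
    from sklearnex.decomposition import PCA       # assumed import path

    with config_context(array_api_dispatch=True):  # the "one-liner" change
        return PCA(n_components=n_components).fit(X_device)
```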

@ethanglaser
Contributor Author

/intelci: run

@ethanglaser
Contributor Author

/intelci: run

# We create a GPU SyclQueue and then put the data into dpnp arrays using
# the queue. This allows the computation to run on the GPU.

queue = dpctl.SyclQueue("gpu")
Contributor

@Vika-F Vika-F Nov 25, 2025


Is it Ok to still have dpctl.SyclQueue after dpctl tensor removal?

Contributor


Yes, that's not going to be removed.

But since this is using a queue object, maybe it'd be better to offload to that instead of creating dpnp arrays on GPU that will then be moved to CPU and back again to GPU during the call.
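A hedged sketch of this suggestion: pass the SyclQueue to sklearnex via `target_offload` instead of materializing dpnp arrays on the GPU first. The import paths, and accepting a queue object in `target_offload`, are assumptions based on the sklearnex documentation, not code from this PR.

```python
def fit_incremental_pca_on_queue(X_host, n_components=2):
    """Offload IncrementalPCA to the device behind a SyclQueue.

    Hypothetical sketch: X_host stays a plain NumPy array and sklearnex
    handles the transfer, avoiding the CPU->GPU->CPU round-trip of
    pre-built dpnp arrays described in the comment above.
    """
    import dpctl
    from sklearnex import config_context
    from sklearnex.preview.decomposition import IncrementalPCA  # assumed path

    queue = dpctl.SyclQueue("gpu")
    with config_context(target_offload=queue):
        return IncrementalPCA(n_components=n_components).fit(X_host)
```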


3 participants