Set num_threads for torch nn models#1535

Merged
Innixma merged 1 commit into autogluon:master from yinweisu:fix_NN_num_cpu
Feb 7, 2022
Conversation

@yinweisu
Contributor

@yinweisu yinweisu commented Feb 4, 2022

Issue #, if available:
#1519

Description of changes:
Ensures that TorchNN and FastAI models use the correct number of threads. This addresses issues where other models modify the global thread setting.
We only updated this logic for fit, because experiments showed that the overhead of setting the thread count is not worth it during inference.
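The fix can be sketched as follows. The wrapper name `fit_with_pinned_threads` and the `num_cpus` parameter are illustrative, not AutoGluon's actual internals; the real call is PyTorch's `torch.set_num_threads`, which pins the intra-op thread count before training begins:

```python
import torch


def fit_with_pinned_threads(fit_fn, num_cpus, *args, **kwargs):
    # Another model (e.g. LightGBM or XGBoost) may have changed the
    # process-global intra-op thread setting. Re-pin it to the number
    # of CPUs allotted to this model before fitting.
    torch.set_num_threads(num_cpus)
    return fit_fn(*args, **kwargs)
```

Because `torch.set_num_threads` mutates process-global state, calling it once at the start of fit is cheap; per-call resets during inference were found not to be worth the overhead.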

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Collaborator

@Innixma Innixma left a comment

Thanks for looking into this! Will plan to benchmark to confirm the fix.

Collaborator

@Innixma Innixma left a comment

LGTM! On average this speeds up medium quality by 58% (it likely also speeds up non-Ray-based bagging). No drop in performance observed.

Regarding the CI failure: unsure why it's happening; will merge and see if it impacts mainline.

@Innixma Innixma merged commit 965301b into autogluon:master Feb 7, 2022
@Innixma Innixma linked an issue Feb 7, 2022 that may be closed by this pull request
Development

Successfully merging this pull request may close these issues.

BUG: LightGBM and XGBoost slow down FastAI and TorchNN on Linux

2 participants