Replies: 1 comment
If the bottleneck is the training then it should help. If you're not running many training iterations, the bottleneck may instead be the data processing.
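To tell which side dominates, one can time the preprocessing and the fit separately. A minimal sketch (the two timed blocks below are stand-ins, not the real pipeline; swap in your actual feature preparation and `AutoMLForecast.fit(...)` calls):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, timings):
    """Record the wall-clock duration of a block under `label`."""
    start = time.perf_counter()
    yield
    timings[label] = time.perf_counter() - start

timings = {}

# Stand-in for data processing (replace with your feature engineering).
with timed("data_processing", timings):
    data = [x * 2 for x in range(100_000)]

# Stand-in for training (replace with auto_mlf.fit(...)).
with timed("training", timings):
    total = sum(data)

# Whichever label has the larger duration is the bottleneck.
bottleneck = max(timings, key=timings.get)
print(f"bottleneck: {bottleneck}, timings: {timings}")
```

If `data_processing` dominates, a faster GPU trainer won't change much; GPU gains show up when the `training` block is where the time goes.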
I'm training two models (AutoXGBoost and AutoLightGBM) with AutoMLForecast and want to use the GPU. I'm aware both models use Optuna for hyperparameter tuning. If I pass the following parameters:

For AutoXGBoost:

'tree_method': 'gpu_hist'
'predictor': 'gpu_predictor'
... rest of the params

For AutoLightGBM:

'device': 'gpu'
... rest of the params

Will this speed up the process and utilize the GPU, could it cause issues in the backend, or would it make no difference?
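For reference, the parameter dicts above might be assembled like this. How they are injected into the AutoXGBoost/AutoLightGBM Optuna search space depends on your mlforecast version, so the commented wiring at the bottom is an assumption to verify against the mlforecast docs, not a confirmed API:

```python
# GPU parameter dicts as described in the question.
# Note: XGBoost 2.0+ deprecates 'gpu_hist'/'gpu_predictor' in favor of
# tree_method='hist' together with device='cuda'.
xgb_gpu_params = {
    "tree_method": "gpu_hist",
    "predictor": "gpu_predictor",
    # ... rest of the params
}

# LightGBM only honors this with a GPU-enabled build of the library.
lgb_gpu_params = {
    "device": "gpu",
    # ... rest of the params
}

# Hypothetical wiring (assumed API -- check the mlforecast docs):
# from mlforecast.auto import AutoMLForecast, AutoXGBoost, AutoLightGBM
# auto_mlf = AutoMLForecast(
#     models={
#         "xgb": AutoXGBoost(config=lambda trial: xgb_gpu_params),
#         "lgb": AutoLightGBM(config=lambda trial: lgb_gpu_params),
#     },
#     freq="D",
# )
```

Merging these into whatever dict the Optuna config callable returns keeps the GPU settings fixed while the tuner searches over the remaining hyperparameters.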