option to re-use cv splits during tuning #545
Merged
nasaul merged 4 commits into Nixtla:main on Jan 8, 2026
Conversation
…each tuning trial
Contributor
Author
@nasaul Here is an implementation that re-uses the CV splits to increase tuning speed.
nasaul reviewed on Jan 5, 2026
At the moment, the CV splits are created anew in each tuning trial, which can be slow for large datasets.
This PR adds an option to reuse CV splits across tuning trials.
When enabled, the splits are computed once in auto.py and passed into the optimization objective, where they are reused for all tuning trials. The default behavior is unchanged: if the option is disabled, CV splits are still generated inside each trial as before.
The original behavior is intentionally kept because this introduces a RAM vs CPU trade-off. Reusing splits can significantly reduce runtime when running many trials, but it keeps all train/validation splits in memory for the duration of the tuning run, which may increase peak memory usage on very large datasets.
To ensure correctness, a test was added using a deterministic model, verifying that predictions are identical when reusing CV splits versus recomputing them each trial.
When tested on the example below, I saw a 1.14x speed-up when re-using the CV splits compared to the current implementation:
Description
Solves #538
Checklist: