Conversation

@gesen2egee (Contributor) commented Mar 10, 2024

With the current implementation, the training state is saved as frequently as the LoRA weights themselves.

However, because each state file is large, managing disk usage remains cumbersome even when save_last_n_XX is used to delete old states. This is especially true for users who save many step versions (roughly every 100-200 steps) in order to pick the optimal training outcome.

This minor modification introduces --save_state_on_train_end, which saves only the final state at the end of training. This makes it easy to continue training under-trained models.

Compared to resuming from the saved weights alone, resuming from the state has been observed to produce a more stable descent curve that closely follows the original training run.
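To illustrate the behavior described above, here is a minimal sketch of how such a flag could interact with an existing --save_state option. The helper names (make_parser, should_save_state_at_step, should_save_state_at_end) are hypothetical and are not the actual sd-scripts implementation; only the two flag names come from the PR.

```python
import argparse

def make_parser():
    # Hypothetical parser showing only the two relevant flags.
    parser = argparse.ArgumentParser()
    # Existing behavior: save optimizer/scheduler state with every checkpoint.
    parser.add_argument("--save_state", action="store_true",
                        help="save training state at every checkpoint")
    # Behavior added by this PR: save the state only once, when training ends.
    parser.add_argument("--save_state_on_train_end", action="store_true",
                        help="save training state only at the end of training")
    return parser

def should_save_state_at_step(args):
    # Per-checkpoint state saves still require --save_state.
    return args.save_state

def should_save_state_at_end(args):
    # The final state is written if either flag is set.
    return args.save_state or args.save_state_on_train_end

args = make_parser().parse_args(["--save_state_on_train_end"])
print(should_save_state_at_step(args))  # False: no intermediate state files
print(should_save_state_at_end(args))   # True: one state saved at the end
```

With only --save_state_on_train_end set, no intermediate state files accumulate, yet an under-trained run can still be resumed from its final state.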

@kohya-ss kohya-ss changed the base branch from main to dev March 20, 2024 08:49
@kohya-ss (Owner) commented:

Thank you! This makes a lot of sense.

@kohya-ss kohya-ss merged commit bf6cd4b into kohya-ss:dev Mar 20, 2024
nana0304 pushed a commit to nana0304/sd-scripts that referenced this pull request Jun 4, 2025
