[AutoParallel] Refine auto_trainer save load#8767

Merged
ZHUI merged 10 commits into PaddlePaddle:develop from zhangbo9674:dev/fix_uneven_split_save_load on Jul 26, 2024

Conversation

@zhangbo9674 (Contributor) commented Jul 16, 2024

PR types

Bug fixes

PR changes

Others

Description

Refine save/load for auto_trainer.

paddle-bot (Bot) commented Jul 16, 2024

Thanks for your contribution!

codecov (Bot) commented Jul 22, 2024

Codecov Report

Attention: Patch coverage is 21.73913% with 36 lines in your changes missing coverage. Please review.

Project coverage is 55.22%. Comparing base (5a508e5) to head (2d0f836).
Report is 237 commits behind head on develop.

Files with missing lines              Patch %   Lines
paddlenlp/trainer/auto_trainer.py     0.00%     27 Missing ⚠️
paddlenlp/trainer/trainer.py          52.63%    9 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #8767      +/-   ##
===========================================
- Coverage    55.43%   55.22%   -0.21%     
===========================================
  Files          626      631       +5     
  Lines        98070   100091    +2021     
===========================================
+ Hits         54366    55277     +911     
- Misses       43704    44814    +1110     

☔ View full report in Codecov by Sentry.

@zhangbo9674 zhangbo9674 changed the title [AutoParallel] Refine save load [AutoParallel] Refine auto_trainer save load Jul 23, 2024
@zhiqiu (Collaborator) previously approved these changes Jul 24, 2024

LGTM

self._memory_tracker.start()

if not self.args.enable_auto_parallel:
    if not self.args.should_load_sharding_stage1_model:
Contributor:

Let's leave this as is for now; later we can look at extracting a helper function here so it is easier for auto parallel to override.

Contributor Author:

OK.
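The refactor hinted at above would typically follow the template-method pattern. A minimal, hypothetical sketch (class and method names are invented for illustration, not taken from the PR):

```python
# Hypothetical sketch of the suggested refactor: pull the checkpoint-load
# branch into small hook methods so AutoTrainer can override only the
# piece it needs instead of copying the whole method body.
from types import SimpleNamespace

class Trainer:
    def __init__(self, args):
        self.args = args

    def _load_from_checkpoint(self, path):
        # Dispatch on the sharding config, then delegate to a hook method.
        if self.args.should_load_sharding_stage1_model:
            return self._load_sharded_checkpoint(path)
        return self._load_plain_checkpoint(path)

    def _load_plain_checkpoint(self, path):
        return f"plain:{path}"

    def _load_sharded_checkpoint(self, path):
        return f"sharded:{path}"

class AutoTrainer(Trainer):
    # Auto parallel overrides only the hook it needs.
    def _load_plain_checkpoint(self, path):
        return f"auto:{path}"
```

With this shape, `AutoTrainer` no longer has to duplicate the surrounding control flow to change one branch.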

Comment thread paddlenlp/trainer/auto_trainer.py Outdated
for p_name, p in model.state_dict().items():
    if paddle.distributed.get_rank() not in p.process_mesh.process_ids:
        var_name = p.name
        if (
Contributor:

Variables like `_moment1_0` here would be better managed through a global variable rather than hard-coded.

Contributor Author:

OK.
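The reviewer's suggestion could look like a single module-level constant plus a small predicate. A hedged sketch, assuming the usual Adam-style optimizer state suffixes (only `_moment1_0` appears in the PR excerpt; the other suffixes are illustrative):

```python
# Hypothetical sketch: gather the optimizer state-name suffixes in one
# module-level constant instead of hard-coding strings like "_moment1_0"
# at each check site. Suffixes beyond _moment1_0 are assumed, not from the PR.
OPTIMIZER_STATE_NAME_SUFFIXES = (
    "_moment1_0",
    "_moment2_0",
    "_beta1_pow_acc_0",
    "_beta2_pow_acc_0",
)

def is_optimizer_state_name(var_name: str) -> bool:
    """Return True if var_name looks like an optimizer state variable."""
    # str.endswith accepts a tuple of suffixes, so one call covers them all.
    return var_name.endswith(OPTIMIZER_STATE_NAME_SUFFIXES)
```

Checks scattered across `auto_trainer.py` could then call `is_optimizer_state_name(var_name)` instead of repeating the literals.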

epochs_trained = self.state.global_step // num_update_steps_per_epoch
if not args.ignore_data_skip:
    steps_trained_in_current_epoch = self.state.global_step % (num_update_steps_per_epoch)
    steps_trained_in_current_epoch *= args.gradient_accumulation_steps
Contributor:

Is this no longer needed, or was the previous code wrong?

Contributor Author:

The previous code had a bug; this fixes it.
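For context, the fixed resume logic in the snippet above maps a saved global optimizer step back to an epoch count plus a number of micro-batches to skip. A standalone sketch of that arithmetic (variable names mirror the snippet; the numbers in the comment are made up):

```python
# Standalone sketch of the resume arithmetic from the snippet above.
def resume_position(global_step, num_update_steps_per_epoch, gradient_accumulation_steps):
    """Map a saved global optimizer step back to (epochs done, micro-batches to skip)."""
    epochs_trained = global_step // num_update_steps_per_epoch
    # Optimizer steps already taken inside the current (partial) epoch...
    steps_in_current_epoch = global_step % num_update_steps_per_epoch
    # ...converted to micro-batches, since each optimizer step consumes
    # gradient_accumulation_steps micro-batches from the dataloader.
    return epochs_trained, steps_in_current_epoch * gradient_accumulation_steps

# e.g. 250 optimizer steps at 100 steps/epoch with accumulation 4:
# 2 full epochs done, 50 * 4 = 200 micro-batches to skip.
```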

@ZHUI ZHUI merged commit e6d74f7 into PaddlePaddle:develop Jul 26, 2024
Labels: none yet
Projects: none yet
3 participants