
make memory optimization module compatible with parallel_do #8516

Merged

jacquesqiao merged 7 commits into PaddlePaddle:develop from QiJune:memopt_multi_gpu on Mar 2, 2018

Conversation

@QiJune (Member) commented Feb 23, 2018

Fixes #8515.

This PR also depends on #8489.

cost = fluid.layers.square_error_cost(input=y_predict, label=y)
avg_cost = fluid.layers.mean(x=cost)
places = fluid.layers.get_places(device_count=2, device_type='CPU')
pd = fluid.layers.ParallelDo(places)
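For context, the core idea behind the memory optimization module can be sketched in plain Python: once a variable's last reader has run, its buffer can be handed to a later variable. With parallel_do, each place runs its own sub-block, so this lifetime analysis must be applied per sub-block rather than across the whole program. The sketch below is illustrative only (the function and variable names are invented, not Paddle's API):

```python
# Illustrative sketch of buffer reuse via last-use (lifetime) analysis.
# Each op is (op_type, input_var_names, output_var_names).

def compute_last_use(ops):
    """Map each variable name to the index of the op that last reads it."""
    last_use = {}
    for i, (_, inputs, _) in enumerate(ops):
        for name in inputs:
            last_use[name] = i
    return last_use

def plan_reuse(ops):
    """Greedily assign each output variable a buffer, recycling freed ones."""
    last_use = compute_last_use(ops)
    free_buffers = []   # buffers whose owning variables are dead
    assignment = {}     # variable name -> buffer id
    next_buffer = 0
    for i, (_, inputs, outputs) in enumerate(ops):
        for name in outputs:
            if free_buffers:
                assignment[name] = free_buffers.pop()
            else:
                assignment[name] = next_buffer
                next_buffer += 1
        # After this op, release buffers of inputs with no later reader.
        for name in inputs:
            if last_use.get(name) == i and name in assignment:
                free_buffers.append(assignment[name])
    return assignment, next_buffer

# A tiny linear chain x -> a -> b -> c: "a" dies after the second op,
# so its buffer can be reused for "c" and three variables fit in two buffers.
ops = [
    ("mul",  ["x"], ["a"]),
    ("relu", ["a"], ["b"]),
    ("mul",  ["b"], ["c"]),
]
assignment, n_buffers = plan_reuse(ops)
print(n_buffers)  # -> 2
```

Under parallel_do, running this planner independently on each device's sub-block keeps reuse decisions from crossing device boundaries, which is the compatibility issue this PR addresses.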


Could you also test it along with NCCL? Basically, change fluid.layers.ParallelDo(places) to fluid.layers.ParallelDo(places, use_nccl=True):

pd = fluid.layers.ParallelDo(places, use_nccl=use_nccl)

@QiJune (Member, Author) replied:

Sure, I will test it with NCCL.

@QiJune QiJune requested a review from jacquesqiao March 2, 2018 02:12
jacquesqiao previously approved these changes on Mar 2, 2018
@jacquesqiao (Member) left a comment:

Great job, LGTM!

jacquesqiao previously approved these changes on Mar 2, 2018

@jacquesqiao (Member) left a comment:
LGTM!

@jacquesqiao merged commit 0240bb7 into PaddlePaddle:develop on Mar 2, 2018
