
Add local cache of double buffer reader #9535

Merged

JiayiFeng merged 9 commits into PaddlePaddle:develop from reyoung:feature/fix_double_buffer on Apr 2, 2018

Conversation

@reyoung
Collaborator

reyoung commented Mar 30, 2018

No description provided.

@reyoung reyoung requested a review from JiayiFeng March 30, 2018 09:46
void DoubleBufferReader::PrefetchThreadFunc() {
VLOG(5) << "A new prefetch thread starts.";
size_t gpu_ctx_offset = 0;
std::vector<std::vector<framework::LoDTensor>> cpu_tensor_cache(4);
Collaborator

Why is the size of the outer vector 4? It seems empirical. Should we have

const int kEmpiricalCacheSize = 4;
std::vector<std::vector<framework::LoDTensor>> gpu_tensor_cache(kEmpiricalCacheSize);

reader_->ReadNext(&batch.payloads_);
if (platform::is_gpu_place(place_)) {
std::vector<framework::LoDTensor> gpu_batch;
tensor_cache_id %= 4;
Collaborator

It seems the 4 here has to stay in sync with the cache size of 4 above.

tensor_cache_id %= 4;
auto& gpu_batch = gpu_tensor_cache[tensor_cache_id];
auto& cpu_batch = cpu_tensor_cache[tensor_cache_id];
cpu_batch = batch.payloads_;
Collaborator

I am a little lost here -- it seems that L159 and L160 can be merged into a single line:

auto& cpu_batch = batch.payloads_;

Am I wrong?

@JiayiFeng
Collaborator

JiayiFeng commented Mar 31, 2018

I have just updated this PR and did some code cleanup. Maybe you could take a look at it. Thanks! @wangkuiyi

};

bool DoubleBufferReader::HasNext() const {
while (!channel_->IsClosed() && !channel_->CanReceive()) {
Contributor

Maybe it would be better to use a semaphore here, or add a TODO.

@JiayiFeng JiayiFeng merged commit 899827f into PaddlePaddle:develop Apr 2, 2018
@JiayiFeng JiayiFeng deleted the feature/fix_double_buffer branch April 2, 2018 10:55
4 participants