
Conversation

@reyoung
Collaborator

@reyoung reyoung commented Sep 27, 2017

Fix #4454

@reyoung reyoung requested review from JiayiFeng and QiJune September 27, 2017 23:18
int place = key.place_.which();
int data_type = static_cast<int>(key.data_type_);
// NOTE: Number of places limit to 16.
int pre_hash = data_type << 4 | (place & 0x0F);
Member


Why do we need a pre_hash here?

Collaborator Author


Because we need to hash two private fields together, so I combine them manually.
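For illustration, a minimal sketch of how the two fields end up in a single hash, assuming an OpKernelKey that holds a place variant and a data type enum; the surrounding hash functor and its name are assumptions, not quoted from the diff.

// Sketch only: OpKernelKey and OpKernelHash are assumed names for the key/hash
// pair discussed above; only the pre_hash packing mirrors the snippet.
struct OpKernelHash {
  std::hash<int> hash_;
  size_t operator()(const OpKernelKey& key) const {
    int place = key.place_.which();
    int data_type = static_cast<int>(key.data_type_);
    // Pack both private fields into one int before hashing; places are assumed
    // to fit into the low 4 bits.
    int pre_hash = data_type << 4 | (place & 0x0F);
    return hash_(pre_hash);
  }
};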

Member


Maybe InferDataType is a better name.

Collaborator Author


Actually, it is not an inference; that is, it does not infer the output data type from the inputs. It indicates the kernel's data type.
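As a rough illustration of the distinction (hypothetical function name; framework::ToDataType and Tensor::type() are assumed helpers, not quoted from this PR):

// Hypothetical sketch: "indicate" means reporting which data type the kernel
// should be selected for, e.g. by mirroring an input tensor's element type,
// rather than inferring the outputs' types from the inputs.
framework::DataType IndicateKernelDataType(const framework::Tensor& input) {
  return framework::ToDataType(input.type());
}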

@QiJune
Member

QiJune commented Sep 28, 2017

Please refer to REGISTER_OP_KERNEL.
We have to modify this macro too:

#define REGISTER_OP_KERNEL(op_type, DEVICE_TYPE, DATA_TYPE, place_class, ...)   \
  static ::paddle::framework::OpKernelRegistrar<place_class, __VA_ARGS__>       \
      __op_kernel_registrar_##op_type##_##DEVICE_TYPE##DATA_TYPE##__(#op_type); \

@reyoung
Collaborator Author

reyoung commented Sep 28, 2017

We have to modify this macro too,

Actually, no, it is not needed. The data type is stored in OpKernel<T>, so it does not need to be added to the macro again.
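A simplified, hypothetical sketch of why the macro can stay unchanged: each kernel class carries its element type as a template parameter, so the registrar can read the data type back from the kernel type itself instead of taking it as a macro argument.

#include <typeindex>
#include <typeinfo>

// Simplified sketch (not the PR's exact code): the element type travels with
// the kernel class, so registration can recover it with a type trait.
template <typename T>
class OpKernel {
 public:
  using ELEMENT_TYPE = T;
};

template <typename KernelType>
std::type_index KernelDataType() {
  return std::type_index(typeid(typename KernelType::ELEMENT_TYPE));
}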

@reyoung reyoung force-pushed the feature/make_paddle_support_double branch from 23c434c to 2c05465 Compare September 28, 2017 00:09
@reyoung reyoung force-pushed the feature/make_paddle_support_double branch from e38db7f to f1913d4 Compare September 28, 2017 00:27
@reyoung reyoung force-pushed the feature/make_paddle_support_double branch from 1ca9e77 to ae3dca7 Compare September 28, 2017 02:15
int place = key.place_.which();
int data_type = static_cast<int>(key.data_type_);
// NOTE: Number of places limit to 16.
int pre_hash = data_type << 4 | (place & 0x0F);
Collaborator


Define 4 and 0x0F as named constants somewhere.

Collaborator Author


Done.

Member


Maybe we can have a universal method, HashCombine.
Please refer to Hash64Combine in TensorFlow.
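For reference, a generic boost-style HashCombine sketch (the mixing constant is a common convention, not taken from TensorFlow or from this PR) that could replace the hand-rolled bit packing:

#include <cstddef>
#include <functional>

// Generic sketch of a HashCombine helper; feed place and data_type through it
// one after another instead of packing them into 4-bit fields.
template <typename T>
void HashCombine(std::size_t* seed, const T& value) {
  std::hash<T> hasher;
  *seed ^= hasher(value) + 0x9e3779b9 + (*seed << 6) + (*seed >> 2);
}

With such a helper, the number of places would no longer be limited to 16.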

@reyoung reyoung changed the title Add Skeleton of Double support Support double precision Sep 28, 2017

template <typename PlaceType, size_t I, typename... KernelType>
struct OpKernelRegistrarFunctor<PlaceType, false, I, KernelType...> {
  using KT = typename std::tuple_element<I, std::tuple<KernelType...>>::type;
Collaborator

@JiayiFeng JiayiFeng Sep 28, 2017


It's hard to associate KT with 'kernel type'. I think KERNEL_TYPE is better.

Collaborator Author


Done.
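For readers following the template above, a simplified, hypothetical sketch of the compile-time iteration such a functor performs over the kernel-type pack; the real registrar registers kernels rather than printing type names, and its template parameters differ.

#include <cstddef>
#include <iostream>
#include <tuple>
#include <typeinfo>

// Simplified sketch in the spirit of OpKernelRegistrarFunctor: visit the I-th
// kernel type, then recurse with I + 1 until the end of the pack is reached.
template <size_t I, bool kAtEnd, typename... KernelType>
struct RegistrarFunctor {
  using KERNEL_TYPE =
      typename std::tuple_element<I, std::tuple<KernelType...>>::type;
  void operator()(const char* op_type) const {
    std::cout << op_type << " -> " << typeid(KERNEL_TYPE).name() << "\n";
    constexpr size_t kNext = I + 1;
    RegistrarFunctor<kNext, kNext == sizeof...(KernelType), KernelType...>()(
        op_type);
  }
};

// End of the pack: stop recursing.
template <size_t I, typename... KernelType>
struct RegistrarFunctor<I, true, KernelType...> {
  void operator()(const char*) const {}
};

A call such as RegistrarFunctor<0, false, FloatKernel, DoubleKernel>()("mul") would then walk every registered element type from a single macro invocation.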

Collaborator

@JiayiFeng JiayiFeng left a comment


LGTM. Many thanks!

@reyoung reyoung merged commit 184768e into PaddlePaddle:develop Sep 28, 2017
@reyoung reyoung deleted the feature/make_paddle_support_double branch October 2, 2017 18:19
luotao1 pushed a commit that referenced this pull request Nov 9, 2022
* fix paddle.get_default_dtype 

Chinese and English return values are inconsistent

* fix paddle.matmul documentation review #4407

Corrected the function's output.

* fix paddle.std documentation review #4370

Added a code example with unbiased=False; did not add numpy, to avoid confusion.

* fix paddle.load documentation review #4455

Only split the code into five segments.

* try

* try

* try

* Update io.py

* Update io.py

* Update creation.py

* Update creation.py

* [Docs]add name description

* [Docs]fix broadcasting issue

Co-authored-by: Ligoml <[email protected]>