1. Background and consensus
- An operator can have several kernels, and fluid provides a policy to select the right kernel for an operator. Please refer to the switching kernel doc.
- Since data layout introduces branches in computation, we need to record the layout in Tensor for convenience; otherwise we would have to expose a Layout attribute to users (see the sketch below). Please refer to "need to add a data member to represent the Layout of a Tensor" #6765
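For illustration, here is a minimal sketch of what recording the layout inside Tensor could look like. This is not the actual fluid definition; the enum values and member names are assumptions.

```cpp
// Minimal sketch, not the actual fluid code. DataLayout values and the
// layout_ member are illustrative assumptions.
enum class DataLayout {
  kNHWC,
  kNCHW,
  kMKLDNN,     // opaque layout managed by the MKLDNN library
  kAnyLayout,  // the operator does not care about layout
};

class Tensor {
 public:
  DataLayout layout() const { return layout_; }
  void set_layout(DataLayout layout) { layout_ = layout; }

 private:
  // Recording the layout inside Tensor lets kernels branch on it
  // internally, instead of exposing a Layout attribute to users.
  DataLayout layout_{DataLayout::kNHWC};
};
```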
2. Concerns on Layout as the key of OpKernelType
The essential question about taking layout as a key of OpKernelType is where the layout-handling code should live:
- We can get the proper Layout in the `GetExpectedKernelType` method and convert data in the `Trans` method, and then choose the right kernel registered in advance (sketched below).
- We can also put this code inside the `Compute` method of the MKLDNN kernel, so the layout is chosen and the data transformed inside the kernel itself.
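As a rough, self-contained sketch of the first choice, the framework picks the layout in a `GetExpectedKernelType`-like step, converts the data in a `Trans`-like step, and only then dispatches to the registered kernel. All types and names below are simplified assumptions, not the actual fluid implementation.

```cpp
#include <functional>
#include <map>

// Illustrative stand-ins: in this sketch only the layout distinguishes
// kernels, whereas the real OpKernelType also carries place, data type, etc.
enum class DataLayout { kNCHW, kMKLDNN };

struct Tensor {
  DataLayout layout = DataLayout::kNCHW;
  // ... data omitted
};

using OpKernelKey = DataLayout;
using Kernel = std::function<void(const Tensor&)>;

// Kernels are registered in advance, keyed by layout.
std::map<OpKernelKey, Kernel> kernel_registry;

// Step 1: decide which kernel (and thus layout) is expected.
OpKernelKey GetExpectedKernelType(bool use_mkldnn) {
  return use_mkldnn ? DataLayout::kMKLDNN : DataLayout::kNCHW;
}

// Step 2: convert the data to the expected layout outside the kernel.
Tensor Trans(const Tensor& in, DataLayout to) {
  Tensor out = in;
  out.layout = to;  // real code would reorder the underlying data
  return out;
}

// Step 3: dispatch; the kernel always sees the layout it was registered for.
void Run(const Tensor& input, bool use_mkldnn) {
  OpKernelKey expected = GetExpectedKernelType(use_mkldnn);
  Tensor converted =
      (input.layout == expected) ? input : Trans(input, expected);
  kernel_registry.at(expected)(converted);
}
```

Under the second choice, the `GetExpectedKernelType`/`Trans` steps above would disappear from the framework path, and the layout check and reorder would be folded into the MKLDNN kernel's `Compute` body instead.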
@tensor-tang Could you give some advice on the total effort of these two choices, considering the integration of MKLDNN into fluid?
3. Some other related questions
Some operators, such as dropout and batch norm, have different computation logic in the training and testing phases. Fluid handles this logic inside the Compute method of the operator kernel (see the sketch below) and currently does not take is_test as a key of OpKernelType.
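For context, here is a minimal sketch of how such train/test branching typically lives inside a Compute-style method, as opposed to making is_test part of the kernel key. This is not the actual fluid dropout kernel; the struct and parameter names are illustrative assumptions.

```cpp
#include <cstdlib>
#include <vector>

// Sketch only: branch on an is_test flag inside Compute.
struct DropoutKernel {
  float dropout_prob;

  void Compute(const std::vector<float>& x, bool is_test,
               std::vector<float>* out) const {
    out->resize(x.size());
    if (is_test) {
      // Inference: no random dropping, scale by the keep probability.
      for (std::size_t i = 0; i < x.size(); ++i) {
        (*out)[i] = x[i] * (1.0f - dropout_prob);
      }
    } else {
      // Training: randomly zero elements.
      for (std::size_t i = 0; i < x.size(); ++i) {
        float r = static_cast<float>(std::rand()) / RAND_MAX;
        (*out)[i] = (r < dropout_prob) ? 0.0f : x[i];
      }
    }
  }
};
```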