Conversation
@lfz lfz commented Jan 5, 2016

This is a revision of #2824, since that PR is already outdated.

I fixed some bugs mentioned in the discussion; it now supports cuDNN v3.

An example of usage:


layer {
  name: "conv1"
  type: "NdConvolution"
  bottom: "3d_data" # actually a 5-D blob here; you may need a reshape
  top: "conv1"
  convolution_param {
    num_output: 10
    stride: 1
    # axis specifies the index of the "channels" axis --
    # it may be omitted, as 1 is the default
    axis: 1
    kernel_shape { dim: 4 dim: 4 dim: 3 }
    weight_filler { type: "xavier" }
  }
}



and an NdPooling layer:

layer {
  name: "pool1"
  type: "NdPooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_shape { dim: 3 dim: 3 dim: 2 }
    stride_shape { dim: 3 dim: 3 dim: 2 }
  }
}

@lfz lfz closed this Jan 5, 2016
@lfz lfz reopened this Jan 5, 2016
lfz commented Jan 5, 2016

I am sorry, but this only works with a GPU, and because of that the CI checks didn't pass without CUDA. I don't know how to bypass this.

Is this a bug? The legacy shape accessors should not allow for 5D blobs. This change should not be made.

Author
It's because I want to use the Xavier initializer; I'll change it.

ddetone commented Jan 9, 2016

caffe/src/caffe/layers/cudnn_ndpooling_layer.cpp line 13 has a typo: const is written twice.

lfz commented Jan 10, 2016

Thanks @ddetone, it has been fixed.

@YOUNGING

Is it possible to implement an ND deconvolution layer using cuDNN?

@YOUNGING

Can anyone confirm that this PR works? I tried this NdPooling together with the master branch's 3D ConvolutionLayer, but I have to reduce my learning rate to 1e-18 to keep the loss from rising to NaN. Is this normal?

lfz commented Jan 18, 2016

@YOUNGING Really? Please send me the sample proto and the result.

@YOUNGING

@lfz When compiled with the "USE_CUDNN" option, Caffe will use the CUDNN engine, right? But most of the cuDNN layers don't have an ND implementation, so it gives an error.

A layer like ReLU can fall back to the CAFFE engine by setting engine: CAFFE, but the SoftmaxWithLoss layer doesn't have an engine option, so what should I do to make things work?

BTW, reshaping the input of the loss to 2D may solve the problem, but I want to find a more efficient way. Any ideas?

Thanks.
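That reshape workaround could be sketched as a Reshape layer in front of the loss (the layer and blob names here are hypothetical, and this assumes an N x C x D x H x W prediction blob): collapsing two of the inner axes leaves a 4-D blob that the stock SoftmaxWithLoss layer accepts.

layer {
  name: "flatten_pred"   # hypothetical name
  type: "Reshape"
  bottom: "conv_out"     # 5-D prediction: N x C x D x H x W
  top: "pred_4d"
  reshape_param {
    # dim: 0 copies the corresponding input axis; dim: -1 infers
    # its size (here D*H) from the remaining elements
    shape { dim: 0 dim: 0 dim: -1 dim: 0 }
  }
}

The label blob would need a matching reshape so that the two bottoms of the loss line up.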

"Dimensions of filters and pad don't match !";
CHECK_EQ(nbDims, stride.size()+2) <<
"Dimensions of filters and stride don't match !";
std::vector<int> upscale(pad.size(), 1);
Contributor
This function takes an additional input argument in cuDNN v4 (and possibly earlier versions too). Something like the following will take care of it:

#if CUDNN_VERSION >= 4000
  CUDNN_CHECK(cudnnSetConvolutionNdDescriptor(*conv,
              pad.size(), pad.data(), stride.data(), upscale.data(),
              CUDNN_CROSS_CORRELATION, cudnn_type));
#else
  CUDNN_CHECK(cudnnSetConvolutionNdDescriptor(*conv,
              pad.size(), pad.data(), stride.data(), upscale.data(),
              CUDNN_CROSS_CORRELATION));
#endif

@futurely

Any updates? FYI, #3983 supports cuDNN v5.
