Note
Two similar implementations exist for conv2d:
signal.conv2d and nnet.conv2d.
The former implements a traditional 2D convolution, while the latter implements the convolutional layers present in convolutional neural networks (where filters are 3D and pool over several input channels).
TODO: Give examples for how to use these things! They are pretty complicated.
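The exact Theano call signatures are not given here, but the difference in semantics can be sketched in plain NumPy (the function names below are illustrative, not the library's own):

```python
import numpy as np

def conv2d_single(image, kernel):
    """Traditional 'signal'-style 2D convolution of one image with one
    2D kernel ('valid' mode, kernel flipped as in true convolution)."""
    kr, kc = kernel.shape
    flipped = kernel[::-1, ::-1]
    out = np.empty((image.shape[0] - kr + 1, image.shape[1] - kc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (image[i:i + kr, j:j + kc] * flipped).sum()
    return out

def conv2d_cnn(images, filters):
    """CNN-style conv2d: images are (batch, channels, rows, cols) and
    filters are (nb filters, channels, f_rows, f_cols); each filter is
    3D and its result is summed (pooled) over the input channels."""
    b, c, r, cl = images.shape
    nf, fc, fr, fcl = filters.shape
    assert c == fc, "filter channel count must match input channels"
    out = np.zeros((b, nf, r - fr + 1, cl - fcl + 1))
    for n in range(b):
        for f in range(nf):
            for ch in range(c):
                out[n, f] += conv2d_single(images[n, ch], filters[f, ch])
    return out
```

The key contrast: the signal version maps one 2D image and one 2D kernel to one 2D output, while the CNN version takes 4D stacks and sums each filter's response across all input channels.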
conv3d2d Another conv3d implementation that uses conv2d with data reshaping. It is faster than conv3d in some cases, particularly on the GPU.
It comes from Pylearn2, is not well documented, and uses a different memory layout for the input. It is important to keep the input in the native memory layout and not use dimshuffle on the inputs; otherwise you lose most of the speed-up. So this is not a drop-in replacement for conv2d.
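The reshaping trick itself can be illustrated in NumPy: a 'valid' 3D convolution decomposes into 2D convolutions of consecutive input frames with the kernel's time slices, accumulated per output time step. This sketch only shows the decomposition idea; conv3d2d's real memory layout and batching differ, and the helper names here are hypothetical:

```python
import numpy as np

def conv2d_valid(img, k):
    """Plain 'valid' 2D correlation (flipping is irrelevant to the idea)."""
    kr, kc = k.shape
    out = np.empty((img.shape[0] - kr + 1, img.shape[1] - kc + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kr, j:j + kc] * k).sum()
    return out

def conv3d_via_conv2d(volume, kernel):
    """3D 'valid' correlation over (time, rows, cols), expressed as a sum
    of 2D convolutions: output frame ot accumulates the 2D conv of input
    frame ot+dt with kernel time slice dt, for every offset dt."""
    t, r, c = volume.shape
    kt, kr, kc = kernel.shape
    out = np.zeros((t - kt + 1, r - kr + 1, c - kc + 1))
    for ot in range(out.shape[0]):
        for dt in range(kt):
            out[ot] += conv2d_valid(volume[ot + dt], kernel[dt])
    return out
```

Because each inner step is an ordinary 2D convolution, a fast conv2d implementation (e.g. on the GPU) can be reused for the whole 3D operation.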
Normally these functions are called from the linear transform implementation.
This function will build the symbolic graph for convolving a stack of input images with a set of filters. The implementation is modelled after Convolutional Neural Networks (CNN). It is simply a wrapper to the ConvOp but provides a much cleaner interface.
Parameters:
Return type: symbolic 4D tensor
Returns: set of feature maps generated by the convolutional layer. The tensor is of shape (batch size, nb filters, output rows, output cols).
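For the default 'valid' border mode (an assumption here, since the parameter list above is incomplete), the output shape follows directly from the input and filter shapes:

```python
def conv2d_output_shape(image_shape, filter_shape):
    """'valid'-mode output shape of a CNN conv layer.
    image_shape:  (batch size, channels, rows, cols)
    filter_shape: (nb filters, channels, f_rows, f_cols)"""
    b, c, r, cl = image_shape
    nf, fc, fr, fcl = filter_shape
    assert c == fc, "channel counts must match"
    # each 'valid' position needs the filter fully inside the image
    return (b, nf, r - fr + 1, cl - fcl + 1)
```

For example, a batch of 64 single-channel 28x28 images convolved with 32 filters of size 5x5 yields a (64, 32, 24, 24) tensor.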
3D “convolution” of multiple filters on a minibatch (does not flip the kernel; moves the kernel with a user-specified stride)
Parameters:
Note: The order of dimensions does not correspond to the one in conv2d. This is for optimization.
Note: The GPU implementation is very slow. You are better off using conv3d2d, which is faster on the GPU.
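The core operation described above, a strided 3D correlation that does not flip the kernel, can be sketched in NumPy for a single signal/filter pair (the real op batches over examples and filters, in a dimension order that differs from conv2d, as noted above):

```python
import numpy as np

def conv3d_strided(signal, kernel, stride=(1, 1, 1)):
    """Strided 3D cross-correlation, 'valid' mode.
    The kernel is NOT flipped; it is slid over the signal in steps
    given by `stride` along each of the three dimensions."""
    out_shape = [(s - k) // st + 1
                 for s, k, st in zip(signal.shape, kernel.shape, stride)]
    k0, k1, k2 = kernel.shape
    out = np.empty(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for l in range(out_shape[2]):
                a, b, c = i * stride[0], j * stride[1], l * stride[2]
                out[i, j, l] = (signal[a:a + k0, b:b + k1, c:c + k2]
                                * kernel).sum()
    return out
```

A stride larger than 1 shrinks the output accordingly, since the kernel visits only every stride-th position.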
Convolve spatio-temporal filters with a movie.
Parameters:
Note: Works on the GPU.