Weird behaviour in the Transposed Convolution Layer

Hello
This post is about the usage of the transposed convolution layer in the following example:
https://uk.mathworks.com/help/deeplearning/ug/image-to-image-regression-using-deep-learning.html
In the createUpsampleTransposeConvLayer helper function, numChannels = 1;
However, when the neural network is trained, the channel dimension of the weights of every transposed convolution layer is greater than 1, as can be seen in the following image (highlighted weights on the right side):
I have manually checked the weights and they conform to the dimensions shown on the right side of the image. On the left side, however, the software reports 4x4x1 transposed convolutions (assuming the last digit is the number of channels), which is clearly wrong. So how can a transposed convolution layer, told to keep the number of channels at 1, end up with weights whose channel dimension is greater than 1?
Secondly, I am trying to reproduce this example using the dltranspconv function. I tried setting the number of channels to 1 in the first transposed convolution layer as follows:
O_dltconv1=dltranspconv(O_maxpool3,K_tconv_1,B_tconv_1,'Stride',2,'Cropping',1);
with:
K>> size(O_maxpool3)
ans =
4 4 8 500
K>> size(K_tconv_1)
ans =
4 4 8
K>> size(B_tconv_1)
ans =
1 1 8
but I get the following error:
Number of channels to convolve (1, specified by the size of the 'U' dimension of the weights) must be equal
to the size of the 'C' dimension of the input data (8).
If I instead make the number of channels greater than 1 (8 channels), which clearly seems to be right as per my first question, then the simulation does not produce the same output as the layered architecture in the original example.
Please clarify this issue.

Accepted Answer

Asvin Kumar on 10 Feb 2021
Edited: Asvin Kumar on 10 Feb 2021
Part 1
I see this too. The doc for the 'NumChannels' property mentions that the parameter must equal the number of channels of the input to the convolutional layer. From the behaviour you've noticed, it's clear that the network automatically adjusts the channel dimension of the weights as required.
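As a quick sanity check, you can inspect the weights of any trained transposed convolution layer directly. The sketch below is illustrative, not from the original example: net stands for the network trained in the linked example, and the layer index is hypothetical.
tconv = net.Layers(8);   % illustrative index of one transposedConv2dLayer in the trained network
size(tconv.Weights)      % FilterSize(1)-by-FilterSize(2)-by-NumFilters-by-NumChannels
The last dimension of Weights will match the number of channels of that layer's input, not the NumChannels = 1 shown in the layer display.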
Regarding the discrepancy with what's mentioned under the layer name, this is a known issue. The team working on this feature may fix it in a future release.
Part 2
As mentioned in Part 1, dltranspconv expects the channel dimension of the weights to match the number of channels of the input, and it errors out if that condition isn't met. As the error message suggests, the fix is to make K_tconv_1 of size [4 4 8 8], which matches the 8 channels of the input of size [4 4 8 500].
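For reference, here is a minimal sketch of the fix, assuming O_maxpool3 is a formatted 'SSCB' dlarray as in your session; the random weights and zero biases are placeholders, not the trained values:
K_tconv_1 = randn(4,4,8,8);  % filterSize(1)-by-filterSize(2)-by-numFilters-by-numChannels
B_tconv_1 = zeros(1,1,8);    % one bias per filter
O_dltconv1 = dltranspconv(O_maxpool3,K_tconv_1,B_tconv_1,'Stride',2,'Cropping',1);
size(O_dltconv1)             % 8 8 8 500: stride 2 upsamples the 4x4 spatial dims to 8x8
To reproduce the output of the layered architecture exactly, copy the trained layer's Weights and Bias into K_tconv_1 and B_tconv_1 instead of using random values.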
