Error using trainNetwork (line 184)
Invalid network.
Caused by:
    Layer 4: The size of the pooling dimension of the padded input data must be larger than or equal to the pool size.

Hi everyone,
I want to study the performance of a given network while varying the size and number of filters in convolution1dLayer and the number of windows provided as input to the trainNetwork function itself.
As you can see from the picture, I have already defined (in a way that is suitable for MATLAB) x_train and y_train, respectively the input and the output of my network. It should receive a sequence of data from 3 sensors (the workspace variable n_window) and 5723 time points (the variable length). Also, if I wished to, I could give the network an input sequence (x_train) as a 1x1 cell (as it already is in the picture) with an even greater number of windows (repeated sequences of 3*5723 matrices in x_train). This obviously requires changing y_train as well, and the code handles that well enough. For completeness, I set the following options: solver 'adam', MaxEpochs equal to 100, SequencePaddingDirection 'left', and Verbose 0.
Nevertheless, when I ran the algorithm (just once, which explains the break in the inner loop) to check whether it would work, I got the error above. I don't know what is wrong, because inputSize, the parameter of the sequenceInputLayer, is [3 5723]. Therefore, since I set the pool size to 2 and the stride to 2, there should, at least theoretically, be no errors. I also tried changing the stride to 1, but the error did not change. What am I doing wrong? Thanks in advance for your help! :)
Also, I just tried adding a MinLength parameter to sequenceInputLayer, as suggested by the error message: setting it to 1, n_features (equal to 3), or length (equal to 5723) doesn't change the result.
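For reference, here is a minimal sketch of the kind of layer array the error implies. The exact network is only visible in the poster's picture, so the reluLayer placements and the tail of the network are assumptions; only the input size, the unpadded convolution1dLayer(3,10) calls, and the pooling settings come from the thread:

```matlab
n_features = 3;      % sensors
len        = 5723;   % time points per window

% Layer numbering matches the error: Layer 4 is the first pooling layer
% (assuming a reluLayer sits between the convolution and the pooling).
layers = [
    sequenceInputLayer([n_features len])   % [3 5723]: the 3 is treated as SPACE
    convolution1dLayer(3,10)               % no padding: space dim 3 -> 1
    reluLayer                              % assumed
    maxPooling1dLayer(2,"Stride",2)        % Layer 4: fails, space dim is only 1
    convolution1dLayer(3,10)
    reluLayer                              % assumed
    maxPooling1dLayer(2,"Stride",2)
    globalAveragePooling1dLayer            % assumed tail
    fullyConnectedLayer(1)                 % assumed tail
    regressionLayer];                      % assumed tail

options = trainingOptions("adam", ...
    "MaxEpochs",100, ...
    "SequencePaddingDirection","left", ...
    "Verbose",0);
% trainNetwork(x_train, y_train, layers, options)  % reproduces the error
```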

Accepted Answer

Ben on 21 Jun 2022
The issue is that the 2nd layer, convolution1dLayer(3,10), has no padding. Currently its input size is [3, 5723, ?], where 3 is n_features, 5723 is length, and ? represents the sequence dimension, which is variable.
For a sequenceInputLayer whose input size is a two-dimensional vector like [n_features, length], we interpret the first dimension as space and the 2nd dimension as features/channels.
The convolution1dLayer(3,10) operates on just the space dimension here, and it will "squash" that size to [1, 10, ?], which is too small for a pooling window of size 2.
You can see this by calling analyzeNetwork(layers) and checking the Activations column.
You can set the "Padding" to make things run - for example convolution1dLayer(3,10,"Padding","same") - however, I think you probably want to adjust some things, as the 5723 appears to be the sequence length, which you shouldn't need to specify in sequenceInputLayer's input size - you might use it in the "MinLength" name-value pair instead.
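A hedged sketch of that suggestion (the tail layers here are illustrative, not the poster's exact network): treat the 3 sensors as channels and leave the time dimension variable, so the convolutions slide along time instead of squashing the 3-element space dimension.

```matlab
% MinLength tells the network the shortest sequence it must support.
layers = [
    sequenceInputLayer(3,"MinLength",5723)       % 3 channels, variable length
    convolution1dLayer(3,10,"Padding","same")    % length preserved
    maxPooling1dLayer(2,"Stride",2)              % length halved, not squashed
    convolution1dLayer(3,10,"Padding","same")
    maxPooling1dLayer(2,"Stride",2)
    globalAveragePooling1dLayer                  % illustrative tail
    fullyConnectedLayer(1)
    regressionLayer];
analyzeNetwork(layers)   % check the Activations column
```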
Ben on 21 Jun 2022
No problem.
Sure, I think it should work if all your sequences are at least 5723 steps long, so long as you set that as the MinLength on the sequenceInputLayer.
You could also work out the smallest MinLength that the network allows by analysing the convolution filter sizes, the pooling window sizes, and the pooling strides. For this case it works as follows:
  • Suppose your input has sequence length T.
  • The first convolution has filter size 3 with no padding; that reduces the sequence length to T - 2.
  • The first pooling layer has window size 2, stride 2 and no padding; that will reduce the sequence length by half to floor((T - 2)/2).
  • The second convolution also has filter size 3 and no padding, so again the sequence length drops by 2 to give floor((T - 2)/2) - 2.
  • Finally the 2nd pooling layer has window size 2, so it requires the current sequence length to be at least 2, giving the inequality floor((T - 2)/2) - 2 >= 2.
  • Rearranging gives T >= 10, which is the smallest valid sequence length, i.e. MinLength should be 10.
You can do the above with any filter sizes - but of course it gets a little complex, so usually it's easier to just use the minimum sequence length of your training data and if that's too small you'll have to modify the network anyway.
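The bullet-point arithmetic above can also be checked mechanically. This sketch (filter and pooling sizes taken from the thread; the loop structure is mine) propagates a candidate length T through the four unpadded layers and stops at the first T that survives:

```matlab
convFilter = 3; poolWindow = 2; poolStride = 2;
for T = 1:20
    L = T - (convFilter - 1);                    % conv 1: T -> T - 2
    if L < poolWindow, continue, end             % pool 1 needs length >= 2
    L = floor((L - poolWindow)/poolStride) + 1;  % pool 1: -> floor((T-2)/2)
    L = L - (convFilter - 1);                    % conv 2: drops by 2 again
    if L < poolWindow, continue, end             % pool 2 needs length >= 2
    fprintf("Smallest valid sequence length: T = %d\n", T)   % T = 10
    break
end
```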
