In a deep learning network, I have two branches operating on the same input layer. One branch has a fully-connected layer whose output is 1×1×N. The other branch has a convolutional layer producing a two-dimensional feature map of size P×Q×S. To proceed with further convolutions on the combined result, I need to concatenate the outputs of these branches by replicating the N-dimensional vector over the spatial grid to form P×Q×N, yielding a P×Q×(N+S) tensor. Is there any way to replicate a vector into a matrix between deep network layers, analogous to MATLAB's repmat() function?
In other words, is there a way to concatenate two layers of different width and height by bringing them to a common size within a deep network?
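For reference, here is a minimal sketch of the kind of operation I mean, written in PyTorch (the framework and all variable names here are just illustrative assumptions, not part of my actual setup). The vector is broadcast over the spatial grid and then concatenated with the convolutional feature map along the channel dimension:

```python
import torch

# Hypothetical branch outputs (shapes chosen for illustration):
# fc_out:   fully-connected branch output, shape (batch, N)
# conv_out: convolutional branch output, shape (batch, S, P, Q)
batch, N, S, P, Q = 2, 16, 8, 5, 7
fc_out = torch.randn(batch, N)
conv_out = torch.randn(batch, S, P, Q)

# Replicate the N-vector across the P×Q spatial grid (repmat-like):
# (batch, N) -> (batch, N, 1, 1) -> (batch, N, P, Q)
fc_tiled = fc_out.view(batch, N, 1, 1).expand(-1, -1, P, Q)

# Concatenate along the channel dimension: (batch, N+S, P, Q)
combined = torch.cat([conv_out, fc_tiled], dim=1)
print(combined.shape)
```

Note that `expand` creates a broadcast view without copying memory, so the replication itself is cheap; the question is whether such a tiling step can be expressed as a layer inside the network.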