How to convert 2D layer to 1D and from 1D to 2D?

Hello,
I want to create an autoencoder architecture with a 2D input and output (a matrix), but inside I need a 1D fullyConnectedLayer as the latent layer. How can I do this? I cannot find the right "bricks" in the Layer Library of Deep Network Designer.
Regards,
Greg

Answers (2)

David Willingham
David Willingham on 6 Sep 2022
Can you describe your application a little more? For example, what is your input: a matrix of signals, or an image?
  2 Comments
Grzegorz Klosowski
Grzegorz Klosowski on 6 Sep 2022
The input and output are 48x48 matrices: a single-channel image consisting of real numbers.
Grzegorz Klosowski
Grzegorz Klosowski on 6 Sep 2022
Edited: Grzegorz Klosowski on 6 Sep 2022
Exactly this matrix: the same at the input and the output. In the middle there should be a 1D layer. I need to compress the image this way, using an autoencoder. In the latent layer I want a vector of, say, 256 neurons. How do I change the dimension from 2D to 1D, and from 1D back to 2D, inside a neural network (an autoencoder)?
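One way to sketch such a network in Deep Learning Toolbox (this is my assumption, not confirmed in the thread): a fullyConnectedLayer automatically flattens a 2D image input to 1D, and a functionLayer (available since R2021b) with a reshape can restore the 2D shape after the decoder. Layer names here are illustrative.

```matlab
% Sketch: 48x48 image -> 256-unit latent vector -> 48x48 image
% Assumes R2021b+ for functionLayer; names and options are illustrative.
inputSize  = [48 48 1];
latentSize = 256;

layers = [
    imageInputLayer(inputSize, 'Normalization', 'none')
    fullyConnectedLayer(latentSize, 'Name', 'latent')  % flattens 48x48x1 to a 256-vector
    reluLayer
    fullyConnectedLayer(prod(inputSize))               % expand back to 2304 values
    % reshape the 2304-vector back into a 48x48x1 image (batch dim preserved)
    functionLayer(@(X) dlarray(reshape(stripdims(X), 48, 48, 1, []), 'SSCB'), ...
        'Formattable', true, 'Name', 'reshape2d')
    regressionLayer
];

analyzeNetwork(layerGraph(layers))  % inspect the activation sizes at each layer
```

The key design point is that no explicit "flatten" brick is needed on the encoder side; only the decoder side needs the reshape back to 2D.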



Mandar
Mandar on 13 Dec 2022
I understand that you want to make an autoencoder network with a specific hidden-layer (latent-layer) size.
You may refer to the following documentation, which shows how to create an autoencoder network with a specific hidden-layer size, learn latent features, and stack the latent layers to build a stacked autoencoder with multiple hidden layers that learns effective latent features.
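The documentation link itself is not preserved in the thread; assuming it describes the trainAutoencoder workflow in Deep Learning Toolbox, a minimal sketch for 48x48 images with a 256-unit latent layer might look like this (the training data here is random stand-in data):

```matlab
% Assumed workflow: trainAutoencoder from Deep Learning Toolbox.
% X is a cell array of 48x48 images; random data stands in for real images.
X = arrayfun(@(~) rand(48, 48), 1:100, 'UniformOutput', false);

hiddenSize = 256;                          % size of the latent layer
autoenc = trainAutoencoder(X, hiddenSize, 'MaxEpochs', 50);

Z    = encode(autoenc, X);                 % 256-by-100 matrix of 1D latent features
XRec = predict(autoenc, X);                % reconstructed 48x48 images
```

Note that trainAutoencoder flattens each image internally, so the 2D-to-1D conversion is handled for you; encode returns the 1D latent vectors and predict returns reconstructions in the input's original shape.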

Release

R2022a
