
List of Deep Learning Layers

This page provides a list of deep learning layers in MATLAB®.

To learn how to create networks from layers for different tasks, see the following examples.

Deep Learning Layers

Use the following functions to create different layer types. Alternatively, use the Deep Network Designer app to create networks interactively.

To learn how to define your own custom layers, see Define Custom Deep Learning Layers.
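
For example, the following is a minimal sketch that assembles a few of the layers listed below into a small image classification network and creates a dlnetwork object. The sizes shown (28-by-28 grayscale input, 16 filters, 10 classes) are illustrative assumptions, not requirements.

    % Define a small image classification network as a layer array.
    layers = [
        imageInputLayer([28 28 1])              % 28-by-28 grayscale images
        convolution2dLayer(3,16,Padding="same")
        batchNormalizationLayer
        reluLayer
        maxPooling2dLayer(2,Stride=2)
        fullyConnectedLayer(10)                 % 10 output classes
        softmaxLayer];

    % Create a dlnetwork object from the layer array.
    net = dlnetwork(layers);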

Input Layers

Layer | Description

imageInputLayer

An image input layer inputs 2-D images to a neural network and applies data normalization.

image3dInputLayer

A 3-D image input layer inputs 3-D images or volumes to a neural network and applies data normalization.

sequenceInputLayer

A sequence input layer inputs sequence data to a neural network and applies data normalization.

featureInputLayer

A feature input layer inputs feature data to a neural network and applies data normalization. Use this layer when you have a data set of numeric scalars representing features (data without spatial or time dimensions).

inputLayer

An input layer inputs data into a neural network with a custom format.

pointCloudInputLayer (Lidar Toolbox)

A point cloud input layer inputs 3-D point clouds to a network and applies data normalization. You can also input point cloud data such as 2-D lidar scans.
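
For example, a minimal sketch that creates input layers for images, sequences, and feature data; the input sizes are illustrative assumptions.

    % Input layers for common data types (sizes are illustrative).
    imgInput  = imageInputLayer([224 224 3]);   % 224-by-224 RGB images
    seqInput  = sequenceInputLayer(12);         % sequences with 12 channels
    featInput = featureInputLayer(20);          % 20 features per observation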

Convolution and Fully Connected Layers

Layer | Description

convolution1dLayer

A 1-D convolutional layer applies sliding convolutional filters to 1-D input.

convolution2dLayer

A 2-D convolutional layer applies sliding convolutional filters to 2-D input.

convolution3dLayer

A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input.

groupedConvolution2dLayer

A 2-D grouped convolutional layer separates the input channels into groups and applies sliding convolutional filters. Use grouped convolutional layers for channel-wise separable (also known as depth-wise separable) convolution.

transposedConv1dLayer

A transposed 1-D convolution layer upsamples one-dimensional feature maps.

transposedConv2dLayer

A transposed 2-D convolution layer upsamples two-dimensional feature maps.

transposedConv3dLayer

A transposed 3-D convolution layer upsamples three-dimensional feature maps.

fullyConnectedLayer

A fully connected layer multiplies the input by a weight matrix and then adds a bias vector.
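
For example, a minimal sketch of standard, grouped, and transposed 2-D convolution layers and a fully connected layer; the filter sizes and counts are illustrative assumptions.

    % Convolution and fully connected layers (filter sizes and counts assumed).
    conv     = convolution2dLayer(3,32,Padding="same");   % 3-by-3 filters, 32 filters
    grouped  = groupedConvolution2dLayer(3,8,4);           % 4 groups, 8 filters per group
    upsample = transposedConv2dLayer(4,16,Stride=2);       % upsamples feature maps
    fc       = fullyConnectedLayer(100);                   % 100 output neurons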

Sequence Layers

Layer | Description

sequenceInputLayer

A sequence input layer inputs sequence data to a neural network and applies data normalization.

lstmLayer

An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data.

lstmProjectedLayer

An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights.

bilstmLayer

A bidirectional LSTM (BiLSTM) layer is an RNN layer that learns bidirectional long-term dependencies between time steps of time-series or sequence data. These dependencies can be useful when you want the RNN to learn from the complete time series at each time step.

gruLayer

A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data.

gruProjectedLayer

A GRU projected layer is an RNN layer that learns dependencies between time steps in time-series and sequence data using projected learnable weights.

convolution1dLayer

A 1-D convolutional layer applies sliding convolutional filters to 1-D input.

transposedConv1dLayer

A transposed 1-D convolution layer upsamples one-dimensional feature maps.

maxPooling1dLayer

A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region.

averagePooling1dLayer

A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region.

globalMaxPooling1dLayer

A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input.

flattenLayer

A flatten layer collapses the spatial dimensions of the input into the channel dimension.

wordEmbeddingLayer (Text Analytics Toolbox)

A word embedding layer maps word indices to vectors.

peepholeLSTMLayer (Custom layer example)

A peephole LSTM layer is a variant of an LSTM layer, where the gate calculations use the layer cell state.
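
For example, a minimal sequence-to-label classification sketch using an LSTM layer; the channel, hidden unit, and class counts are illustrative assumptions.

    % Sequence classification network (sizes assumed for illustration).
    layers = [
        sequenceInputLayer(12)                 % 12 channels per time step
        lstmLayer(100,OutputMode="last")       % output only the last time step
        fullyConnectedLayer(5)                 % 5 classes
        softmaxLayer];
    net = dlnetwork(layers);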

Activation Layers

Layer | Description

reluLayer

A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero.

leakyReluLayer

A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar.

clippedReluLayer

A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling.

eluLayer

An ELU activation layer performs the identity operation on positive inputs and an exponential nonlinearity on negative inputs.

geluLayer

A Gaussian error linear unit (GELU) layer weights the input by its probability under a Gaussian distribution.

tanhLayer

A hyperbolic tangent (tanh) activation layer applies the tanh function on the layer inputs.

swishLayer

A swish activation layer applies the swish function on the layer inputs.

softplusLayer (Reinforcement Learning Toolbox)

A softplus layer applies the softplus activation function Y = log(1 + e^X), which ensures that the output is always positive. This activation function is a smooth continuous version of reluLayer. You can incorporate this layer into the deep neural networks you define for actors in reinforcement learning agents. This layer is useful for creating continuous Gaussian policy deep neural networks, for which the standard deviation output must be positive.

softmaxLayer

A softmax layer applies a softmax function to the input.

sigmoidLayer

A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1).

functionLayer

A function layer applies a specified function to the layer input.

preluLayer

A PReLU layer performs a threshold operation, where for each channel, any input value less than zero is multiplied by a scalar learned at training time.

sreluLayer (Custom layer example)

A SReLU layer performs a thresholding operation, where for each channel, the layer scales values outside an interval. The interval thresholds and scaling factors are learnable parameters.

dlhdl.layer.mishLayer (Deep Learning HDL Toolbox)

A mish activation layer applies the mish function on layer inputs.
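
For example, a minimal sketch of common activation layers; the scalar arguments are illustrative assumptions.

    % Activation layers (argument values assumed).
    relu    = reluLayer;
    leaky   = leakyReluLayer(0.01);   % scale applied to negative inputs
    clipped = clippedReluLayer(6);    % clipping ceiling
    gelu    = geluLayer;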

Normalization Layers

Layer | Description

batchNormalizationLayer

A batch normalization layer normalizes a mini-batch of data across all observations for each channel independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use batch normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

groupNormalizationLayer

A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

instanceNormalizationLayer

An instance normalization layer normalizes a mini-batch of data across each channel for each observation independently. To improve the convergence of training the convolutional neural network and reduce the sensitivity to network hyperparameters, use instance normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

layerNormalizationLayer

A layer normalization layer normalizes a mini-batch of data across all channels for each observation independently. To speed up training of recurrent and multilayer perceptron neural networks and reduce the sensitivity to network initialization, use layer normalization layers after the learnable layers, such as LSTM and fully connected layers.

crossChannelNormalizationLayer

A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization.
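
For example, a minimal sketch of the convolution-normalization-activation pattern described above; the filter count is an illustrative assumption.

    % Convolution followed by batch normalization and ReLU.
    block = [
        convolution2dLayer(3,32,Padding="same")
        batchNormalizationLayer
        reluLayer];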

Utility Layers

Layer | Description

dropoutLayer

A dropout layer randomly sets input elements to zero with a given probability.

spatialDropoutLayer

A spatial dropout layer randomly selects input channels with a given probability and sets all elements of the selected channels to zero during training.

crop2dLayer

A 2-D crop layer applies 2-D cropping to the input.

crop3dLayer

A 3-D crop layer crops a 3-D volume to the size of the input feature map.

identityLayer

An identity layer is a layer whose output is identical to its input. You can use an identity layer to create a skip connection, which allows the input to skip one or more layers in the main branch of a neural network. For more information about skip connections, see More About.

networkLayer

A network layer contains a nested network. Use network layers to simplify building large networks that contain repeating components.

complexToRealLayer

A complex-to-real layer converts complex-valued data to real-valued data by splitting the data in a specified dimension.

realToComplexLayer

A real-to-complex layer converts real-valued data to complex-valued data by merging the data in a specified dimension.

scalingLayer (Reinforcement Learning Toolbox)

A scaling layer linearly scales and biases an input array U, giving an output Y = Scale.*U + Bias. You can incorporate this layer into the deep neural networks you define for actors or critics in reinforcement learning agents. This layer is useful for scaling and shifting the outputs of nonlinear layers, such as tanhLayer and sigmoidLayer.

quadraticLayer (Reinforcement Learning Toolbox)

A quadratic layer takes an input vector and outputs a vector of quadratic monomials constructed from the input elements. This layer is useful when you need a layer whose output is a quadratic function of its inputs, for example, to recreate the structure of quadratic value functions such as those used in LQR controller design.

stftLayer (Signal Processing Toolbox)

An STFT layer computes the short-time Fourier transform of the input.

istftLayer (Signal Processing Toolbox)

An ISTFT layer computes the inverse short-time Fourier transform of the input.

cwtLayer (Wavelet Toolbox)

A CWT layer computes the continuous wavelet transform of the input.

icwtLayer (Wavelet Toolbox)

An ICWT layer computes the inverse continuous wavelet transform of the input.

modwtLayer (Wavelet Toolbox)

A MODWT layer computes the maximal overlap discrete wavelet transform (MODWT) and MODWT multiresolution analysis (MRA) of the input.
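
For example, a minimal sketch that adds dropout regularization between fully connected layers; the layer sizes and dropout probability are illustrative assumptions.

    % Dropout between fully connected layers (sizes and probability assumed).
    layers = [
        fullyConnectedLayer(64)
        reluLayer
        dropoutLayer(0.5)        % drop elements with probability 0.5 during training
        fullyConnectedLayer(10)
        softmaxLayer];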

Resizing Layers

Layer | Description

resize2dLayer (Image Processing Toolbox)

A 2-D resize layer resizes 2-D input by a scale factor, to a specified height and width, or to the size of a reference input feature map.

resize3dLayer (Image Processing Toolbox)

A 3-D resize layer resizes 3-D input by a scale factor, to a specified height, width, and depth, or to the size of a reference input feature map.

dlhdl.layer.reshapeLayer (Deep Learning HDL Toolbox)

A reshape layer reshapes layer activation data.
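
For example, a minimal sketch that creates 2-D resize layers by scale factor and by output size (requires Image Processing Toolbox); the values are illustrative assumptions.

    % Resize layers (values assumed; requires Image Processing Toolbox).
    upByScale = resize2dLayer(Scale=2);                % double the height and width
    toSize    = resize2dLayer(OutputSize=[256 256]);   % resize to 256-by-256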

Pooling and Unpooling Layers

Layer | Description

averagePooling1dLayer

A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region.

averagePooling2dLayer

A 2-D average pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the average of each region.

averagePooling3dLayer

A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average values of each region.

adaptiveAveragePooling2dLayer

A 2-D adaptive average pooling layer performs downsampling to a specified output size by dividing the input into rectangular pooling regions, then computing the average of each region.

globalAveragePooling1dLayer

A 1-D global average pooling layer performs downsampling by outputting the average of the time or spatial dimensions of the input.

globalAveragePooling2dLayer

A 2-D global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input.

globalAveragePooling3dLayer

A 3-D global average pooling layer performs downsampling by computing the mean of the height, width, and depth dimensions of the input.

maxPooling1dLayer

A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region.

maxPooling2dLayer

A 2-D max pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the maximum of each region.

maxPooling3dLayer

A 3-D max pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the maximum of each region.

globalMaxPooling1dLayer

A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input.

globalMaxPooling2dLayer

A 2-D global max pooling layer performs downsampling by computing the maximum of the height and width dimensions of the input.

globalMaxPooling3dLayer

A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input.

maxUnpooling2dLayer

A 2-D max unpooling layer unpools the output of a 2-D max pooling layer.
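
For example, a minimal sketch of 2-D max, average, and global average pooling layers; the pool sizes are illustrative assumptions.

    % Pooling layers (pool sizes assumed).
    maxPool = maxPooling2dLayer(2,Stride=2);
    avgPool = averagePooling2dLayer(2,Stride=2);
    gap     = globalAveragePooling2dLayer;     % pools over the full height and width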

Combination Layers

Layer | Description

additionLayer

An addition layer adds inputs from multiple neural network layers element-wise.

multiplicationLayer

A multiplication layer multiplies inputs from multiple neural network layers element-wise.

depthConcatenationLayer

A depth concatenation layer takes inputs that have the same height and width and concatenates them along the channel dimension.

concatenationLayer

A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension.

weightedAdditionLayer (Custom layer example)

A weighted addition layer scales and adds inputs from multiple neural network layers element-wise.
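
For example, a minimal sketch of a skip connection using additionLayer. The layer names, filter counts, and input size are illustrative assumptions; connectLayers joins the second input of the addition layer to an earlier activation.

    % Skip connection sketch (names and sizes assumed).
    layers = [
        imageInputLayer([32 32 3])
        convolution2dLayer(3,16,Padding="same",Name="conv1")
        reluLayer(Name="relu1")
        convolution2dLayer(3,16,Padding="same",Name="conv2")
        additionLayer(2,Name="add")];

    net = dlnetwork(layers,Initialize=false);     % leave uninitialized until fully connected
    net = connectLayers(net,"relu1","add/in2");   % route the skip connection
    net = initialize(net);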

Transformer Layers

Layer | Description

selfAttentionLayer

A self-attention layer computes single-head or multihead self-attention of its input.

attentionLayer

A dot-product attention layer focuses on parts of the input using weighted multiplication operations.

positionEmbeddingLayer

A position embedding layer maps sequential or spatial indices to vectors.

sinusoidalPositionEncodingLayer

A sinusoidal position encoding layer maps position indices to vectors using sinusoidal operations.

embeddingConcatenationLayer

An embedding concatenation layer combines its input and an embedding vector by concatenation.

indexing1dLayer

A 1-D indexing layer extracts the data from the specified index of the time or spatial dimensions of the input data.

patchEmbeddingLayer (Computer Vision Toolbox)

A patch embedding layer maps patches of pixels to vectors.
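
For example, a minimal sketch of a self-attention layer over sequence data; the channel, head, and class counts are illustrative assumptions.

    % Self-attention over sequences (sizes assumed).
    layers = [
        sequenceInputLayer(64)
        selfAttentionLayer(8,64)     % 8 heads, 64 key-query channels
        fullyConnectedLayer(10)
        softmaxLayer];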

Neural ODE Layers

Layer | Description

neuralODELayer

A neural ODE layer outputs the solution of an ODE.
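
For example, a minimal sketch in which a small dlnetwork defines the ODE dynamics and neuralODELayer solves it over a time interval; the layer sizes and interval are illustrative assumptions.

    % Neural ODE layer sketch (sizes and time interval assumed).
    dynamics = [
        fullyConnectedLayer(16)
        tanhLayer
        fullyConnectedLayer(4)];
    odeNet   = dlnetwork(dynamics,Initialize=false);
    odeLayer = neuralODELayer(odeNet,[0 1]);   % solve the ODE over t = 0 to 1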

Object Detection Layers

Layer | Description

roiMaxPooling2dLayer (Computer Vision Toolbox)

An ROI max pooling layer outputs fixed-size feature maps for every rectangular ROI within the input feature map. Use this layer to create a Fast or Faster R-CNN object detection network.

roiAlignLayer (Computer Vision Toolbox)

An ROI align layer outputs fixed-size feature maps for every rectangular ROI within an input feature map. Use this layer to create a Mask R-CNN network.

ssdMergeLayer (Computer Vision Toolbox)

An SSD merge layer merges the outputs of feature maps for subsequent regression and classification loss computation.

yolov2TransformLayer (Computer Vision Toolbox)

A transform layer of the you only look once version 2 (YOLO v2) network transforms the bounding box predictions of the last convolution layer in the network to fall within the bounds of the ground truth. Use the transform layer to improve the stability of the YOLO v2 network.

spaceToDepthLayer (Image Processing Toolbox)

A space-to-depth layer permutes the spatial blocks of the input into the depth dimension. Use this layer when you need to combine feature maps of different sizes without discarding any feature data.

depthToSpace2dLayer (Image Processing Toolbox)

A 2-D depth-to-space layer permutes data from the depth dimension into blocks of 2-D spatial data.

dlhdl.layer.sliceLayer (Deep Learning HDL Toolbox)

A slice layer divides the input to the layer into equally sized groups along the channel dimension of the image.
