bilstmLayer
Bidirectional long short-term memory (BiLSTM) layer
Description
A bidirectional LSTM (BiLSTM) layer learns bidirectional long-term dependencies between time steps of time series or sequence data. These dependencies can be useful when you want the network to learn from the complete time series at each time step.
Creation
Description

layer = bilstmLayer(numHiddenUnits) creates a bidirectional LSTM layer and sets the NumHiddenUnits property.

layer = bilstmLayer(numHiddenUnits,Name,Value) sets additional OutputMode, Activations, State, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value pair arguments. You can specify multiple name-value pair arguments. Enclose each property name in quotes.
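For example, a minimal sketch of both syntaxes (the property values shown are illustrative, not required):

layer = bilstmLayer(100);                                      % default properties
layer = bilstmLayer(100,'OutputMode','last','Name','bilstm1'); % name-value pairs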
Properties
BiLSTM
NumHiddenUnits — Number of hidden units
positive integer
This property is read-only.
Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information remembered between time steps (the hidden state). The hidden state can contain information from all previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer might overfit to the training data. This value can vary from a few dozen to a few thousand.
The hidden state does not limit the number of time steps that the layer processes in an iteration. To split your sequences into smaller sequences when using the trainNetwork function, use the SequenceLength training option.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
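For example, a minimal sketch of limiting the sequence length during training (the value 250 is illustrative):

% Split training sequences into subsequences of at most 250 time steps.
options = trainingOptions('adam','SequenceLength',250);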
OutputMode — Output mode
'sequence' (default) | 'last'
This property is read-only.
Output mode, specified as one of the following:
'sequence' – Output the complete sequence.
'last' – Output the last time step of the sequence.
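For example, a minimal sketch of a layer for sequence-to-label classification, where only the final time step is needed:

% Output only the last time step of the sequence.
layer = bilstmLayer(100,'OutputMode','last');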
HasStateInputs — Flag for state inputs to layer
0 (false) (default) | 1 (true)
This property is read-only.
Flag for state inputs to the layer, specified as 0 (false) or 1 (true).
If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
HasStateOutputs — Flag for state outputs from layer
0 (false) (default) | 1 (true)
This property is read-only.
Flag for state outputs from the layer, specified as 0 (false) or 1 (true).
If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
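For example, a minimal sketch of a layer that takes and returns state values for use in dlnetwork workflows, assuming the state flags are set as name-value arguments when you create the layer:

% Create a layer with explicit state inputs and outputs.
layer = bilstmLayer(100,'HasStateInputs',true,'HasStateOutputs',true);
layer.InputNames    % {'in'}  {'hidden'}  {'cell'}
layer.OutputNames   % {'out'} {'hidden'} {'cell'}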
InputSize — Input size
'auto' (default) | positive integer
This property is read-only.
Input size, specified as a positive integer or 'auto'. If InputSize is 'auto', then the software automatically assigns the input size at training time.
Data Types: double | char | string
Activations
StateActivationFunction — Activation function to update the cell and hidden state
'tanh' (default) | 'softsign'
This property is read-only.
Activation function to update the cell and hidden state, specified as one of the following:
'tanh' – Use the hyperbolic tangent function (tanh).
'softsign' – Use the softsign function, softsign(x) = x/(1 + |x|).
The layer uses this option as the function in the calculations to update the cell and hidden state. For more information on how activation functions are used in an LSTM layer, see Long Short-Term Memory Layer.
GateActivationFunction — Activation function to apply to the gates
'sigmoid' (default) | 'hard-sigmoid'
This property is read-only.
Activation function to apply to the gates, specified as one of the following:
'sigmoid' – Use the sigmoid function, σ(x) = 1/(1 + exp(-x)).
'hard-sigmoid' – Use the hard sigmoid function, which is 0 for x < -2.5, 0.2x + 0.5 for -2.5 ≤ x ≤ 2.5, and 1 for x > 2.5.
The layer uses this option as the function in the calculations for the layer gates.
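For example, a minimal sketch combining the nondefault activation functions:

% Use softsign for the state update and hard sigmoid for the gates.
layer = bilstmLayer(100, ...
    'StateActivationFunction','softsign', ...
    'GateActivationFunction','hard-sigmoid');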
State
CellState — Cell state
numeric vector
Cell state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.
After setting this property manually, calls to the resetState function set the cell state to this value.
If HasStateInputs is 1 (true), then the CellState property must be empty.
Data Types: single | double
HiddenState — Hidden state
numeric vector
Hidden state to use in the layer operation, specified as a 2*NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.
After setting this property manually, calls to the resetState function set the hidden state to this value.
If HasStateInputs is 1 (true), then the HiddenState property must be empty.
Data Types: single | double
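For example, a minimal sketch of setting both initial states; the states are 2*NumHiddenUnits-by-1 because the layer concatenates the forward and backward states:

numHiddenUnits = 100;
layer = bilstmLayer(numHiddenUnits);
% Set the initial states that resetState later restores.
layer.HiddenState = zeros(2*numHiddenUnits,1);
layer.CellState = zeros(2*numHiddenUnits,1);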
Parameters and Initialization
InputWeightsInitializer — Function to initialize input weights
'glorot' (default) | 'he' | 'orthogonal' | 'narrow-normal' | 'zeros' | 'ones' | function handle
Function to initialize the input weights, specified as one of the following:
'glorot' – Initialize the input weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 8*NumHiddenUnits.
'he' – Initialize the input weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.
'orthogonal' – Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].
'narrow-normal' – Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
'zeros' – Initialize the input weights with zeros.
'ones' – Initialize the input weights with ones.
Function handle – Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.
The layer only initializes the input weights when the InputWeights property is empty.
Data Types: char | string | function_handle
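For example, a minimal sketch of a custom initializer of the required form weights = func(sz) (the 0.01 scaling is an illustrative choice):

% Initialize input weights from a scaled normal distribution.
initFcn = @(sz) 0.01*randn(sz);
layer = bilstmLayer(100,'InputWeightsInitializer',initFcn);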
RecurrentWeightsInitializer — Function to initialize recurrent weights
'orthogonal' (default) | 'glorot' | 'he' | 'narrow-normal' | 'zeros' | 'ones' | function handle
Function to initialize the recurrent weights, specified as one of the following:
'orthogonal' – Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [3].
'glorot' – Initialize the recurrent weights with the Glorot initializer [1] (also known as Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 8*NumHiddenUnits.
'he' – Initialize the recurrent weights with the He initializer [2]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.
'narrow-normal' – Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
'zeros' – Initialize the recurrent weights with zeros.
'ones' – Initialize the recurrent weights with ones.
Function handle – Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.
The layer only initializes the recurrent weights when the RecurrentWeights property is empty.
Data Types: char | string | function_handle
BiasInitializer — Function to initialize bias
'unit-forget-gate' (default) | 'narrow-normal' | 'ones' | function handle
Function to initialize the bias, specified as one of the following:
'unit-forget-gate' – Initialize the forget gate bias with ones and the remaining biases with zeros.
'narrow-normal' – Initialize the bias by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
'ones' – Initialize the bias with ones.
Function handle – Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.
The layer only initializes the bias when the Bias property is empty.
Data Types: char | string | function_handle
InputWeights — Input weights
[] (default) | matrix
Input weights, specified as a matrix.
The input weight matrix is a concatenation of the eight input weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The input weights are learnable parameters. When training a network, if InputWeights is nonempty, then trainNetwork uses the InputWeights property as the initial value. If InputWeights is empty, then trainNetwork uses the initializer specified by InputWeightsInitializer.
At training time, InputWeights is an 8*NumHiddenUnits-by-InputSize matrix.
Data Types: single | double
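For example, a minimal sketch of checking the expected size, assuming net is a trained network whose second layer is this BiLSTM layer, with 100 hidden units and 12 input features:

% Inspect the learned input weights (net is an assumed trained network).
W = net.Layers(2).InputWeights;
size(W)   % 800-by-12, that is, 8*NumHiddenUnits-by-InputSize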
RecurrentWeights — Recurrent weights
[] (default) | matrix
Recurrent weights, specified as a matrix.
The recurrent weight matrix is a concatenation of the eight recurrent weight matrices for the components (gates) in the bidirectional LSTM layer. The eight matrices are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The recurrent weights are learnable parameters. When training a network, if RecurrentWeights is nonempty, then trainNetwork uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then trainNetwork uses the initializer specified by RecurrentWeightsInitializer.
At training time, RecurrentWeights is an 8*NumHiddenUnits-by-NumHiddenUnits matrix.
Data Types: single | double
Bias — Layer biases
[] (default) | numeric vector
Layer biases, specified as a numeric vector.
The bias vector is a concatenation of the eight bias vectors for the components (gates) in the bidirectional LSTM layer. The eight vectors are concatenated vertically in the following order:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
The layer biases are learnable parameters. When you train a network, if Bias is nonempty, then trainNetwork uses the Bias property as the initial value. If Bias is empty, then trainNetwork uses the initializer specified by BiasInitializer.
At training time, Bias is an 8*NumHiddenUnits-by-1 numeric vector.
Data Types: single | double
Learning Rate and Regularization
InputWeightsLearnRateFactor — Learning rate factor for input weights
1 (default) | numeric scalar | 1-by-8 numeric vector
Learning rate factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate factor for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate factor for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
To control the value of the learning rate factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
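For example, a minimal sketch that freezes the forward input weights (entries 1 through 4) while training the backward input weights at the global rate:

layer = bilstmLayer(100);
% Zero learning rate factors freeze the forward gate matrices.
layer.InputWeightsLearnRateFactor = [0 0 0 0 1 1 1 1];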
RecurrentWeightsLearnRateFactor — Learning rate factor for recurrent weights
1 (default) | numeric scalar | 1-by-8 numeric vector
Learning rate factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.
To control the value of the learning rate factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
BiasLearnRateFactor — Learning rate factor for biases
1 (default) | nonnegative scalar | 1-by-8 numeric vector
Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
To control the value of the learning rate factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the learning rate factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
InputWeightsL2Factor — L2 regularization factor for input weights
1 (default) | numeric scalar | 1-by-8 numeric vector
L2 regularization factor for the input weights, specified as a numeric scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.
To control the value of the L2 regularization factor for the eight individual matrices in InputWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
RecurrentWeightsL2Factor — L2 regularization factor for recurrent weights
1 (default) | numeric scalar | 1-by-8 numeric vector
L2 regularization factor for the recurrent weights, specified as a numeric scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings specified with the trainingOptions function.
To control the value of the L2 regularization factor for the eight individual matrices in RecurrentWeights, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
BiasL2Factor — L2 regularization factor for biases
0 (default) | nonnegative scalar | 1-by-8 numeric vector
L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-8 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
To control the value of the L2 regularization factor for the eight individual vectors in Bias, assign a 1-by-8 vector, where the entries correspond to the L2 regularization factor of the following:
Input gate (Forward)
Forget gate (Forward)
Cell candidate (Forward)
Output gate (Forward)
Input gate (Backward)
Forget gate (Backward)
Cell candidate (Backward)
Output gate (Backward)
To specify the same value for all the matrices, specify a nonnegative scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Layer
Name — Layer name
'' (default) | character vector | string scalar
Layer name, specified as a character vector or a string scalar.
For Layer array input, the trainNetwork, assembleNetwork, layerGraph, and dlnetwork functions automatically assign names to layers with name ''.
Data Types: char | string
NumInputs — Number of inputs
1 | 3
This property is read-only.
Number of inputs of the layer.
If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
Data Types: double
InputNames — Input names
{'in'} | {'in','hidden','cell'}
This property is read-only.
Input names of the layer.
If the HasStateInputs property is 0 (false), then the layer has one input with name 'in', which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with names 'in', 'hidden', and 'cell', which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
NumOutputs — Number of outputs
1 | 3
This property is read-only.
Number of outputs of the layer.
If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
Data Types: double
OutputNames — Output names
{'out'} | {'out','hidden','cell'}
This property is read-only.
Output names of the layer.
If the HasStateOutputs property is 0 (false), then the layer has one output with name 'out', which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with names 'out', 'hidden', and 'cell', which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values computed during the layer operation.
Examples
Create Bidirectional LSTM Layer
Create a bidirectional LSTM layer with the name 'bilstm1' and 100 hidden units.

layer = bilstmLayer(100,'Name','bilstm1')

layer =
  BiLSTMLayer with properties:

                       Name: 'bilstm1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []

Show all properties
Include a bidirectional LSTM layer in a Layer array.
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;
layers = [ ...
sequenceInputLayer(inputSize)
bilstmLayer(numHiddenUnits)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer]
layers =
  5x1 Layer array with layers:

     1   ''   Sequence Input          Sequence input with 12 dimensions
     2   ''   BiLSTM                  BiLSTM with 100 hidden units
     3   ''   Fully Connected         9 fully connected layer
     4   ''   Softmax                 softmax
     5   ''   Classification Output   crossentropyex
Algorithms
Layer Input and Output Formats
Layers in a layer array or layer graph pass data specified as formatted dlarray objects. You can interact with these dlarray objects in automatic differentiation workflows, such as when developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
This table shows the supported input formats of BiLSTMLayer objects and the corresponding output format. If the output of the layer is passed to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable option set to false, then the layer receives an unformatted dlarray object with dimensions ordered corresponding to the formats outlined in this table.

[Table: Input Format | OutputMode ('sequence' or 'last') | Output Format]
In dlnetwork objects, BiLSTMLayer objects also support the following input and output format combinations.

[Table: Input Format | OutputMode ('sequence' or 'last') | Output Format]
To use these input formats in trainNetwork workflows, first convert the data to 'CBT' (channel, batch, time) format using flattenLayer.
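For example, a minimal sketch of placing flattenLayer before the bidirectional LSTM layer in an image sequence network (the input size is illustrative):

layers = [
    sequenceInputLayer([28 28 1])   % image sequences
    flattenLayer                    % convert to 'CBT' format
    bilstmLayer(100,'OutputMode','last')];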
If the HasStateInputs property is 1 (true), then the layer has two additional inputs with names 'hidden' and 'cell', which correspond to the hidden state and cell state, respectively. These additional inputs expect input format 'CB' (channel, batch).
If the HasStateOutputs property is 1 (true), then the layer has two additional outputs with names 'hidden' and 'cell', which correspond to the hidden state and cell state, respectively. These additional outputs have output format 'CB' (channel, batch).
References
[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010.
[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In Proceedings of the 2015 IEEE International Conference on Computer Vision, 1026–1034. Washington, DC: IEEE Computer Vision Society, 2015.
[3] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks." arXiv preprint arXiv:1312.6120 (2013).
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
When generating code with Intel® MKL-DNN:
The StateActivationFunction property must be set to 'tanh'.
The GateActivationFunction property must be set to 'sigmoid'.
The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Usage notes and limitations:
For GPU code generation, the StateActivationFunction property must be set to 'tanh'.
For GPU code generation, the GateActivationFunction property must be set to 'sigmoid'.
The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
Version History
Introduced in R2018a

R2019a: Default input weights initialization is Glorot
Behavior changed in R2019a
Starting in R2019a, the software, by default, initializes the layer input weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep networks.
In previous releases, the software, by default, initializes the layer input weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'InputWeightsInitializer' option of the layer to 'narrow-normal'.
R2019a: Default recurrent weights initialization is orthogonal
Behavior changed in R2019a
Starting in R2019a, the software, by default, initializes the layer recurrent weights of this layer with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution. This behavior helps stabilize training and usually reduces the training time of deep networks.
In previous releases, the software, by default, initializes the layer recurrent weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the 'RecurrentWeightsInitializer' option of the layer to 'narrow-normal'.
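For example, a minimal sketch that reproduces the pre-R2019a defaults for both weight initializers:

% Restore the pre-R2019a narrow-normal initialization behavior.
layer = bilstmLayer(100, ...
    'InputWeightsInitializer','narrow-normal', ...
    'RecurrentWeightsInitializer','narrow-normal');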
See Also
trainingOptions | trainNetwork | sequenceInputLayer | lstmLayer | gruLayer | convolution1dLayer | maxPooling1dLayer | averagePooling1dLayer | globalMaxPooling1dLayer | globalAveragePooling1dLayer
Topics
- Sequence Classification Using Deep Learning
- Sequence Classification Using 1-D Convolutions
- Time Series Forecasting Using Deep Learning
- Sequence-to-Sequence Classification Using Deep Learning
- Sequence-to-Sequence Regression Using Deep Learning
- Classify Videos Using Deep Learning
- Long Short-Term Memory Networks
- List of Deep Learning Layers
- Deep Learning Tips and Tricks