
TrainingOptionsSGDM

Training options for stochastic gradient descent with momentum

Description

Training options for stochastic gradient descent with momentum, including learning rate information, L2 regularization factor, and mini-batch size.

Creation

Create a TrainingOptionsSGDM object using trainingOptions and specifying 'sgdm' as the first input argument.

Properties


Plots and Display

Plots to display during neural network training, specified as one of the following:

  • 'none' — Do not display plots during training.

  • 'training-progress' — Plot training progress. The plot shows mini-batch loss and accuracy, validation loss and accuracy, and additional information on the training progress. The plot has a stop button in the top-right corner. Click the button to stop training and return the current state of the neural network. You can save the training plot as an image or PDF by clicking Export Training Plot. For more information on the training progress plot, see Monitor Deep Learning Training Progress.

Indicator to display training progress information in the command window, specified as 1 (true) or 0 (false).

The verbose output displays the following information:

Classification Neural Networks

  • Epoch: Epoch number. An epoch corresponds to a full pass of the data.

  • Iteration: Iteration number. An iteration corresponds to a mini-batch.

  • Time Elapsed: Time elapsed in hours, minutes, and seconds.

  • Mini-batch Accuracy: Classification accuracy on the mini-batch.

  • Validation Accuracy: Classification accuracy on the validation data. If you do not specify validation data, then the function does not display this field.

  • Mini-batch Loss: Loss on the mini-batch. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes.

  • Validation Loss: Loss on the validation data. If the output layer is a ClassificationOutputLayer object, then the loss is the cross entropy loss for multi-class classification problems with mutually exclusive classes. If you do not specify validation data, then the function does not display this field.

  • Base Learning Rate: Base learning rate. The software multiplies the learn rate factors of the layers by this value.

Regression Neural Networks

  • Epoch: Epoch number. An epoch corresponds to a full pass of the data.

  • Iteration: Iteration number. An iteration corresponds to a mini-batch.

  • Time Elapsed: Time elapsed in hours, minutes, and seconds.

  • Mini-batch RMSE: Root-mean-squared-error (RMSE) on the mini-batch.

  • Validation RMSE: RMSE on the validation data. If you do not specify validation data, then the software does not display this field.

  • Mini-batch Loss: Loss on the mini-batch. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error.

  • Validation Loss: Loss on the validation data. If the output layer is a RegressionOutputLayer object, then the loss is the half-mean-squared-error. If you do not specify validation data, then the software does not display this field.

  • Base Learning Rate: Base learning rate. The software multiplies the learn rate factors of the layers by this value.

When training stops, the verbose output displays the reason for stopping.

To specify validation data, use the ValidationData training option.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

Frequency of verbose printing, which is the number of iterations between printing to the command window, specified as a positive integer. This option only has an effect when the Verbose training option is 1 (true).

If you validate the neural network during training, then trainNetwork also prints to the command window every time validation occurs.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
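For example, the following is a minimal sketch of options that print the progress table every 25 iterations instead of opening the training progress plot; the specific values are illustrative, not recommendations:

options = trainingOptions("sgdm", ...
    Plots="none", ...           % do not open the training progress plot
    Verbose=true, ...           % print progress information to the command window
    VerboseFrequency=25);       % print every 25 iterations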

Mini-Batch Options

Maximum number of epochs to use for training, specified as a positive integer.

An iteration is one step taken in the gradient descent algorithm towards minimizing the loss function using a mini-batch. An epoch is the full pass of the training algorithm over the entire training set.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Size of the mini-batch to use for each training iteration, specified as a positive integer. A mini-batch is a subset of the training set that is used to evaluate the gradient of the loss function and update the weights.

If the mini-batch size does not evenly divide the number of training samples, then trainNetwork discards the training data that does not fit into the final complete mini-batch of each epoch.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Option for data shuffling, specified as one of the following:

  • 'once' — Shuffle the training and validation data once before training.

  • 'never' — Do not shuffle the data.

  • 'every-epoch' — Shuffle the training data before each training epoch, and shuffle the validation data before each neural network validation. If the mini-batch size does not evenly divide the number of training samples, then trainNetwork discards the training data that does not fit into the final complete mini-batch of each epoch. To avoid discarding the same data every epoch, set the Shuffle training option to 'every-epoch'.
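The following sketch combines these mini-batch options; the values are illustrative and assume the training set is large enough for a mini-batch size of 128:

options = trainingOptions("sgdm", ...
    MaxEpochs=30, ...           % at most 30 full passes through the training data
    MiniBatchSize=128, ...      % 128 observations per gradient step
    Shuffle="every-epoch");     % reshuffle so different observations are discarded each epoch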

Validation

Data to use for validation during training, specified as [], a datastore, a table, or a cell array containing the validation predictors and responses.

You can specify validation predictors and responses using the same formats supported by the trainNetwork function. You can specify the validation data as a datastore, table, or the cell array {predictors,responses}, where predictors contains the validation predictors and responses contains the validation responses.

For more information, see the images, sequences, and features input arguments of the trainNetwork function.

During training, trainNetwork calculates the validation accuracy and validation loss on the validation data. To specify the validation frequency, use the ValidationFrequency training option. You can also use the validation data to stop training automatically when the validation loss stops decreasing. To turn on automatic validation stopping, use the ValidationPatience training option.

If your neural network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation accuracy can be higher than the training (mini-batch) accuracy.

The validation data is shuffled according to the Shuffle training option. If Shuffle is 'every-epoch', then the validation data is shuffled before each neural network validation.

If ValidationData is [], then the software does not validate the neural network during training.

Frequency of neural network validation in number of iterations, specified as a positive integer.

The ValidationFrequency value is the number of iterations between evaluations of validation metrics. To specify validation data, use the ValidationData training option.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
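As a sketch, the following options validate every 30 iterations; XValidation and YValidation are assumed to exist in the workspace and to use one of the predictor and response formats that trainNetwork accepts:

options = trainingOptions("sgdm", ...
    ValidationData={XValidation,YValidation}, ...  % assumed validation predictors and responses
    ValidationFrequency=30);                       % evaluate validation metrics every 30 iterations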

Patience of validation stopping of neural network training, specified as a positive integer or Inf.

ValidationPatience specifies the number of times that the loss on the validation set can be larger than or equal to the previously smallest loss before neural network training stops. If ValidationPatience is Inf, then the values of the validation loss do not cause training to stop early.

The returned neural network depends on the OutputNetwork training option. To return the neural network with the lowest validation loss, set the OutputNetwork training option to "best-validation-loss".

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Neural network to return when training completes, specified as one of the following:

  • 'last-iteration' – Return the neural network corresponding to the last training iteration.

  • 'best-validation-loss' – Return the neural network corresponding to the training iteration with the lowest validation loss. To use this option, you must specify the ValidationData training option.
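For example, a sketch of options that stop training after five validation evaluations without improvement and return the best network seen so far; XValidation and YValidation are assumed validation arrays:

options = trainingOptions("sgdm", ...
    ValidationData={XValidation,YValidation}, ...
    ValidationPatience=5, ...                      % stop after 5 validations without improvement
    OutputNetwork="best-validation-loss");         % return the network with the lowest validation loss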

Solver Options

Initial learning rate used for training, specified as a positive scalar.

If the learning rate is too low, then training can take a long time. If the learning rate is too high, then training might reach a suboptimal result or diverge.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

This property is read-only.

Settings for the learning rate schedule, specified as a structure. LearnRateScheduleSettings has the field Method, which specifies the type of method for adjusting the learning rate. The possible methods are:

  • 'none' — The learning rate is constant throughout training.

  • 'piecewise' — The learning rate drops periodically during training.

If Method is 'piecewise', then LearnRateScheduleSettings contains two more fields:

  • DropRateFactor — The multiplicative factor by which the learning rate drops during training

  • DropPeriod — The number of epochs that passes between adjustments to the learning rate during training

Specify the settings for the learning rate schedule using trainingOptions.

Data Types: struct
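As a sketch, specifying a piecewise schedule in trainingOptions populates this structure, which you can then inspect on the returned options object; the field list noted below assumes the 'piecewise' method:

options = trainingOptions("sgdm", ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.5, ...
    LearnRateDropPeriod=10);

options.LearnRateScheduleSettings   % expected fields: Method, DropRateFactor, DropPeriod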

Factor for L2 regularization (weight decay), specified as a nonnegative scalar. For more information, see L2 Regularization.

You can specify a multiplier for the L2 regularization for neural network layers with learnable parameters. For more information, see Set Up Parameters in Convolutional and Fully Connected Layers.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Contribution of the parameter update step of the previous iteration to the current iteration of stochastic gradient descent with momentum, specified as a scalar from 0 to 1.

A value of 0 means no contribution from the previous step, whereas a value of 1 means maximal contribution from the previous step. The default value works well for most tasks.

For more information, see Stochastic Gradient Descent with Momentum.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
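The following sketch sets the main solver options together; the values are illustrative starting points rather than recommendations:

options = trainingOptions("sgdm", ...
    InitialLearnRate=0.01, ...   % base learning rate
    Momentum=0.95, ...           % heavier contribution from the previous update step
    L2Regularization=1e-4);      % weight decay factor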

Option to reset input layer normalization, specified as one of the following:

  • 1 (true) — Reset the input layer normalization statistics and recalculate them at training time.

  • 0 (false) — Calculate normalization statistics at training time when they are empty.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical

Mode to evaluate the statistics in batch normalization layers, specified as one of the following:

  • 'population' – Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance.

  • 'moving' – Approximate the statistics during training using a running estimate given by update steps

    \[
    \mu^{\ast} = \lambda_{\mu}\,\hat{\mu} + (1 - \lambda_{\mu})\,\mu, \qquad
    {\sigma^{2}}^{\ast} = \lambda_{\sigma^{2}}\,\hat{\sigma}^{2} + (1 - \lambda_{\sigma^{2}})\,\sigma^{2}
    \]

    where μ* and σ²* denote the updated mean and variance, respectively, λ_μ and λ_σ² denote the mean and variance decay values, respectively, μ̂ and σ̂² denote the mean and variance of the layer input, respectively, and μ and σ² denote the latest values of the moving mean and variance, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.
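As a sketch, the following options use running estimates for the batch normalization statistics and reuse input layer normalization statistics that are already populated:

options = trainingOptions("sgdm", ...
    BatchNormalizationStatistics="moving", ...  % running estimates updated during training
    ResetInputNormalization=false);             % recalculate input statistics only when they are empty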

Gradient Clipping

Gradient threshold, specified as Inf or a positive scalar. If the gradient exceeds the value of GradientThreshold, then the gradient is clipped according to the GradientThresholdMethod training option.

For more information, see Gradient Clipping.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Gradient threshold method used to clip gradient values that exceed the gradient threshold, specified as one of the following:

  • 'l2norm' — If the L2 norm of the gradient of a learnable parameter is larger than GradientThreshold, then scale the gradient so that the L2 norm equals GradientThreshold.

  • 'global-l2norm' — If the global L2 norm, L, is larger than GradientThreshold, then scale all gradients by a factor of GradientThreshold/L. The global L2 norm considers all learnable parameters.

  • 'absolute-value' — If the absolute value of an individual partial derivative in the gradient of a learnable parameter is larger than GradientThreshold, then scale the partial derivative to have magnitude equal to GradientThreshold and retain the sign of the partial derivative.

For more information, see Gradient Clipping.
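For example, a minimal sketch of norm-based gradient clipping; the threshold value of 1 is illustrative:

options = trainingOptions("sgdm", ...
    GradientThreshold=1, ...              % clip gradients whose norm exceeds 1
    GradientThresholdMethod="l2norm");    % rescale each learnable parameter's gradient so its L2 norm does not exceed the threshold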

Sequence Options

Option to pad, truncate, or split input sequences, specified as one of the following:

  • "longest" — Pad sequences in each mini-batch to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network.

  • "shortest" — Truncate sequences in each mini-batch to have the same length as the shortest sequence. This option ensures that no padding is added, at the cost of discarding data.

  • Positive integer — For each mini-batch, pad the sequences to the length of the longest sequence in the mini-batch, and then split the sequences into smaller sequences of the specified length. If splitting occurs, then the software creates extra mini-batches. If the specified sequence length does not evenly divide the sequence lengths of the data, then the mini-batches containing the ends of those sequences have a length shorter than the specified sequence length. Use this option if the full sequences do not fit in memory. Alternatively, try reducing the number of sequences per mini-batch by setting the MiniBatchSize option to a lower value.

To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

Direction of padding or truncation, specified as one of the following:

  • "right" — Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of the sequences.

  • "left" — Pad or truncate sequences on the left. The software truncates or adds padding to the start of the sequences so that the sequences end at the same time step.

Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is 'last', any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection option to "left".

For sequence-to-sequence neural networks (when the OutputMode property is 'sequence' for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".

To learn more about the effect of padding, truncating, and splitting the input sequences, see Sequence Padding, Truncation, and Splitting.

Value by which to pad input sequences, specified as a scalar.

The option is valid only when SequenceLength is "longest" or a positive integer. Do not pad sequences with NaN, because doing so can propagate errors throughout the neural network.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
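As a sketch, the following options pad on the left with zeros so that the final time steps of each sequence contain real data, which suits recurrent layers whose OutputMode is 'last':

options = trainingOptions("sgdm", ...
    SequenceLength="longest", ...          % pad each mini-batch to its longest sequence
    SequencePaddingDirection="left", ...   % pad at the start of the sequences
    SequencePaddingValue=0);               % pad with zeros, not NaN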

Hardware Options

Hardware resource for training neural network, specified as one of the following:

  • 'auto' — Use a GPU if one is available. Otherwise, use the CPU.

  • 'cpu' — Use the CPU.

  • 'gpu' — Use the GPU.

  • 'multi-gpu' — Use multiple GPUs on one machine, using a local parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts a parallel pool with pool size equal to the number of available GPUs.

  • 'parallel' — Use a local or remote parallel pool based on your default cluster profile. If there is no current parallel pool, the software starts one using the default cluster profile. If the pool has access to GPUs, then only workers with a unique GPU perform training computation. If the pool does not have GPUs, then training takes place on all available CPU workers instead.

For more information on when to use the different execution environments, see Scale Up Deep Learning in Parallel, on GPUs, and in the Cloud.

The 'gpu', 'multi-gpu', and 'parallel' options require Parallel Computing Toolbox™. To use a GPU for deep learning, you must also have a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If you choose one of these options and Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error.

To see an improvement in performance when training in parallel, try scaling up the MiniBatchSize and InitialLearnRate training options by the number of GPUs.

The 'multi-gpu' and 'parallel' options do not support neural networks containing custom layers with state parameters or built-in layers that are stateful at training time.
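A sketch of multi-GPU training options; the assumption of two GPUs, and the corresponding scaling of the mini-batch size and learning rate, is illustrative:

options = trainingOptions("sgdm", ...
    ExecutionEnvironment="multi-gpu", ...  % requires Parallel Computing Toolbox and supported GPUs
    MiniBatchSize=64*2, ...                % scale the mini-batch size by the assumed number of GPUs
    InitialLearnRate=0.01*2);              % scale the learning rate accordingly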

Parallel worker load division between GPUs or CPUs, specified as one of the following:

  • Scalar from 0 to 1 — Fraction of workers on each machine to use for neural network training computation. If you train the neural network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.

  • Positive integer — Number of workers on each machine to use for neural network training computation. If you train the neural network using data in a mini-batch datastore with background dispatch enabled, then the remaining workers fetch and preprocess data in the background.

  • Numeric vector — Neural network training load for each worker in the parallel pool. For a vector W, worker i gets a fraction W(i)/sum(W) of the work (number of examples per mini-batch). If you train a neural network using data in a mini-batch datastore with background dispatch enabled, then you can assign a worker load of 0 to use that worker for fetching data in the background. The specified vector must contain one value per worker in the parallel pool.

If the parallel pool has access to GPUs, then workers without a unique GPU are never used for training computation. The default for pools with GPUs is to use all workers with a unique GPU for training computation, and the remaining workers for background dispatch. If the pool does not have access to GPUs and CPUs are used for training, then the default is to use one worker per machine for background data dispatch.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Flag to enable background dispatch (asynchronous prefetch queuing) to read training data from datastores, specified as 0 (false) or 1 (true). Background dispatch requires Parallel Computing Toolbox.

DispatchInBackground is only supported for datastores that are partitionable. For more information, see Use Datastore for Parallel Training and Background Dispatching.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
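As a sketch, assuming a parallel pool with four workers and a partitionable datastore, the following options reserve one worker (load 0) for background dispatch:

options = trainingOptions("sgdm", ...
    ExecutionEnvironment="parallel", ...
    WorkerLoad=[1 1 1 0], ...       % three workers train; the fourth fetches data in the background
    DispatchInBackground=true);     % prefetch mini-batches asynchronously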

Checkpoints

Path for saving the checkpoint neural networks, specified as a character vector or string scalar.

  • If you do not specify a path (that is, you use the default ""), then the software does not save any checkpoint neural networks.

  • If you specify a path, then trainNetwork saves checkpoint neural networks to this path and assigns a unique name to each neural network. You can then load any checkpoint neural network and resume training from that neural network.

    If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint neural networks. If the path you specify does not exist, then trainingOptions returns an error.

The CheckpointFrequency and CheckpointFrequencyUnit options specify the frequency of saving checkpoint neural networks.

For more information about saving neural network checkpoints, see Save Checkpoint Networks and Resume Training.

Data Types: char | string

Frequency of saving checkpoint neural networks, specified as a positive integer.

If CheckpointFrequencyUnit is 'epoch', then the software saves checkpoint neural networks every CheckpointFrequency epochs.

If CheckpointFrequencyUnit is 'iteration', then the software saves checkpoint neural networks every CheckpointFrequency iterations.

This option only has an effect when CheckpointPath is nonempty.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Checkpoint frequency unit, specified as 'epoch' or 'iteration'.

If CheckpointFrequencyUnit is 'epoch', then the software saves checkpoint neural networks every CheckpointFrequency epochs.

If CheckpointFrequencyUnit is 'iteration', then the software saves checkpoint neural networks every CheckpointFrequency iterations.

This option only has an effect when CheckpointPath is nonempty.
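For example, a sketch that saves a checkpoint every five epochs; the folder name "checkpoints" is an assumption and is created first because the path must already exist:

if ~isfolder("checkpoints")
    mkdir("checkpoints")
end

options = trainingOptions("sgdm", ...
    CheckpointPath="checkpoints", ...
    CheckpointFrequency=5, ...
    CheckpointFrequencyUnit="epoch");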

Output functions to call during training, specified as a function handle or cell array of function handles. trainNetwork calls the specified functions once before the start of training, after each iteration, and once after training has finished. trainNetwork passes a structure containing information in the following fields:

  • Epoch: Current epoch number

  • Iteration: Current iteration number

  • TimeSinceStart: Time in seconds since the start of training

  • TrainingLoss: Current mini-batch loss

  • ValidationLoss: Loss on the validation data

  • BaseLearnRate: Current base learning rate

  • TrainingAccuracy: Accuracy on the current mini-batch (classification neural networks)

  • TrainingRMSE: RMSE on the current mini-batch (regression neural networks)

  • ValidationAccuracy: Accuracy on the validation data (classification neural networks)

  • ValidationRMSE: RMSE on the validation data (regression neural networks)

  • State: Current training state, with a possible value of "start", "iteration", or "done"

If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.

You can use output functions to display or plot progress information, or to stop training. To stop training early, make your output function return 1 (true). If any output function returns 1 (true), then training finishes and trainNetwork returns the latest neural network. For an example showing how to use output functions, see Customize Output During Deep Learning Network Training.

Data Types: function_handle | cell
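The following sketch, saved as stopAtTargetAccuracy.m (a hypothetical file name), stops training once the validation accuracy reaches a target value; it checks for an empty field because validation metrics are computed only on validation iterations:

function stop = stopAtTargetAccuracy(info,target)
% Return true to stop training when the validation accuracy reaches target.
stop = info.State == "iteration" && ...
    ~isempty(info.ValidationAccuracy) && ...
    info.ValidationAccuracy >= target;
end

Pass the function handle, with the target bound, through the OutputFcn option:

options = trainingOptions("sgdm", ...
    ValidationData={XValidation,YValidation}, ...        % assumed validation arrays
    OutputFcn=@(info) stopAtTargetAccuracy(info,95));    % stop at 95% validation accuracy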

Examples


Create a set of options for training a network using stochastic gradient descent with momentum. Reduce the learning rate by a factor of 0.2 every 5 epochs. Set the maximum number of epochs for training to 20, and use a mini-batch with 64 observations at each iteration. Turn on the training progress plot.

options = trainingOptions("sgdm", ...
    LearnRateSchedule="piecewise", ...
    LearnRateDropFactor=0.2, ...
    LearnRateDropPeriod=5, ...
    MaxEpochs=20, ...
    MiniBatchSize=64, ...
    Plots="training-progress")
options = 
  TrainingOptionsSGDM with properties:

                        Momentum: 0.9000
                InitialLearnRate: 0.0100
               LearnRateSchedule: 'piecewise'
             LearnRateDropFactor: 0.2000
             LearnRateDropPeriod: 5
                L2Regularization: 1.0000e-04
         GradientThresholdMethod: 'l2norm'
               GradientThreshold: Inf
                       MaxEpochs: 20
                   MiniBatchSize: 64
                         Verbose: 1
                VerboseFrequency: 50
                  ValidationData: []
             ValidationFrequency: 50
              ValidationPatience: Inf
                         Shuffle: 'once'
                  CheckpointPath: ''
             CheckpointFrequency: 1
         CheckpointFrequencyUnit: 'epoch'
            ExecutionEnvironment: 'auto'
                      WorkerLoad: []
                       OutputFcn: []
                           Plots: 'training-progress'
                  SequenceLength: 'longest'
            SequencePaddingValue: 0
        SequencePaddingDirection: 'right'
            DispatchInBackground: 0
         ResetInputNormalization: 1
    BatchNormalizationStatistics: 'population'
                   OutputNetwork: 'last-iteration'

Algorithms


Version History

Introduced in R2016a
