
network

(To be removed) Create custom shallow neural network

    network will be removed in a future release. For more information, see Transition Legacy Neural Network Code to dlnetwork Workflows.

    For advice on updating your code, see Version History.

    Description

    network creates new custom networks. It is used to create networks that are then customized by functions such as feedforwardnet and narxnet.

    Creation

    Description

    net = network returns a new neural network with no inputs, layers or outputs.


    net = network(numInputs,numLayers,biasConnect,inputConnect,layerConnect,outputConnect) specifies additional neural network options.

    Input Arguments


    Number of inputs, specified as a positive integer.

    Number of layers, specified as a positive integer.

    Bias connections, specified as a numLayers-by-1 logical vector.

    Input connections, specified as a numLayers-by-numInputs logical matrix.

    Layer connections, specified as a numLayers-by-numLayers logical matrix.

    Output connections, specified as a 1-by-numLayers logical vector.
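    Taken together, the connection arguments can be sketched as follows; the particular connection pattern is illustrative, not prescribed:

```matlab
% Sketch: one input, two layers; a bias on layer 1 only; the input
% feeds layer 1; layer 1 feeds layer 2; the output is taken from
% layer 2.
net = network(1, 2, ...   % numInputs, numLayers
    [1; 0], ...           % biasConnect:   layer 1 has a bias
    [1; 0], ...           % inputConnect:  input 1 -> layer 1
    [0 0; 1 0], ...       % layerConnect:  layer 1 -> layer 2
    [0 1]);               % outputConnect: output from layer 2
view(net)                 % display the network diagram
```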

    Properties


    General

    Network name, specified as a string. Network creation functions, such as feedforwardnet, define this appropriately. But it can be set to any string as desired.

    User data, specified as a structure.

    Place for users to add custom information to a network object. Only one field is predefined. It contains a secret message to all Deep Learning Toolbox™ users:

    net.userdata.note
    

    Architecture

    Number of inputs, specified as a nonnegative integer.

    The number of network inputs and the size of a network input are not the same thing. The number of inputs defines how many sets of vectors the network receives as input. The size of each input (i.e., the number of elements in each input vector) is determined by the input size (net.inputs{i}.size).

    Most networks have only one input, whose size is determined by the problem.

    Any change to this property results in a change in the size of the matrix defining connections to layers from inputs (net.inputConnect) and in the size of the cell array of input subobjects (net.inputs).

    Number of layers, specified as a nonnegative integer.

    Any change to this property changes the size of each of these Boolean matrices that define connections to and from layers:

    net.biasConnect
    net.inputConnect
    net.layerConnect
    net.outputConnect
    

    and changes the size of each cell array of subobject structures whose size depends on the number of layers:

    net.layers
    net.biases
    net.inputWeights
    net.layerWeights
    net.outputs
    

    and also changes the size of each of the network's adjustable parameters:

    net.IW
    net.LW
    net.b
    

    Bias connections, specified as a numLayers-by-1 logical vector.

    If net.biasConnect(i) is 1, then layer i has a bias, and net.biases{i} is a structure describing that bias.

    Any change to this property alters the presence or absence of structures in the cell array of biases (net.biases) and the presence or absence of vectors in the cell array of bias vectors (net.b).
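    For example, the following sketch connects a bias to layer 2 of an existing network net:

```matlab
% Sketch: give layer 2 a bias. net.biases{2} becomes a structure
% describing the bias, and net.b{2} becomes a bias vector whose
% length equals net.layers{2}.size.
net.biasConnect(2) = 1;
```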

    Input connections, specified as a numLayers-by-numInputs logical matrix.

    If net.inputConnect(i,j) is 1, then layer i has a weight coming from input j, and net.inputWeights{i,j} is a structure describing that weight.

    Any change to this property alters the presence or absence of structures in the cell array of input weight subobjects (net.inputWeights) and the presence or absence of matrices in the cell array of input weight matrices (net.IW).

    Layer connections, specified as a numLayers-by-numLayers logical matrix.

    If net.layerConnect(i,j) is 1, then layer i has a weight coming from layer j, and net.layerWeights{i,j} is a structure describing that weight.

    Any change to this property alters the presence or absence of structures in the cell array of layer weight subobjects (net.layerWeights) and the presence or absence of matrices in the cell array of layer weight matrices (net.LW).

    Output connections, specified as a 1-by-numLayers logical vector.

    If net.outputConnect(i) is 1, then the network has an output from layer i, and net.outputs{i} is a structure describing that output.

    Any change to this property alters the number of network outputs (net.numOutputs) and the presence or absence of structures in the cell array of output subobjects (net.outputs).

    This property is read-only.

    Number of network outputs according to net.outputConnect, specified as a nonnegative integer.

    This property is read-only.

    Maximum layer delay according to all net.layerWeights{i,j}.delays, specified as a nonnegative integer.

    This property indicates the number of time steps of past layer outputs that must be supplied to simulate the network. It is always set to the maximum delay value associated with any of the network's layer weights:

    numLayerDelays = 0;
    for i=1:net.numLayers
      for j=1:net.numLayers
        if net.layerConnect(i,j)
          numLayerDelays = max( ...
           [numLayerDelays net.layerWeights{i,j}.delays]);
        end
      end
    end
    

    This property is read-only.

    Number of weight and bias values in the network, specified as a nonnegative integer. It is the sum of the number of elements in the matrices stored in these cell arrays:

    net.IW
    net.LW
    net.b
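    As a sketch, the count can be reproduced by summing the elements of the weight and bias cell arrays (the layer weights net.LW contribute alongside net.IW and net.b):

```matlab
% Sketch: total adjustable parameters across all weights and biases.
% Empty cells ([]) contribute zero elements.
n = sum(cellfun(@numel, net.IW(:))) ...
  + sum(cellfun(@numel, net.LW(:))) ...
  + sum(cellfun(@numel, net.b(:)));
% n equals net.numWeightElements
```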
    

    Subobject Structure

    Inputs, specified as a numInputs-by-1 cell array.

    net.inputs{i} is a structure defining input i.

    If a neural network has only one input, then you can access net.inputs{1} without the cell array notation as follows:

    net.input

    The input objects have these properties:

    PropertyDescription
    net.inputs{i}.name

    String defining the input name. Network creation functions, such as feedforwardnet, define this appropriately. But it can be set to any string as desired.

    net.inputs{i}.feedbackInput (read only)

    If this input is associated with an open-loop feedback output, then this property indicates the index of that output. Otherwise it is an empty matrix.

    net.inputs{i}.processFcns

    Row cell array of processing function names to be used by ith network input. The processing functions are applied to input values before the network uses them.

    Whenever this property is altered, the input processParams are set to the default values of the given processing functions, and processSettings, processedSize, and processedRange are defined by applying the process functions and parameters to exampleInput.

    For a list of processing functions, type help nnprocess.

    net.inputs{i}.processParams

    Row cell array of processing function parameters to be used by ith network input. The processing parameters are applied by the processing functions to input values before the network uses them.

    Whenever this property is altered, the input processSettings, processedSize, and processedRange are defined by applying the process functions and parameters to exampleInput.

    net.inputs{i}.processSettings (read only)

    Row cell array of processing function settings to be used by ith network input. The processing settings are found by applying the processing functions and parameters to exampleInput and then used to provide consistent results to new input values before the network uses them.

    net.inputs{i}.processedRange (read only)

    Range of exampleInput values after they have been processed with processFcns and processParams.

    net.inputs{i}.processedSize (read only)

    Number of rows in the exampleInput values after they have been processed with processFcns and processParams.

    net.inputs{i}.range

    Range of each element of the ith network input.

    It can be set to any Ri × 2 matrix, where Ri is the number of elements in the input (net.inputs{i}.size), and each element in column 1 is less than the element next to it in column 2.

    Each jth row defines the minimum and maximum values of the jth input element, in that order:

    net.inputs{i}.range(j,:)
    

    Some initialization functions use input ranges to find appropriate initial values for input weight matrices.

    Whenever the number of rows in this property is altered, the input size, processedSize, and processedRange change to remain consistent. The sizes of any weights coming from this input and the dimensions of the weight matrices also change.
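    For example, this sketch declares a two-element input with per-element ranges:

```matlab
% Sketch: element 1 ranges over [-1, 1], element 2 over [0, 10].
% Because the matrix has two rows, net.inputs{1}.size becomes 2,
% and any weights from this input resize accordingly.
net.inputs{1}.range = [-1 1; 0 10];
```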

    net.inputs{i}.size

    Number of elements in the ith network input. It can be set to 0 or a positive integer.

    Whenever this property is altered, the input range, processedRange, and processedSize are updated. Any associated input weights change size accordingly.

    net.inputs{i}.userdata

    Place for users to add custom information to the ith network input.

    Layers, specified as a numLayers-by-1 cell array.

    net.layers{i} is a structure defining layer i.

    The layer objects have these properties:

    PropertyDescription
    net.layers{i}.name

    String defining the layer name. Network creation functions, such as feedforwardnet, define this appropriately. But it can be set to any string as desired.

    net.layers{i}.dimensions

    Physical dimensions of the ith layer's neurons. Being able to arrange a layer's neurons in a multidimensional manner is important for self-organizing maps.

    It can be set to any row vector of 0 or positive integer elements, where the product of all the elements becomes the number of neurons in the layer (net.layers{i}.size).

    Layer dimensions are used to calculate the neuron positions within the layer (net.layers{i}.positions) using the layer's topology function (net.layers{i}.topologyFcn).

    Whenever this property is altered, the layer's size (net.layers{i}.size) changes to remain consistent. The layer's neuron positions (net.layers{i}.positions) and the distances between the neurons (net.layers{i}.distances) are also updated.
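    For example, this sketch arranges the first layer's neurons on a grid, as a self-organizing map would:

```matlab
% Sketch: a 4-by-5 grid of neurons. The product of the dimensions
% becomes the layer's size, so net.layers{1}.size is then 20.
net.layers{1}.dimensions = [4 5];
```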

    net.layers{i}.distanceFcn

    Which of the distance functions is used to calculate distances between neurons in the ith layer from the neuron positions. Neuron distances are used by self-organizing maps. It can be set to the name of any distance function.

    For a list of functions, type help nndistance.

    Whenever this property is altered, the distances between the layer's neurons (net.layers{i}.distances) are updated.

    net.layers{i}.distances (read only)

    Distances between neurons in the ith layer. These distances are used by self-organizing maps:

    It is always set to the result of applying the layer's distance function (net.layers{i}.distanceFcn) to the positions of the layer's neurons (net.layers{i}.positions).

    net.layers{i}.initFcn

    Which of the layer initialization functions is used to initialize the ith layer when the network initialization function (net.initFcn) is initlay. In that case, the function indicated by this property is used to initialize the layer's weights and biases.

    net.layers{i}.netInputFcn

    Which of the net input functions is used to calculate the ith layer's net input, given the layer's weighted inputs and bias during simulating and training.

    For a list of functions, type help nnnetinput.

    net.layers{i}.netInputParam

    Parameters of the layer's net input function. Call help on the current net input function to get a description of each field:

    help(net.layers{i}.netInputFcn)
    
    net.layers{i}.positions (read only)

    Positions of neurons in the ith layer. These positions are used by self-organizing maps.

    It is always set to the result of applying the layer's topology function (net.layers{i}.topologyFcn) to the layer's dimensions (net.layers{i}.dimensions).

    Use plotsom to plot the positions of a layer's neurons.

    For instance, if the first-layer neurons of a network are arranged with dimensions (net.layers{1}.dimensions) of [4 5], and the topology function (net.layers{1}.topologyFcn) is hextop, the neurons' positions can be plotted as follows:

    plotsom(net.layers{1}.positions)
    

    Self-organizing map showing neuron positions. The function arranges the neurons in a hexagonal pattern.

    net.layers{i}.range (read only)

    Output range of each neuron of the ith layer.

    It is set to an Si × 2 matrix, where Si is the number of neurons in the layer (net.layers{i}.size), and each element in column 1 is less than the element next to it in column 2.

    Each jth row defines the minimum and maximum output values of the layer's transfer function net.layers{i}.transferFcn.

    net.layers{i}.size

    Number of neurons in the ith layer. It can be set to 0 or a positive integer.

    Whenever this property is altered, the sizes of any input weights going to the layer (net.inputWeights{i,:}.size), any layer weights going to the layer (net.layerWeights{i,:}.size) or coming from the layer (net.layerWeights{:,i}.size), and the layer's bias (net.biases{i}.size) change.

    The dimensions of the corresponding weight matrices (net.IW{i,:}, net.LW{i,:}, net.LW{:,i}), and biases (net.b{i}) also change.

    Changing this property also changes the size of the layer's output (net.outputs{i}.size) and target (net.targets{i}.size) if they exist.

    Finally, when this property is altered, the dimensions of the layer's neurons (net.layers{i}.dimensions) are set to the same value. (This results in a one-dimensional arrangement of neurons. If another arrangement is required, set the dimensions property directly instead of using size.)
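    For example, this sketch resizes the first layer:

```matlab
% Sketch: resize layer 1 to 10 neurons. Connected weight matrices and
% the layer's bias resize to match, and net.layers{1}.dimensions is
% set to 10 (a one-dimensional arrangement).
net.layers{1}.size = 10;
```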

    net.layers{i}.topologyFcn

    Which of the topology functions are used to calculate the ith layer's neuron positions (net.layers{i}.positions) from the layer's dimensions (net.layers{i}.dimensions).

    For a list of functions, type help nntopology.

    Whenever this property is altered, the positions of the layer's neurons (net.layers{i}.positions) are updated.

    Use plotsom to plot the positions of the layer neurons. For instance, if the first-layer neurons of a network are arranged with dimensions (net.layers{1}.dimensions) of [8 10] and the topology function (net.layers{1}.topologyFcn) is randtop, the neuron positions are arranged to resemble the following plot:

    plotsom(net.layers{1}.positions)
    

    Self-organizing map showing neuron positions

    net.layers{i}.transferFcn

    Which of the transfer functions is used to calculate the ith layer's output, given the layer's net input, during simulation and training.

    For a list of functions, type help nntransfer.

    net.layers{i}.transferParam

    Parameters of the layer's transfer function. Call help on the current transfer function to get a description of what each field means:

    help(net.layers{i}.transferFcn)
    
    net.layers{i}.userdata

    This property provides a place for users to add custom information to the ith network layer.

    Biases, specified as a numLayers-by-1 cell array.

    If net.biasConnect(i) is 1, then net.biases{i} is a structure defining the bias for layer i.

    The biases objects have these properties:

    PropertyDescription
    net.biases{i}.initFcn

    Weight and bias initialization functions used to set the ith layer's bias vector (net.b{i}) if the network initialization function is initlay and the ith layer's initialization function is initwb.

    net.biases{i}.learn

    Whether the ith bias vector is to be altered during training and adaption. It can be set to 0 or 1.

    It enables or disables the bias's learning during calls to adapt and train.

    net.biases{i}.learnFcn

    Which of the learning functions is used to update the ith layer's bias vector (net.b{i}) during training, if the network training function is trainb, trainc, or trainr, or during adaption, if the network adapt function is trains.

    For a list of functions, type help nnlearn.

    Whenever this property is altered, the biases learning parameters (net.biases{i}.learnParam) are set to contain the fields and default values of the new function.

    net.biases{i}.learnParam

    Learning parameters and values for the current learning function of the ith layer's bias. The fields of this property depend on the current learning function. Call help on the current learning function to get a description of what each field means.

    net.biases{i}.size (read only)

    Size of the ith layer's bias vector. It is always set to the size of the ith layer (net.layers{i}.size).

    net.biases{i}.userdata

    This property provides a place for users to add custom information to the ith layer's bias.

    Input weights, specified as a numLayers-by-numInputs cell array.

    If net.inputConnect(i,j) is 1, then net.inputWeights{i,j} is a structure defining the weight to layer i from input j.

    The input weights objects have these properties:

    PropertyDescription
    net.inputWeights{i,j}.delays

    Tapped delay line between the jth input and its weight to the ith layer. It must be set to a row vector of increasing values. The elements must be either 0 or positive integers.

    Whenever this property is altered, the weight's size (net.inputWeights{i,j}.size) and the dimensions of its weight matrix (net.IW{i,j}) are updated.
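    For example, this sketch weights the current input together with the inputs from one and two time steps ago:

```matlab
% Sketch: a tapped delay line over delays 0, 1, and 2. If input 1 has
% R elements, net.IW{1,1} then has 3*R columns.
net.inputWeights{1,1}.delays = [0 1 2];
```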

    net.inputWeights{i,j}.initFcn

    Which of the Weight and Bias Initialization Functions is used to initialize the weight matrix (net.IW{i,j}) going to the ith layer from the jth input, if the network initialization function is initlay, and the ith layer's initialization function is initwb. This function can be set to the name of any weight initialization function.

    net.inputWeights{i,j}.initSettings (read only)

    Values useful for initializing the weight as part of the configuration process that occurs automatically the first time a network is trained, or when the function configure is called on a network directly.

    net.inputWeights{i,j}.learn

    Whether the weight matrix to the ith layer from the jth input is to be altered during training and adaption. It can be set to 0 or 1.

    net.inputWeights{i,j}.learnFcn

    Which of the learning functions is used to update the weight matrix (net.IW{i,j}) going to the ith layer from the jth input during training, if the network training function is trainb, trainc, or trainr, or during adaption, if the network adapt function is trains. It can be set to the name of any weight learning function.

    For a list of functions, type help nnlearn.

    net.inputWeights{i,j}.learnParam

    Learning parameters and values for the current learning function of the ith layer's weight coming from the jth input.

    The fields of this property depend on the current learning function (net.inputWeights{i,j}.learnFcn). Call help on the current learning function to get a description of what each field means.

    net.inputWeights{i,j}.size (read only)

    Dimensions of the ith layer's weight matrix from the jth network input. It is always set to a two-element row vector indicating the number of rows and columns of the associated weight matrix (net.IW{i,j}). The first element is equal to the size of the ith layer (net.layers{i}.size). The second element is equal to the product of the length of the weight's delay vectors and the size of the jth input:

    length(net.inputWeights{i,j}.delays) * net.inputs{j}.size
    
    net.inputWeights{i,j}.userdata

    Place for users to add custom information to the (i,j)th input weight.

    net.inputWeights{i,j}.weightFcn

    Which of the weight functions is used to apply the ith layer's weight from the jth input to that input. It can be set to the name of any weight function. The weight function is used to transform layer inputs during simulation and training.

    For a list of functions, type help nnweight.

    net.inputWeights{i,j}.weightParam

    Parameters of the current weight function (net.inputWeights{i,j}.weightFcn). Call help on the current weight function to get a description of each field.

    Layer weights, specified as a numLayers-by-numLayers cell array.

    If net.layerConnect(i,j) is 1, then net.layerWeights{i,j} is a structure defining the weight to layer i from layer j.

    The layer weights objects have these properties:

    PropertyDescription
    net.layerWeights{i,j}.delays

    Tapped delay line between the jth layer and its weight to the ith layer. It must be set to a row vector of increasing values. The elements must be either 0 or positive integers.

    net.layerWeights{i,j}.initFcn

    Which of the weight and bias initialization functions is used to initialize the weight matrix (net.LW{i,j}) going to the ith layer from the jth layer, if the network initialization function is initlay, and the ith layer's initialization function is initwb. This function can be set to the name of any weight initialization function.

    net.layerWeights{i,j}.initSettings (read only)

    Values useful for initializing the weight as part of the configuration process that occurs automatically the first time a network is trained, or when the function configure is called on a network directly.

    net.layerWeights{i,j}.learn

    Whether the weight matrix to the ith layer from the jth layer is to be altered during training and adaption. It can be set to 0 or 1.

    net.layerWeights{i,j}.learnFcn

    Which of the learning functions is used to update the weight matrix (net.LW{i,j}) going to the ith layer from the jth layer during training, if the network training function is trainb, trainc, or trainr, or during adaption, if the network adapt function is trains. It can be set to the name of any weight learning function.

    For a list of functions, type help nnlearn.

    net.layerWeights{i,j}.learnParam

    Learning parameter fields and values for the current learning function of the ith layer's weight coming from the jth layer. The fields of this property depend on the current learning function. Call help on the current learning function to get a description of each field.

    net.layerWeights{i,j}.size (read only)

    Dimensions of the ith layer's weight matrix from the jth layer. It is always set to a two-element row vector indicating the number of rows and columns of the associated weight matrix (net.LW{i,j}). The first element is equal to the size of the ith layer (net.layers{i}.size). The second element is equal to the product of the length of the weight's delay vectors and the size of the jth layer.

    net.layerWeights{i,j}.userdata

    Place for users to add custom information to the (i,j)th layer weight.

    net.layerWeights{i,j}.weightFcn

    Which of the weight functions is used to apply the ith layer's weight from the jth layer to that layer's output. It can be set to the name of any weight function. The weight function is used to transform layer inputs when the network is simulated.

    For a list of functions, type help nnweight.

    net.layerWeights{i,j}.weightParam

    Parameters of the current weight function (net.layerWeights{i,j}.weightFcn). Call help on the current weight function to get a description of each field.

    Outputs, specified as a 1-by-numLayers cell array.

    If net.outputConnect(i) is 1, then net.outputs{i} is a structure defining the network output from layer i.

    If a neural network has only one output at layer i, then you can access net.outputs{i} without the cell array notation as follows:

    net.output

    The output objects have these properties:

    PropertyDescription
    net.outputs{i}.name

    String defining the output name. Network creation functions, such as feedforwardnet, define this appropriately. But it can be set to any string as desired.

    net.outputs{i}.feedbackInput

    If the output implements open-loop feedback (net.outputs{i}.feedbackMode = 'open'), then this property indicates the index of the associated feedback input; otherwise it is an empty matrix.

    net.outputs{i}.feedbackDelay

    Timestep difference between this output and network inputs. Input-to-output network delays can be removed and added with removedelay and adddelay functions resulting in this property being incremented or decremented respectively. The difference in timing between inputs and outputs is used by preparets to properly format simulation and training data, and used by closeloop to add the correct number of delays when closing an open-loop output, and openloop to remove delays when opening a closed loop.

    net.outputs{i}.feedbackMode

    String 'none' for non-feedback outputs. For feedback outputs it can either be set to 'open' or 'closed'. If it is set to 'open', then the output will be associated with a feedback input, with the property feedbackInput indicating the input's index.

    net.outputs{i}.processFcns

    Row cell array of processing function names to be used by the ith network output. The processing functions are applied to target values before the network uses them, and applied in reverse to layer output values before being returned as network output values.

    When you change this property, you also affect the following settings: the output parameters processParams are modified to the default values of the specified processing functions; processSettings, processedSize, and processedRange are defined using the results of applying the process functions and parameters to exampleOutput; the ith layer size is updated to match the processedSize.

    For a list of functions, type help nnprocess.

    net.outputs{i}.processParams

    Row cell array of processing function parameters to be used by ith network output on target values. The processing parameters are applied by the processing functions to input values before the network uses them.

    Whenever this property is altered, the output processSettings, processedSize and processedRange are defined by applying the process functions and parameters to exampleOutput. The ith layer's size is also updated to match processedSize.

    net.outputs{i}.processSettings (read only)

    Row cell array of processing function settings to be used by ith network output. The processing settings are found by applying the processing functions and parameters to exampleOutput and then used to provide consistent results to new target values before the network uses them. The processing settings are also applied in reverse to layer output values before being returned by the network.

    net.outputs{i}.processedRange (read only)

    Range of exampleOutput values after they have been processed with processFcns and processParams.

    net.outputs{i}.processedSize (read only)

    Number of rows in the exampleOutput values after they have been processed with processFcns and processParams.

    net.outputs{i}.size (read only)

    Number of elements in the ith layer's output. It is always set to the size of the ith layer (net.layers{i}.size).

    net.outputs{i}.userdata

    Place for users to add custom information to the ith layer's output.

    Function

    Network adaption function, specified as a string: the name of a network adaption function or ''.

    The network adapt function is used to perform adaption whenever adapt is called.

    [net,Y,E,Pf,Af] = adapt(NET,P,T,Pi,Ai)
    

    For a list of functions, type help nntrain.

    Whenever this property is altered, the network's adaption parameters (net.adaptParam) are set to contain the parameters and default values of the new function.

    Derivative function, specified as a string.

    This property defines the derivative function to be used to calculate error gradients and Jacobians when the network is trained using a supervised algorithm, such as backpropagation. You can set this property to the name of any derivative function.

    For a list of functions, type help nnderivative.

    Data division function, specified as a string.

    This property defines the data division function to be used when the network is trained using a supervised algorithm, such as backpropagation. You can set this property to the name of a division function.

    For a list of functions, type help nndivision.

    Whenever this property is altered, the network's data division parameters (net.divideParam) are set to contain the parameters and default values of the new function.

    Network initialization function, specified as a string.

    This property defines the function used to initialize the network's weight matrices and bias vectors. The initialization function is used to initialize the network whenever init is called:

    net = init(net)
    

    Whenever this property is altered, the network's initialization parameters (net.initParam) are set to contain the parameters and default values of the new function.

    Network performance function, specified as a string.

    This property defines the function used to measure the network’s performance. The performance function is used to calculate network performance during training whenever train is called.

    [net,tr] = train(NET,P,T,Pi,Ai)
    

    For a list of functions, type help nnperformance.

    Whenever this property is altered, the network's performance parameters (net.performParam) are set to contain the parameters and default values of the new function.

    Plot functions, specified as a cell array.

    This property consists of a row cell array of strings, defining the plot functions associated with a network. The neural network training window, which is opened by the train function, shows a button for each plotting function. Click the button during or after training to open the desired plot.

    Network training function, specified as a string.

    This property defines the function used to train the network. It can be set to the name of any of the training functions, which is used to train the network whenever train is called.

    [net,tr] = train(NET,P,T,Pi,Ai)
    

    For a list of functions, type help nntrain.

    Parameter

    Network adaption parameters, specified as a parameter object.

    Call help on the current adapt function to get a description of what each field means:

    help(net.adaptFcn)
    

    Network data division parameters, specified as a parameter object.

    This property defines the parameters and values of the current data-division function. To get a description of what each field means, type the following command:

    help(net.divideFcn)
    

    Data division mode, specified as a string.

    This property defines the target data dimensions to divide up when the data division function is called. Its default value is 'sample' for static networks and 'time' for dynamic networks. It can also be set to 'sampletime' to divide targets by both sample and timestep, 'all' to divide up targets by every scalar value, or 'none' to not divide up data at all (in which case all data is used for training, and none for validation or testing).

    Network initialization parameters, specified as a parameter object.

    This property defines the parameters and values of the current initialization function. Call help on the current initialization function to get a description of what each field means:

    help(net.initFcn)
    

    Plot function parameters, specified as a cell array of parameter objects.

    This property consists of a row cell array of parameter objects, defining the parameters and values of each plot function in net.plotFcns. Call help on each plot function to get a description of what each field means:

    help(net.plotFcns{i})
    

    Training function parameters, specified as a parameter object.

    This property defines the parameters and values of the current training function. Call help on the current training function to get a description of what each field means:

    help(net.trainFcn)
    

    Weight and Bias Value

    Input weight values, specified as a numLayers-by-numInputs cell array of input weight values.

    The weight matrix for the weight going to the ith layer from the jth input (or a null matrix []) is located at net.IW{i,j} if net.inputConnect(i,j) is 1 (or 0).

    The weight matrix has as many rows as the size of the layer it goes to (net.layers{i}.size). It has as many columns as the product of the input size with the number of delays associated with the weight:

    net.inputs{j}.size * length(net.inputWeights{i,j}.delays)
    

    The preprocessing function net.inputs{i}.processFcns is specified as 'removeconstantrows' by default in some networks. In this case, if the network input X contains m rows where all row elements have the same value, the weight matrix has m fewer columns than the above product. For more details about the network input X, see train.

    These dimensions can also be obtained from the input weight properties:

    net.inputWeights{i,j}.size
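    The relationship can be checked directly for any connected pair; in this sketch, i and j stand for indices with net.inputConnect(i,j) equal to 1:

```matlab
% Sketch: the stored matrix dimensions match the size property and
% the formula above.
[r, c] = size(net.IW{i,j});
% r equals net.layers{i}.size
% c equals net.inputs{j}.size * length(net.inputWeights{i,j}.delays)
% [r c] equals net.inputWeights{i,j}.size
```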
    

    Layer weight values, specified as a numLayers-by-numLayers cell array of layer weight values.

    The weight matrix for the weight going to the ith layer from the jth layer (or a null matrix []) is located at net.LW{i,j} if net.layerConnect(i,j) is 1 (or 0).

    The weight matrix has as many rows as the size of the layer it goes to (net.layers{i}.size). It has as many columns as the product of the size of the layer it comes from with the number of delays associated with the weight:

    net.layers{j}.size * length(net.layerWeights{i,j}.delays)
    

    These dimensions can also be obtained from the layer weight properties:

    net.layerWeights{i,j}.size
    

    Bias values, specified as a numLayers-by-1 cell array of bias values.

    The bias vector for the ith layer (or a null matrix []) is located at net.b{i} if net.biasConnect(i) is 1 (or 0).

    The number of elements in the bias vector is always equal to the size of the layer it is associated with (net.layers{i}.size).

    This dimension can also be obtained from the bias properties:

    net.biases{i}.size
    

    Examples

    Create Network with One Input and Two Layers

    This example shows how to create a network with no inputs or layers, and then set its numbers of inputs and layers to 1 and 2, respectively.

    net = network
    net.numInputs = 1
    net.numLayers = 2
    

    Alternatively, you can create the same network with one line of code.

    net = network(1,2)
    

    Version History

    Introduced before R2006a
