
addParameter

Add parameter to ONNXParameters object

Since R2020b

    Description

    params = addParameter(params,name,value,type) adds the network parameter specified by name, value, and type to the ONNXParameters object params. The returned params object contains the model parameters of the input argument params together with the added parameter, stacked sequentially. The added parameter name must be unique, nonempty, and different from the parameter names in params.
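
    For instance, a minimal sketch of this syntax (the parameter name and value below are illustrative, not taken from an actual network):

    params = addParameter(params,'new_fc_W',rand(1,1,20,20,'single'),'Learnable');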


    params = addParameter(params,name,value,type,NumDimensions) adds the network parameter specified by name, value, type, and NumDimensions to params.

    Examples


    Import a network saved in the ONNX format as a function and modify the network parameters.

    Import the pretrained simplenet3fc.onnx network as a function. simplenet3fc is a simple convolutional neural network trained on digit image data. For more information on how to create a network similar to simplenet3fc, see Create Simple Image Classification Network.

    Import simplenet3fc.onnx using importONNXFunction, which returns an ONNXParameters object that contains the network parameters. The function also creates a new model function in the current folder that contains the network architecture. Specify the name of the model function as simplenetFcn.

    params = importONNXFunction('simplenet3fc.onnx','simplenetFcn');
    A function containing the imported ONNX network has been saved to the file simplenetFcn.m.
    To learn how to use this function, type: help simplenetFcn.
    

    Display the parameters that are updated during training (params.Learnables) and the parameters that remain unchanged during training (params.Nonlearnables).

    params.Learnables
    ans = struct with fields:
        imageinput_Mean: [1×1 dlarray]
                 conv_W: [5×5×1×20 dlarray]
                 conv_B: [20×1 dlarray]
        batchnorm_scale: [20×1 dlarray]
            batchnorm_B: [20×1 dlarray]
                 fc_1_W: [24×24×20×20 dlarray]
                 fc_1_B: [20×1 dlarray]
                 fc_2_W: [1×1×20×20 dlarray]
                 fc_2_B: [20×1 dlarray]
                 fc_3_W: [1×1×20×10 dlarray]
                 fc_3_B: [10×1 dlarray]
    
    
    params.Nonlearnables
    ans = struct with fields:
                ConvStride1004: [2×1 dlarray]
        ConvDilationFactor1005: [2×1 dlarray]
               ConvPadding1006: [4×1 dlarray]
                ConvStride1007: [2×1 dlarray]
        ConvDilationFactor1008: [2×1 dlarray]
               ConvPadding1009: [4×1 dlarray]
                ConvStride1010: [2×1 dlarray]
        ConvDilationFactor1011: [2×1 dlarray]
               ConvPadding1012: [4×1 dlarray]
                ConvStride1013: [2×1 dlarray]
        ConvDilationFactor1014: [2×1 dlarray]
               ConvPadding1015: [4×1 dlarray]
    
    

    The network has parameters that represent three fully connected layers. You can add another fully connected layer to the original parameters params between layers fc_2 and fc_3. The new layer might increase the classification accuracy.

    To see the parameters of layers fc_2 and fc_3 (implemented as convolution operations in the generated function), open the model function simplenetFcn.

    open simplenetFcn

    Scroll down to the layer definitions in the function simplenetFcn. The code below shows the definitions for layers fc_2 and fc_3.

    % Conv:
    [weights, bias, stride, dilationFactor, padding, dataFormat, NumDims.fc_2] = prepareConvArgs(Vars.fc_2_W, Vars.fc_2_B, Vars.ConvStride1010, Vars.ConvDilationFactor1011, Vars.ConvPadding1012, 1, NumDims.fc_1, NumDims.fc_2_W);
    Vars.fc_2 = dlconv(Vars.fc_1, weights, bias, 'Stride', stride, 'DilationFactor', dilationFactor, 'Padding', padding, 'DataFormat', dataFormat);
    
    % Conv:
    [weights, bias, stride, dilationFactor, padding, dataFormat, NumDims.fc_3] = prepareConvArgs(Vars.fc_3_W, Vars.fc_3_B, Vars.ConvStride1013, Vars.ConvDilationFactor1014, Vars.ConvPadding1015, 1, NumDims.fc_2, NumDims.fc_3_W);
    Vars.fc_3 = dlconv(Vars.fc_2, weights, bias, 'Stride', stride, 'DilationFactor', dilationFactor, 'Padding', padding, 'DataFormat', dataFormat);
    

    Name the new layer fc_4, because each added parameter name must be unique. The addParameter function always adds a new parameter sequentially to the params.Learnables or params.Nonlearnables structure. The order of the layers in the model function simplenetFcn determines the order in which the network layers are executed. The names and order of the parameters do not influence the execution order.
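
    Because each added parameter name must be unique, you can verify that a candidate name is free before calling addParameter. A minimal sketch of such a check:

    % True if 'fc_4_W' is not yet used as a learnable or nonlearnable parameter name.
    nameIsFree = ~isfield(params.Learnables,'fc_4_W') && ~isfield(params.Nonlearnables,'fc_4_W');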

    Add a new fully connected layer fc_4 with the same parameters as fc_2.

    params = addParameter(params,'fc_4_W',params.Learnables.fc_2_W,'Learnable');
    params = addParameter(params,'fc_4_B',params.Learnables.fc_2_B,'Learnable');
    params = addParameter(params,'fc_4_Stride',params.Nonlearnables.ConvStride1010,'Nonlearnable');
    params = addParameter(params,'fc_4_DilationFactor',params.Nonlearnables.ConvDilationFactor1011,'Nonlearnable');
    params = addParameter(params,'fc_4_Padding',params.Nonlearnables.ConvPadding1012,'Nonlearnable');

    Display the updated learnable and nonlearnable parameters.

    params.Learnables
    ans = struct with fields:
        imageinput_Mean: [1×1 dlarray]
                 conv_W: [5×5×1×20 dlarray]
                 conv_B: [20×1 dlarray]
        batchnorm_scale: [20×1 dlarray]
            batchnorm_B: [20×1 dlarray]
                 fc_1_W: [24×24×20×20 dlarray]
                 fc_1_B: [20×1 dlarray]
                 fc_2_W: [1×1×20×20 dlarray]
                 fc_2_B: [20×1 dlarray]
                 fc_3_W: [1×1×20×10 dlarray]
                 fc_3_B: [10×1 dlarray]
                 fc_4_W: [1×1×20×20 dlarray]
                 fc_4_B: [20×1 dlarray]
    
    
    params.Nonlearnables
    ans = struct with fields:
                ConvStride1004: [2×1 dlarray]
        ConvDilationFactor1005: [2×1 dlarray]
               ConvPadding1006: [4×1 dlarray]
                ConvStride1007: [2×1 dlarray]
        ConvDilationFactor1008: [2×1 dlarray]
               ConvPadding1009: [4×1 dlarray]
                ConvStride1010: [2×1 dlarray]
        ConvDilationFactor1011: [2×1 dlarray]
               ConvPadding1012: [4×1 dlarray]
                ConvStride1013: [2×1 dlarray]
        ConvDilationFactor1014: [2×1 dlarray]
               ConvPadding1015: [4×1 dlarray]
                   fc_4_Stride: [2×1 dlarray]
           fc_4_DilationFactor: [2×1 dlarray]
                  fc_4_Padding: [4×1 dlarray]
    
    

    Modify the architecture of the model function to reflect the changes in params so you can use the network for prediction with the new parameters or retrain the network. Open the model function simplenetFcn. Then, add the fully connected layer fc_4 between layers fc_2 and fc_3, and change the input data of the convolution operation dlconv for layer fc_3 to Vars.fc_4.

    open simplenetFcn

    The code below shows the new layer fc_4 in its position, as well as layers fc_2 and fc_3.

    % Conv:
    [weights, bias, stride, dilationFactor, padding, dataFormat, NumDims.fc_2] = prepareConvArgs(Vars.fc_2_W, Vars.fc_2_B, Vars.ConvStride1010, Vars.ConvDilationFactor1011, Vars.ConvPadding1012, 1, NumDims.fc_1, NumDims.fc_2_W);
    Vars.fc_2 = dlconv(Vars.fc_1, weights, bias, 'Stride', stride, 'DilationFactor', dilationFactor, 'Padding', padding, 'DataFormat', dataFormat);
    
    % Conv:
    [weights, bias, stride, dilationFactor, padding, dataFormat, NumDims.fc_4] = prepareConvArgs(Vars.fc_4_W, Vars.fc_4_B, Vars.fc_4_Stride, Vars.fc_4_DilationFactor, Vars.fc_4_Padding, 1, NumDims.fc_2, NumDims.fc_4_W);
    Vars.fc_4 = dlconv(Vars.fc_2, weights, bias, 'Stride', stride, 'DilationFactor', dilationFactor, 'Padding', padding, 'DataFormat', dataFormat);
    
    % Conv:
    [weights, bias, stride, dilationFactor, padding, dataFormat, NumDims.fc_3] = prepareConvArgs(Vars.fc_3_W, Vars.fc_3_B, Vars.ConvStride1013, Vars.ConvDilationFactor1014, Vars.ConvPadding1015, 1, NumDims.fc_4, NumDims.fc_3_W);
    Vars.fc_3 = dlconv(Vars.fc_4, weights, bias, 'Stride', stride, 'DilationFactor', dilationFactor, 'Padding', padding, 'DataFormat', dataFormat);
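
    After updating simplenetFcn, you can run a forward pass with the modified parameters to confirm that the network still executes end to end. A hedged sketch, assuming a batch of 28-by-28 grayscale digit images and that the generated function takes the input data followed by params (type help simplenetFcn for the exact signature):

    X = rand(28,28,1,16,'single');    % illustrative input batch
    Y = simplenetFcn(X,params);       % network output for the modified architecture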
    

    Input Arguments


    params
    Network parameters, specified as an ONNXParameters object. params contains the network parameters of the imported ONNX™ model.

    name
    Name of the parameter, specified as a character vector or string scalar.

    Example: 'conv2_W'

    Example: 'conv2_Padding'

    value
    Value of the parameter, specified as a numeric array, character vector, or string scalar. To duplicate an existing network layer (stored in params), copy the parameter values of the network layer.

    Example: params.Learnables.conv1_W

    Example: params.Nonlearnables.conv1_Padding

    Data Types: single | double | char | string

    type
    Type of parameter, specified as 'Learnable', 'Nonlearnable', or 'State'.

    • The value 'Learnable' specifies a parameter that is updated by the network during training (for example, weights and bias of convolution).

    • The value 'Nonlearnable' specifies a parameter that remains unchanged during network training (for example, padding).

    • The value 'State' specifies a parameter that contains information remembered by the network between iterations and updated across multiple training batches.

    Data Types: char | string
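
    For example, a hedged sketch of adding one parameter of each type (the names and values are illustrative):

    params = addParameter(params,'myconv_W',rand(3,3,8,16,'single'),'Learnable');   % trainable weights
    params = addParameter(params,'myconv_Padding',[1;1;1;1],'Nonlearnable');        % fixed hyperparameter
    params = addParameter(params,'mybn_mean',zeros(16,1,'single'),'State');         % running statistic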

    NumDimensions
    Number of dimensions for every parameter, specified as a structure. NumDimensions includes trailing singleton dimensions.

    Example: params.NumDimensions.conv1_W

    Example: 4
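
    A hedged sketch of the five-argument syntax, assuming NumDimensions is given as the dimension count (including trailing singleton dimensions) of the added value:

    % A 1-by-1-by-20-by-20 array has 4 dimensions, counting trailing singletons.
    params = addParameter(params,'fc_5_W',rand(1,1,20,20,'single'),'Learnable',4);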

    Output Arguments


    params
    Network parameters, returned as an ONNXParameters object. params contains the network parameters updated by addParameter.

    Version History

    Introduced in R2020b