How can I generate an equation (correlation) from a trained ANN?

Dear Expert,
How can I generate the equation (correlation) that gives the output as a function of the input variables and constants? What tools, ideas, or methods should I follow to generate this equation when there is more than one layer and more than two neurons? It is not mathematically easy to solve by hand, so I need to write code, a script, or a function to create the equation. Can you please suggest an answer?
I trained a neural network in MATLAB with inputs [10 x 3], target data [10 x 1], and one hidden layer of 25 neurons. With the trained ANN (its weights, biases, and related inputs) I can predict the value, but I do not understand how to generate the equation (correlation) so that I can manually compute the output from the input variables.
For example: Y = mX + C, where m and C are constants, X is the input variable, and Y is the output variable.

Answers (1)

Venu on 26 Dec 2023
Edited: Venu on 26 Dec 2023
Hi @CL P,
If you want to express that output as a single equation, it would look extremely complex due to the nested nature of the computations, especially with 25 neurons in the hidden layer. In compact form the network computes Y = LW * f(IW * x + b1) + b2, where f is the hidden-layer activation function applied element-wise; written out fully, the equation is a large summation of the products of inputs and weights, passed through the activation function, and then summed again with the output-layer weights to produce the output.
To manually generate the output based on the input variables, you can refer to this example code snippet:
% Assuming 'net' is your trained neural network and 'X' is your input matrix
% Let's say X is of size [10 x 3]: 10 examples and 3 features
% Retrieve the network parameters
IW = net.IW{1,1}; % Input weights for the hidden layer [25 x 3]
LW = net.LW{2,1}; % Weights from hidden layer to output layer [1 x 25]
b1 = net.b{1};    % Biases for the hidden layer [25 x 1]
b2 = net.b{2};    % Bias for the output layer [1 x 1]
% Compute the hidden layer outputs in a vectorized way:
% each row of X is multiplied by IW' and the bias b1 is added to each row
hiddenLayerInput = X * IW' + repmat(b1', size(X, 1), 1);
% Apply the activation function; 'tansig' is the default hidden-layer
% transfer function for fitnet, but check net.layers{1}.transferFcn
hiddenLayerOutput = tansig(hiddenLayerInput);
% Compute the output layer:
% multiply each row of hiddenLayerOutput by LW' and add the bias b2
output = hiddenLayerOutput * LW' + b2;
% Note: networks created with fitnet/feedforwardnet also apply mapminmax
% pre/post-processing by default; to reproduce net(X') exactly, apply the
% same mapping to X and reverse-map 'output'
This code performs vectorized operations over all rows of inputs at once. I used the "repmat" function here to replicate the bias vector to match the number of examples in the input matrix.
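To check the manual computation against the network's own prediction, you can compare the two directly. This is a sketch assuming the network was created with fitnet, which expects one example per column (hence the transposes):
% Compare the manual forward pass with the network's prediction
yManual = output';   % from the snippet above, [1 x 10]
yNet    = net(X');   % network prediction, [1 x 10]
maxDiff = max(abs(yManual - yNet));
% If maxDiff is not near zero, the network most likely applies mapminmax
% pre/post-processing or uses a different transfer function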
You can refer to the MATLAB documentation on network object properties (net.IW, net.LW, net.b) regarding retrieving network parameters.
If you still want to go ahead and generate a symbolic equation for the entire process, you can use MATLAB's Symbolic Math Toolbox to handle the symbolic representation. This lets you express the network's operations as algebraic expressions rather than numerical computations and extract an explicit mathematical equation representing the network's computation. Below is an example code snippet (written for only 2 neurons in the hidden layer; for 25 neurons the equation would be much bigger and more complicated):
% Declare symbolic variables for the inputs
syms x1 x2 x3 real
% Define symbolic variables for weights and biases
% Input weights and biases for the hidden layer
syms iw11 iw12 iw13 iw21 iw22 iw23 real
syms b1_1 b1_2 real
% Weights and bias for the output layer
syms lw11 lw12 real
syms b2 real
% Define symbolic variables for activation functions
syms activationFunction1(x) activationFunction2(x)
% Write the equations for the outputs of the hidden layer neurons
h1 = activationFunction1(iw11*x1 + iw12*x2 + iw13*x3 + b1_1);
h2 = activationFunction2(iw21*x1 + iw22*x2 + iw23*x3 + b1_2);
% Write the equation for the output neuron
% Assuming a linear activation function for the output layer
Y = lw11*h1 + lw12*h2 + b2;
% Now Y is a symbolic expression representing the output of the network
% in terms of the inputs (x1, x2, x3) and the network parameters (weights and biases)
% You can use this expression to create a function handle or to perform further symbolic manipulation
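If the goal is a callable function rather than a printed formula, one option is to build the expression with symbolic arrays and convert it with matlabFunction. This is a sketch assuming tansig (mathematically, tanh) hidden units and a linear output layer:
syms x1 x2 x3 real
W1 = sym('iw', [2 3]);  b1 = sym('b1_', [2 1]);  % hidden-layer parameters
W2 = sym('lw', [1 2]);  b2 = sym('b2');          % output-layer parameters
h = tanh(W1 * [x1; x2; x3] + b1);  % tansig(z) is equivalent to tanh(z)
Y = W2 * h + b2;
% Convert the symbolic expression into a numeric function handle
f = matlabFunction(Y, 'Vars', {[x1 x2 x3], W1, b1, W2, b2});
% f can then be called with a numeric input row and the trained weights/biases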
See the Symbolic Math Toolbox documentation (syms, subs, matlabFunction) for reference.
Hope this helps to a certain extent!
  3 Comments
Venu on 29 Dec 2023
Hi @CL P,
There may be discrepancies in how you are implementing the forward pass calculation. Check if there is an activation function used in the output layer. Neural networks for regression tasks often have a linear activation function (or no activation function) at the output layer. If your network has a different setup, make sure to apply the correct activation function at the output.
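For a network object created with fitnet or feedforwardnet, the transfer functions and pre/post-processing steps can be inspected directly; a manual calculation has to replicate all of them:
% Inspect the functions the network actually applies
net.layers{1}.transferFcn   % hidden-layer activation, e.g. 'tansig'
net.layers{2}.transferFcn   % output-layer activation, e.g. 'purelin'
net.inputs{1}.processFcns   % input preprocessing, e.g. {'removeconstantrows','mapminmax'}
net.outputs{2}.processFcns  % output post-processing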
CL P on 30 Dec 2023
I attempted both with and without using the ReLU activation function (max(0, x)) at the output layer. However, the obtained answer does not align with the predicted value. My fitrnet function is defined as follows:
Mdl = fitrnet(XTrain, YTrain, 'Standardize', true, 'LayerSizes', [30 30 40 10]);
The issue lies in the fact that although my predicted values and actual values are approximately the same, and the Test MSE error is 0.0061, when I tried to calculate it using weights and biases, I did not obtain the same values as those in the predicted output. This suggests that my weights and biases are correct, but there might be an issue with my calculation function. Do you have any insights into the possible reasons behind this discrepancy? Additionally, I experimented with multiplying the output by (max-min) + min, but the values obtained were still incorrect.
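For a fitrnet model trained with 'Standardize', true, a manual forward pass must first z-score the predictors using the stored means and standard deviations. The following is a sketch, not a definitive implementation: property names (LayerWeights, LayerBiases, Mu, Sigma) are taken from the RegressionNeuralNetwork documentation, and ReLU is assumed for all hidden layers (the fitrnet default); verify both on your release:
% Manual forward pass for Mdl = fitrnet(..., 'Standardize', true, 'LayerSizes', [30 30 40 10])
Z = (XTest - Mdl.Mu) ./ Mdl.Sigma;   % standardize exactly as fitrnet did
A = Z;
nFC = numel(Mdl.LayerWeights);       % fully connected layers, including the final one
for k = 1:nFC-1
    A = max(0, A * Mdl.LayerWeights{k}' + Mdl.LayerBiases{k}');  % ReLU hidden layers
end
yHat = A * Mdl.LayerWeights{end}' + Mdl.LayerBiases{end}';       % linear output layer
% Compare against predict(Mdl, XTest)
Note that fitrnet standardizes only the predictors, not the response, so no (max - min) + min rescaling of yHat should be needed.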

