Manually Training and Testing a Backpropagation Neural Network with Different Inputs

Hello,
I'm new to MATLAB. I'm using a backpropagation neural network for my assignment, and I don't know how to implement it in MATLAB.
I'm currently using this code, which I found on the internet, with a sigmoid activation function:
function y = Sigmoid(x)
y = 1 ./ (1 + exp(-x));
end
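One detail worth knowing before reading the training code: the sigmoid's derivative can be written in terms of its own output, dy/dx = y*(1-y) for y = Sigmoid(x), which is exactly where the output .* (1-output) factors in BackPropagate below come from. As an illustrative helper (not part of the original code):
function d = SigmoidDerivative(y)
% Derivative of the sigmoid expressed via its own output:
% if y = Sigmoid(x), then dy/dx = y .* (1 - y)
d = y .* (1 - y);
end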
The problem is: my input comes from my Excel calculations (a 15x4 matrix),
[0.0061 0.4819 0.2985 0.0308;
0.0051 0.4604 0.1818 0.0400;
0.0050 0.4879 0.0545 0.0420;
0.0067 0.4459 0.2373 0.0405;
0.0084 0.4713 0.6571 0.0308;
0.0068 0.4907 0.2333 0.0332;
0.0125 0.4805 0.1786 0.0376;
0.0086 0.5221 0.1702 0.0356;
0.0125 0.5276 0.2667 0.0371;
0.0054 0.4717 0.1034 0.0366;
0.0137 0.5296 0.1846 0.0596;
0.0071 0.4707 0 0.0337;
0.0077 0.5120 0.3590 0.0396;
0.0106 0.5207 0.1613 0.0415;
0.0077 0.5194 0.3038 0.0347];
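(Side note: a matrix like this can also be read directly from the spreadsheet instead of being pasted by hand; a sketch, assuming the file is called mydata.xlsx:)
data = readmatrix('mydata.xlsx'); % R2019a and newer
% data = xlsread('mydata.xlsx'); % older MATLAB releases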
The neurons per layer are 4-4-4-1 (input, hidden1, hidden2, output);
i.e., I'm using 2 hidden layers with 4 neurons in each hidden layer (excluding bias).
The learning rate is 0.01 and the error threshold is 0.0001.
The bias is 1 and all initial weights are 0.1.
All targets are 0.
I want to train and also test this backpropagation network, but I don't understand how to do that, and I don't really understand this code.
Can you please help me and explain this program? And if I want to change to different inputs, like 4 or 5 inputs, or change the targets, which parts of the code do I have to change? (A sketch of the changes follows the code listing below.) Thanks.
PS. My code:
%%BPANN: Artificial Neural Network with Back Propagation
%%Author: Xuyang Feng
function BPANN()
%---Set training parameters
iterations = 5000;
errorThreshold = 0.1;
learningRate = 0.5;
%---Set hidden layer sizes, for example: [4, 3, 2]
hiddenNeurons = [3 2];
%---'Xor' training data
trainInp = [0 0; 0 1; 1 0; 1 1];
trainOut = [0; 1; 1; 0];
testInp = trainInp;
testRealOut = trainOut;
% %---'And' training data
% trainInp = [1 1; 1 0; 0 1; 0 0];
% trainOut = [1; 0; 0; 0];
% testInp = trainInp;
% testRealOut = trainOut;
assert(size(trainInp,1) == size(trainOut,1), ...
'Number of input and output samples must match.');
%---Initialize network attributes
inArgc = size(trainInp, 2); % number of inputs
outArgc = size(trainOut, 2); % number of outputs
trainsetCount = size(trainInp, 1); % number of training samples
%---Add output layer
layerOfNeurons = [hiddenNeurons, outArgc];
layerCount = size(layerOfNeurons, 2);
%---Weights and biases are initialized uniformly at random in [b, e] = [-1, 1]
%---(unifrnd needs the Statistics Toolbox; b + (e-b)*rand(m, n) works in base MATLAB)
e = 1;
b = -e;
%---Set initial random weights
weightCell = cell(1, layerCount);
for i = 1:layerCount
if i == 1
weightCell{1} = unifrnd(b, e, inArgc,layerOfNeurons(1));
else
weightCell{i} = unifrnd(b, e, layerOfNeurons(i-1),layerOfNeurons(i));
end
end
%---Set initial biases
biasCell = cell(1, layerCount);
for i = 1:layerCount
biasCell{i} = unifrnd(b, e, 1, layerOfNeurons(i));
end
%----------------------
%---Begin training
%----------------------
for iter = 1:iterations
%---Online (per-sample) training: weights are updated after every sample
for i = 1:trainsetCount
% choice = randi([1 trainsetCount]); % uncomment to visit samples in random order
choice = i;
sampleIn = trainInp(choice, :);
sampleTarget = trainOut(choice, :);
[realOutput, layerOutputCells] = ForwardNetwork(sampleIn, layerOfNeurons, weightCell, biasCell);
[weightCell, biasCell] = BackPropagate(learningRate, sampleIn, realOutput, sampleTarget, layerOfNeurons, ...
weightCell, biasCell, layerOutputCells);
end
%---Compute the overall network error at the end of each iteration
trainError = zeros(trainsetCount, outArgc);
for t = 1:trainsetCount
[predict, ~] = ForwardNetwork(trainInp(t, :), layerOfNeurons, weightCell, biasCell);
p(t) = predict;
trainError(t, :) = predict - trainOut(t, :);
end
err(iter) = (sum(trainError.^2)/trainsetCount)^0.5; % root-mean-square error
%---Plot the error curve (plotting every iteration is slow; consider plotting once after the loop)
figure(1);
plot(err);
%---Stop if the error threshold is reached
if err(iter) < errorThreshold
break;
end
end
%---Test the trained network with a test set
testsetCount = size(testInp, 1);
testError = zeros(testsetCount, outArgc);
for t = 1:testsetCount
[predict, ~] = ForwardNetwork(testInp(t, :), layerOfNeurons, weightCell, biasCell);
p(t) = predict;
testError(t, :) = predict - testRealOut(t, :);
end
%---Print predictions: columns are [inputs, actual, predicted, error]
fprintf('Ended with %d iterations.\n', iter);
a = testInp;
b = testRealOut; % note: this overwrites the b used earlier for the weight range
c = p';
x1_x2_act_pred_err = [a b c c-b]
%---Plot surface of network predictions (note: this block assumes exactly 2 inputs,
%---so remove or adapt it for networks with a different input count)
testInpx1 = -1:0.1:1;
testInpx2 = -1:0.1:1;
[X1, X2] = meshgrid(testInpx1, testInpx2);
testOutRows = size(X1, 1);
testOutCols = size(X1, 2);
testOut = zeros(testOutRows, testOutCols);
for row = 1:testOutRows
for col = 1:testOutCols
test = [X1(row, col), X2(row, col)];
[out, ~] = ForwardNetwork(test, layerOfNeurons, weightCell, biasCell);
testOut(row, col) = out;
end
end
figure(2);
surf(X1, X2, testOut);
end
%%BackPropagate: Propagate the error back through the network and adjust weights and biases
function [weightCell, biasCell] = BackPropagate(rate, in, realOutput, sampleTarget, layer, weightCell, biasCell, layerOutputCells)
layerCount = size(layer, 2);
delta = cell(1, layerCount);
D_weight = cell(1, layerCount);
D_bias = cell(1, layerCount);
%---Output layer has a different formula: delta = f'(net) .* (target - output),
%---where f'(net) = output .* (1 - output) for the sigmoid
output = layerOutputCells{layerCount};
delta{layerCount} = output .* (1-output) .* (sampleTarget - output);
preoutput = layerOutputCells{layerCount-1};
D_weight{layerCount} = rate .* preoutput' * delta{layerCount};
D_bias{layerCount} = rate .* delta{layerCount};
%---Back propagate for Hidden layers
for layerIndex = layerCount-1:-1:1
output = layerOutputCells{layerIndex};
if layerIndex == 1
preoutput = in;
else
preoutput = layerOutputCells{layerIndex-1};
end
weight = weightCell{layerIndex+1};
%---Propagate the deltas back through the next layer's weights
sumup = (weight * delta{layerIndex+1}')';
delta{layerIndex} = output .* (1 - output) .* sumup;
D_weight{layerIndex} = rate .* preoutput' * delta{layerIndex};
D_bias{layerIndex} = rate .* delta{layerIndex};
end
%---Update weightCell and biasCell
for layerIndex = 1:layerCount
weightCell{layerIndex} = weightCell{layerIndex} + D_weight{layerIndex};
biasCell{layerIndex} = biasCell{layerIndex} + D_bias{layerIndex};
end
end
%%ForwardNetwork: Compute feed forward neural network, Return the output and output of each neuron in each layer
function [realOutput, layerOutputCells] = ForwardNetwork(in, layer, weightCell, biasCell)
layerCount = size(layer, 2);
layerOutputCells = cell(1, layerCount);
out = in;
for layerIndex = 1:layerCount
X = out; % the previous layer's output is this layer's input
bias = biasCell{layerIndex};
out = Sigmoid(X * weightCell{layerIndex} + bias); % weighted sum plus bias, then activation
layerOutputCells{layerIndex} = out;
end
realOutput = out;
end
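For the setup described at the top of the question (4 inputs, two hidden layers of 4 neurons, 1 output, learning rate 0.01, error threshold 0.0001), only the parameter section at the top of BPANN needs to change. A sketch, not a drop-in fix: the variable name data for the 15x4 Excel matrix, and the reading of "bias is 1" as the initial bias values, are assumptions.
%---Sketch: parameter section of BPANN adapted to the question's setup
iterations = 5000;
errorThreshold = 0.0001; % was 0.1
learningRate = 0.01; % was 0.5
hiddenNeurons = [4 4]; % two hidden layers of 4 neurons; the 1-neuron output layer is appended automatically
trainInp = data; % the 15x4 matrix: one sample per row, one input per column
trainOut = zeros(size(trainInp, 1), 1); % one target per sample, all 0
testInp = trainInp;
testRealOut = trainOut;
%---To start from fixed values instead of random ones, replace each
%---unifrnd(b, e, m, n) call with 0.1 * ones(m, n) for the weights, and use
%---biasCell{i} = ones(1, layerOfNeurons(i)) for the biases.
Two caveats: starting all weights at the same value makes every neuron in a layer compute the same output and receive the same update, so they never differentiate, which is exactly why the original code initializes them randomly. Also, a sigmoid output only approaches 0 asymptotically, so with all-zero targets the 0.0001 threshold may never be reached within 5000 iterations, and the 2-input surface plot at the end of BPANN should be removed for a 4-input network.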
  7 Comments
Bachtiar Muhammad Lubis on 4 Feb 2019
@Greg: actually that code is fully similar to my main code, Greg; the only difference is the GUI. My main program has a GUI while this one doesn't. I have no idea why my test data didn't match the trained output, and I don't know what is going on; is the problem the number of hidden layers, or something else? Please help me, Greg. I have no idea what to do to fix my NN.
By the way, I forgot to attach my image as input data, and I am not able to attach more files within 24 hours of my last post because I have reached my limit of 10 daily uploads. I will post it later if needed.
Bachtiar Muhammad Lubis on 5 Feb 2019
@Greg: by the way, what do you mean by "solved sample case"? Do you mean my goal, or another finished project that uses the same method, i.e. backpropagation?
Sorry for my ignorance.


Answers (3)

BERGHOUT Tarek on 3 Feb 2019


Mohamed Nasr on 30 Apr 2020
Hi, please, how can I do image classification using a BPNN?

pathakunta on 26 Jan 2024
You can try this code; it's simpler: https://www.mathworks.com/matlabcentral/fileexchange/69947-back-propagation-algorithm-for-training-an-mlp?s_tid=prof_contriblnk
