Neural network input
I have this code with these inputs to my neural network. My question is: how does the NN take the input? To be more clear, is the NN going to take p1[1,1], p2[1,1], p3[1,1], ..., p13[1,1], or will it take the whole p1 and after that the whole p2, p3, ..., p13?
My code is:
clear;
clc;
load asp65 ;
load curv65;
load dfd65;
load dffl65;
load dfr65;
load gl65;
load lc65;
load prc65;
load plc65;
load pfc65;
load slop65;
load sot65;
load vec65;
load ele65;
load hlo65;
p1=transpose(asp65);
p2=transpose(curv65);
p3=transpose(dfd65);
p4=transpose(dffl65);
p5=transpose(dfr65);
p6=transpose(gl65);
p7=transpose(lc65);
p8=transpose(prc65);
p9=transpose(plc65);
p10=transpose(pfc65);
p11=transpose(slop65);
p12=transpose(vec65);
p13=transpose(ele65);
t1=transpose(hlo65);
p=[p1;p2;p3;p4;p5;p6;p7;p8;p9;p10;p11;p12;p13];
%p=[p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13]
%p15=[p7;p8;p9;p10;p11;p12;p13];
t=[t1];
%p = [-1 -1 2 2 ;0 5 0 5 ]
%t = [-1 -1 1 1 ];
for i=1:1:10
net=newff(minmax(p),t,[15 , 4],{'tansig','purelin'},'trainrp');
net.performFcn='msereg';
net.performParam.ratio=0.5;
net.trainParam.show = 5;
net.trainParam.lr = 0.5; % learning rate
net.trainParam.epochs = 1000;
net.trainParam.goal = 1e-5;
RandStream.setDefaultStream(RandStream('mt19937ar','seed',1));
%rand('state',sum(100*clock)) % initialize therandom
net = init(net);
[net,tr]=train(net,p,t);
a = sim(net,p);
e = a-t;
rmse = sqrt(mse(e));
acc=1-rmse
%simpleclusterOutputs = sim(net,p);
%plotroc(t,simpleclusterOutputs);
end
Greg Heath
on 14 Dec 2011
See comments, examples and sample code by searching the newsgroup using
heath newff close clear
heath newff Neq Nw Ntrials
Hope this helps.
Greg
Greg Heath
on 15 Dec 2011
>i have these code with these input to my neural network , my question is how does the
>NN take the input ? to be more clear is NN going to take p1[1,1], p2[1,1],p3[1,1].....
>p13[1,1]; or it will take the whole p1 and after that the whole p2 , p3 ...,p13?
It is hard to answer your question without knowing the sizes of the pi (i=1:13).
For an I-H-O node topology and Ntrn I/O training pairs,
[I Ntrn] = size(p)
[O Ntrn] = size(t)
The I/O vectors are columns of p and t.
> my code is
> clear; clc;
> load asp65; load curv65; load dfd65; load dffl65; load dfr65;
> load gl65; load lc65; load prc65; load plc65; load pfc65;
> load slop65; load sot65; load vec65; load ele65; load hlo65;
> p1=transpose(asp65); p2=transpose(curv65); p3=transpose(dfd65);
> p4=transpose(dffl65); p5=transpose(dfr65); p6=transpose(gl65);
> p7=transpose(lc65); p8=transpose(prc65); p9=transpose(plc65);
> p10=transpose(pfc65); p11=transpose(slop65); p12=transpose(vec65);
> p13=transpose(ele65); t1=transpose(hlo65);
> p=[p1;p2;p3;p4;p5;p6;p7;p8;p9;p10;p11;p12;p13];
> %p=[p1,p2,p3,p4,p5,p6,p7,p8,p9,p10,p11,p12,p13]
> %p15=[p7;p8;p9;p10;p11;p12;p13];
> t=[t1];
> %p = [-1 -1 2 2 ;0 5 0 5 ]
> %t = [-1 -1 1 1 ];
size(p)? size(t1)?
Normalization of p and t1?
H = 15, O = 4?
Why 15? Better to search for the smallest good value satisfying
H <= floor( (Ntrn*O-O)/(I+O+1) )
Add an outer loop over candidate H values.
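A minimal sketch of such an outer search loop (illustrative only, not from the original post; it assumes p and t are already in the [I Ntrn] / [O Ntrn] column form described above, with enough training pairs that the bound is positive, and uses the newer newff(P,T,...) calling syntax):

```matlab
% Hypothetical sketch: search for the smallest adequate H.
[I, Ntrn] = size(p);
[O, ~]    = size(t);
Hub  = floor((Ntrn*O - O)/(I + O + 1));       % upper bound on H from above
best = struct('H', NaN, 'R2', -Inf);
for H = 1:Hub
    net = newff(p, t, H, {'tansig','purelin'}, 'trainrp');
    [net, tr] = train(net, p, t);
    a  = sim(net, p);
    R2 = 1 - mse(a - t)/mean(var(t', 1));     % coefficient of determination
    if R2 > best.R2
        best.H = H;  best.R2 = R2;
    end
end
```

In practice you would also average over several random weight initializations per H before comparing, since a single training run per candidate is noisy.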
> RandStream.setDefaultStream(RandStream('mt19937ar','seed',1));
Only initialize once BEFORE the loops!
> %rand('state',sum(100*clock)) % initialize the random
> net = init(net);
If you ever do this, replace 100 by 1e9. You don't need INIT because NEWFF is self-initializing.
> for i=1:1:10
> net=newff(minmax(p),t,[H , O],{'tansig','purelin'},'trainrp');
>net.performFcn='msereg'; net.performParam.ratio=0.5;
Why MSEREG?? Why not use as many defaults as possible?
>net.trainParam.show = 5;
>net.trainParam.lr = 0.5; % learning rate
>net.trainParam.epochs = 1000;
>net.trainParam.goal = 1e-5;
Why 1e-5??
> [net,tr]=train(net,p,t);
> a = sim(net,p);
> e = a-t;
Why not [net, tr, a, e] = ... ?
> rmse = sqrt(mse(e));
> acc=1-rmse
Useless performance index. Use R^2.
Hope this helps.
Greg
Mo al
on 16 Dec 2011
Greg Heath
on 19 Dec 2011
>1-i have these code with these input to my neural network , my question is how does the NN take the input ?
Once the matrix p is created, the net takes the first column of p, then the second column of p, etc. If
[ I N ] = size(p),
then p is interpreted as containing N I-dimensional column vectors. Since
size(pi) = [ 255 255] % i = 1:13
you need to columnize your pi matrices and form
p = [ p1(:), p2(:), ..., p13(:)];
THEN
size(p) = [ 255^2 13 ] = [ 65025 13 ]
AND
you need to investigate 2-D dimensionality reduction via averaging pixels.
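In code, the columnization described above might look like this (a sketch assuming each pi really is a 255-by-255 image matrix, as stated):

```matlab
% Hypothetical sketch of the columnization described above.
% Each pi is assumed to be a 255-by-255 matrix loaded from the .mat files.
p = [p1(:), p2(:), p3(:), p4(:),  p5(:),  p6(:),  p7(:), ...
     p8(:), p9(:), p10(:), p11(:), p12(:), p13(:)];
% size(p) is now [65025 13]: 13 column vectors of 65025 pixels each,
% i.e. one 65025-dimensional input vector per image.
```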
====================================================
>2. p=[p1;p2;p3;p4;p5;p6;p7;p8;p9;p10;p11;p12;p13];
Incorrect. See above
size(p)? size(t1)?
> the size of p is (13 x255)=3315 elements.
Incorrect. See above.
>the size of (t1) is 255
Incorrect. Size has two dimensions. See the documentation
help size
doc size
The correct answer is
size(t1) = [ O N ]
and is interpreted as N O-dimensional output column vectors.
H = 15, O = 4?
> one hidden lair (sp: layer) has 15 neurons based on previous research. The output in the target has two values, either 1 or 0, so once I put the output at 4, I achieved much better regression and accuracy values.
I don't understand. What do the 4 outputs represent?
Oh! Oh! Is this classification and you are using a binary coded output?
For classification use
size(t1) = [ 13 N]
where each column is a column of the 13 dimensional unit matrix eye(13). The input image vector is then assigned to the class corresponding to the maximum output.
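A sketch of that target coding and decision rule (classIndex is a hypothetical 1-by-N vector of true class labels in 1..13, not from the original post):

```matlab
% Hypothetical sketch: 1-of-c target coding and the max-output decision rule.
c = 13;
t = eye(c);                  % column j is the unit vector for class j
t = t(:, classIndex);        % [13 N] target matrix, one unit column per sample
% After training, assign each input to the class with the largest output:
a = sim(net, p);             % [13 N] network outputs
[~, predicted] = max(a, [], 1);  % predicted(n) is the class of sample n
```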
==================================================================
Greg Heath
on 19 Dec 2011
...continued
>3-net.performFcn='msereg'; net.performParam.ratio=0.5;
Why MSEREG?? Why not use as many defaults as possible?
> if I keep the defaults, my regression is R = 0.333 and the accuracy 90%, but if I use net.performFcn='msereg', then R = 0.6999 and the accuracy 96%
Something is wrong somewhere.
How is R defined? Square root of the coefficient of determination (See Wikipedia)?
Try the default again with the columnized image inputs and unit matrix column outputs
===================================
> 4- net.trainParam.goal = 1e-5;
Why 1e-5??
> it's the best value through which I can get better performance results.
No need to search for a suitable value. Just choose a goal for R. Then work backwards to find the corresponding goal for MSE.
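Since R^2 = 1 - MSE/MSE00, where MSE00 = mean(var(t',1)) is the MSE of the naive constant model that always outputs mean(t,2), working backwards might look like this (R2goal = 0.99 is an illustrative choice, not a recommendation from the original post):

```matlab
% Hypothetical sketch: derive an MSE training goal from a goal for R^2.
R2goal = 0.99;                       % illustrative choice of fit goal
MSE00  = mean(var(t', 1));           % MSE of the constant-mean reference model
net.trainParam.goal = (1 - R2goal)*MSE00;
```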
======================================================
> 5- [net,tr]=train(net,p,t);
> a = sim(net,p);
> e = a-t;
Why not [net, tr, a, e] = ... ?
> what is wrong with my code? why is yours better?
Nothing is wrong with your code. Just letting you know that the additional code and computations are unnecessary.
===================================================
> 6- rmse = sqrt(mse(e));
> acc=1-rmse
Useless scale-dependent performance index. Use R^2.
> what is R^2?
A common statistical measure of fit. Look up coefficient of determination in Wikipedia.
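Computed for the network above, it might look like this (a sketch; mse here is the Neural Network Toolbox function, and R^2 = 1 means a perfect fit while R^2 = 0 means no better than always predicting the target mean):

```matlab
% Hypothetical sketch: coefficient of determination R^2 for the net outputs.
a     = sim(net, p);                 % network outputs
e     = a - t;                       % errors
MSE00 = mean(var(t', 1));            % reference MSE of the constant-mean model
R2    = 1 - mse(e)/MSE00;            % scale-independent measure of fit
```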
============================
Hope this helps
Greg
Mo al
on 20 Dec 2011
Walter Roberson
on 20 Dec 2011
No. If your variable is the wrong way around, transpose it as you pass it in, using p.' or transpose(p).