How to train a neural network using RBF?
Hi,
I am using a Radial Basis Function network, but in nntool the training option is disabled, so I can only simulate the network. I want to know whether training is actually performed for an RBF network. If training does not happen, how do I test the data? Please help me solve this problem.
Accepted Answer
Greg Heath on 8 May 2013 (edited 8 May 2013)
If you are using newrb, there is no separation between the creation and training phases.
Therefore, once you create it, it trains itself, depending on the input parameters.
help newrb
doc newrb
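For example, something like this creates, trains, and simulates an RBF net in a single pass (the data, goal, and spread below are only illustrative):
x = -1:0.05:1;                            % example inputs (1 x N)
t = sin(2*pi*x) + 0.05*randn(size(x));    % example noisy targets (1 x N)
goal   = 0.01;                            % MSE training goal
spread = 0.4;                             % RBF spread; tune for your data
net = newrb(x, t, goal, spread);          % creation IS training: no separate train step
y = sim(net, x);                          % simulate the already-trained net
msetrn = mean((t - y).^2)                 % training-set MSE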
Hope this helps.
Thank you for formally accepting my answer.
Greg
1 Comment
Daniel Malabanan on 11 Feb 2014
Dear Sir,
May I just clarify a few things regarding RBF training:
1. Since creation and training are both included in newrb, does this mean there is no need to pass all of the input data through multiple times (i.e., batch training epochs) for weight and bias adjustment?
2. Also, after the weights and biases are set by designrb.m (inside newrb.m), I simulated the resulting network using sim.m. I then tried to train the network with the 'train' command using 1000 epochs and a goal of 0, and simulated the resulting network again with sim.m. However, there is no difference between the output of my first sim and my second sim. Does this mean that designrb already obtained the optimal weights and biases? (Roughly what I did is sketched below.)
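For reference, this is roughly what I did (x, t, and spread stand in for my actual data and settings):
net0 = newrb(x, t, 0, spread);            % creation = training, 0 MSE goal
y1 = sim(net0, x);                        % first simulation
net0.trainParam.epochs = 1000;            % the extra training attempt described above
net0.trainParam.goal   = 0;
net1 = train(net0, x, t);
y2 = sim(net1, x);                        % second simulation
max(abs(y1 - y2))                         % essentially zero, i.e. no change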
Thank you so much
More Answers (1)
Greg Heath on 12 Feb 2014
1. When you create a net with newrb, it automatically trains itself, without the trn/val/tst data division. Therefore, only include the training set in the call.
2. Standardize your regression data to zero-mean/unit-variance. Use an MSE goal 100 times smaller than the average variance of the target rows. Train many nets with different spreads in [ 0.1 10 ] and choose the net with the best holdout validation set performance (see the sketch after this list).
3. For c-class classification the targets should be columns of the c-dimensional unit matrix eye(c).
4. Obtain an UNBIASED estimate of generalization (i.e., nontraining and nonvalidation) performance with the holdout test set.
5. If unsatisfactory, redivide into trn/val/tst sets and try again.
6. I have posted advice, directions and examples. Search the NEWSGROUP and ANSWERS using
greg newrb
7. Please post the addresses of any posts you find helpful.
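A rough sketch of steps 1-4 (the placeholder data, the 70/15/15 split, and the 10-point spread grid are illustrative assumptions, not part of the recipe):
x = rand(4, 200);  t = rand(1, 200);      % placeholder regression data
% Manual trn/val/tst split: newrb trains on everything it is given
idx = randperm(size(x, 2));
trn = idx(1:140);  val = idx(141:170);  tst = idx(171:200);
% Standardize inputs and targets to zero-mean/unit-variance (training-set statistics)
[xtn, xs] = mapstd(x(:, trn));
[ttn, ts] = mapstd(t(:, trn));
xvn = mapstd('apply', x(:, val), xs);  tvn = mapstd('apply', t(:, val), ts);
xsn = mapstd('apply', x(:, tst), xs);  tsn = mapstd('apply', t(:, tst), ts);
% MSE goal = 1/100 of the average target-row variance (~1 after standardization)
goal = 0.01 * mean(var(ttn', 1));
% One net per spread in [0.1 10]; keep the best holdout-validation performer
bestmse = Inf;  bestnet = [];
for spread = logspace(-1, 1, 10)
    net  = newrb(xtn, ttn, goal, spread);
    yv   = sim(net, xvn);
    msev = mean((tvn - yv).^2);
    if msev < bestmse
        bestmse = msev;
        bestnet = net;
    end
end
% Unbiased estimate of generalization error from the held-out test set
yt = sim(bestnet, xsn);
msetst = mean((tsn - yt).^2)
For c-class classification (point 3), the standardized targets above would instead be columns of eye(c), e.g. T = full(ind2vec(classIndices)).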
Greg