Configure Tuning Options in Fuzzy Logic Designer
To select an algorithm for tuning your fuzzy inference system (FIS) or FIS tree and configure the algorithm options in the Fuzzy Logic Designer app, open the Tuning Options dialog box. On the Tuning tab, click Tuning Options.
In the Tuning Options dialog box, you can:
Select the type of optimization to perform.
Select a tuning algorithm. You can choose between several Global Optimization Toolbox methods or adaptive neuro-fuzzy inference system (ANFIS) tuning.
Configure k-fold cross-validation to prevent overfitting to your training data.
For more information on FIS tuning, see Tuning Fuzzy Inference Systems.
Optimization Type and Method
Select one of the following types of tuning.
Tuning: Optimize the existing input, output, and rule parameters without learning new rules.

Learning: Learn new rules up to a maximum number of rules. To specify the maximum number of rules, use the Max number of rules option. This type of optimization is not supported for ANFIS tuning.
To select a tuning algorithm, in the Method dropdown list, select one of the following tuning methods.
Genetic algorithm: Population-based global optimization method that searches randomly using mutation and crossover among population members.

Particle swarm optimization: Population-based global optimization method in which population members step throughout a search region.

Pattern search: Direct-search local optimization method that searches a set of points near the current point to find a new optimum.

Simulated annealing: Local optimization method that simulates a heating and cooling process to find a new optimal point near the current point.

Adaptive neuro-fuzzy inference: Backpropagation algorithm that tunes membership function parameters. Supported only for type-1 Sugeno systems with a single output.
The first four tuning methods require Global Optimization Toolbox software.
To use default tuning options for any method, select the Use default method options parameter.
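If you prefer to work at the command line, the same choices map to the `tunefisOptions` function. The following is a minimal sketch, assuming Fuzzy Logic Toolbox software (and, for the genetic algorithm, Global Optimization Toolbox software) is installed; the rule cap shown is an illustrative value.

```matlab
% Command-line counterpart of the app's Tuning Options dialog box.
opt = tunefisOptions("Method","ga", ...              % tuning algorithm
                     "OptimizationType","learning"); % learn new rules
opt.NumMaxRules = 16;  % corresponds to the Max number of rules option
```

You can then pass `opt` to `tunefis` along with your FIS and training data.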
Global Optimization Toolbox Method Options
To use one of the Global Optimization Toolbox tuning methods, you must configure two sets of options: algorithm-specific options and FIS tuning options.
Algorithm-Specific Options
To specify algorithm-specific options, expand the Method Options section and add optimization options. Any options that you do not specify use their default values.
To configure an option, in the leftmost dropdown list, select the option category. In the next dropdown list, select the optimization option. Then, specify the option value.
For example, you can configure the following options for the genetic algorithm tuning method.

Maximum number of generations: Set the option category to Run time limits, the option to Max generations, and the option value to 20.

Population size: Set the option category to Population settings, the option to Population size, and the option value to 100.
To add or remove options, click the corresponding + or –, respectively.
For more information on the algorithm-specific tuning options, click the question mark icon to see the Global Optimization Toolbox documentation.
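Programmatically, these algorithm-specific options live in the `MethodOptions` property of a `tunefisOptions` object, which holds the options object for the selected Global Optimization Toolbox solver. A brief sketch of the genetic algorithm example above:

```matlab
% MethodOptions holds the solver options object for the chosen method
% (for "ga", an options object with genetic algorithm settings).
opt = tunefisOptions("Method","ga");
opt.MethodOptions.MaxGenerations = 20;   % Run time limits > Max generations
opt.MethodOptions.PopulationSize = 100;  % Population settings > Population size
```

Options you do not set in `MethodOptions` keep their solver defaults, matching the app behavior.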
FIS Tuning Options
For all Global Optimization Toolbox optimization methods, you can specify the following FIS tuning options.
Max number of rules: Maximum number of rules, N_R, in the FIS after optimization when using the Learning optimization type. The number of rules in the FIS after optimization can be less than N_R, since duplicate rules with the same antecedent values are removed from the rule base during tuning. To automatically set N_R based on the number of input variables and the number of membership functions for each input variable, select the auto parameter. This option is ignored when the optimization type is Tuning.

Random number seed: Method for setting the random number generator seed before tuning.

Distance metric: Type of distance metric used for computing the cost of the optimized parameter values with respect to the training data, such as root-mean-squared error or a vector norm.

Ignore invalid parameters: Select this parameter to ignore invalid parameter values generated during the tuning process.

Use parallel computing: Select this parameter to use parallel computing during the optimization process. Using parallel computing requires Parallel Computing Toolbox™ software.
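The FIS tuning options above correspond to properties of a `tunefisOptions` object. A minimal sketch, with illustrative values:

```matlab
% FIS tuning options set programmatically on a tunefisOptions object.
opt = tunefisOptions("Method","particleswarm", ...
                     "OptimizationType","learning");
opt.NumMaxRules = 10;               % Max number of rules
opt.DistanceMetric = "rmse";        % Distance metric (root-mean-squared error)
opt.IgnoreInvalidParameters = true; % Ignore invalid parameters
opt.UseParallel = false;            % Use parallel computing
                                    % (true requires Parallel Computing Toolbox)
```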
ANFIS Tuning Options
To configure the ANFIS tuning algorithm, specify the following tuning options. For more information, see Neuro-Adaptive Learning and ANFIS.
Optimization method: Optimization method used for membership function parameter training, specified as either backpropagation or a hybrid method that combines backpropagation with least-squares estimation.

Epoch number: Maximum number of training epochs, specified as a positive integer.

Error goal: Training error goal, specified as a positive scalar. The training process stops when the training error is less than or equal to the training error goal.

Initial step size: Initial training step size, specified as a positive scalar. During training, the software adjusts the step size based on the training error trend, increasing it using the step-size increase rate and decreasing it using the step-size decrease rate.

Step size decrease rate: Step-size decrease rate, specified as a positive scalar less than 1.

Step size increase rate: Step-size increase rate, specified as a scalar greater than 1.

Input validation data: Input validation data, which you select from the dropdown list.

Output validation data: Output validation data, which you select from the dropdown list.
K-Fold Cross-Validation
When tuning a system using a Global Optimization Toolbox method, you can use k-fold cross-validation to prevent overfitting to your data. To configure the validation, in the Tuning Options dialog box, on the Validation tab, specify the following options.
Number of cross-validations: Number of cross-validations to perform, N_V, specified as a nonnegative integer less than or equal to the number of rows in the training data. When N_V is 0 or 1, the tuning algorithm uses the entire data set for training and does not perform cross-validation. Otherwise, the tuning algorithm randomly partitions the input data into N_V subsets of approximately equal size. The algorithm then performs N_V training-validation iterations. For each iteration, one data subset is used as validation data with the remaining subsets used as training data.

Validation tolerance: Maximum allowable increase in validation cost when using k-fold cross-validation, specified as a scalar value in the range [0,1]. A higher validation tolerance value produces a longer training-validation iteration, with an increased possibility of data overfitting. The increase in validation cost, ΔC, is the difference between the average validation cost and the minimum validation cost, C_min, for the current training-validation iteration. The average validation cost is a moving average with a window size specified using the Validation window size option. The app stops the current training-validation iteration when the ratio between ΔC and C_min exceeds the validation tolerance.

Validation window size: Window size, N_W, for computing the average validation cost, specified as a positive integer. The validation cost moving average is computed over the last N_W validation cost values. A higher validation window size produces a smoother average validation cost.
K-fold cross-validation is not supported for ANFIS tuning.
For more information on k-fold cross-validation, see Optimize FIS Parameters with K-Fold Cross-Validation.
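The cross-validation settings above also correspond to `tunefisOptions` properties. A minimal sketch with illustrative values:

```matlab
% k-fold cross-validation options on a tunefisOptions object.
opt = tunefisOptions("Method","ga");
opt.KFoldValue = 4;             % Number of cross-validations (N_V)
opt.ValidationTolerance = 0.1;  % max allowable increase in validation cost
opt.ValidationWindowSize = 2;   % N_W, moving-average window size
```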