# edge

Classification edge for multiclass error-correcting output codes (ECOC) model

## Syntax

``e = edge(Mdl,tbl,ResponseVarName)``
``e = edge(Mdl,tbl,Y)``
``e = edge(Mdl,X,Y)``
``e = edge(___,Name,Value)``

## Description

`e = edge(Mdl,tbl,ResponseVarName)` returns the classification edge (`e`) for the trained multiclass error-correcting output codes (ECOC) classifier `Mdl` using the predictor data in table `tbl` and the class labels in `tbl.ResponseVarName`.

`e = edge(Mdl,tbl,Y)` returns the classification edge for the classifier `Mdl` using the predictor data in table `tbl` and the class labels in vector `Y`.


`e = edge(Mdl,X,Y)` returns the classification edge (`e`) for the classifier `Mdl` using the predictor data in matrix `X` and the class labels in vector `Y`.


`e = edge(___,Name,Value)` specifies options using one or more name-value pair arguments in addition to any of the input argument combinations in previous syntaxes. For example, you can specify a decoding scheme, a binary learner loss function, and a verbosity level.

## Examples


### Test-Sample Classification Edge

Compute the test-sample classification edge of an ECOC model with SVM binary classifiers.

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order
rng(1); % For reproducibility
```

Train an ECOC model using SVM binary classifiers. Specify a 30% holdout sample for testing, standardize the predictors using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',true);
PMdl = fitcecoc(X,Y,'Holdout',0.30,'Learners',t,'ClassNames',classOrder);
Mdl = PMdl.Trained{1}; % Extract trained, compact classifier
```

`PMdl` is a `ClassificationPartitionedECOC` model. It has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` model that the software trained using the training data.

Compute the test-sample edge.

```
testInds = test(PMdl.Partition); % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
e = edge(Mdl,XTest,YTest)
```
```
e = 0.4573
```

The average of the test-sample margins is approximately 0.46.

### Test-Sample Weighted Edge

Compute the mean of the test-sample weighted margins of an ECOC model.

Suppose that the observations in a data set are measured sequentially, and that the last 75 observations have better quality due to a technology upgrade. Incorporate this advancement by giving the better quality observations more weight than the other observations.

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order
rng(1); % For reproducibility
```

Define a weight vector that assigns twice as much weight to the better quality observations.

```
n = size(X,1);
weights = [ones(n-75,1);2*ones(75,1)];
```

Train an ECOC model using SVM binary classifiers. Specify a 30% holdout sample and the weighting scheme. Standardize the predictors using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',true);
PMdl = fitcecoc(X,Y,'Holdout',0.30,'Weights',weights,...
    'Learners',t,'ClassNames',classOrder);
Mdl = PMdl.Trained{1}; % Extract trained, compact classifier
```

`PMdl` is a trained `ClassificationPartitionedECOC` model. It has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` classifier that the software trained using the training data.

Compute the test-sample weighted edge using the weighting scheme.

```
testInds = test(PMdl.Partition); % Extract the test indices
XTest = X(testInds,:);
YTest = Y(testInds,:);
wTest = weights(testInds,:);
e = edge(Mdl,XTest,YTest,'Weights',wTest)
```
```
e = 0.4798
```

The average weighted margin of the test sample is approximately 0.48.

### Feature Selection by Comparing Test-Sample Edges

Perform feature selection by comparing test-sample edges from multiple models. Based solely on this comparison, the classifier with the greatest edge is the best classifier.

Load Fisher's iris data set. Specify the predictor data `X`, the response data `Y`, and the order of the classes in `Y`.

```
load fisheriris
X = meas;
Y = categorical(species);
classOrder = unique(Y); % Class order
rng(1); % For reproducibility
```

Partition the data set into training and test sets. Specify a 30% holdout sample for testing.

```
Partition = cvpartition(Y,'Holdout',0.30);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds,:);
```

`Partition` defines the data set partition.

Define these two data sets:

• `fullX` contains all predictors.

• `partX` contains the petal dimensions only.

```
fullX = X;
partX = X(:,3:4);
```

Train an ECOC model using SVM binary classifiers for each predictor set. Specify the partition definition, standardize the predictors using an SVM template, and specify the class order.

```
t = templateSVM('Standardize',true);
fullPMdl = fitcecoc(fullX,Y,'CVPartition',Partition,'Learners',t,...
    'ClassNames',classOrder);
partPMdl = fitcecoc(partX,Y,'CVPartition',Partition,'Learners',t,...
    'ClassNames',classOrder);
fullMdl = fullPMdl.Trained{1};
partMdl = partPMdl.Trained{1};
```

`fullPMdl` and `partPMdl` are `ClassificationPartitionedECOC` models. Each model has the property `Trained`, a 1-by-1 cell array containing the `CompactClassificationECOC` model that the software trained using the corresponding training set.

Calculate the test-sample edge for each classifier.

```
fullEdge = edge(fullMdl,XTest,YTest)
```

```
fullEdge = 0.4573
```

```
partEdge = edge(partMdl,XTest(:,3:4),YTest)
```

```
partEdge = 0.4839
```

`partMdl` yields an edge value comparable to the value for the more complex model `fullMdl`.

## Input Arguments


### `Mdl`

Full or compact multiclass ECOC model, specified as a `ClassificationECOC` or `CompactClassificationECOC` model object.

To create a full or compact ECOC model, see `ClassificationECOC` or `CompactClassificationECOC`.

### `tbl`

Sample data, specified as a table. Each row of `tbl` corresponds to one observation, and each column corresponds to one predictor variable. Optionally, `tbl` can contain additional columns for the response variable and observation weights. `tbl` must contain all the predictors used to train `Mdl`. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If you train `Mdl` using sample data contained in a `table`, then the input data for `edge` must also be in a table.

### Note

If `Mdl.BinaryLearners` contains linear or kernel classification models (`ClassificationLinear` or `ClassificationKernel` model objects), then you cannot specify sample data in a `table`. Instead, pass a matrix (`X`) and class labels (`Y`).

If you trained `Mdl` by setting `'Standardize',true` for a template object specified in the `'Learners'` name-value pair argument of `fitcecoc`, then for the corresponding binary learner `j`, the software standardizes the columns of the new predictor data using the corresponding means in `Mdl.BinaryLearners{j}.Mu` and standard deviations in `Mdl.BinaryLearners{j}.Sigma`.

Data Types: `table`
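For instance, here is a minimal sketch (the variable names are illustrative, not from this page) that trains on a table built from Fisher's iris data and computes the resubstitution edge by naming the response variable:

```
load fisheriris
tbl = array2table(meas,'VariableNames',{'SL','SW','PL','PW'});
tbl.Species = categorical(species);   % response variable stored in the table
TblMdl = fitcecoc(tbl,'Species');     % train an ECOC model on the table
e = edge(TblMdl,tbl,'Species')        % resubstitution edge
```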

### `ResponseVarName`

Response variable name, specified as the name of a variable in `tbl`. If `tbl` contains the response variable used to train `Mdl`, then you do not need to specify `ResponseVarName`.

If you specify `ResponseVarName`, then you must do so as a character vector or string scalar. For example, if the response variable is stored as `tbl.y`, then specify `ResponseVarName` as `'y'`. Otherwise, the software treats all columns of `tbl`, including `tbl.y`, as predictors.

The response variable must be a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: `char` | `string`

### `X`

Predictor data, specified as a numeric matrix.

Each row of `X` corresponds to one observation, and each column corresponds to one variable. The variables in the columns of `X` must be the same as the variables that trained the classifier `Mdl`.

The number of rows in `X` must equal the number of rows in `Y`.

If you trained `Mdl` by setting `'Standardize',true` for a template object specified in the `'Learners'` name-value pair argument of `fitcecoc`, then for the corresponding binary learner `j`, the software standardizes the columns of the new predictor data using the corresponding means in `Mdl.BinaryLearners{j}.Mu` and standard deviations in `Mdl.BinaryLearners{j}.Sigma`.

Data Types: `double` | `single`

### `Y`

Class labels, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. `Y` must have the same data type as `Mdl.ClassNames`. (The software treats string arrays as cell arrays of character vectors.)

The number of rows in `Y` must equal the number of rows in `tbl` or `X`.

Data Types: `categorical` | `char` | `string` | `logical` | `single` | `double` | `cell`

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `edge(Mdl,X,Y,'BinaryLoss','exponential','Decoding','lossbased')` specifies an exponential binary learner loss function and a loss-based decoding scheme for aggregating the binary losses.

Binary learner loss function, specified as the comma-separated pair consisting of `'BinaryLoss'` and a built-in loss function name or function handle.

• This table describes the built-in functions, where $y_j$ is a class label for a particular binary learner (in the set $\{-1,1,0\}$), $s_j$ is the score for observation $j$, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `'binodeviance'` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_js_j)]/[2\log(2)]$ |
| `'exponential'` | Exponential | $(-\infty,\infty)$ | $\exp(-y_js_j)/2$ |
| `'hamming'` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_js_j)]/2$ |
| `'hinge'` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_js_j)/2$ |
| `'linear'` | Linear | $(-\infty,\infty)$ | $(1 - y_js_j)/2$ |
| `'logit'` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_js_j)]/[2\log(2)]$ |
| `'quadratic'` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, for example `customFunction`, specify its function handle `'BinaryLoss',@customFunction`.

`customFunction` has this form:

`bLoss = customFunction(M,s)`
where:

• `M` is the K-by-L coding matrix stored in `Mdl.CodingMatrix`.

• `s` is the 1-by-L row vector of classification scores.

• `bLoss` is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.

• K is the number of classes.

• L is the number of binary learners.

For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
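For instance, here is a minimal sketch of a custom binary loss handle (the name `customBL` is hypothetical): a linear binary loss averaged over the binary learners for each class.

```
% Minimal sketch of a custom binary loss: linear loss, averaged over the
% binary learners for each class. M is the K-by-L coding matrix and s is
% a 1-by-L row vector of scores; implicit expansion makes M.*s K-by-L,
% so the result has one aggregated loss per class.
customBL = @(M,s) mean(1 - M.*s,2,'omitnan')/2;
e = edge(Mdl,XTest,YTest,'BinaryLoss',customBL)
```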

The default `BinaryLoss` value depends on the score ranges returned by the binary learners. This table describes some default `BinaryLoss` values based on the given assumptions.

| Assumption | Default Value |
| --- | --- |
| All binary learners are SVMs, or linear or kernel classification models of SVM learners. | `'hinge'` |
| All binary learners are ensembles trained by `AdaboostM1` or `GentleBoost`. | `'exponential'` |
| All binary learners are ensembles trained by `LogitBoost`. | `'binodeviance'` |
| All binary learners are linear or kernel classification models of logistic regression learners, or you specify to predict class posterior probabilities by setting `'FitPosterior',true` in `fitcecoc`. | `'quadratic'` |

To check the default value, use dot notation to display the `BinaryLoss` property of the trained model at the command line.
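For example, assuming `Mdl` is the trained compact classifier from the examples above:

```
Mdl.BinaryLoss   % for SVM binary learners, this displays 'hinge'
```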

Example: `'BinaryLoss','binodeviance'`

Data Types: `char` | `string` | `function_handle`

Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of `'Decoding'` and `'lossweighted'` or `'lossbased'`. For more information, see Binary Loss.

Example: `'Decoding','lossbased'`

Predictor data observation dimension, specified as the comma-separated pair consisting of `'ObservationsIn'` and `'columns'` or `'rows'`. This argument applies only when `Mdl.BinaryLearners` contains `ClassificationLinear` models.

### Note

If you orient your predictor matrix so that observations correspond to columns and specify `'ObservationsIn','columns'`, you can experience a significant reduction in execution time.
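A minimal sketch, assuming `Mdl.BinaryLearners` contains `ClassificationLinear` models and that `X` stores one observation per row:

```
Xcol = X';   % transpose so that observations correspond to columns
e = edge(Mdl,Xcol,Y,'ObservationsIn','columns')
```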

Estimation options, specified as the comma-separated pair consisting of `'Options'` and a structure array returned by `statset`.

To invoke parallel computing (see the sketch after this list):

• You need a Parallel Computing Toolbox™ license.

• Specify `'Options',statset('UseParallel',true)`.
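A minimal sketch, assuming a Parallel Computing Toolbox license is available:

```
opts = statset('UseParallel',true);        % estimation options structure
e = edge(Mdl,XTest,YTest,'Options',opts)   % compute the edge in parallel
```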

Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and `0` or `1`. `Verbose` controls the number of diagnostic messages that the software displays in the Command Window.

If `Verbose` is `0`, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages.

Example: `'Verbose',1`

Data Types: `single` | `double`

Observation weights, specified as the comma-separated pair consisting of `'Weights'` and a numeric vector or the name of a variable in `tbl`. If you supply weights, `edge` computes the weighted classification edge.

If you specify `Weights` as a numeric vector, then the size of `Weights` must be equal to the number of observations in `X` or `tbl`. The software normalizes `Weights` to sum up to the value of the prior probability in the respective class.

If you specify `Weights` as the name of a variable in `tbl`, you must do so as a character vector or string scalar. For example, if the weights are stored as `tbl.w`, then specify `Weights` as `'w'`. Otherwise, the software treats all columns of `tbl`, including `tbl.w`, as predictors.

Data Types: `single` | `double` | `char` | `string`
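Continuing the hypothetical table sketch from the `tbl` argument description, you can store the weights in the table and reference them by name:

```
tbl.w = ones(height(tbl),1);                   % hypothetical weight variable
e = edge(TblMdl,tbl,'Species','Weights','w')   % 'w' is excluded from the predictors
```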

## Output Arguments


### `e`

Classification edge, returned as a numeric scalar or vector. `e` represents the weighted mean of the classification margins.

If `Mdl.BinaryLearners` contains `ClassificationLinear` models, then `e` is a 1-by-L vector, where L is the number of regularization strengths in the linear classification models (`numel(Mdl.BinaryLearners{1}.Lambda)`). The value `e(j)` is the edge for the model trained using regularization strength `Mdl.BinaryLearners{1}.Lambda(j)`.

Otherwise, `e` is a scalar value.

## More About

### Classification Edge

The classification edge is the weighted mean of the classification margins.
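In symbols, for $n$ observations with classification margins $m_i$ and observation weights $w_i$ normalized to sum to 1,

$$e = \sum_{i=1}^{n} w_i m_i.$$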

One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.

### Classification Margin

The classification margin is, for each observation, the difference between the negative loss for the true class and the maximal negative loss among the false classes. If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
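One way to express this definition, where $\ell_k$ denotes the aggregated binary loss for class $k$ (so $-\ell_k$ is the negative loss) and $k^{*}$ is the true class, is

$$m = -\ell_{k^{*}} - \max_{k \neq k^{*}}\left(-\ell_{k}\right).$$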

### Binary Loss

A binary loss is a function of the class and classification score that determines how well a binary learner classifies an observation into the class.

Suppose the following:

• $m_{kj}$ is element $(k,j)$ of the coding design matrix $M$ (that is, the code corresponding to class $k$ of binary learner $j$).

• $s_j$ is the score of binary learner $j$ for an observation.

• $g$ is the binary loss function.

• $\hat{k}$ is the predicted class for the observation.

In loss-based decoding (Escalera et al. [2], [3]), the class producing the minimum sum of the binary losses over binary learners determines the predicted class of an observation, that is,

$$\hat{k} = \underset{k}{\operatorname{argmin}} \sum_{j=1}^{L} \left|m_{kj}\right| g\left(m_{kj},s_{j}\right).$$

In loss-weighted decoding (Escalera et al. [2], [3]), the class producing the minimum average of the binary losses over binary learners determines the predicted class of an observation, that is,

$$\hat{k} = \underset{k}{\operatorname{argmin}} \frac{\sum_{j=1}^{L} \left|m_{kj}\right| g\left(m_{kj},s_{j}\right)}{\sum_{j=1}^{L} \left|m_{kj}\right|}.$$

Allwein et al. [1] suggest that loss-weighted decoding improves classification accuracy by keeping loss values for all classes in the same dynamic range.

This table summarizes the supported loss functions, where $y_j$ is a class label for a particular binary learner (in the set $\{-1,1,0\}$), $s_j$ is the score for observation $j$, and $g(y_j,s_j)$ is the binary loss formula.

| Value | Description | Score Domain | $g(y_j,s_j)$ |
| --- | --- | --- | --- |
| `'binodeviance'` | Binomial deviance | $(-\infty,\infty)$ | $\log[1 + \exp(-2y_js_j)]/[2\log(2)]$ |
| `'exponential'` | Exponential | $(-\infty,\infty)$ | $\exp(-y_js_j)/2$ |
| `'hamming'` | Hamming | $[0,1]$ or $(-\infty,\infty)$ | $[1 - \operatorname{sign}(y_js_j)]/2$ |
| `'hinge'` | Hinge | $(-\infty,\infty)$ | $\max(0,1 - y_js_j)/2$ |
| `'linear'` | Linear | $(-\infty,\infty)$ | $(1 - y_js_j)/2$ |
| `'logit'` | Logistic | $(-\infty,\infty)$ | $\log[1 + \exp(-y_js_j)]/[2\log(2)]$ |
| `'quadratic'` | Quadratic | $[0,1]$ | $[1 - y_j(2s_j - 1)]^2/2$ |

The software normalizes binary losses so that the loss is 0.5 when $y_j = 0$, and aggregates them by averaging over the binary learners (Allwein et al. [1]).

Do not confuse the binary loss with the overall classification loss (specified by the `'LossFun'` name-value pair argument of the `loss` and `predict` object functions), which measures how well an ECOC classifier performs as a whole.

## Tips

• To compare the margins or edges of several ECOC classifiers, use template objects to specify a common score transform function among the classifiers during training.

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[3] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition Letters. Vol. 30, Issue 3, 2009, pp. 285–297.