
edge

Classification edge for Gaussian kernel classification model

Description

e = edge(Mdl,X,Y) returns the classification edge for the binary Gaussian kernel classification model Mdl using the predictor data in X and the corresponding class labels in Y.


e = edge(Mdl,Tbl,ResponseVarName) returns the classification edge for the trained kernel classifier Mdl using the predictor data in table Tbl and the class labels in Tbl.ResponseVarName.

e = edge(Mdl,Tbl,Y) returns the classification edge for the classifier Mdl using the predictor data in table Tbl and the class labels in vector Y.

e = edge(___,'Weights',weights) returns the weighted classification edge using the observation weights supplied in weights. Specify the weights after any of the input argument combinations in previous syntaxes.
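
For instance, a minimal sketch of the weighted syntax (the weight vector w here is hypothetical; any nonnegative vector with one element per observation works):

w = ones(size(X,1),1);         % hypothetical weights, one per observation
e = edge(Mdl,X,Y,'Weights',w); % weighted classification edge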

Note

If the predictor data X or the predictor variables in Tbl contain any missing values, the edge function can return NaN. For more details, see edge can return NaN for predictor data with missing values.
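
For instance, a minimal sketch of this behavior, assuming Mdl is a kernel classification model trained on the numeric matrix X and labels Y:

Xmiss = X;
Xmiss(1,1) = NaN;       % introduce a single missing predictor value
e = edge(Mdl,Xmiss,Y)   % e can be NaN because of the missing value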

Examples


Estimate Test-Set Edge

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
testInds = test(Partition); % Indices for the test set

Train a binary kernel classification model using the training set.

Mdl = fitckernel(X(trainingInds,:),Y(trainingInds));

Estimate the training-set edge and the test-set edge.

eTrain = edge(Mdl,X(trainingInds,:),Y(trainingInds))

eTrain = 2.1703

eTest = edge(Mdl,X(testInds,:),Y(testInds))

eTest = 1.5643

Feature Selection Using Test-Set Edges

Perform feature selection by comparing test-set edges from multiple models. Based solely on this criterion, the classifier with the highest edge is the best classifier.

Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').

load ionosphere

Partition the data set into training and test sets. Specify a 15% holdout sample for the test set.

rng('default') % For reproducibility
Partition = cvpartition(Y,'Holdout',0.15);
trainingInds = training(Partition); % Indices for the training set
XTrain = X(trainingInds,:);
YTrain = Y(trainingInds);
testInds = test(Partition); % Indices for the test set
XTest = X(testInds,:);
YTest = Y(testInds);

Randomly choose half of the predictor variables.

p = size(X,2); % Number of predictors
idxPart = randsample(p,ceil(0.5*p));

Train two binary kernel classification models: one that uses all of the predictors, and one that uses half of the predictors.

Mdl = fitckernel(XTrain,YTrain);
PMdl = fitckernel(XTrain(:,idxPart),YTrain);

Mdl and PMdl are ClassificationKernel models.

Estimate the test-set edge for each classifier.

fullEdge = edge(Mdl,XTest,YTest)

fullEdge = 1.6335

partEdge = edge(PMdl,XTest(:,idxPart),YTest)

partEdge = 2.0205

Based on the test-set edges, the classifier that uses half of the predictors is the better model.

Input Arguments


Mdl — Binary kernel classification model, specified as a ClassificationKernel model object. You can create a ClassificationKernel model object using fitckernel.

X — Predictor data, specified as an n-by-p numeric matrix, where n is the number of observations and p is the number of predictors used to train Mdl.

The length of Y and the number of observations in X must be equal.

Data Types: single | double

Y — Class labels, specified as a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors.

  • The data type of Y must be the same as the data type of Mdl.ClassNames. (The software treats string arrays as cell arrays of character vectors.)

  • The distinct classes in Y must be a subset of Mdl.ClassNames.

  • If Y is a character array, then each element must correspond to one row of the array.

  • The length of Y must be equal to the number of observations in X or Tbl.

Data Types: categorical | char | string | logical | single | double | cell

Tbl — Sample data used to train the model, specified as a table. Each row of Tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, Tbl can contain additional columns for the response variable and observation weights. Tbl must contain all the predictors used to train Mdl. Multicolumn variables and cell arrays other than cell arrays of character vectors are not allowed.

If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName or Y.

If you train Mdl using sample data contained in a table, then the input data for edge must also be in a table.
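
For instance, a minimal sketch of the table workflow, reusing the ionosphere variables X and Y from the examples (the names Tbl, MdlTbl, and 'Y' are illustrative):

Tbl = array2table(X);         % predictors as table variables
Tbl.Y = Y;                    % append the response variable
MdlTbl = fitckernel(Tbl,'Y'); % train on the table
eTbl = edge(MdlTbl,Tbl,'Y')   % edge then also takes table input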

ResponseVarName — Response variable name, specified as the name of a variable in Tbl. If Tbl contains the response variable used to train Mdl, then you do not need to specify ResponseVarName.

If you specify ResponseVarName, then you must specify it as a character vector or string scalar. For example, if the response variable is stored as Tbl.Y, then specify ResponseVarName as 'Y'. Otherwise, the software treats all columns of Tbl, including Tbl.Y, as predictors.

The response variable must be a categorical, character, or string array; a logical or numeric vector; or a cell array of character vectors. If the response variable is a character array, then each element must correspond to one row of the array.

Data Types: char | string

weights — Observation weights, specified as a numeric vector or the name of a variable in Tbl.

  • If weights is a numeric vector, then the size of weights must be equal to the number of rows in X or Tbl.

  • If weights is the name of a variable in Tbl, you must specify weights as a character vector or string scalar. For example, if the weights are stored as Tbl.W, then specify weights as 'W'. Otherwise, the software treats all columns of Tbl, including Tbl.W, as predictors.

If you supply weights, edge computes the weighted classification edge. The software weights the observations in each row of X or Tbl with the corresponding weights in weights.

edge normalizes the weights so that, within each class, they sum to the prior probability of that class.

Data Types: single | double | char | string
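
For instance, a sketch of supplying weights by variable name, continuing the illustrative table workflow above (the variable name 'W' is hypothetical):

Tbl.W = ones(height(Tbl),1);            % illustrative weights column
e = edge(MdlTbl,Tbl,'Y','Weights','W'); % reference the weights by name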

Output Arguments


e — Classification edge, returned as a numeric scalar.

More About


Classification Edge

The classification edge is the weighted mean of the classification margins.

One way to choose among multiple classifiers, for example to perform feature selection, is to choose the classifier that yields the greatest edge.
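
You can check this definition with the margin function. A minimal sketch, assuming default observation weights and empirical class priors (under those assumptions the weighted mean reduces to a simple mean):

m = margin(Mdl,X,Y); % per-observation classification margins
e = mean(m)          % matches edge(Mdl,X,Y) under the stated assumptions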

Classification Margin

The classification margin for binary classification is, for each observation, the difference between the classification score for the true class and the classification score for the false class.

The software defines the classification margin for binary classification as

m = 2yf(x).

x is an observation. If the true label of x is the positive class, then y is 1; otherwise, y is –1. f(x) is the positive-class classification score for the observation x. The classification margin is commonly defined as m = yf(x).

If the margins are on the same scale, then they serve as a classification confidence measure. Among multiple classifiers, those that yield greater margins are better.
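
A sketch relating margins to the scores returned by predict, assuming the positive class is the second element of Mdl.ClassNames (as for a model trained on the ionosphere labels 'b' and 'g'):

[~,s] = predict(Mdl,X);                % score columns follow the order of Mdl.ClassNames
y = 2*strcmp(Y,Mdl.ClassNames(2)) - 1; % y is 1 for the positive class, -1 otherwise
m = 2*y.*s(:,2);                       % m = 2yf(x); compare with margin(Mdl,X,Y)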

Classification Score

For kernel classification models, the raw classification score for classifying the observation x, a row vector, into the positive class is defined by

f(x) = T(x)β + b.

  • T(·) is a transformation of an observation for feature expansion.

  • β is the estimated column vector of coefficients.

  • b is the estimated scalar bias.

The raw classification score for classifying x into the negative class is –f(x). The software classifies observations into the class that yields a positive score.

If the kernel classification model consists of logistic regression learners, then the software applies the 'logit' score transformation to the raw classification scores (see ScoreTransform).
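
For instance, a short sketch inspecting the transform and the scores (the five-row subset is arbitrary):

Mdl.ScoreTransform                  % 'none' for SVM learners, 'logit' for logistic regression
[labels,s] = predict(Mdl,X(1:5,:))  % score columns follow the order of Mdl.ClassNames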


Version History

Introduced in R2017b
