Classify observations using support vector machine (SVM) classifier

`[label,score] = predict(SVMModel,X)` also returns a matrix of scores (`score`) indicating the
likelihood that a label comes from a particular class. For SVM, likelihood measures
are either classification scores or class posterior probabilities.
For each observation in `X`, the predicted class label corresponds
to the maximum score among all classes.

If you are using a linear SVM model for classification and the model has many support vectors, then prediction with `predict` can be slow. To efficiently classify observations based on a linear SVM model, remove the support vectors from the model object by using `discardSupportVectors`.

By default and irrespective of the model kernel function, MATLAB^{®} uses the dual representation of the score function to classify observations based on trained SVM models, specifically

$$\widehat{f}(x)={\displaystyle \sum _{j=1}^{n}{\widehat{\alpha}}_{j}{y}_{j}G(x,{x}_{j})}+\widehat{b}.$$

This prediction method requires the trained support vectors and *α* coefficients (see the `SupportVectors` and `Alpha` properties of the SVM model).

By default, the software computes optimal posterior probabilities using Platt’s method [1]:

1. Perform 10-fold cross-validation.

2. Fit the sigmoid function parameters to the scores returned from the cross-validation.

3. Estimate the posterior probabilities by entering the cross-validation scores into the fitted sigmoid function.
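The dual-representation score and the fitted sigmoid can be sketched as follows (illustrative Python rather than MATLAB; the Gaussian kernel choice, the toy support vectors, and the sigmoid parameters `A` and `B` are all assumptions, not values from a trained model):

```python
import math

def gaussian_kernel(x, xj, gamma=1.0):
    # G(x, x_j) = exp(-gamma * ||x - x_j||^2); the kernel choice is an assumption
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, xj)))

def svm_score(x, support_vectors, alphas, ys, bias):
    # Dual representation: f(x) = sum_j alpha_j * y_j * G(x, x_j) + b
    return sum(a * y * gaussian_kernel(x, xj)
               for a, y, xj in zip(alphas, ys, support_vectors)) + bias

def platt_posterior(score, A=-1.0, B=0.0):
    # Sigmoid fitted to cross-validation scores: P(y=1 | x) = 1 / (1 + exp(A*s + B));
    # A and B are hypothetical here, not fitted values
    return 1.0 / (1.0 + math.exp(A * score + B))

# Toy "trained model": two support vectors with labels +1 / -1 (made-up values)
sv = [[0.0, 0.0], [1.0, 1.0]]
alpha = [0.8, 0.8]
y = [1, -1]
b = 0.1

s = svm_score([0.1, 0.1], sv, alpha, y, b)
p = platt_posterior(s)  # positive score maps to posterior above 0.5
```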

The software incorporates prior probabilities in the SVM objective function during training.

For SVM, `predict` and `resubPredict` classify observations into the class yielding the largest score (the largest posterior probability). The software accounts for misclassification costs by applying the average-cost correction before training the classifier. That is, given the class prior vector *P*, misclassification cost matrix *C*, and observation weight vector *w*, the software defines a new vector of observation weights (*W*) such that

$${W}_{j}={w}_{j}{P}_{j}{\displaystyle \sum _{k=1}^{K}{C}_{jk}}.$$
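As a small numeric illustration of this correction (illustrative Python; the priors, costs, and weights are made-up values):

```python
# W_j = w_j * P_j * sum_k C_jk, evaluated with observation j's class
priors = {"A": 0.6, "B": 0.4}                   # class prior vector P (assumed)
cost = {"A": {"A": 0.0, "B": 2.0},              # misclassification cost matrix C
        "B": {"A": 1.0, "B": 0.0}}
observations = [("A", 1.0), ("B", 1.0), ("A", 0.5)]  # (class, weight w_j)

corrected = [w * priors[c] * sum(cost[c].values()) for c, w in observations]
print(corrected)  # [1.2, 0.4, 0.6]
```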

[1] Platt, J. “Probabilistic outputs for support vector
machines and comparisons to regularized likelihood methods.” *Advances
in Large Margin Classifiers*. MIT Press, 1999, pages 61–74.

`ClassificationSVM`

| `CompactClassificationSVM`

| `fitSVMPosterior`

| `fitcsvm`

| `loss`

| `resubPredict`