precisionRecall
Syntax
[precision,recall,scores] = precisionRecall(metrics)
[precision,recall,scores] = precisionRecall(metrics,Name=Value)
Description
[precision,recall,scores] = precisionRecall(metrics) gets the precision, recall, and prediction score values for all classes in the ClassNames property and all overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics.
[precision,recall,scores] = precisionRecall(metrics,Name=Value) specifies precision and recall evaluation options using one or more name-value arguments. For example, ClassNames=["cars" "people"] specifies to get the precision and recall metrics for the cars and people classes.
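A minimal usage sketch of both syntaxes, assuming metrics is an instanceSegmentationMetrics object you have already created (for example, with the evaluateInstanceSegmentation function) and that the class names shown are placeholders for classes in your data:
% Assumes metrics is an existing instanceSegmentationMetrics object.
% Precision, recall, and scores for every class and overlap threshold.
[precision,recall,scores] = precisionRecall(metrics);

% Restrict the evaluation to selected (placeholder) classes.
[precisionSel,recallSel,scoresSel] = ...
    precisionRecall(metrics,ClassNames=["cars" "people"]);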
Input Arguments
metrics
— Instance segmentation performance metrics
instanceSegmentationMetrics object
Instance segmentation performance metrics, specified as an instanceSegmentationMetrics object.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: [precision,recall,scores] = precisionRecall(metrics,ClassNames=["cars" "people"]) specifies to evaluate the precision and recall metrics for the cars and people classes.
ClassNames
— Class names
string array | cell array of character vectors
Class names of detected objects, specified as a string array or a cell array of character vectors. By default, the precisionRecall function returns the precision and recall metrics for all classes specified by the ClassNames property of the instanceSegmentationMetrics object.
OverlapThresholds
— Overlap threshold
numeric scalar | numeric vector
Overlap threshold, or intersection over union (IoU) threshold, for which to get the precision and recall metrics, specified as a numeric scalar or a numeric vector of mask overlap threshold values. By default, the precisionRecall object function returns the precision and recall metrics for all overlap thresholds specified by the OverlapThreshold property of the instanceSegmentationMetrics object. To learn more about how to use overlap thresholds to evaluate instance segmentation results, see Evaluate Network Performance Using Precision Recall at Multiple Overlap Thresholds.
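A sketch combining the two name-value arguments described above; metrics is assumed to be an existing instanceSegmentationMetrics object, "cars" is a placeholder class name, and the argument names follow the listing on this page:
% Assumes metrics is an existing instanceSegmentationMetrics object.
% Precision and recall for the "cars" class at IoU thresholds 0.5 and 0.75.
[precision,recall,scores] = precisionRecall(metrics, ...
    ClassNames="cars",OverlapThresholds=[0.5 0.75]);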
Output Arguments
precision
— Precision
M-by-N cell array
Precision, returned as an M-by-N cell array. M is the number of classes in the ClassNames property, and N is the number of overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics.
Each element of the cell array contains a (numPredictions+1)-element numeric vector of precision values, sorted in descending order of prediction confidence scores. numPredictions is the number of predicted object masks.
Precision is the ratio of the number of true positives (TP) to the total number of positive predictions (TP + FP). Larger precision scores indicate that more predicted object masks match ground truth objects. To learn more about the precision metric, see Compute Precision and Recall.
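For example, using illustrative counts, a class with 80 true positive and 20 false positive predictions at a given overlap threshold has a precision of 80/(80 + 20) = 0.8.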
recall
— Recall
M-by-N cell array
Recall, returned as an M-by-N cell array. M is the number of classes in the ClassNames property, and N is the number of overlap thresholds in the OverlapThreshold property of the instanceSegmentationMetrics object metrics.
Each element of the cell array contains a (numPredictions+1)-element numeric vector of recall values, sorted in descending order of prediction confidence scores. numPredictions is the number of predicted object masks.
Recall is the ratio of true positives (TP) to the sum of true positives and false negatives (FN), where false negatives represent instances in the image that the model incorrectly classified as background or missed entirely. Larger recall scores indicate that the instance segmentation model is effectively identifying most of the actual instances within the images. To learn more about the recall metric, see Compute Precision and Recall.
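For example, using illustrative counts, if a class has 100 ground truth instances and the network correctly identifies 70 of them at a given overlap threshold (leaving 30 false negatives), the recall is 70/(70 + 30) = 0.7.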
scores
— Confidence score
M-by-1 cell array
Confidence score for each instance segmentation, returned as an M-by-1 cell array. M is the number of classes in the ClassNames property. Each element of the array is a (numPredictions+1)-element numeric vector. numPredictions is the number of predicted object masks.
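Because the outputs are cell arrays indexed by class and overlap threshold, plotting a precision-recall curve amounts to selecting one cell from each output. A minimal sketch, assuming metrics is an existing instanceSegmentationMetrics object and that the first class at the first overlap threshold is the curve of interest:
% Assumes metrics is an existing instanceSegmentationMetrics object.
[precision,recall,scores] = precisionRecall(metrics);

% Select the curve for the first class at the first overlap threshold.
p = precision{1,1};
r = recall{1,1};

% Plot precision against recall.
figure
plot(r,p)
xlabel("Recall")
ylabel("Precision")
title("Precision-Recall Curve")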
More About
Evaluate Network Performance Using Precision Recall at Multiple Overlap Thresholds
To evaluate how precisely a network can localize object instances, plot the precision-recall curve for a range of overlap thresholds.
At higher overlap thresholds, the criteria for how well the predicted object masks and the ground truth masks must overlap to count as a true positive (TP) are stricter. Average precision is therefore lower at higher overlap thresholds, which corresponds to a decrease in the area under the precision-recall curve, which plots precision against recall. Plotting the precision-recall curves across different overlap thresholds can help you evaluate the instance segmentation network in these ways (a plotting sketch follows this list):
Evaluate how well the network identifies object classes at higher overlap thresholds.
Identify specific overlap thresholds where network performance is significantly lower, highlighting areas for model improvement.
Choose an overlap threshold that balances the trade-off between identifying as many object instances as possible (high recall) and ensuring high accuracy in those detections (high precision).
Use precision-recall curves at different overlap thresholds for a more granular comparison between instance segmentation models than single metric evaluations. Nuances in performance can be obscured when considering only one overlap threshold or when relying solely on aggregated metrics such as mean average precision.
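A minimal plotting sketch for this kind of comparison, assuming metrics is an existing instanceSegmentationMetrics object and that its OverlapThreshold property holds the thresholds you want to compare:
% Assumes metrics is an existing instanceSegmentationMetrics object.
[precision,recall] = precisionRecall(metrics);
thresholds = metrics.OverlapThreshold;

% Overlay the precision-recall curves of the first class at every threshold.
figure
hold on
for k = 1:numel(thresholds)
    plot(recall{1,k},precision{1,k},DisplayName="IoU = " + thresholds(k))
end
hold off
xlabel("Recall")
ylabel("Precision")
legend
title("Precision-Recall Curves at Multiple Overlap Thresholds")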
Algorithms
Compute Precision and Recall
Precision and recall are powerful tools for evaluating instance segmentation network performance. The precisionRecall function computes precision and recall using these steps:
Sorts object instances in descending order based on their confidence scores. Higher confidence scores indicate that the model is more certain that the instance segmentation is correct.
Identifies true positives (TP) and false positives (FP) according to the specified overlap threshold criteria. A segmentation is considered a TP if its IoU with a ground truth object is greater than or equal to the specified overlap threshold. A segmentation is considered an FP if it does not match any ground truth object, or if it matches a ground truth object but the IoU is below the specified overlap threshold.
Starting with the instance segmentation that has the highest confidence score, the precisionRecall function iterates through each instance and calculates the cumulative number of TPs and FPs at each step. This step-by-step process enables the calculation of precision and recall at every confidence level.
Precision measures the accuracy of positive predictions, that is, the proportion of correctly detected instances among all predicted instances.
Precision = TP / (TP + FP)
Recall measures the ability of the network to detect all object instances, that is, the proportion of actual instances that the network correctly identifies. False negatives (FN) are ground truth objects that the network misses or detects with a confidence score below the specified threshold.
Recall = TP / (TP + FN)
For each instance segmentation, the set of TPs, FPs, and FNs changes, which enables you to recalculate precision and recall at each confidence level.
To visualize the trade-off between precision and recall for different confidence thresholds, plot the calculated precision and recall values. For an example, see Perform Instance Segmentation Using SOLOv2.
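A minimal sketch of this cumulative computation in plain MATLAB, using made-up example values for the confidence scores, the true-positive flags at one overlap threshold, and the number of ground truth instances:
% Hypothetical example data: per-prediction confidence scores, a logical
% vector marking true positives at one overlap threshold, and the number
% of ground truth instances.
scores = [0.95 0.90 0.80 0.60 0.40];
isTP = logical([1 1 0 1 0]);
numGroundTruth = 4;

% Sort predictions in descending order of confidence score.
[~,order] = sort(scores,"descend");
isTP = isTP(order);

% Cumulative true positives and false positives at each confidence level.
cumTP = cumsum(isTP);
cumFP = cumsum(~isTP);

% Precision and recall at each confidence level.
precision = cumTP ./ (cumTP + cumFP);
recall = cumTP ./ numGroundTruth;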
Version History
Introduced in R2024b