
execute

Collect metric data for Model Testing Dashboard

Syntax

execute(metricEngine,metricIDs)
execute(metricEngine,metricIDs,'ArtifactScope',unit)

Description


execute(metricEngine,metricIDs) collects results in the specified metric.Engine object for the metrics that you specify in metricIDs.


execute(metricEngine,metricIDs,'ArtifactScope',unit) collects metric results for the artifacts in the unit that you specify. A unit is the smallest testable entity in your project that is represented by a model, and it includes referenced models. The artifacts in a unit are the models and the requirements, test cases, and test results that trace to the unit.
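For quick reference, the following sketch condenses the workflow that the examples below walk through step by step. The metric identifier matches those examples; the model path and model name in the scoped call are placeholders.

% Assumes an open project that is set up for the Model Testing Dashboard
metric_engine = metric.Engine();
updateArtifacts(metric_engine)

% Collect a metric for the whole project ...
execute(metric_engine,{'RequirementsPerTestCase'});

% ... or only for the artifacts that trace to one unit model
% (placeholder path and model name)
execute(metric_engine,{'RequirementsPerTestCase'}, ...
    'ArtifactScope',{fullfile(pwd,'models','MyUnit.slx'),'MyUnit'});

results = getMetrics(metric_engine,'RequirementsPerTestCase');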

Examples


Collect metric data on the requirements-based testing artifacts in a project.

Open the project. At the command line, type dashboardCCProjectStart.

dashboardCCProjectStart

Create a metric.Engine object for the project.

metric_engine = metric.Engine();

Update the trace information for metric_engine to reflect any pending artifact changes and ensure that all test results are tracked.

updateArtifacts(metric_engine)

Collect results for the metric Requirements per test case by executing the metric engine.

execute(metric_engine,{'RequirementsPerTestCase'});

Use the function getMetrics to access the results.

results = getMetrics(metric_engine,'RequirementsPerTestCase');
for n = 1:length(results)
    disp(['Test Case: ',results(n).Artifacts(1).Name])
    disp(['  Number of Requirements: ',num2str(results(n).Value)])
end
Test Case: Set button
  Number of Requirements: 0
Test Case: Decrement button hold
  Number of Requirements: 1
Test Case: Resume button
  Number of Requirements: 1
Test Case: Cancel button
  Number of Requirements: 1
Test Case: Decrement button short
  Number of Requirements: 2
Test Case: Increment button hold
  Number of Requirements: 1
Test Case: Increment button short
  Number of Requirements: 2
Test Case: Enable button
  Number of Requirements: 1
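The returned results array also supports programmatic checks. As a small follow-on sketch, assuming each result has the scalar numeric Value and artifact list shown above, you can flag test cases that are not yet linked to any requirement:

% Select results whose metric value is zero
unlinked = results([results.Value] == 0);
for n = 1:length(unlinked)
    disp(['No requirements linked: ',unlinked(n).Artifacts(1).Name])
end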

Collect metrics for one unit in the project. Specify a model and collect metrics for only the artifacts that trace to the model.

Open the project that contains the model. At the command line, type dashboardCCProjectStart.

dashboardCCProjectStart

Create a metric.Engine object for the project.

metric_engine = metric.Engine();

Update the trace information for metric_engine to reflect any pending artifact changes and ensure that all test results are tracked.

updateArtifacts(metric_engine)

Create a variable that represents the path to the model db_DriverSwRequest.

modelPath = fullfile(pwd, 'models', 'db_DriverSwRequest.slx');

Collect results for the metric Requirements per test case by using the execute function on the engine object and limiting the scope to the db_DriverSwRequest model.

execute(metric_engine,{'RequirementsPerTestCase'},'ArtifactScope',{modelPath, 'db_DriverSwRequest'});

Use the function getMetrics to access the results.

results = getMetrics(metric_engine,'RequirementsPerTestCase');
for n = 1:length(results)
    disp(['Test Case: ',results(n).Artifacts(1).Name])
    disp(['  Number of Requirements: ',num2str(results(n).Value)])
end
Test Case: Set button
  Number of Requirements: 0
Test Case: Resume button
  Number of Requirements: 1
Test Case: Decrement button short
  Number of Requirements: 2
Test Case: Enable button
  Number of Requirements: 1
Test Case: Increment button hold
  Number of Requirements: 1
Test Case: Increment button short
  Number of Requirements: 2
Test Case: Cancel button
  Number of Requirements: 1
Test Case: Decrement button hold
  Number of Requirements: 1
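Because these results are scoped to the unit, you can summarize them directly. For example, this sketch totals the requirement links across the unit's test cases; for the output above, the total is 9:

% Sum the metric values across all test cases in the unit
totalLinks = sum([results.Value]);
disp(['Total requirement links: ',num2str(totalLinks)])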

Input Arguments


metricEngine

Metric engine object for which you want to collect metric results, specified as a metric.Engine object.

metricIDs

Metric identifiers for the metrics that you want to collect, specified as a character vector or cell array of character vectors. Depending on the metric, collecting results requires a Simulink® Test™, Simulink Requirements™, or Simulink Coverage™ license. For a list of metrics, their identifiers, and their license requirements, see Model Testing Metrics.

Example: 'TestCasesPerRequirementDistribution'

Example: {'TestCaseStatus', 'DecisionCoverageBreakdown'}
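For instance, a single call can collect both of the metrics above in one pass. This sketch assumes that metric_engine was created and updated as in the Examples section:

execute(metric_engine,{'TestCaseStatus','DecisionCoverageBreakdown'});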

unit

Path and name of the unit for which you want to collect metric results, specified as a cell array in which the first entry is the full path to the model file and the second entry is the name of the block diagram. When you use this argument, the metric engine collects results only for the artifacts that trace to the unit model.

Example: {'C:\work\MyModel.slx', 'MyModel'}

Introduced in R2020b