Code Profile Analyzer
Description
Use the Code Profile Analyzer app to analyze execution-time and stack usage profiles produced by software-in-the-loop (SIL) or processor-in-the-loop (PIL) execution of generated code:
- View execution times of tasks and functions, visualize the function-call stack at a specific simulation time, and compare execution times from different runs.
- View stack usage for tasks and functions and compare values from different runs.
If you are using Embedded Coder®, you can also:
- Analyze execution-time profiles produced by XCP-based external mode simulations.
- Configure code profiling for a model hierarchy.
- Identify and analyze performance hotspots in the generated code. This workflow uses the `coder.profile.test.runTests` function.
- Identify and analyze critical paths in the generated code. This workflow uses the `coder.profile.test.analyzePath` function.
Open the Code Profile Analyzer App
- If you are generating code from a Simulink® model, click the SIL/PIL tab. In the Results gallery, under Execution Profiling Results or Stack Profiling Results, click the Code Profile Analyzer button.
- If you are generating code from MATLAB® code, terminate the SIL/PIL execution in the Command Window. The code generator produces a message in the Command Window that contains a link to the execution or stack profiling report. Click that link.
Alternatively, for both Simulink and MATLAB workflows, after the SIL/PIL execution has been terminated, open the Code Profile Analyzer from the Command Window by entering `coder.profile.show` or `coder.profile.show(profileVariable)`.
For the Simulink workflow, `profileVariable` is a workspace variable (a `coder.profile.ExecutionTime` or `coder.profile.ExecutionStack` object) that you specify through the Workspace variable (`CodeExecutionProfileVariable`) or Stack workspace variable (`CodeStackProfileVariable`) configuration parameter.
For the MATLAB workflow, obtain `profileVariable` by using the `getCoderExecutionProfile` or `getCoderStackProfile` function.
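The two workflows can be sketched as follows. This is a minimal, hedged example: the model name `myModel`, the entry-point function name `myFunction`, and the workspace variable name `executionProfile` are illustrative assumptions, not names from this documentation.

```matlab
% Simulink workflow (assumption: a model configured for SIL execution-time
% profiling with the Workspace variable parameter set to "executionProfile").
% After the SIL simulation ends, the profile object is in the base workspace:
coder.profile.show(executionProfile)

% MATLAB workflow (assumption: a hypothetical entry-point "myFunction"
% previously run in SIL mode with execution-time profiling enabled).
% After terminating the SIL execution, retrieve the profile and open the app:
executionProfile = getCoderExecutionProfile('myFunction');
coder.profile.show(executionProfile)
```

For stack usage measurements, the same pattern applies with `getCoderStackProfile` and a `coder.profile.ExecutionStack` workspace variable.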
If the app is already open, you can use the General tab to import single simulation results by selecting the workspace variable that contains your execution-time or stack usage measurements.
You cannot use the General tab to import aggregated results from multiple simulations. Instead, use the command `coder.profile.show(profileAggregate)`, where `profileAggregate` is a `coder.profile.ExecutionTimeSet` or `coder.profile.ExecutionStackSet` object.
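For example, importing aggregated results might look like this. The variable name `profileAggregate` is the one used above; how the set object is assembled (for example, by a test harness that collects per-run profiles) is assumed here and not shown.

```matlab
% Assumption: profileAggregate is a coder.profile.ExecutionTimeSet (or
% coder.profile.ExecutionStackSet) object already populated with results
% from multiple SIL/PIL runs, for example by a test harness.
% Opening the app this way enables the cumulative-results workflows:
coder.profile.show(profileAggregate)
```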
Code Profile Analyzer Workflows
The type of simulation results that you import determines the tabs displayed by the app and the workflows that you can perform.
When you import results using a coder.profile.ExecutionTime object,
the Time Profiling tab displays tabs as described in this table.
| Tab | Purpose |
|---|---|
| Task Execution | Show execution-time metrics of tasks. For each task, inspect execution times (that is, minimum, average, and maximum execution times) and the corresponding CPU utilization, which enables you to determine whether overall the tasks can run on the target hardware without execution-time issues. |
| Function Execution | Show execution-time metrics of generated code functions to identify hotspots and determine functions that require rework. |
| Function-Call Stack | Show function-call stack at a specific simulation time. If you identify that a task is taking longer than expected, you can display its function-call stack for each execution instance. This feature helps you to identify performance bottlenecks. |
| Comparison | Compare execution-time metrics of two simulations to identify performance regressions. |
| Task Validation | Perform CPU feasibility analysis to determine formally whether the task set can run without execution-time issues. Analyze the effect of task-to-CPU mapping. |
| CPU Usage | For each task, display execution times across the available CPUs. This feature shows how tasks are allocated on the system and can help you to improve load balancing. Available only when the imported profiling results contain per-CPU (multicore) execution data. |
When you import results using a coder.profile.ExecutionTimeSet
object, the Time Profiling tab displays tabs as described in this
table.
| Tab | Purpose |
|---|---|
| Cumulative Results | Show task execution-time metrics from different tests. If you run a suite of test cases, for each task, you can identify the longest and average execution time. Then, you can identify the test case that produces the most demanding execution. |
| Flame Graph | Show flame graph for longest task executions. This view helps you to understand what happens during a task execution instance, enabling you to identify performance bottlenecks. |
| Comparison | Compare execution-time metrics of two simulations to identify performance regressions. |
| Task Validation | Perform CPU feasibility analysis to determine formally whether the task set can run without execution-time issues. Analyze the effect of task-to-CPU mapping. |
When you import results using a coder.profile.ExecutionStack object,
the Memory Profiling tab displays tabs as described in this
table.
| Tab | Purpose |
|---|---|
| Stack Memory | Show stack usage metrics of tasks and functions. For each task, view a summary of the minimum, average, and maximum values of stack memory usage. Within a task, see how much memory is required for each executed function. |
| Function-Call Stack | Show function-call stack at a specific simulation time. If you observe that memory allocation for a task is higher than expected, you can display its function-call stack for each execution instance. This feature helps you to identify performance bottlenecks. |
| Comparison | Compare stack usage metrics of two simulations to identify performance regressions. |
When you import results using a coder.profile.ExecutionStackSet
object, the Memory Profiling tab displays tabs as described in this
table.
| Tab | Purpose |
|---|---|
| Cumulative Results | Show task stack usage metrics from different tests. If you run a suite of test cases, for each task, you can identify the maximum value of stack usage. Then, you can identify the test case that produces the most demanding execution. |
| Comparison | Compare stack usage metrics of two simulations to identify performance regressions. |
If your model is configured to produce Performance Monitoring Unit (PMU) metrics, then when you import results, the Code Profiling tab displays tabs as described in this table.
| Tab | Purpose |
|---|---|
| Function Execution | Show PMU metrics of generated code functions to identify hotspots and determine functions that require rework. |
| Comparison | Compare PMU metrics of two simulations to identify performance regressions. |
Whether you import data or not, you can also use the Code Profile Analyzer app to configure code profiling in a model hierarchy. See Configure Execution-Time Profiling in Model Hierarchy Using Code Profile Analyzer and Configure Stack Usage Profiling in Model Hierarchy Using Code Profile Analyzer.
Examples
Related Examples
Version History
Introduced in R2023a
