Target library
Deep learning library to use during code generation for the target environment
Since R2020b
Model Configuration Pane: Code Generation / Interface
Description
The Target library parameter specifies the deep learning library the code generator uses to produce code for the target environment.
Dependency
To enable this parameter, set the System target file parameter to grt.tlc or ert.tlc and the Language parameter to C++.
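For reference, the dependency above can also be satisfied programmatically. The sketch below assumes a loaded Simulink model named 'myModel' (the model name is illustrative); SystemTargetFile and TargetLang are the programmatic names of the System target file and Language parameters.

```matlab
% Assumes a model named 'myModel' is loaded; the name is illustrative.
set_param('myModel', 'SystemTargetFile', 'ert.tlc');  % or 'grt.tlc'
set_param('myModel', 'TargetLang', 'C++');            % Target library requires C++
```

With these two parameters set, the Target library parameter becomes available in the Configuration Parameters dialog box.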
Settings
None (default) | MKL-DNN | ARM Compute | cuDNN | TensorRT
MKL-DNN
Generates code that uses the Intel® Math Kernel Library for Deep Neural Networks (Intel MKL-DNN).
To enable this setting (or ARM Compute), clear the GPU acceleration parameter. When you select this setting, the Language standard parameter is set to C++11 (ISO).
ARM Compute
Generates code that uses the ARM® Compute Library.
To enable this setting (or MKL-DNN), clear the GPU acceleration parameter.
cuDNN
Generates code that uses the CUDA® Deep Neural Network library (cuDNN).
This setting requires a GPU Coder™ license.
To enable this setting (or TensorRT), select the GPU acceleration parameter.
TensorRT
Generates code that takes advantage of NVIDIA® TensorRT, a high-performance deep learning inference optimizer and runtime library.
This setting requires a GPU Coder license.
To enable this setting (or cuDNN), select the GPU acceleration parameter.
Recommended Settings
Application | Setting |
---|---|
Debugging | |
Traceability | |
Efficiency | |
Safety precaution | |
Programmatic Use
Parameter: DLTargetLibrary
Type: character vector
Value: 'None' | 'MKL-DNN' | 'arm-compute' | 'cuDNN' | 'TensorRT'
Default: 'None'
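A minimal usage sketch of the programmatic interface, assuming a loaded model named 'myModel' (the model name is illustrative):

```matlab
% Assumes a model named 'myModel' is loaded; the name is illustrative.
% Select the Intel MKL-DNN library as the deep learning target library.
set_param('myModel', 'DLTargetLibrary', 'MKL-DNN');

% Query the current setting; returns the character vector 'MKL-DNN'.
dlLib = get_param('myModel', 'DLTargetLibrary');
```

Note that the setting takes effect only when the dependencies described above are met (System target file set to grt.tlc or ert.tlc, Language set to C++).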
Version History
Introduced in R2020b