cnncodegen

Generate code for a deep learning network to target the ARM Mali GPU

Syntax

cnncodegen(net,'targetlib','arm-compute-mali')
cnncodegen(net,'targetlib','arm-compute-mali',targetparams)

Description

cnncodegen(net,'targetlib','arm-compute-mali') generates C++ code for the specified network object by using the ARM® Compute Library for Mali GPUs. This function requires the GPU Coder™ product and the GPU Coder Interface for Deep Learning.

cnncodegen(net,'targetlib','arm-compute-mali',targetparams) generates C++ code for the specified network object by using the ARM Compute Library for Mali GPUs with additional code generation options.
Examples
Generate C++ Code for a Pretrained Network to Run on an ARM Processor
Use cnncodegen to generate C++ code for a pretrained network for deployment to an ARM Mali graphics processor.

Get the pretrained GoogLeNet model by using the googlenet (Deep Learning Toolbox) function. This function requires the Deep Learning Toolbox™ Model for GoogLeNet Network support package. If you have not installed this support package, the function provides a download link. Alternatively, see https://www.mathworks.com/matlabcentral/fileexchange/64456-deep-learning-toolbox-model-for-googlenet-network.
net = googlenet;
Generate code by using cnncodegen with 'targetlib' set to 'arm-compute-mali'. By default, the code generator targets version '19.05' of the ARM Compute Library. To target a different version of the library, use the 'ArmComputeVersion' parameter.

cnncodegen(net,'targetlib','arm-compute-mali', ...
    'targetparams',struct('ArmComputeVersion','19.02'));
------------------------------------------------------------------------
Compilation suppressed: generating code only.
------------------------------------------------------------------------
### Codegen Successfully Generated for arm device
The code generator generates the .cpp and header files in the '/pwd/codegen' folder. The DAG network is generated as a C++ class called CnnMain, containing an array of 87 layer classes. The code generator reduces the number of layers by fusing the convolutional and batch normalization layers. The setup() method of this class sets up handles and allocates resources for each layer object. The predict() method invokes prediction for each of the 87 layers in the network. The cleanup() method releases all the memory and system resources allocated for each layer object. All the binary weights (cnn_**_w) and the bias files (cnn_**_b) for the convolution layers of the network are stored in the codegen folder.
To build the library, move the generated code to the ARM target platform and use the generated makefile cnnbuild_rtw.mk.
Input Arguments
net — Pretrained deep learning network object
SeriesNetwork object | DAGNetwork object

Pretrained SeriesNetwork or DAGNetwork object.
Note

cnncodegen does not support dlnetwork objects.
targetparams — Library-specific parameters
structure

ARM Compute Library-specific parameters, specified as a 1-by-1 structure containing the field described in this table.

| Field | Description |
|---|---|
| ArmComputeVersion | Version of the ARM Compute Library on the target hardware. |
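For example, the parameter structure can be built separately and passed to the function (this mirrors the example above; '19.02' is one of the documented library versions):

```matlab
% Build the ARM Compute Library parameter structure separately
targetparams = struct('ArmComputeVersion','19.02');
cnncodegen(net,'targetlib','arm-compute-mali','targetparams',targetparams);
```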
Version History

Introduced in R2017b

R2021a: Changes to Target Library Support

From R2021a, the cnncodegen function generates C++ code and makefiles to build a static library for only the ARM Mali GPU processor by using the ARM Compute Library for computer vision and machine learning.

For all other targets, use the codegen command. Write an entry-point function in MATLAB® that uses the coder.loadDeepLearningNetwork function to load a deep learning model and calls predict (Deep Learning Toolbox) to predict the responses. For example:

function out = googlenet_predict(in) %#codegen

persistent mynet;
if isempty(mynet)
    mynet = coder.loadDeepLearningNetwork('googlenet');
end

% pass in input
out = predict(mynet,in);
This table shows some typical usages of cnncodegen and how to update your code to use codegen instead.

| Target workflow | Not recommended | Recommended |
|---|---|---|
| ARM CPU processor | Set the 'targetlib' option:<br>`cnncodegen(net,'targetlib','arm-compute','targetparams',struct('ArmComputeVersion','19.02','ArmArchitecture','armv8'))`<br>You can also target other supported versions of the ARM Compute Library. You can specify the ARM architecture as 'armv7' or 'armv8'. | Create a library code configuration object:<br>`cfg = coder.config('lib');`<br>`cfg.TargetLang = 'C++';`<br>Create a deep learning configuration object:<br>`dlcfg = coder.DeepLearningConfig('arm-compute');`<br>`dlcfg.ArmArchitecture = 'armv8';`<br>`dlcfg.ArmComputeVersion = '19.02';`<br>`cfg.DeepLearningConfig = dlcfg;`<br>Use the codegen command:<br>`arg = {ones(224,224,3,'single')};`<br>`codegen -args arg -config cfg googlenet_predict`<br>For more information, see Code Generation for Deep Learning Networks with ARM Compute Library. |
| NVIDIA® GPUs by using the CUDA® Deep Neural Network library (cuDNN) | Set the 'targetlib' option:<br>`cnncodegen(net,'targetlib','cudnn','ComputeCapability','7.0','targetparams',struct('AutoTuning',true,'DataType','INT8','CalibrationResultFile','myInt8Cal.mat'))`<br>The auto tuning feature allows the cuDNN library to find the fastest convolution algorithms. | Create a GPU code configuration object:<br>`cfg = coder.gpuConfig('lib');`<br>`cfg.TargetLang = 'C++';`<br>To set the minimum compute capability for code generation:<br>`cfg.GpuConfig.ComputeCapability = '7.0';`<br>Create a deep learning configuration object:<br>`dlcfg = coder.DeepLearningConfig('cudnn');`<br>`dlcfg.AutoTuning = true;`<br>`dlcfg.DataType = 'int8';`<br>`dlcfg.CalibrationResultFile = 'myInt8Cal.mat';`<br>`cfg.DeepLearningConfig = dlcfg;`<br>Use the codegen command:<br>`arg = {ones(224,224,3,'single')};`<br>`codegen -args arg -config cfg googlenet_predict`<br>For more information, see Code Generation for Deep Learning Networks by Using cuDNN. |
| Intel® CPU processor | To use the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN) for Intel CPUs, set the 'targetlib' option:<br>`cnncodegen(net,'targetlib','mkldnn');` | Create a library code configuration object:<br>`cfg = coder.config('lib');`<br>`cfg.TargetLang = 'C++';`<br>Create a deep learning configuration object:<br>`dlcfg = coder.DeepLearningConfig('mkldnn');`<br>`cfg.DeepLearningConfig = dlcfg;`<br>Use the codegen command:<br>`arg = {ones(224,224,3,'single')};`<br>`codegen -args arg -config cfg googlenet_predict`<br>For more information, see Code Generation for Deep Learning Networks with MKL-DNN. |
| NVIDIA GPUs by using NVIDIA TensorRT™, a high performance deep learning inference optimizer and run-time library | Set the 'targetlib' option:<br>`cnncodegen(net,'targetlib','tensorrt','ComputeCapability','7.0','targetparams',struct('DataType','INT8','DataPath','image_dataset','NumCalibrationBatches',50))` | Create a GPU code configuration object:<br>`cfg = coder.gpuConfig('lib');`<br>`cfg.TargetLang = 'C++';`<br>To set the minimum compute capability for code generation:<br>`cfg.GpuConfig.ComputeCapability = '7.0';`<br>Create a deep learning configuration object:<br>`dlcfg = coder.DeepLearningConfig('tensorrt');`<br>`dlcfg.DataType = 'int8';`<br>`dlcfg.DataPath = 'image_dataset';`<br>`dlcfg.NumCalibrationBatches = 50;`<br>`cfg.DeepLearningConfig = dlcfg;`<br>Use the codegen command:<br>`arg = {ones(224,224,3,'single')};`<br>`codegen -args arg -config cfg googlenet_predict`<br>For more information, see Deep Learning Prediction with NVIDIA TensorRT Library. |
| General options | Generate code without generating and building a makefile. For example:<br>`cnncodegen(net,'targetlib','mkldnn','codegenonly',1);` | To produce source code without invoking the make command or building object code:<br>`cfg = coder.config('lib');`<br>`cfg.GenCodeOnly = true;` |
| Specifying the NVIDIA GPU compute capability to compile for | Pass the 'ComputeCapability' option:<br>`cnncodegen(net,'targetlib','cudnn','ComputeCapability','7.0');` | To set the minimum compute capability for code generation, use the configuration object:<br>`cfg = coder.gpuConfig('lib');`<br>`cfg.GpuConfig.ComputeCapability = '7.0';` |
See Also
Topics
- Code Generation for Deep Learning Networks with MKL-DNN
- Deep Learning Prediction with ARM Compute Using codegen
- Code Generation for Deep Learning Networks by Using cuDNN
- Code Generation for Deep Learning Networks by Using TensorRT
- Code Generation for Deep Learning Networks Targeting ARM Mali GPUs
- Code Generation for Object Detection by Using YOLO v2
- Deep Learning Prediction with NVIDIA TensorRT Library