To use GPU Coder™ for CUDA® C/C++ code generation, you must install the following products:
MATLAB Coder™ (required).
Parallel Computing Toolbox™ (required).
Deep Learning Toolbox™ (required for deep learning).
GPU Coder Interface for Deep Learning Libraries (required for deep learning).
Image Processing Toolbox™ (recommended).
Computer Vision Toolbox™ (recommended).
Embedded Coder® (recommended).
If MATLAB is installed on a path that contains non-7-bit ASCII characters, such as Japanese characters, MATLAB Coder does not work because it cannot locate the code generation library functions.
For instructions on installing MathWorks® products, see the MATLAB installation documentation for your platform. If you have installed MATLAB and want to check which other MathWorks products are installed, enter ver in the MATLAB Command Window.
NVIDIA® GPU enabled for CUDA with compute capability 3.2 or higher (Is my GPU supported?).
CUDA toolkit and driver. The default CUDA toolkit installation includes the cuSOLVER and Thrust libraries.
GPU Coder has been tested with CUDA toolkit v10.0 (Get the CUDA toolkit).
C/C++ compiler (one of the following):
GCC C/C++ compiler 6.3.x
Microsoft® Visual Studio® 2013
Microsoft Visual Studio 2015
Microsoft Visual Studio 2017
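The compute-capability requirement above can be checked programmatically once you have the capability string for your device (for example, as reported by deviceQuery). A minimal sketch; the helper name is hypothetical, not part of any MathWorks or NVIDIA API:

```python
# Hypothetical helper: check a CUDA compute capability string against the
# minimum that GPU Coder requires (3.2, per the requirements above).
def meets_minimum_capability(capability, minimum=(3, 2)):
    """Return True if a 'major.minor' capability string is >= minimum."""
    major, minor = (int(part) for part in capability.split("."))
    return (major, minor) >= minimum

print(meets_minimum_capability("6.1"))  # True: Pascal-class GPU
print(meets_minimum_capability("3.0"))  # False: below the 3.2 minimum
```

Note that tuple comparison handles the major/minor ordering correctly, so capability 3.0 fails while 3.2 passes.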
The nvcc compiler supports multiple versions of GCC, so you can generate CUDA code with other GCC versions. However, there may be compatibility issues when executing the generated code from MATLAB, because the C/C++ run-time libraries included with the MATLAB installation are compiled for GCC 6.3.
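The GCC caveat above reduces to a simple version comparison. A sketch, assuming the installed GCC version string is obtained separately (for example, from gcc -dumpversion); the function name is an assumption:

```python
# The MATLAB-bundled C/C++ run-time libraries are compiled for GCC 6.3,
# so other GCC versions may cause run-time incompatibilities.
TESTED_GCC = (6, 3)

def gcc_runtime_warning(version_string):
    """Return a warning string if the GCC version differs from the tested 6.3.x."""
    major, minor = (int(x) for x in version_string.split(".")[:2])
    if (major, minor) != TESTED_GCC:
        return f"GCC {version_string} differs from the tested 6.3.x toolchain"
    return None

print(gcc_runtime_warning("6.3.0"))  # None: matches the tested toolchain
print(gcc_runtime_warning("7.4"))    # warning string
```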
The code generation requirements for deep learning networks depend on the platform you are targeting.
CUDA enabled GPU with compute capability 3.2 or higher.
CUDA Deep Neural Network library (cuDNN) v7 or higher.
NVIDIA TensorRT – high performance deep learning inference optimizer and runtime library.
Operating system support:
cuDNN is supported on Windows and Linux.
TensorRT is supported only on Linux.
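The operating-system constraints above can be captured in a small lookup. A sketch; the table and function names are illustrative only, not part of any MathWorks API:

```python
import platform

# Deep learning library support by operating system, per the text above:
# cuDNN on Windows and Linux, TensorRT on Linux only.
SUPPORTED_LIBRARIES = {
    "Windows": ["cudnn"],
    "Linux": ["cudnn", "tensorrt"],
}

def supported_dl_libraries(os_name=None):
    """Return the deep learning target libraries available on an OS."""
    os_name = os_name or platform.system()
    return SUPPORTED_LIBRARIES.get(os_name, [])

print(supported_dl_libraries("Linux"))    # ['cudnn', 'tensorrt']
print(supported_dl_libraries("Windows"))  # ['cudnn']
```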
Open Source Computer Vision Library (OpenCV) v3.1.0 is required for the deep learning examples.
Note: The examples require separate OpenCV libraries. For more information, refer to the OpenCV documentation.
Use the CUDA toolkit for ARM® and the Linaro GCC 4.9 toolchain for the TX2. Use the CUDA toolkit for ARM and the Linaro GCC 4.9 toolchain for the TX1. Use the CUDA toolkit 6.5 for ARM and the Linaro GCC 4.8 toolchain for the TK1.
To set up the Linaro tools, see the instructions on Cross-Compilation on Linux.
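The board-specific toolkit and toolchain pairings above can be summarized in a lookup table. A sketch; the dictionary and function are illustrative, not part of the GPU Coder API:

```python
# Toolkit/toolchain pairings for NVIDIA Jetson boards, per the text above.
BOARD_TOOLCHAINS = {
    "TX2": ("CUDA toolkit for ARM", "Linaro GCC 4.9"),
    "TX1": ("CUDA toolkit for ARM", "Linaro GCC 4.9"),
    "TK1": ("CUDA toolkit 6.5 for ARM", "Linaro GCC 4.8"),
}

def toolchain_for(board):
    """Return the (toolkit, toolchain) pair for a Jetson board name."""
    try:
        return BOARD_TOOLCHAINS[board.upper()]
    except KeyError:
        raise ValueError(f"Unknown board: {board}")

print(toolchain_for("tx2"))
print(toolchain_for("TK1"))
```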