GPU Coder™ uses environment variables to locate the necessary tools, compilers, and libraries required for code generation. If you have a non-standard installation of the required third-party products, ensure that the following environment variables are set.
On Windows®, a space or special character in the path to the tools, compilers, and libraries can cause issues during the build process. Install third-party software in locations whose paths do not contain spaces, or change Windows settings to enable creation of short names for files, folders, and paths. For more information, see Using Windows short names solution in MATLAB Answers.
Platform | Variable Name | Description |
---|---|---|
Windows | CUDA_PATH | Path to the CUDA® toolkit installation. |
Windows | NVIDIA_CUDNN | Path to the root folder of the cuDNN installation. The root folder contains the bin, include, and lib subfolders. |
Windows | NVIDIA_TENSORRT | Path to the root folder of the TensorRT installation. The root folder contains the bin, data, include, and lib subfolders. |
Windows | OPENCV_DIR | Path to the build folder of OpenCV on the host. This variable is required for building and running deep learning examples. |
Windows | PATH | Path to the CUDA executables. Generally, the CUDA toolkit installer sets this value automatically. |
Windows | PATH | Path to the cuDNN dynamic-link libraries (DLLs). |
Windows | PATH | Path to the TensorRT™ dynamic-link libraries (DLLs). |
Windows | PATH | Path to the dynamic-link libraries (DLLs) of OpenCV. This variable is required for running deep learning examples. |
Linux® | PATH | Path to the CUDA toolkit executables. |
Linux | PATH | Path to the OpenCV libraries. This variable is required for building and running deep learning examples. |
Linux | PATH | Path to the OpenCV header files. This variable is required for building deep learning examples. |
Linux | LD_LIBRARY_PATH | Path to the CUDA library folder. |
Linux | LD_LIBRARY_PATH | Path to the cuDNN library folder. |
Linux | LD_LIBRARY_PATH | Path to the TensorRT library folder. |
Linux | LD_LIBRARY_PATH | Path to the ARM® Compute Library folder on the target hardware. Set this value on the ARM target hardware. |
Linux | NVIDIA_CUDNN | Path to the root folder of the cuDNN library installation. |
Linux | NVIDIA_TENSORRT | Path to the root folder of the TensorRT library installation. |
Linux | ARM_COMPUTELIB | Path to the root folder of the ARM Compute Library installation on the ARM target hardware. Set this value on the ARM target hardware. |
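If you prefer not to set these variables system-wide, you can set them for the current MATLAB session by using the setenv function. The following is a minimal sketch for a Linux host; the paths are placeholders that assume a typical installation layout, so substitute the actual locations from your system.

% Set GPU Coder environment variables for the current MATLAB session.
% The paths below are placeholders -- replace them with your installation paths.
setenv('NVIDIA_CUDNN','/usr/local/cudnn');                                        % assumed cuDNN root folder
setenv('NVIDIA_TENSORRT','/usr/local/tensorrt');                                  % assumed TensorRT root folder
setenv('PATH',[getenv('PATH') ':/usr/local/cuda/bin']);                           % append CUDA executables
setenv('LD_LIBRARY_PATH',[getenv('LD_LIBRARY_PATH') ':/usr/local/cuda/lib64']);   % append CUDA libraries
getenv('NVIDIA_CUDNN')                                                            % verify that the value is set

Variables set with setenv apply only to the current MATLAB session and to processes started from MATLAB. To make the settings permanent, set them in your shell startup file or in the operating system settings.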
If you have multiple versions of Microsoft® Visual Studio® compilers for the C/C++ language installed on your Windows system, MATLAB® selects one as the default compiler. If the selected compiler is not compatible with the version supported by GPU Coder, change the selection. For supported Microsoft Visual Studio versions, see Installing Prerequisite Products.
To change the default compiler, use the mex -setup command. When you call mex -setup, MATLAB displays a message with links to set up a different compiler. Select a link and change the default compiler for building MEX files. The compiler that you choose remains the default until you call mex -setup to select a different default. For more information, see Change Default Compiler (MATLAB).
The mex -setup command changes only the C language compiler. You must also change the default compiler for C++ by using mex -setup C++.
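For example, you can check which compilers are currently selected and then change the selection from the MATLAB Command Window. The commands below use documented MATLAB functionality; the interactive prompts that follow depend on the compilers installed on your system.

mex.getCompilerConfigurations('C','Selected')     % show the currently selected C compiler
mex.getCompilerConfigurations('C++','Selected')   % show the currently selected C++ compiler
mex -setup C                                      % list installed C compilers and choose a default
mex -setup C++                                    % repeat the selection for the C++ language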
MATLAB and the CUDA toolkit support only the GCC compiler for the C language on Linux platforms. For supported GCC versions, see Installing Prerequisite Products.
To verify that your development computer has all the tools and configuration needed for GPU code generation, use the coder.checkGpuInstall function. This function performs checks to verify that your environment has all the third-party tools and libraries required for GPU code generation. You must pass a coder.gpuEnvConfig object to the function. The function verifies the GPU code generation environment based on the properties specified in the given configuration object.
You can also use the equivalent GUI-based application, which performs the same checks and can be launched by using the coder.checkGpuInstallApp command.
In the MATLAB Command Window, enter:
gpuEnvObj = coder.gpuEnvConfig;          % create a default GPU environment configuration object
gpuEnvObj.BasicCodegen = 1;              % check basic CUDA code generation
gpuEnvObj.BasicCodeexec = 1;             % check execution of the generated basic code
gpuEnvObj.DeepLibTarget = 'tensorrt';    % target the TensorRT deep learning library
gpuEnvObj.DeepCodeexec = 1;              % check execution of the generated deep learning code
gpuEnvObj.DeepCodegen = 1;               % check deep learning code generation
results = coder.checkGpuInstall(gpuEnvObj)
The output shown here is representative. Your results might differ.
Compatible GPU                          : PASSED
CUDA Environment                        : PASSED
    Runtime   : PASSED
    cuFFT     : PASSED
    cuSOLVER  : PASSED
    cuBLAS    : PASSED
cuDNN Environment                       : PASSED
TensorRT Environment                    : PASSED
Basic Code Generation                   : PASSED
Basic Code Execution                    : PASSED
Deep Learning (TensorRT) Code Generation: PASSED
Deep Learning (TensorRT) Code Execution : PASSED

results =

  struct with fields:

                 gpu: 1
                cuda: 1
               cudnn: 1
            tensorrt: 1
        basiccodegen: 1
       basiccodeexec: 1
         deepcodegen: 1
        deepcodeexec: 1
    tensorrtdatatype: 1
           profiling: 0
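Because coder.checkGpuInstall returns a structure of logical flags, a setup script can act on the result programmatically. The following is a minimal sketch based on the field names shown above; adjust the condition to the checks that you enabled.

% Stop if any of the required checks did not pass.
if ~(results.gpu && results.cuda && results.basiccodegen && results.deepcodegen)
    error('GPU code generation environment is not configured correctly.');
end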
See Also: GPU Coder | codegen | coder.checkGpuInstall | coder.checkGpuInstallApp | coder.gpuEnvConfig