Installing Prerequisite Products
To use GPU Coder™ for CUDA® code generation, you must install and set up the following products. For setup instructions, see Setting Up the Prerequisite Products.
MathWorks Products and Support Packages
MATLAB® (required)
MATLAB Coder™ (required)
Parallel Computing Toolbox™ (required)
Simulink® (required for generating code from Simulink models)
Simulink Coder (required for generating code from Simulink models)
Deep Learning Toolbox™ (required for deep learning)
GPU Coder Interface for Deep Learning support package (required for deep learning)
MATLAB Coder Support Package for NVIDIA® Jetson™ and NVIDIA DRIVE® Platforms (required for deployment to embedded targets such as NVIDIA Jetson and DRIVE)
Embedded Coder® (recommended)
Computer Vision Toolbox™ (recommended)
Image Processing Toolbox™ (recommended)
For instructions on installing MathWorks® products, see the MATLAB installation documentation for your platform. If you have installed MATLAB and want to check which other MathWorks products are installed, enter ver in the MATLAB Command Window.
To install the support packages, use the Add-On Explorer in MATLAB.
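For example, you can list installed products and support packages from the MATLAB command line. This sketch uses the standard ver and matlab.addons.installedAddons functions; the toolbox short name 'parallel' refers to Parallel Computing Toolbox:

```matlab
% List all installed MathWorks products and their versions.
ver

% Check for one required product, e.g. Parallel Computing Toolbox.
ver('parallel')

% List installed add-ons, which include support packages such as the
% GPU Coder Interface for Deep Learning support package.
addons = matlab.addons.installedAddons;
disp(addons.Name)
```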
If MATLAB is installed on a path that contains non-7-bit ASCII characters, such as Japanese characters, GPU Coder does not work because it cannot locate code generation library functions.
Third-Party Hardware
NVIDIA GPU enabled for CUDA with a compatible graphics driver. For more information, see CUDA GPU Compute Capability on the NVIDIA website.
To see the CUDA compute capability requirements for code generation, consult this table.
Target | Compute Capability |
---|---|
CUDA MEX | 3.2 or higher. |
Source code, static or dynamic library, and executables | 3.2 or higher. |
Deep learning applications in 8-bit integer precision | 6.1, 7.0 or higher. |
Deep learning applications in half-precision (16-bit floating point) | 5.3, 6.0, 6.2 or higher. |
ARM® Mali graphics processor.
For the Mali device, GPU Coder supports code generation only for deep learning networks.
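You can check the compute capability of the NVIDIA GPU in your machine from MATLAB by using the Parallel Computing Toolbox gpuDevice function. A minimal sketch:

```matlab
% Query the currently selected CUDA device (requires Parallel Computing
% Toolbox and a supported NVIDIA GPU with a working driver).
gpu = gpuDevice;

% ComputeCapability is returned as a character vector, for example '7.5'.
fprintf('Device: %s, compute capability: %s\n', gpu.Name, gpu.ComputeCapability);

% Check against the minimum for generating source code and executables.
assert(str2double(gpu.ComputeCapability) >= 3.2, ...
    'GPU compute capability is below the required 3.2.');
```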
Third-Party Software
GPU Coder requires third-party software to generate code. Generating standalone code requires additional software.
Host Compiler
To build CUDA code, install a host compiler that is compatible with NVIDIA CUDA Toolkit version 12.2. This table lists compatible compilers. On Windows®, install the Microsoft® Visual Studio® IDE with the Microsoft Visual C++® compiler.
Linux® Compiler | Windows IDE | Windows Compiler |
---|---|---|
GCC C/C++ compiler. For supported versions, see Supported and Compatible Compilers. | Microsoft Visual Studio 2022 versions 17.0 through 17.9 | Microsoft Visual C++ compiler version 19.3x |
| Microsoft Visual Studio 2019 version 16.x | Microsoft Visual C++ compiler version 19.2x |
| Microsoft Visual Studio 2017 version 15.x | Microsoft Visual C++ compiler version 19.1x |
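After installing a compatible compiler, you can view and select the compiler that MATLAB uses for building generated code with the standard mex -setup command:

```matlab
% List the installed C++ compilers that MATLAB found and select one
% interactively from the MATLAB Command Window.
mex -setup C++
```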
NVIDIA Display Driver
To run CUDA code, install the NVIDIA Display Driver. This table lists the driver versions that are compatible with CUDA Toolkit version 12.2.
Linux | Windows |
---|---|
Version 535.54.03 or later | Version 536.25 or later |
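To confirm which display driver is installed, you can call the driver's nvidia-smi utility from MATLAB. This sketch assumes nvidia-smi is on your system path:

```matlab
% Query the installed NVIDIA driver version through nvidia-smi.
[status, out] = system('nvidia-smi --query-gpu=driver_version --format=csv,noheader');
if status == 0
    fprintf('Installed NVIDIA driver version: %s', out);
else
    disp('nvidia-smi not found; is the NVIDIA Display Driver installed?');
end
```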
Install Optional Software
Building standalone source code, executables, and libraries requires additional software. To build standalone code for deployment to NVIDIA GPUs, you must install the CUDA Toolkit. Additionally, to build standalone code that uses third-party libraries, install the library versions listed in the following table. To generate code for deep learning networks that does not use third-party libraries, see Code Generation for Deep Learning Networks.
Software Name | Version | Additional Information |
---|---|---|
CUDA Toolkit | 12.2 | Install the latest update of CUDA Toolkit version 12.2. To download the CUDA Toolkit, see CUDA Toolkit Archive on the NVIDIA website. |
NVIDIA CUDA Deep Neural Network Library (cuDNN) for NVIDIA GPUs | 8.9 | GPU Coder does not support cuDNN version 7 and earlier (since R2025a). To download cuDNN, see NVIDIA cuDNN on the NVIDIA website. |
NVIDIA TensorRT™ high-performance inference optimizer and runtime library | 8.6.1 | GPU Coder does not support TensorRT version 7 and earlier (since R2025a). To use the TensorRT library to build MEX functions or accelerate Simulink simulations, you must install it by using the GPU Coder Interface for Deep Learning support package. To use TensorRT in standalone code, download it from NVIDIA TensorRT on the NVIDIA website. |
ARM Compute Library for Mali GPUs | 19.05 | For more information, see Arm Compute Library on the ARM website. |
Open Source Computer Vision Library (OpenCV) | To target NVIDIA GPUs on the host development computer, use OpenCV version 3.1.0. To target ARM GPUs, use OpenCV version 2.4.9 on the ARM target hardware. | For more information, see the OpenCV website. |
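After installing the CUDA Toolkit and the optional libraries, you can verify the setup from MATLAB with the GPU Coder coder.checkGpuInstall function. In this sketch, the library paths are placeholders for your actual installation locations; the environment variable names are the ones the GPU Coder setup documentation describes:

```matlab
% Point GPU Coder at the third-party library installations.
% The paths below are placeholders; substitute your own.
setenv('NVIDIA_CUDNN', '/usr/local/cudnn');        % cuDNN root folder
setenv('NVIDIA_TENSORRT', '/usr/local/tensorrt');  % TensorRT root folder

% Verify the host environment, including the CUDA Toolkit, the host
% compiler, and deep learning code generation with cuDNN.
envCfg = coder.gpuEnvConfig('host');
envCfg.DeepLibTarget = 'cudnn';
envCfg.DeepCodegen = 1;
envCfg.Quiet = 1;
coder.checkGpuInstall(envCfg);
```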
See Also
Topics
- Setting Up the Prerequisite Products
- The GPU Environment Check and Setup App
- Generate Code by Using the GPU Coder App
- Generate Code Using the Command Line Interface
- Code Generation for Deep Learning Networks by Using cuDNN
- Code Generation for Deep Learning Networks by Using TensorRT
- Code Generation for Deep Learning Networks Targeting ARM Mali GPUs