semanticseg - out of memory on device

Dear colleagues,
I am exploring the deep learning capabilities of MATLAB R2018a; I have no prior experience with deep learning.
I followed the tutorial "Semantic Segmentation Using Deep Learning" (https://www.mathworks.com/help/vision/examples/semantic-segmentation-using-deep-learning.html). Everything worked until the command:
pxdsResults = semanticseg(imdsTest,net,'WriteLocation',tempdir,'Verbose',true);
The command seemed to work at first, because MATLAB showed:
* Processing 280 images.
* Progress: 0.36%
However, an error then appeared. The message was:
Error using nnet.internal.cnngpu.poolingMaxBackward2D
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
The gpuDevice output was:
CUDADevice with properties:
Name: 'Quadro GP100'
Index: 1
ComputeCapability: '6.0'
SupportsDouble: 1
DriverVersion: 9
ToolkitVersion: 9
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.7064e+10
AvailableMemory: 1.4295e+10
MultiprocessorCount: 56
ClockRateKHz: 1442500
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
The output of the Linux command nvidia-smi was:
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1016      G   /usr/lib/xorg/Xorg                            68MiB |
|    0      1238      G   /usr/lib/xorg/Xorg                           231MiB |
|    0      1326      G   /usr/bin/gnome-shell                          257MiB |
|    0      3037      G   ...-token=0782EFEAC5474AAE6190DDBA57A724E1   1552MiB |
|    0      9815    C+G   /opt/MATLAB/R2018a/bin/glnxa64/MATLAB         3712MiB |
+-----------------------------------------------------------------------------+
So it seemed that there was more than enough memory available. What was the problem?
Thanks.

Accepted Answer

Joss Knight on 4 Jun 2018
I understand that there's a bug in this demo script, and the default MiniBatchSize is too large and should be reduced to 4:
semanticseg(imdsTest,net,'MiniBatchSize',4, ...);
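For completeness, a sketch of the full call from the question with the reduced batch size (4 worked here; the largest value that fits depends on your GPU's memory):
pxdsResults = semanticseg(imdsTest,net, ...
    'MiniBatchSize',4, ...   smaller batches need less GPU memory
    'WriteLocation',tempdir, ...
    'Verbose',true);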
  1 Comment
mcdull hall on 5 Jun 2018
Thank you very much. This works.
By the way, the maximum 'MiniBatchSize' is 32 on my system.
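In case it is useful, a rough sketch of finding that maximum by trial, assuming the imdsTest and net from the tutorial (the device is reset before each attempt so a failed run does not leave GPU memory allocated):
for bs = [64 32 16 8 4 2 1]
    try
        reset(gpuDevice);    % clear GPU memory left over from a failed run
        pxdsResults = semanticseg(imdsTest,net,'MiniBatchSize',bs, ...
            'WriteLocation',tempdir,'Verbose',false);
        fprintf('MiniBatchSize %d fits on this GPU\n', bs);
        break
    catch err
        fprintf('MiniBatchSize %d failed: %s\n', bs, err.message);
    end
end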


More Answers (1)

Miguel Ángel Molina Cabello
Hello,
I followed the same tutorial and it worked on my Windows computer with a GeForce GTX 960M and MATLAB R2017b. However, when I ran it on my Linux (Ubuntu 16.04) computer with a GeForce GTX 1080 Ti and MATLAB R2018a, it only worked until the line:
pxdsResults = semanticseg(imdsTest,net,'WriteLocation',tempdir,'MiniBatchSize',4,'Verbose',false);
I tried MiniBatchSize values of 4 and 2, but the computer restarted at that line.
The full gpuDevice information for the GPU is:
Name: 'GeForce GTX 1080 Ti'
Index: 1
ComputeCapability: '6.1'
SupportsDouble: 1
DriverVersion: 9
ToolkitVersion: 9
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.1707e+10
AvailableMemory: 1.1235e+10
MultiprocessorCount: 28
ClockRateKHz: 1582000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
Any solution? Thank you.
Kind regards.
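
A full machine restart usually points to a driver or power problem rather than an ordinary out-of-memory error. One way to take the GPU out of the equation, as a sketch (this assumes your release's semanticseg accepts the 'ExecutionEnvironment' option; inference on the CPU will be slow):
% Run the same call on the CPU to check whether the GPU/driver is at fault.
pxdsResults = semanticseg(imdsTest,net, ...
    'ExecutionEnvironment','cpu', ...   assumption: option available in R2018a
    'WriteLocation',tempdir, ...
    'MiniBatchSize',4, ...
    'Verbose',false);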
